HP XP7 Disk Array Configuration Guide

Abstract

This guide provides requirements and procedures for connecting an HP XP7 disk array to a host system, and for configuring the disk array for use with a specific operating system. This document is intended for system administrators, HP representatives, and authorized service providers who are involved in installing, configuring, and operating disk arrays.
© Copyright 2014 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents

1 Overview
2 HP-UX
3 Windows
4 NonStop
5 OpenVMS
6 VMware
7 Linux
8 Solaris
9 IBM AIX
10 Citrix XenServer Enterprise
11 Troubleshooting
12 Support and other resources
A Path worksheet
B Path worksheet (NonStop)
C Disk array supported emulations
D Using Veritas Cluster Server to prevent data corruption
E Reference information for the HP System Administration Manager (SAM)
1 Overview

What's in this guide

This guide includes information on installing and configuring XP7 Storage. The following operating systems are covered:
• HP-UX
• Windows
• NonStop
• OpenVMS
• VMware
• Linux
• Solaris
• IBM AIX
For additional information on connecting disk arrays to a host system and configuring for a mainframe, see the HP XP7 Mainframe Host Attachment and Operations Guide.
• HP XP7 Array Manager Software
• Check with your HP representative for other XP7 software available for your system.

NOTE:
• Linux, NonStop, and Novell NetWare: Make sure you have superuser (root) access.
• OpenVMS firmware version: Alpha System firmware version 5.6 or later for Fibre Channel support. Integrity servers have no minimum firmware version requirement.
• HP does not support using Command View Advanced Edition Suite Software from a Guest OS.
Device emulation types

XP7 Storage supports these device emulation types:
• OPEN-x devices: OPEN-x logical units represent disk devices. Except for OPEN-V, these devices are based on fixed sizes. OPEN-V is a user-defined size based on a CVS device. Supported emulations include OPEN-3, OPEN-8, OPEN-9, OPEN-E, OPEN-L, and OPEN-V devices.
• LUSE devices (OPEN-x*n): Logical Unit Size Expansion (LUSE) devices combine 2 to 36 OPEN-x devices to create expanded LDEVs larger than standard OPEN-x disk devices.
offers VxVM, which includes DMP. HP supplies HDLM. All these products provide multipath configuration management, FCA I/O load balancing, and automatic failover support; however, their configuration options and FCA support differ.
• For instructions on STMS, Storage Multipathing, or VxVM, see the manufacturers' manuals.

SNMP configuration

XP7 Storage supports standard SNMP for remotely managing arrays.
2 HP-UX

You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative.

Installation roadmap

Perform these actions to install and configure the disk array:
1. “Installing and configuring the disk array” (page 12)
2. “Installing and configuring the host”
3. “Connecting the disk array”
4. “Configuring disk array devices”
Defining the paths

Use the XP7 Command View Advanced Edition Software or the XP7 Remote Web Console (shown) to define paths between hosts and volumes (LUNs) in the disk array. This process is also called “LUN mapping.”
CAUTION: The correct host mode must be set for all new installations (newly connected ports) to HP-UX hosts. Do not select a mode other than 08 for HP-UX. Changing a host mode after the host has been connected is disruptive and requires the server to be rebooted. When a new host group is added, additional host group modes (options) can be configured. The storage administrator must verify if an additional host group mode is required for the host group.
Table 1 Host group modes (options) HP-UX

Host Group Mode | Function | Default | Comments
12 | Deletion of Ghost LUN | Inactive | Previously MODE280
33 | Task retry ID enable | Inactive | HP-UX 11.31 only

CAUTION: Changing host group modes for ports where servers are already installed and configured is disruptive and requires the server to be rebooted.

Configuring the Fibre Channel ports

Configure the disk array Fibre Channel ports by using XP7 Command View Advanced Edition Software or the XP7 Remote Web Console.
multi-cluster environment with three clusters, each containing two nodes. The nodes share access to the disk array. Figure 2 Multi-cluster environment (HP-UX) Within the SAN, the clusters can be homogeneous (all the same operating system) or heterogeneous (mixed operating systems). How you configure LUN security and fabric zoning depends on the operating system mix and the SAN configuration.
Verifying FCA installation

After configuring the ports on the disk array, verify that the FCAs are installed properly. Use the ioscan -f command, and verify that the rows shown in the example are displayed. If these rows are not displayed, check the host adapter installation (hardware and driver installation) or the host configuration.

Example
# ioscan -f
Class ...
where:
• x = SCSI bus instance number
• y = SCSI target ID
• z = LUN
• c stands for controller
• t stands for target ID
• d stands for device
The numbers x, y, and z are hexadecimal.

Table 3 Device file name example (HP-UX)

SCSI bus instance number | Hardware path | SCSI TID | LUN | File name
00 | 14/12.6.0 | 6 | 0 | c6t0d0
SAM” (page 149) for further information. The newer releases of HP-UX have deprecated the SAM tool and replaced it with the System Management Homepage (SMH) tool.

Verifying the device files and drivers

The device files for new devices are usually created automatically during HP-UX startup. Each device must have a block-type device file in the /dev/dsk directory and a character-type device file in the /dev/rdsk directory. To verify the device files and drivers, run ioscan (or ioscan -fn).
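For example, the following command lists disk-class devices along with their device files (the exact output varies by system):

Example
# ioscan -fnC disk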
7. Create the volume group. To allocate more than one physical volume to the new volume group, add the other physical volumes, separated by a space.

Example
# vgcreate /dev/vg06 /dev/dsk/c6t0d0
Volume group "/dev/vg06" has been successfully created.
Volume group configuration for /dev/vg06 has been saved in /etc/lvmconf/vg06.conf.
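To create a volume group spanning two physical volumes in one step, the command might look like this (the second device name is illustrative):

Example
# vgcreate /dev/vg06 /dev/dsk/c6t0d0 /dev/dsk/c6t1d0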
Example
lvextend -L size /dev/vgnn/lvolx
• lvreduce
Decreases the size of an existing logical volume. Any file system attached to the logical volume must be unmounted before executing the lvreduce command.

Example
lvreduce -L size /dev/vgnn/lvolx

CAUTION: Data within the file system can be lost after execution of lvreduce.

Create logical volumes after you create volume groups. A logical volume must be created for each new SCSI disk device. To create logical volumes:
1.
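Use the lvcreate command to create the logical volume; a minimal sketch, in which the size in MB and the volume group name are illustrative:

Example
# lvcreate -L 2344 /dev/vg06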
Consistency Recovery    MWC
Schedule                parallel
LV Size (Mbytes)        2344
Current LE              586
Allocated PE            586
Stripes                 0
Stripe Size (Kbytes)    0
Bad block               on
Allocation              strict

4. Repeat steps 1–3 for each logical volume to be created. You can create only one logical volume at a time. However, you can verify multiple logical volumes at a time.

Creating the file systems

Create the file system for each new logical volume on the disk array. The default file system types are:
• HP-UX OS version 11.
1. Verify the current I/O timeout value using the pvdisplay command:

Example
# pvdisplay /dev/dsk/c0t6d0
--- Physical volumes ---
PV Name
VG Name
PV Status
Allocatable
VGDA
Cur LV
PE Size (Mbytes)
Total PE
Free PE
Allocated PE
Stale PE
IO Timeout (Seconds)
2.
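If the value is not appropriate, set a new I/O timeout with the pvchange command; for example (the 60-second value and device name are illustrative):

Example
# pvchange -t 60 /dev/dsk/c0t6d0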
Mounting and verifying the file systems

After the mount directories have been created, mount and verify the file system for each logical volume. To mount and verify the file systems:
1. Use mount to mount the file system for the volume.

Example
# mount /dev/vg06/lvol1 /AHPMD-LU00

2. Repeat step 1 for each logical volume on the disk array. If you need to unmount a file system, use the umount command.
3. Use the bdf command to verify that the file systems are correct.
To set up and verify the auto-mount parameters: 1. Edit the /etc/checklist (/etc/fstab) file to add a line for each OPEN-x device on the disk array. This example and the following table show the auto-mount parameters. Example #cp -ip /etc/checklist /etc/checklist.
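A completed auto-mount entry follows the standard fstab layout; a line might look like this (the device, mount point, and file system type are illustrative):

Example
/dev/vg06/lvol1 /AHPMD-LU00 vxfs defaults 0 2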
3 Windows You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. NOTE: For minimum and recommended KB fixes, please refer to the HP SPOCK website at: http://spock.corp.hp.
• Creating host groups • Assigning Fibre Channel adapter WWNs to host groups • Mapping volumes (LDEVs) to host groups (by assigning LUNs) In XP7 Command View Advanced Edition Software, LUN mapping includes: • Configuring ports • Creating storage groups • Mapping volumes and WWN/host access permissions to the storage groups For more information about LUN mapping, see the HP XP7 Provisioning for Open Systems User Guide or the XP7 Remote Web Console online help.
The available host mode settings are as follows:

Table 5 Host mode settings (Windows)

Host mode | Description
2C (available on some array models) | HP recommended. For use with LUSE volumes when online LUN expansion is required or might be required in the future.
0C | HP recommended. Use if future online LUN expansion is not required or planned.
The following host group modes (options) are available for Windows:

Table 7 Host group modes (options) Windows

Host Group Mode | Function | Default | Comments
6 | Parameter setting failure for TPRLO | Inactive | When using the Emulex FCA in the Windows environment and the parameter setting for TPRLO has failed, PRLO will respond after receiving TPRLO and FCP_CMD, respectively, when HostMode=0x0C/0x2C and HostModeOption=0x06. (MAIN Ver.50-03-14-00/00 and later)
13 | SIM report at link failure | Inactive |
SAN topology. Use switch zoning if you connect different types of hosts to the array through the same switch. Installing and configuring the host This section explains how to install and configure Fibre Channel adapters (FCAs) that connect the host to the disk array. Loading the operating system and software Follow the manufacturer's instructions to load the operating system and software onto the host. Load all OS patches and configuration utilities supported by HP and the FCA manufacturer.
• If booting over the SAN, the booting FCAs within a server must be from the same vendor. FCAs used only for additional data storage can be from a different vendor.
6. Navigate to the Disk drives section and expand the section.
7. Right-click each device labeled HP OPEN-V Multi-Path Disk Device then click Properties.
8. Record the device information using the worksheet in “Worksheet” (page 111).

For Windows 2012

Use the following procedure for Windows 2012:
1. Log into the host as an administrator.
2. Click Server Manager.
3. Click Tools.
4. Click Computer Management.
5. Navigate to System Tools then click Device Manager.
6. Navigate to the Disk drives section and expand the section.
7. Right-click each device labeled HP OPEN-V Multi-Path Disk Device then click Properties.
8. Record the device information using the worksheet in “Worksheet” (page 111).
5. Click OK to update the system configuration and start the Write Signature wizard.
6. For each new disk, click OK to write a signature, or click No to prevent writing a signature.
7. When you have performed this process for all new disks, the Disk Management main window opens and displays the added disks.

For Windows 2012

Use the following procedure for Windows 2012:
1. Click Server Manager.
2. Click Tools.
3. Click Computer Management.
4. Navigate to Storage then click Disk Management.
Format Options: Click Perform a Quick Format to decrease the time required to format the partition. Click Enable file and folder compression only if you want to enable compression.
3. Verify the Disk Management main window displays the correct file system (NTFS) for the formatted partition. “Healthy” indicates the partition has been created and formatted successfully.
4. Repeat this procedure for each new disk device.
5. Exit Disk Management, clicking Yes to save your changes.
4 NonStop You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. The HP NonStop operating system runs on HP S-series and Integrity NonStop servers to provide continuous availability for applications, databases, and devices.
two different clusters of the disk array, and give each host group access to separate but identical LUNs. This arrangement minimizes the shared components among the four paths, providing both mirroring and greater failure protection.

NOTE: For the highest level of availability and fault tolerance, HP recommends the use of two XP7 disk arrays, one for the Primary disks and one for the Mirror disks.

This process is also called “LUN mapping.”
CAUTION: The correct host mode must be set for all new installations (newly connected ports) to NonStop hosts. Do not select a mode other than 0C or 2C for NonStop. Changing a host mode after the host has been connected is disruptive and requires the server to be rebooted. When a new host group is added, additional host group modes (options) can be configured. The storage administrator must verify if an additional host group mode is required for the host group.
Configuring the Fibre Channel ports Configure the disk array Fibre Channel ports by using XP7 Command View Advanced Edition Software or the XP7 Remote Web Console (shown). Select the settings for each port based on your SAN topology. Use switch zoning if you connect different types of hosts to the array through the same switch. Installing and configuring the host This section explains how to install and configure the host and Fibre Channel ServerNet Adapters (FCSAs) that connect the host to the disk array.
Fabric zoning and LUN security for multiple operating systems You can connect multiple clusters of various operating systems to the same switch using appropriate switch zoning and array LUN security as follows: • Use LUN Manager for LUN isolation when multiple NonStop systems connect through a shared array port. LUN Manager provides LUN security by allowing you to restrict which LUNs each host can access.
5 OpenVMS

You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative.

Installation roadmap

Perform these actions to install and configure the disk array:
1. “Installing and configuring the disk array” (page 40)
2. “Installing and configuring the host”
3. “Connecting the disk array”
4. “Configuring disk array devices”
NOTE: As illustrated in “Microprocessor port sharing (OpenVMS)” (page 41), there is no microprocessor sharing with 8-port module pairs. With 16- and 32-port module pairs, alternating ports are shared. Table 11 Microprocessor port sharing (OpenVMS) Channel adapter Model Description Nr.
Path configuration for OpenVMS requires the following steps: 1. Define one command device LUN per array and present it to the OpenVMS hosts across all connected paths. 2.
When a new host group is added, additional host group modes (options) can be configured. The storage administrator must verify if an additional host group mode is required for the host group.
Setting the UUID HP recommends that OpenVMS customers use host mode option 33 to enable the UUID feature. This increases the capabilities for OpenVMS hosts that access the disk array, by: • Allowing the presentation of CU:LDEVs before 7F:FF to the OpenVMS hosts. • Allowing the OpenVMS system administrator to define the DGA device number to present to the OpenVMS host.
2. In the tree, double-click a port. The host groups corresponding to the port are displayed.
3. In the tree, select a host group. The LU Path list displays showing information about LU paths associated with the selected host group.
4. In the LU Path list, select one or more LUNs to which volumes are assigned (if a volume is assigned to an LUN, the columns on the right of the LUN column are not empty). When multiple LUNs are selected, the same UUID is set for all selected LUNs.
5.
Installing and configuring the host This section explains how to install and configure Fibre Channel adapters (FCAs) that connect the host to the disk array. Loading the operating system and software Follow the manufacturer's instructions to load the operating system and software onto the host. Load all OS patches and configuration utilities supported by HP and the FCA manufacturer.
Figure 6 Multi-cluster environment (OpenVMS)

Within the SAN, the clusters can be homogeneous (all the same operating system) or heterogeneous (mixed operating systems). How you configure LUN security and fabric zoning depends on the operating system mix and the SAN configuration.

WARNING! For OpenVMS: HP recommends that a volume be presented to one OpenVMS cluster or standalone system at a time.
Connecting the disk array

The HP service representative connects the disk array to the host by:
1. Verifying operational status of the disk array channel adapters, LDEVs, and paths.
2. Connecting the Fibre Channel cables between the disk array and the fabric switch or host.
3. Creating Fibre Channel zones connecting the host systems to the array ports. See your switch manufacturer's documentation for information on setting up zones.
4. Verifying the ready status of the disk array and peripherals.
Verifying file system operation 1. Use the show device d command to list the devices: Example $ show device dg NOTE: Use the show device/full dga100 command to show the path information for the device: Example: $ show device/full $1$dga100: Disk $1$DGA100: (NODE01), device type HP OPEN-V, is online, file-oriented device, shareable, device has multiple I/O paths, served to cluster via MSCP Server, error logging is enabled.
$ directory
Directory $1$DGA100:[USER]
TEST.TXT;1
Total of 1 file.

7. Verify the content of the data file:

Example
$ type test.txt
this is a line of text for the test file test.txt

8. Delete the data file:

Example
$ delete test.txt;
$ directory
%DIRECT-W-NOFILES, no files found
$ type test.txt
%TYPE-W-SEARCHFAIL,error searching for $1$DGA100:[USER]TEST.TXT;
-RMS-E-FNF, file not found

The delete command removes the test.txt file. The directory and type commands verify that the file was deleted.
6 VMware

You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative.

Installation roadmap

Perform these actions to install and configure the disk array:
1. “Installing and configuring the disk array” (page 51)
2. “Installing and configuring the host”
3. “Connecting the disk array”
4. “Configuring disk array devices”
• Assigning Fibre Channel adapter WWNs to host groups • Mapping volumes (LDEVs) to host groups (by assigning LUNs) In XP7 Command View Advanced Edition Software, LUN mapping includes: • Configuring ports • Creating storage groups • Mapping volumes and WWN/host access permissions to the storage groups For details see the HP XP7 Provisioning for Open Systems User Guide. Note the LUNs and their ports, WWNs, nicknames, and LDEVs for later use in verifying host and device configuration.
When a new host group is added, additional host group modes (host mode options) can be configured. The storage administrator must verify if an additional host group mode is required for the host group. CAUTION: Changing host group modes for ports where servers are already installed and configured is disruptive and requires the server to be rebooted.
Installing and configuring the host This section explains how to install and configure Fibre Channel adapters (FCAs) that connect the host to the disk array. Loading the operating system and software Follow the manufacturer's instructions to load the operating system and software onto the host. Load all OS patches and configuration utilities supported by HP and the FCA manufacturer.
Table 14 Fabric zoning and LUN security settings (VMware)

Environment | OS Mix | Fabric Zoning | LUN Security
Standalone SAN (non-clustered), Clustered SAN, or Multi-Cluster SAN | homogeneous (a single OS type present in the SAN) | Not required | Must be used when multiple hosts or cluster nodes connect through a shared port
Standalone SAN (non-clustered), Clustered SAN, or Multi-Cluster SAN | heterogeneous (more than one OS type present in the SAN) | Required | Must be used when multiple hosts or cluster nodes connect through a shared port
Host multipathing

For XP7 Storage LUNs, the ESXi host detects and uses the VMW_SATP_DEFAULT_AA Storage Array Type Plugin (SATP). The plugin has these possible Path Policies (PSP):
• VMW_PSP_FIXED (default)
• VMW_PSP_RR (HP recommended setting)
• VMW_PSP_MRU

IMPORTANT: The default Path Policy is VMW_PSP_FIXED; however, HP recommends using VMW_PSP_RR.

Connecting the disk array

The HP service representative connects the disk array to the host by:
1. Verifying operational status of the disk array channel adapters, LDEVs, and paths.
2. Connecting the Fibre Channel cables between the disk array and the fabric switch or host.
3. Verifying the ready status of the disk array and peripherals.
VMware ESX Server 5.X 1. In VirtualCenter, select the VM you plan to edit, and then click Edit Settings. 2. Select the SCSI controller for use with your shared LUNs. NOTE: If only one SCSI controller is present, add another disk that uses a different SCSI bus than your current configured devices. 3. Select the Bus Sharing mode (virtual or physical) appropriate for your configuration, and then click OK. NOTE: Sharing VMDK disks is not supported.
7 Linux

You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative.

Installation roadmap

Perform these actions to install and configure the disk array:
1. “Installing and configuring the disk array” (page 58)
2. “Installing and configuring the host”
3. “Connecting the disk array”
4. “Configuring disk array devices”
This process is also called “LUN mapping.”
CAUTION: The correct host mode must be set for all new installations (newly connected ports) to Linux hosts. Do not select a mode other than 00 for Linux. Changing a host mode after the host has been connected is disruptive and requires the server to be rebooted. When a new host group is added, additional host group modes (options) can be configured. The storage administrator must verify if an additional host group mode is required for the host group.
Table 15 Host group mode (option) Linux

Host Group Mode | Function | Default | Comments
7 | Reporting Unit Attention when adding LUN | Inactive | Previously MODE249

CAUTION: Changing host group modes for ports where servers are already installed and configured is disruptive and requires the server to be rebooted.

Configuring the Fibre Channel ports

Configure the disk array Fibre Channel ports by using XP7 Command View Advanced Edition Software or the XP7 Remote Web Console (shown). Select the settings for each port based on your SAN topology. Use switch zoning if you connect different types of hosts to the array through the same switch.
Setting the system option modes The HP service representative sets the system option mode(s) based on the operating system and software configuration of the host. Notify your HP representative if you install storage agnostic software (such as backup or cluster software) that might require specific settings. Installing and configuring the host This section explains how to install and configure Fibre Channel adapters (FCAs) that connect the host to the disk array.
Fabric zoning and LUN security for multiple operating systems You can connect multiple clusters with multiple operating systems to the same switch and fabric using appropriate zoning and LUN security as follows: • Storage port zones can overlap if more than one operating system needs to share an array port. • Heterogeneous operating systems can share an array port if you set the appropriate host group and mode. All others must connect to a dedicated array port.
3. Verify that the system recognizes the disk array partitions by viewing the /proc/partitions file.
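For example (new array LUNs appear as additional sd entries; device names vary by system):

Example
# cat /proc/partitions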
4. Select w to write the partition information to disk and complete the fdisk command.
5. Other commands that you might want to use include:
d to remove partitions
q to stop a change
6. Repeat steps 1–5 for each device.

Creating the file systems

Creating file systems with ext3
1. Enter mkfs -t ext3 /dev/device_name.

Example
# mkfs -t ext3 /dev/sdd

2. Repeat step 1 for each device on the disk array.

Creating the mount directories

Create mount directories using the mkdir command.
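For example (the directory name is illustrative):

Example
# mkdir /mnt/xp7-lu00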
2. Repeat step 1 for each device on the disk array.

Creating the mount table

Add the new devices to the /etc/fstab file to specify the automount parameters for each device.
1. Edit the /etc/fstab file to add one line for each device to be automounted. Each line of the file contains: (A) device name, (B) mount point, (C) file system type (“ext3”), (D) mount options (“defaults”), (E) enhance parameter (“1”), and (F) fsck pass (“2”).
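A completed entry might look like this (the device name and mount point are illustrative):

Example
/dev/sdd /mnt/xp7-lu00 ext3 defaults 1 2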
8 Solaris You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. Installation roadmap Perform these actions to install and configure the disk array: 1. “Installing and configuring the disk array” (page 67) 2. 3. 4. 5.
This process is also called “LUN mapping.”
CAUTION: The correct host mode must be set for all new installations (newly connected ports) to Solaris hosts. Do not select a mode other than 09 for Solaris. Changing a host mode after the host has been connected is disruptive and requires the server to be rebooted. When a new host group is added, additional host group modes (options) can be configured. The storage administrator must verify if an additional host group mode is required for the host group.
Table 17 Host group modes (options) Solaris

Host Group Mode | Function | Default | Comments
2 | Veritas DBE+RAC Database Edition/Advanced Cluster for Real Application Clusters, or if Veritas Cluster Server 4.0 or later with the I/O fencing function is used | Inactive | Previously MODE186
7 | Reporting Unit Attention when adding LUN | Inactive | Previously MODE249
13 | SIM report at link failure | Inactive | Optional. This mode is common to all host platforms.
Installing and configuring the host This section explains how to install and configure Fibre Channel adapters (FCAs) that connect the host to the disk array. Loading the operating system and software Follow the manufacturer's instructions to load the operating system and software onto the host. Load all OS patches and configuration utilities supported by HP and the FCA manufacturer.
Setting the disk and device parameters

The queue depth parameter (max_throttle) for the devices must be set according to one of the options specified in Table 18 (page 72).

Table 18 Max throttle (queue depth) requirements for the devices (Solaris)

Queue depth option | Requirements
Option 1 | XP7: Queue_depth 2048 default. CAUTION: The number of issued commands must be completely controlled.
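On hosts using the sd driver, the queue depth can be capped globally in the /etc/system file; a minimal sketch, in which the value is illustrative and must be calculated to meet the option requirements:

Example
set sd:sd_max_throttle = 8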
To configure the FCA: • Check with your HP representative to determine which non-Oracle branded FCAs are supported by HP with the Oracle SAN driver Stack, and if a specific System Mode or Host Group Mode setting is required for Oracle and non-Oracle branded FCAs. • For Solaris 10, use the Oracle update manager to install the latest patches. • To use Oracle StorEdge Traffic Manager (MPxIO)/Oracle Storage Multipathing, edit the driver configuration file /kernel/drv/scsi_vhci.
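With the Oracle SAN driver stack, multipathing is typically enabled with the stmsboot utility; for example (a reconfiguration reboot is normally required afterward):

Example
# stmsboot -e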
no-device-delay=0; nodev-tmo=30; linkdown-tmo=30; # verify, should be default value • Persistent bindings are necessary in a fabric topology and are used to bind a SCSI target ID to a particular WWPN (of an array port). This is required to guarantee that the SCSI target IDs will remain the same when the system is rebooted. Persistent bindings can be set by editing the configuration file or by using the lputil utility.
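A persistent binding entry in the Emulex driver configuration file might look like the following; the WWPN and target number are illustrative, and the exact syntax can vary by driver version:

Example
fcp-bind-WWPN="50060e8004370b00:lpfc0t0";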
name="sd" parent="qla2300" target=0; Perform a reconfiguration reboot to implement the changes to the configuration files. Use the /opt/QLogic_Corporation/drvutil/qla2300/qlreconfig –d qla2300 -s command to perform LUN rediscovery after configuring LUNs as explained in “Defining the paths” (page 13). Verifying the FCA configuration After installing the FCAs, verify recognition of the FCAs and drivers as follows: 1. Log into the system as root.
Connecting the disk array

The HP service representative performs the following steps to connect the disk array to the host:
1. Verifying operational status of the disk array channel adapters, LDEVs, and paths.
2. Connecting the Fibre Channel cables between the disk array and the fabric switch or host.
3. Verifying the ready status of the disk array and peripherals.
Labeling and partitioning the devices

Partition and label the new devices using the Oracle format utility.

CAUTION: The repair, analyze, defect, and verify commands/menus are not applicable to the XP7 Storage. When selecting disk devices, be careful to select the correct disk as using the partition/label commands on disks that have data can cause data loss.

1. Enter format at the root prompt to start the utility.
2. Verify that all new devices are displayed.
3.
4.
5.
6.
7.
8.
5. You may check and change the maxcontig parameter later with the fstyp and tunefs commands as outlined in the following example:

# fstyp -v /dev/rdsk/c1t2d0s0 | grep maxcontig
maxcontig 128 rotdelay 0ms rps 90
# tunefs -a 32 /dev/rdsk/c1t2d0s0

Creating the mount directories
1. Create a mount directory for each device using the mkdir command.
2. Enter each device into the mount table by editing /etc/vfstab.
3. Use the mount -a command to auto-mount devices.
4.
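A vfstab entry might look like this (the device names and mount point are illustrative):

Example
/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /mnt/xp7-lu00 ufs 2 yes -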
9 IBM AIX You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. Installation roadmap Perform these actions to install and configure the disk array: 1. “Installing and configuring the disk array” (page 79) 2. 3.
• Creating host groups • Assigning Fibre Channel adapter WWNs to host groups • Mapping volumes (LDEVs) to host groups (by assigning LUNs) In XP7 Command View Advanced Edition Software, LUN mapping includes: • Configuring ports • Creating storage groups • Mapping volumes and WWN/host access permissions to the storage groups For details see the HP XP7 Provisioning for Open Systems User Guide.
CAUTION: The correct host mode must be set for all new installations (newly connected ports) to AIX hosts. Do not select a mode other than 0F for AIX. Changing a host mode after the host has been connected is disruptive and requires the server to be rebooted. When a new host group is added, additional host group modes (options) can be configured. The storage administrator must verify if an additional host group mode is required for the host group.
Table 19 Host group mode (option) IBM AIX

Host Group Mode | Function | Default | Comments
2 | Veritas Storage Foundation for Oracle RAC, DBE+RAC Database Edition/Advanced Cluster for Real Application Clusters, or if Veritas Cluster Server 4.0 or later with the I/O fencing function is used | Inactive | Previously MODE186. Do not apply this option to Oracle Cluster.
22 | This Host Group Mode can change the response to the Host when a reserved device has received a mode sense command unrelated to the Reserve. | Inactive |
Installing and configuring the host This section explains how to install and configure Fibre Channel adapters (FCAs) that connect the host to the disk array. Loading the operating system and software Follow the manufacturer's instructions to load the operating system and software onto the host. Load all OS patches and configuration utilities supported by HP and the FCA manufacturer.
Figure 10 Multi-cluster environment (IBM AIX) Within the SAN, the clusters can be homogeneous (all the same operating system) or heterogeneous (mixed operating systems). How you configure LUN security and fabric zoning depends on the operating system mix and the SAN configuration.
2. If the disk array LUNs are defined after the IBM system is powered on, issue a cfgmgr command to recognize the new devices.
3. Use the lsdev command to display system device data and verify that the system recognizes the newly installed devices. The devices are listed by device file name. All new devices should be listed as Available. If they are listed as Defined, you must perform additional configuration steps before they can be used.
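For example, the following command lists all disk-class devices and their states (device names vary by system):

Example
# lsdev -Cc disk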
Table 22 Device parameters-queue depth (IBM AIX)

Parameter | Recommended value
Queue depth per LU | 32
Queue depth per port (MAXTAGS) | 1024

The recommended queue depth settings might not provide the best I/O performance for your system. You can adjust the queue depth setting to optimize the I/O performance of the disk array.

Displaying the device parameters using the AIX command line

At the command line prompt, enter lsattr -E -l hdiskx, where hdiskx is the device file name.
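The per-LU queue depth can then be changed with the chdev command; for example (the device name is illustrative, and the device must not be in use when the attribute is changed):

Example
# chdev -l hdisk1 -a queue_depth=32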
Communications Applications and Services Print Spooling Problem Determination Performance & Resource Scheduling System Environments Processes & Subsystems Applications Using SMIT (information only) 3. 4. 5. Select Fixed Disk. Select Change/Show Characteristics of a Disk. Select the desired device from the Disk menu. The Change/Show Characteristics of a Disk screen for that device is displayed. 6. Enter the correct values for the read/write timeout value, queue depth, and queue type parameters.
Processes & Subsystems
Applications
Using SMIT (information only)

3. Select Logical Volume Manager.

Example
System Storage Management (Physical & Logical Storage)
Move cursor to desired item and press Enter.
Logical Volume Manager
File Systems
Files & Directories
Removable Disk Management
System Backup Manager

4. Select Volume Groups.

Example
Logical Volume Manager
Move cursor to desired item and press Enter.
Volume Groups
Logical Volumes
Physical Volumes
Paging Space

5. Select Add a Volume Group.
PHYSICAL VOLUME names Activate volume group AUTOMATICALLY at system restart? Volume Group MAJOR NUMBER 7. [hdisk1] yes [] Enter yes or no in the Activate volume group AUTOMATICALLY at system restart? field. If you are not using PowerHA, enter yes. If you are using PowerHA, enter no. 8. Press Enter when you have entered the values. The confirmation screen appears. Example ARE YOU SURE? Continuing may delete information you may want to keep. This is your last chance to stop before continuing.
4. Select Add / Change / Show / Delete File Systems. Example File Systems Move cursor to desired item and press Enter. List All File Systems List All Mounted File Systems Add / Change / Show / Delete File Systems Mount a File System Mount a Group of File Systems Unmount a File System Unmount a Group of File Systems Verify a File System Backup a File System Restore a File System 5. Select Journaled File System. Example Add / Change / Show / Delete File Systems Move cursor to desired item and press Enter.
Mount AUTOMATICALLY at system restart? Enter yes.

CAUTION: In high availability systems (PowerHA), enter no.

Number of bytes per inode. Enter the number of bytes appropriate for the application, or use the default value.

Example
Add a Journaled File System
Type or select values in entry fields. Press Enter AFTER making all desired changes.
4. Verify that the file system is usable by performing some basic operations (for example, file creation, copying, and deletion) on each logical device.

Example
# cd /hp00
# cp /smit.log /hp00/smit.log.back1
# ls -l hp00
-rw-rw-rw-  1 root system 375982 Nov 30 17:25 smit.log.back1
# cp smit.log.back1 smit.log.back2
# ls -l
-rw-rw-rw-  1 root system 375982 Nov 30 17:25 smit.log.back1
-rw-rw-rw-  1 root system 375982 Nov 30 17:28 smit.log.back2
# rm smit.log.back1
# rm smit.log.back2
5.
10 Citrix XenServer Enterprise

You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative.

Installation roadmap

Perform these actions to install and configure the disk array:
1. “Installing and configuring the disk array” (page 93)
2. “Installing and configuring the host”
3. “Connecting the disk array”
4. “Configuring disk array devices”
This process is also called “LUN mapping.”
CAUTION: The correct host mode must be set for all new installations (newly connected ports) to Linux hosts. Do not select a mode other than 00 for Linux. Changing a host mode after the host has been connected is disruptive and requires the server to be rebooted. When a new host group is added, additional host group modes (options) can be configured. The storage administrator must verify if an additional host group mode is required for the host group.
Table 23 Host group mode (option) Linux

Host Group Mode | Function | Default | Comments
7 | Reporting Unit Attention when adding LUN | Inactive | Previously MODE249

CAUTION: Changing host group modes for ports where servers are already installed and configured is disruptive and requires the server to be rebooted.

Configuring the Fibre Channel ports

Configure the disk array Fibre Channel ports by using XP7 Command View Advanced Edition Software or the XP7 Remote Web Console (shown).
Setting the system option modes The HP service representative sets the system option mode(s) based on the operating system and software configuration of the host. Notify your HP representative if you install storage agnostic software (such as backup or cluster software) that might require specific settings. Installing and configuring the host This section explains how to install and configure Fibre Channel adapters (FCAs) that connect the host to the disk array.
Fabric zoning and LUN security for multiple operating systems You can connect multiple clusters with multiple operating systems to the same switch and fabric using appropriate zoning and LUN security as follows: • Storage port zones can overlap if more than one operating system needs to share an array port. • Heterogeneous operating systems can share an array port if you set the appropriate host group and mode. All others must connect to a dedicated array port.
host1 qlogic QLogic HBA Driver 1
host0 qlogic QLogic HBA Driver 0
[root@cb-xen-srv31 ~]#

Configuring disk array devices

Disks in the disk array are configured using the same procedure for configuring any new disk.
3. Click Enter Maintenance Mode.
4. Select the General tab and then click Properties.
5. Select the Multipathing tab, check the Enable multipathing on this server check box, and then click OK. 6. Right-click the domU that was placed in maintenance mode and select Exit Maintenance Mode.
7. Open a command line interface to the dom0 and edit the /etc/multipath-enable.conf file for the appropriate array.

NOTE: HP recommends that you use the RHEL 5.x device mapper config file and multipathing parameter settings on HP.com. Use only the array-specific settings, and not the multipath.conf file bundled into the device mapper kit. All array host modes for Citrix XenServer are the same as Linux.
8.
3. Select the type of virtual disk storage for the storage array and then click Next. NOTE: For Fibre Channel, select Hardware HBA.
4. Complete the template and then click Finish.

Adding a Virtual Disk to a domU

After the Storage Repository has been created on the dom0, the vdisk from the Storage Repository can be assigned to the domU. This section describes how to pass vdisks to the domU. HP ProLiant Virtual Console can be used with HP Integrated Citrix XenServer Enterprise Edition to complete this process.
1. Select the domU. 2. Select the Storage tab and then click Add.
3. Type a name, description, and size for the new disk and then click Add. Adding a dynamic LUN To add a LUN to a dom0 dynamically, follow these steps. 1. Create and present a LUN to a dom0 from the array. 2. Enter the following command to rescan the sessions that are connected to the arrays for the new LUN: xe sr-probe type=lvmohba. NOTE: To create a new Storage Repository, see “Creating a Storage Repository” (page 102).
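After the rescan, the Storage Repositories can be listed to confirm that the new LUN is visible; for example:

Example
# xe sr-list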
11 Troubleshooting This chapter includes resolutions for various error conditions you may encounter. If you are unable to resolve an error condition, ask your HP support representative for assistance.
Table 25 Error conditions (continued)

Error condition | Recommended action
The host detects a parity error. | Check the FCA and make sure it was installed properly. Reboot the host.
The host hangs, or devices are declared and the host hangs. | Make sure there are no duplicate disk array TIDs and that disk array TIDs do not conflict with any host TIDs.
12 Support and other resources Contacting HP For worldwide technical support information, see the HP support website: http://www.hp.
Conventions for storage capacity values

HP XP7 storage systems use the following values to calculate physical storage capacity values (hard disk drives):
• 1 KB (kilobyte) = 1,000 (10^3) bytes
• 1 MB (megabyte) = 1,000^2 bytes
• 1 GB (gigabyte) = 1,000^3 bytes
• 1 TB (terabyte) = 1,000^4 bytes
• 1 PB (petabyte) = 1,000^5 bytes
• 1 EB (exabyte) = 1,000^6 bytes

HP XP7 storage systems use the following values to calculate logical storage capacity values (logical devices):
• 1 block = 512 bytes
A Path worksheet

Worksheet

Table 26 Path worksheet

LDEV (CU:LDEV) (CU = control unit) | Device Type | SCSI Bus Number | Path 1 (TID/LUN) | Alternate Paths (TID/LUN)
0:00 | | | |
0:01 | | | |
0:02 | | | |
0:03 | | | |
0:04 | | | |
0:05 | | | |
0:06 | | | |
0:07 | | | |
0:08 | | | |
0:09 | | | |
0:10 | | | |
B Path worksheet (NonStop)

Worksheet

Table 27 Path worksheet (NonStop)

LUN # | CU:LDEV ID | Array Group | Emulation type | Array Port | Array Port WWN | NSK Server | NSK SAC name (G-M-S-S) | NSK SAC WWN | NSK volume name | Path
00 | 01:00 | 1-11 | OPEN-E | 1A | 50060E80 0437B000 | /OSDNSK3 | 110-2-3-1 | 50060B00 002716AC | $XPM001 |
C Disk array supported emulations HP-UX This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
Table 29 Emulation specifications (HP-UX) (continued) Emulation1 Category2 Product name3 Blocks Sector size # of (512 bytes) (bytes) cylinders Heads Sectors per track Capacity MB* 4 OPEN-8 CVS SCSI disk OPEN-8-CVS Footnote5 512 Footnote6 15 96 Footnote 7 OPEN-9 CVS SCSI disk OPEN-9-CVS Footnote5 512 Footnote6 15 96 Footnote 7 OPEN-E CVS SCSI disk OPEN-E-CVS Footnote5 512 Footnote6 15 96 Footnote 7 OPEN-V SCSI disk OPEN-V Footnote5 512 Footnote6 15 128 Footno
For an OPEN-V CVS volume with capacity = 49 MB: # of cylinders = 49 × 16/15 = 52.26 (rounded up to next integer) = 53 cylinders OPEN-3/8/9/E: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) × 1024/720 × n Example For a CVS LUSE volume with capacity = 37 MB and n = 4: # of cylinders = 37 × 1024/720 × 4 = 52.
Table 30 LUSE device parameters (HP-UX) (continued) Device type OPEN-E*n OPEN-L*n Physical extent size (PE) Max physical extent size (MPE) n = 2 to 9 default default n = 10 8 17366 n = 11 8 19102 n = 12 8 20839 n = 13 8 22576 n = 14 8 24312 n = 15 8 26049 n = 16 8 27786 n = 17 8 29522 n = 18 8 31259 n = 19 8 32995 n = 20 8 34732 n = 21 8 36469 n = 22 8 38205 n = 23 8 39942 n = 24 8 41679 n = 25 8 43415 n = 26 8 45152 n = 27 8 46889 n = 28 8 4862
SCSI TID map for Fibre Channel adapters When an arbitrated loop (AL) is established or reestablished, the port addresses are assigned automatically to prevent duplicate TIDs. With the SCSI over Fibre Channel protocol (FCP), there is no longer a need for target IDs in the traditional sense. SCSI is a bus-oriented protocol requiring each device to have a unique address because all commands go to all devices. For Fibre Channel, the AL-PA is used instead of the TID to direct packets to the desired destination.
Windows This appendix provides information about supported emulations and emulation specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
Table 33 Emulation specifications (Windows) (continued) Emulation1 Category2 Product name3 Blocks Sector size # of (512 bytes) (bytes) cylinders Heads Sectors Capacity MB* 4 per track OPEN-E CVS SCSI disk OPEN-E-CVS Footnote 5 512 Footnote 6 15 96 Footnote 7 OPEN-V SCSI disk OPEN-V Footnote 5 512 Footnote 6 15 128 Footnote 7 OPEN-3*n CVS SCSI disk OPEN-3*n-CVS Footnote 5 512 Footnote 6 15 96 Footnote 7 OPEN-8*n CVS SCSI disk OPEN-8*n-CVS Footnote 5 512
OPEN-3/8/9/E: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) × 1024/720 × n Example For a CVS LUSE volume with capacity = 37 MB and n = 4: # of cylinders = 37 × 1024/720 × 4 = 52.62 × 4 = 53 × 4 = 212 OPEN-V: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) × 16/15 × n Example For an OPEN-V CVS LUSE volume with capacity = 49 MB and n = 4: # of cylinders = 49 × 16/15 × 4 = 52.
NonStop This appendix provides information about supported emulations and emulation specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
OpenVMS This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
Table 37 Emulation specifications (OpenVMS) (continued) Emulation1 Category2 Product name3 Blocks Sector size # of (512 bytes) (bytes) cylinders Heads Sectors Capacity MB* 4 per track OPEN-E CVS SCSI disk OPEN-E-CVS Footnote5 512 Footnote6 15 96 Footnote7 OPEN-V SCSI disk OPEN-V Footnote5 512 Footnote6 15 128 Footnote7 OPEN-3*n CVS SCSI disk OPEN-3*n-CVS Footnote5 512 Footnote6 15 96 Footnote7 OPEN-8*n CVS SCSI disk OPEN-8*n-CVS Footnote5 512 Footnote6 15 96 F
OPEN-V: The number of cylinders for a CVS volume = # of cylinders = (capacity (MB) specified by user) × 16/15 Example For an OPEN-V CVS volume with capacity = 49 MB: # of cylinders = 49 × 16/15 = 52.26 (rounded up to next integer) = 53 cylinders OPEN-3/8/9/E: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) × 1024/720 × n Example For a CVS LUSE volume with capacity = 37 MB and n = 4: # of cylinders = 37 × 1024/720 × 4 = 52.
VMware This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
Table 39 Emulation specifications (VMware) (continued) Emulation1 Category2 Product name3 Blocks Sector size # of (512 bytes) (bytes) cylinders Heads Sectors per track Capacity MB* 4 OPEN-8 CVS SCSI disk OPEN-8-CVS Footnote5 512 Footnote6 15 96 Footnote7 OPEN-9 CVS SCSI disk OPEN-9-CVS Footnote5 512 Footnote6 15 96 Footnote7 OPEN-E CVS SCSI disk OPEN-E-CVS Footnote5 512 Footnote6 15 96 Footnote7 OPEN-V SCSI disk OPEN-V Footnote5 512 Footnote6 15 128 Footnote7
For an OPEN-3 CVS volume with capacity = 37 MB: # of cylinders = 37 × 1024/720 = 52.62 (rounded up to next integer) = 53 cylinders OPEN-V: The number of cylinders for a CVS volume = # of cylinders = (capacity (MB) specified by user) × 16/15 Example For an OPEN-V CVS volume with capacity = 49 MB: # of cylinders = 49 × 16/15 = 52.
Linux This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
Table 41 Emulation specifications (Linux) (continued) Emulation1 Category2 Product name3 Blocks Sector size # of (512 bytes) (bytes) cylinders Heads Sectors per track Capacity MB* 4 OPEN-E CVS SCSI disk OPEN-E-CVS Footnote5 512 Footnote6 15 96 Footnote7 OPEN-V SCSI disk OPEN-V Footnote5 512 Footnote6 15 128 Footnote7 OPEN-3*n CVS SCSI disk OPEN-3*n-CVS Footnote5 512 Note 6 15 96 Footnote7 OPEN-8*n CVS SCSI disk OPEN-8*n-CVS Footnote5 512 Note 6 15 96 Footnote7
OPEN-3/8/9/E: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) × 1024/720 × n Example For a CVS LUSE volume with capacity = 37 MB and n = 4: # of cylinders = 37 × 1024/720 × 4 = 52.62 × 4 = 53 × 4 = 212 OPEN-V: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) × 16/15 × n Example For an OPEN-V CVS LUSE volume with capacity = 49 MB and n = 4: # of cylinders = 49 × 16/15 × 4 = 52.
Solaris This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
Table 43 Emulation specifications (Solaris) (continued) Emulation1 Category2 Product name3 Blocks Sector size # of (512 bytes) (bytes) cylinders Heads Sectors per track Capacity MB* 4 OPEN-E CVS SCSI disk OPEN-E-CVS Footnote5 512 Footnote6 15 96 Footnote7 OPEN-V SCSI disk OPEN-V Footnote5 512 Footnote6 15 128 Footnote7 OPEN-3*n CVS SCSI disk OPEN-3*n-CVS Footnote5 512 Footnote6 15 96 Footnote7 OPEN-8*n CVS SCSI disk OPEN-8*n-CVS Footnote5 512 Footnote6 15 96 Fo
OPEN-3/8/9/E: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) × 1024/720 × n Example For a CVS LUSE volume with capacity = 37 MB and n = 4: # of cylinders = 37 × 1024/720 × 4 = 52.62 × 4 = 53 × 4 = 212 OPEN-V: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) × 16/15 × n Example For an OPEN-V CVS LUSE volume with capacity = 49 MB and n = 4: # of cylinders = 49 × 16/15 × 4 = 52.
IBM AIX This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
Table 45 Emulation specifications (IBM AIX) (continued) Emulation1 Category2 Product name3 Blocks Sector size # of (512 bytes) (bytes) cylinders Heads Sectors per track Capacity MB* 4 OPEN-E CVS SCSI disk OPEN-E-CVS Note 5 512 Footnote6 15 96 Footnote7 OPEN-V SCSI disk OPEN-V Note 5 512 Footnote6 15 128 Footnote7 OPEN-3*n CVS SCSI disk OPEN-3*n-CVS Note 5 512 Footnote6 15 96 Footnote7 OPEN-8*n CVS SCSI disk OPEN-8*n-CVS Note 5 512 Footnote6 15 96 Footnote7 OPE
OPEN-3/8/9/E: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) × 1024/720 × n Example For a CVS LUSE volume with capacity = 37 MB and n = 4: # of cylinders = 37 × 1024/720 × 4 = 52.62 × 4 = 53 × 4 = 212 OPEN-V: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) × 16/15 × n Example For an OPEN-V CVS LUSE volume with capacity = 49 MB and n = 4: # of cylinders = 49 × 16/15 × 4 = 52.
Table 46 OPEN-3 parameters by emulation type (IBM AIX) (continued) Emulation Type Parameter OPEN-3 OPEN-3*n (n=2 to 36) OPEN-3 CVS OPEN-3 CVS*n (n=2 to 36) pc c partition size 4,806,720 4,806,720*n Depends on configuration of CV1 Depends on configuration of CV3 pd d partition size Set optionally Set optionally Set optionally Set optionally pe e partition size Set optionally Set optionally Set optionally Set optionally pf f partition size Set optionally Set optionally Set optionall
Table 47 OPEN-8 parameters by emulation type (IBM AIX) (continued) Emulation Type Parameter OPEN-8 OPEN-8*n (n=2 to 36) OPEN-8 CVS OPEN-8 CVS*n (n=2 to 36) rm Number of rotations of the disk 6,300 6,300 6,300 6,300 oa a partition offset (Starting block in a partition) Set optionally Set optionally Set optionally Set optionally ob b partition offset (Starting block in b partition) Set optionally Set optionally Set optionally Set optionally oc c partition offset (Starting block in c pa
Table 47 OPEN-8 parameters by emulation type (IBM AIX) (continued) Emulation Type Parameter OPEN-8 OPEN-8*n (n=2 to 36) OPEN-8 CVS OPEN-8 CVS*n (n=2 to 36) fe e partition fragment size 1,024 1,024 1,024 1,024 ff f partition fragment size 1,024 1,024 1,024 1,024 fg g partition fragment size 1,024 1,024 1,024 1,024 fh h partition fragment size 1,024 1,024 1,024 1,024 See “Notes for disk parameters” (page 142).
Table 48 OPEN-9 parameters by emulation type (IBM AIX) (continued) Emulation Type Parameter OPEN-9 OPEN-9*n (n=2 to 36) OPEN-9 CVS OPEN-9 CVS*n (n=2 to 36) pf f partition size Set optionally Set optionally Set optionally Set optionally pg g partition size Set optionally Set optionally Set optionally Set optionally ph h partition size Set optionally Set optionally Set optionally Set optionally ba a partition block size 8,192 8,192 8,192 8,192 bb b partition block size 8,192 8,
Table 49 OPEN-E parameters by emulation type (IBM AIX) (continued)

Parameter                                                | OPEN-E         | OPEN-E*n (n=2 to 36) | OPEN-E CVS     | OPEN-E CVS*n (n=2 to 36)
oc (c partition offset; starting block in c partition)   | 0              | 0                    | 0              | 0
od (d partition offset; starting block in d partition)   | Set optionally | Set optionally       | Set optionally | Set optionally
oe (e partition offset; starting block in e partition)   | Set optionally | Set optionally       | Set optionally | Set optionally
of (f partition offset; starting block in f partition)   | Set optionally | Set optionally       | Set optionally | Set optionally
Table 49 OPEN-E parameters by emulation type (IBM AIX) (continued)

Parameter                        | OPEN-E | OPEN-E*n (n=2 to 36) | OPEN-E CVS | OPEN-E CVS*n (n=2 to 36)
fh (h partition fragment size)   | 1,024  | 1,024                | 1,024      | 1,024

See “Notes for disk parameters”.

Notes for disk parameters
1. The value of pc is calculated as pc = nc × nt × ns, where nc is the number of cylinders, nt is the number of tracks (heads), and ns is the number of sectors per track. For OPEN-x CVS volumes, nc corresponds to the capacity specified from the SVP or Remote Console.
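As a quick check of note 1, the OPEN-3 value of pc in Table 46 can be reproduced from the geometry in Table 45 (15 heads, 96 sectors per track; dividing 4,806,720 by 15 × 96 gives nc = 3,338 cylinders). The shell arithmetic below is illustrative only.

    # pc = nc * nt * ns for OPEN-3: 3,338 cylinders x 15 tracks x 96 sectors per track
    echo $(( 3338 * 15 * 96 ))   # prints 4806720, matching pc in Table 46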
Table 50 Byte information (IBM AIX)

Category           | LU product name                                            | Number of bytes per inode
OPEN-3             | OPEN-3, OPEN-3*2 to OPEN-3*28                              | 4096
                   | OPEN-3*29 to OPEN-3*36                                     | 8192
OPEN-8             | OPEN-8, OPEN-8*2 to OPEN-8*9                               | 4096
                   | OPEN-8*10 to OPEN-8*18                                     | 8192
                   | OPEN-8*19 to OPEN-8*36                                     | 16384
OPEN-9             | OPEN-9, OPEN-9*2 to OPEN-9*9                               | 4096
                   | OPEN-9*10 to OPEN-9*18                                     | 8192
                   | OPEN-9*19 to OPEN-9*36                                     | 16384
OPEN-3/8/9 CVS     | OPEN-3 CVS, OPEN-8 CVS, OPEN-9 CVS, OPEN-E CVS, OPEN-K CVS | 4096
OPEN-3/8/9*n CVS   | 35 to 64800                                                | 4096
                   | 64801 to 126000                                            | 8192
                   | 126001 and higher                                          | 16384
Physical partition size table

Table 51 Physical partition size (IBM AIX)

Category | LU product name          | Physical partition size in megabytes
OPEN-3   | OPEN-3                   | 4
         | OPEN-3*2 to OPEN-3*3     | 8
         | OPEN-3*4 to OPEN-3*6     | 16
         | OPEN-3*7 to OPEN-3*13    | 32
         | OPEN-3*14 to OPEN-3*27   | 64
         | OPEN-3*28 to OPEN-3*36   | 128
OPEN-8   | OPEN-8                   | 8
         | OPEN-8*2                 | 16
         | OPEN-8*3 to OPEN-8*4     | 32
         | OPEN-8*5 to OPEN-8*9     | 64
         | OPEN-8*10 to OPEN-8*18   | 128
         | OPEN-8*19 to OPEN-8*36   | 256
OPEN-9   | OPEN-9                   | 8
         | OPEN-9*2                 | 16
         | OPEN-9*3 to OPEN-9*4     | 32
         | OPEN-9*5 to OPEN-9*9     | 64
         | 259201 - 518400          | 512
         | 518401 and higher        | 1024
D Using Veritas Cluster Server to prevent data corruption

Using VCS I/O fencing

VCS provides an I/O fencing feature that prevents data corruption if cluster communication stops: it issues SCSI-3 Persistent Reserve commands to the shared disks. To accomplish I/O fencing, each VCS node registers reserve keys for every disk in each imported disk group. A reserve key consists of a value that is unique to the disk group and a value that distinguishes the node.
Figure 12 Nodes and ports
Table 52 Port 1A Key Registration Entries and Table 53 Port 2A Key Registration Entries show the registration state for the configuration in Figure 12. Each entry records the registering node, its reserve key, and the WWN of the FCA through which the key was registered, together with the LUs covered and their disk group: LU 0, 1, and 2 belong to Disk Group 1; LU 4, 5, and 6 belong to Disk Group 2; and LU 8 and 9 belong to Disk Group 3. On port 1A, keys are registered through WWN A for Disk Groups 1 and 3 and through WWN B for Disk Groups 1 and 2; Table 53 lists the corresponding entries for port 2A.
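The registered keys can also be examined directly from a host. The following is an illustration only, not a VCS procedure from this guide: on a Linux node with the sg3_utils package installed, SCSI-3 Persistent Reserve information can be read as shown below, where the device name /dev/sdc is a placeholder.

    # List the reservation keys currently registered on the LU
    sg_persist --in --read-keys --device=/dev/sdc

    # Display the active reservation (holder and type), if one exists
    sg_persist --in --read-reservation --device=/dev/sdc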
E Reference information for the HP System Administration Manager (SAM)

The HP System Administration Manager (SAM) is used to perform HP-UX system administration functions, including:
• Setting up users and groups
• Configuring the disks and file systems
• Performing auditing and security activities
• Editing the system kernel configuration

This appendix provides instructions for:
• Using SAM to configure the disk devices
• Using SAM to set the maximum number of volume groups

Configuring the disk devices
To configure the newly installed disk array devices:
1. Select Disks and File Systems, and then select Disk Devices.
2. Verify that the new disk array devices are displayed in the Disk Devices window.
3. Select the device to configure, open the Actions menu, select Add, and then select Using the Logical Volume Manager.
4. In the Add a Disk Using LVM window, select Create or Extend a Volume Group.
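If you want to confirm the new devices from the shell before or after using SAM, the standard HP-UX commands below can help; this is a supplementary sketch rather than part of the SAM procedure.

    # List disk-class devices with hardware paths and device special files
    ioscan -fnC disk

    # Create device special files for any newly discovered devices
    insf -e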
F HP Clustered Gateway deployments

Windows

The HP Clustered Gateway and HP Scalable NAS software both use HP PolyServe software as their underlying clustering technology, and both have similar requirements for XP7 Storage. Both have been tested with XP7 Storage. This appendix describes configuration requirements specific to XP7 Storage deployments that use HP PolyServe software on Windows.
For details on importing and deporting disks, dynamic volume creation and configuration, and file system creation and configuration, see the HP Scalable NAS File Serving Software Administration Guide.

Linux

The HP Clustered Gateway and HP Scalable NAS software both use HP PolyServe software as their underlying clustering technology, and both have similar requirements for XP7 Storage.
Manager, with both local and remote HORCM instances running on each server, and with all file system LUNs (P-VOLs) controlled by the local instance and all snapshot V-VOLs (S-VOLs) controlled by the remote instance (see the sketch at the end of this section).

Dynamic volume and file system creation

When the LUNs have been presented to all nodes in the cluster, import them into the cluster using the GUI or the mx command.
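The two-instance RAID Manager arrangement described above can be outlined as follows. This is an illustrative sketch, not a procedure from this guide: the instance numbers (0 local, 1 remote) follow the description, but the group name vg_fs is a hypothetical example.

    # Start the local HORCM instance (0, controlling the file system P-VOLs)
    # and the remote instance (1, controlling the snapshot S-VOLs) on this node
    horcmstart.sh 0 1

    # Query pair status through the local instance; vg_fs is a placeholder group
    HORCMINST=0 pairdisplay -g vg_fs -CLI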
Glossary

AL-PA
Arbitrated loop physical address. A 1-byte value that the arbitrated loop topology uses to identify the loop ports. This value becomes the last byte of the address identifier for each public port on the loop.

command device
A volume in the disk array that accepts Continuous Access, Business Copy, or XP7 for Business Continuity Manager control operations, which are then executed by the array.

CU
Control unit.

CVS
Custom volume size.
port
A physical connection that allows data to pass between a host and a disk array.

R-SIM
Remote service information message.

SIM
Service information message.

SNMP
Simple Network Management Protocol. A widely used network monitoring and control protocol. Data is passed from SNMP agents (hardware or software processes that report activity on each network device, such as a hub, router, or bridge) to the workstation console used to oversee the network.
Index

A
Array Manager, 9
auto-mount parameters, setting, 24

B
Business Copy, 11

C
clustering, 16, 39, 46, 54, 62, 75, 83, 97
command device(s)
    one LDEV as a, 11
    RAID Manager, 11
Command View Advanced Edition, 8, 11, 12, 26, 35, 51, 58, 67, 79, 93
configuration
    device, 18, 39, 48, 64, 76, 85, 99
    emulation types, 10
    recognition, 17, 63, 98
    using SAM, 149
    disk array, 12, 26, 35, 51, 58, 67, 79, 93
    FCAs, 15, 46, 54, 62, 71, 83, 97
    FCSAs, 38
    Fibre Channel ports, 15, 30, 38, 46, 53, 61, 71, 83, 96
    host, 15, 30,
    supported, 71
    verify driver installation, 63, 98
    verifying configuration, 75
FCSA(s)
    configuring, 38
    supported, 38
features, disk array, 8
Fibre Channel
    adapters, configuring, 30
    adapters, SCSI TID map, 8, 117
    connection speed, 9
    interface, 9
    ports, configuring, 15, 30, 38, 46, 53, 61, 71, 83, 96
    supported elements, 9
    switches, 47
file system(s)
    creating, 77, 89
    for logical volumes, 22
    journaled, 89
    mounting, 24, 91
    not mounted after rebooting, 107
    verify operations, 34
    verifying, 19, 24, 49, 66, 91
LDEV(s

S
security, LUN, 16, 39, 46, 54, 62, 75, 83, 97
server, restarting, 63, 98
server support, 8
SIMs, 107
storage capacity, 8
storage capacity values, conventions, 110
subscription service, HP, 109
system option mode, setting, 29, 45, 53, 62, 70, 82, 97

T
technical support, HP, 109
troubleshooting, 107
    error conditions, 107

U
UNIX, supported versions
    HP-UX, 8

V
Veritas Volume Manager, configuration, 78
Virtual Machines, setup, 56
volume(s)
    groups, creating, 19
    groups, setting maximum number, 150
    groups, assigning new devic