HP StoreVirtual Storage User Guide

Abstract
This guide provides instructions for configuring individual storage systems, as well as for creating storage clusters, volumes, snapshots, and remote copies.
© Copyright 2009, 2013 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents
1 Getting started ... 14
Creating storage with HP StoreVirtual Storage ... 14
Configuring storage systems ... 15
Creating a storage volume using the Management Groups, Clusters, and Volumes wizard ... 15
Enabling server access to volumes
To reconfigure RAID ... 33
Reconfiguring RAID for a P4800 G2 with 2 TB drives ... 33
Monitoring RAID status ... 34
Data reads and writes and RAID status
Which physical interface is active ... 62
Summary of NIC states during failover ... 63
Example network cabling topologies with link aggregation dynamic mode ... 63
How Adaptive Load Balancing works ... 64
Which physical interface is preferred
Using Active Directory for external authentication ... 83
Configuring external authentication ... 84
Associating the Active Directory group with the LeftHand OS group ... 84
Removing the Active Directory configuration ... 85
7 Monitoring the SAN
Creating a management group ... 108
Name the management group and add storage systems ... 108
Add administrative user ... 109
Set management group time ... 109
Set DNS server
Virtual Managers ... 124
Using the Failover Manager ... 125
Planning the virtual network configuration ... 125
Using the Failover Manager on Microsoft Hyper-V Server
How data protection levels work ... 147
Network RAID-10 (2-Way Mirror) ... 147
Network RAID-10+1 (3-Way Mirror) ... 147
Network RAID-10+2 (4-Way Mirror) ... 148
Network RAID-5 (Single Parity)
Mounting the snapshot on a host ... 173
Making a Windows application-managed snapshot available ... 174
Managing snapshot temporary space ... 176
Convert the temporary space to access data ... 176
Delete the temporary space
Completing the iSCSI Initiator and disk setup ... 206
Persistent targets or favorite targets ... 206
HP StoreVirtual DSM for Microsoft MPIO settings ... 206
Disk management
Pausing and restarting monitoring ... 227
Changing the graph ... 228
Hiding and showing the graph ... 228
Displaying or hiding a line
Repair the storage system ... 249
Rebuilding data ... 250
Reconfigure RAID ... 251
Returning the storage system to the cluster
1 Getting started
HP StoreVirtual Storage enables you to create a virtualized pool of storage resources and manage a SAN. The LeftHand OS software is installed on the HP StoreVirtual Storage and you use the HP StoreVirtual Centralized Management Console (CMC) to manage the storage. For a list of supported software and hardware, see the HP StoreVirtual 4000 Storage Compatibility Matrix at http://www.hp.
recommended as the WWNNs based on the management group may change. (See the HP SAN Design Reference Guide.)
7. Create a Fibre Channel server in the CMC. (See "Planning Fibre Channel server connections to management groups" (page 206).)
8. Assign LUNs to the Fibre Channel server. (See "Assigning volumes to Fibre Channel servers" (page 212).)
9. Discover the LUNs in the OS.
Figure 1 The LeftHand OS software storage hierarchy 1. Management group 2. Cluster 3. Volume To complete this wizard, you will need the following information: • A name for the management group.
Using the Map View The Map View tab is available for viewing the relationships between management groups, servers, sites, clusters, volumes and snapshots. When you log in to a management group, there is a Map View tab for each of those elements in the management group. For example, when you want to make changes such as moving a volume to a different cluster, or deleting shared snapshots, the Map View allows you to easily identify how many snapshots and volumes are affected by such changes.
Setting preferences Use the Preferences window to set the following: • Font size in the CMC • Locale for the CMC. The locale determines the language displayed in the CMC. • Naming conventions for storage elements • Online upgrade options. Setting the font size and locale Use the Preferences window, opened from the Help menu, to set font size and locale in the CMC. Font sizes from 9 through 16 are available. The CMC obtains the locale setting from your computer.
If you use the given defaults, the resulting names look like those in Table 2 (page 19). Notice that the volume name carries into all the snapshot elements, including SmartClone volumes, which are created from a snapshot.
Table 2 Example of how default names work
Element | Default name | Example
SmartClone Volumes | VOL_ | VOL_VOL_ExchLogs_SS_3_1
Snapshots | _SS_ | VOL_ExchLogs_SS_1
Remote Snapshots | _RS_ | VOL_RemoteBackup_RS_1
Schedules to Snapshot a Volume | _Sch_SS_ | VOL_ExchLogs_Sch_SS_2.
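The way these default prefixes compose can be illustrated with a short, purely illustrative Python sketch. The prefix strings and the examples follow Table 2; the helper function names are hypothetical and are not part of the product.

```python
# Illustrative sketch of how the default naming prefixes in Table 2 compose.
# The prefixes ("_SS_", "_RS_", "VOL_") follow the table; the helper
# functions are hypothetical and exist only to show the pattern.

def snapshot_name(volume: str, n: int) -> str:
    return f"{volume}_SS_{n}"            # e.g. VOL_ExchLogs_SS_1

def remote_snapshot_name(volume: str, n: int) -> str:
    return f"{volume}_RS_{n}"            # e.g. VOL_RemoteBackup_RS_1

def smartclone_name(snapshot: str, n: int) -> str:
    return f"VOL_{snapshot}_{n}"         # e.g. VOL_VOL_ExchLogs_SS_3_1

print(snapshot_name("VOL_ExchLogs", 1))
print(smartclone_name(snapshot_name("VOL_ExchLogs", 3), 1))
```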
Table 3 CMC setup for remote support
Task | For more information, see
Enable SNMP on each storage system | "Enabling SNMP agents" (page 96)
Set the SNMP trap recipient to IP address of the system where the remote support client is installed | "Adding SNMP traps" (page 97)
Open port 8959 (used for the CLI) | Your network administrator
Set the management group login and password for a read-only (View_Only_Administrator group) user | "Adding an administrative user" (page 80)
2 Working with storage systems
Storage systems displayed in the navigation window have a tree structure of configuration categories under them, as shown in Figure 3 (page 21). The configuration categories provide access to the configuration tasks for individual storage systems. You must configure some basic parameters on a storage system before using it in a cluster.
Table 4 HP platform identification (continued) HP StoreVirtual model HP platform Documentation Link HP StoreVirtual 4130 HP StoreVirtual 4330 ProLiant DL360p Gen8 HP ProLiant DL360p Gen8 Server Maintenance and Service Guide http://www.hp.com/go/proliantgen8/docs ProLiant DL380p Gen8 HP ProLiant DL380p Gen8 Server User Guide http://www.hp.
1. Select a storage system in the navigation window and log in.
2. Click Storage System Tasks on the Details tab and select Set ID LED On.
The ID LED on the front of the storage system is now a bright blue. Another ID LED is located on the back of the storage system. When you click Set ID LED On, the status changes to On.
3. Select Storage System Tasks→Set ID LED Off when you have finished.
The LED on the storage system turns off.
Figure 4 Disk enclosure not found as shown in Details tab When powering off the storage system, be sure to power off the components in the following order: 1. Power off the server blades enclosure or system controller from the CMC as described in “Powering off the storage system” (page 24). 2. Manually power off the disk enclosure. When you reboot the storage system, use the CMC, as described in “Rebooting the storage system” (page 24).
NOTE: If you enter 0 for the value when powering off, you cannot cancel the action. Any value greater than 0 allows you to cancel before the power off actually takes place. 5. Click Power Off. Figure 5 Confirming storage system power off Depending on the configuration of the management group and volumes, your volumes and snapshots can remain available. Upgrading LeftHand OS on storage systems The CMC enables online upgrades for storage systems, including the latest software releases and patches.
Figure 6 Availability tab Checking status of dedicated boot devices Some storage systems contain either one or two dedicated boot devices. In storage systems with two dedicated boot devices, both devices are active by default. If a storage system has dedicated boot devices, the Boot Devices tab appears in the Storage configuration category. Storage systems that do not have dedicated boot devices will not display the Boot Devices tab.
Table 5 Boot device status (continued)
Boot device status | Description
Not Recognized | The device is not recognized as a boot device.
Unsupported | The device cannot be used. (For example, the compact flash card is the wrong size or type.)
When the status of a boot device changes, an event is generated. See "Alarms and events overview" (page 88).
Replacing a dedicated boot device
If a boot hard drive fails, you will see an event that the boot device is faulty.
3 Configuring RAID and Managing Disks
For each storage system, you can select the RAID configuration and the RAID rebuild options, and monitor the RAID status. You can also review disk information and, for some models, manage individual disks.
Getting there
1. In the navigation window, select a storage system and log in if necessary.
2. Open the tree under the storage system and select the Storage category.
Table 6 Descriptions of RAID levels (continued) RAID level Description method because it requires 50 percent of the drive capacity to store the redundant data. RAID 1+0 first mirrors each drive in the array to another, and then stripes the data across the mirrored pair. If a physical drive fails, the mirror drive provides a backup copy of the files and normal system operations are not interrupted.
Table 7 Information in the RAID setup report
This item | Describes this
Device Name | The disk sets used in RAID. The number and names of devices vary by storage system and RAID level.
Device Type | The RAID level of the device. For example, in a HP P4300 G2, RAID 5 displays a Device Type of RAID 5 and subdevices as 8. NOTE: On the 4730 and the 4630 with 25 drives, since the global hot spare is configured, each logical drive will show 13 subdevices (12 data drives plus 1 spare).
NOTE: If you plan on using clusters with only a single storage system, use RAID 6 to ensure data redundancy within that storage system. Using Network RAID in a cluster A cluster is a group of storage systems across which data can be protected by using Network RAID. Network RAID protects against the failure of a RAID disk set within a storage system, failure of an entire storage system or external failures like networking or power.
Table 8 Data availability and safety in RAID configurations (continued)
Configuration | Data safety and availability during disk failure | Data availability if an entire individual storage system fails or if network connection to a storage system is lost
Volumes configured with Network RAID-10 or greater on clustered storage systems, RAID 5 | Yes. 1 disk per RAID set can fail without copying from another storage system in the cluster. | Yes
Reconfiguring RAID Reconfiguring RAID on a storage system or a VSA destroys any data stored on that storage system. For VSAs, there is no alternate RAID choice, so the only outcome for reconfiguring RAID is to wipe out all data. • Changing preconfigured RAID on a new storage system RAID must be configured on individual storage systems before they are added to a management group.
Monitoring RAID status RAID is critical to the operation of the storage system. If RAID has not been configured, the storage system cannot be used. Monitor the RAID status of a storage system to ensure that it remains normal. If the RAID status changes, a CMC event is generated. For more information about events and event notification, see “Alarms and events overview” (page 88). Data reads and writes and RAID status A RAID status of Normal, Rebuild, or Degraded all allow data reads and writes.
Managing disks
Use the Disk Setup tab to monitor disk information and perform disk management tasks as listed in Table 9 (page 35).
Table 9 Disk management tasks for storage systems
Disk setup function | Where available
Monitor disk information | All storage systems
View Disk Details | Storage systems running version 9.5.01 or later
Activate Drive ID LEDs | Storage systems with this specific capability
Getting there
1. In the navigation window, select a storage system.
Table 10 Description of items on the disk report Column Description Disk Corresponds to the physical slot or bay in the storage system. This column also displays the Drive ID LED if it has been activated. Status Status is one of the following: • Active—green (on and participating in RAID) • Active, Un-authorized—yellow (the controller detects a communication problem with the drive, and cannot control the drive LEDs. However, this does not affect I/O to the drive.
If all the SSD drives approach the end of their wear life at the same time, and you plan to replace all the drives in the storage system, use one of the following methods to preserve the data while replacing the drives: • Migrate the volumes and snapshots from the cluster containing the storage system with the SSD drives to a different cluster in the management group. Remove the storage system from the cluster and the management group, replace the drives and rebuild RAID on the drives.
Figure 12 Viewing the Disk Setup tab in a HP P4500 G2 Figure 13 Diagram of the drive bays in a HP P4500 G2 Viewing disk status for the HP P4300 G2 The disks are labeled 1 through 8 in the Disk Setup window, shown in Figure 14 (page 38), and correspond to the disk drives from top to bottom, left to right ( Figure 15 (page 39)), when you are looking at the front of the HP P4300 G2.
Figure 15 Diagram of the drive bays in a HP P4300 G2 Viewing disk status for the P4800 G2 The disks are labeled 1 through 35 in the Disk Setup window ( Figure 16 (page 39)), and correspond to the disk drives from top to bottom, left to right, (Figure 17 (page 39)), when you are looking at the front of the P4800 G2.
Figure 18 Viewing the Disk Setup tab in a P4900 G2 Figure 19 Diagram of the drive bays in a P4900 G2 Viewing disk status for the HP StoreVirtual 4130 The disks are labeled 1 through 4 in the Disk Setup window (Figure 20 (page 40)), and correspond to the disk drives from top to bottom, left to right (Figure 21 (page 41)), when you are looking at the front of the HP StoreVirtual 4130.
Figure 21 Diagram of the drive bays in a HP StoreVirtual 4130 Viewing disk status for the HP StoreVirtual 4330 The disks are labeled 1 through 8 in the Disk Setup window (Figure 22 (page 41)), and correspond to the disk drives from top to bottom, left to right (Figure 23 (page 41)), when you are looking at the front of the HP StoreVirtual 4330.
Figure 24 Viewing the Disk Setup tab in an HP StoreVirtual 4530 Figure 25 Diagram of the drive bays in an HP StoreVirtual 4530 Viewing the disk status for the HP StoreVirtual 4630 The disks are labeled 1 through 25 in the Disk Setup window (Figure 26 (page 43)), and correspond to the disk drives from top to bottom, left to right (Figure 27 (page 43)), when you are looking at the front of the HP StoreVirtual 4630.
Figure 26 Viewing the Disk Setup tab in an HP StoreVirtual 4630 Figure 27 Diagram of the drive bays in an HP StoreVirtual 4630 Viewing the disk status for the HP StoreVirtual 4730 The disks are labeled 1 through 25 in the Disk Setup window (Figure 28 (page 44)), and correspond to the disk drives from top to bottom, left to right (Figure 29 (page 44)), when you are looking at the front of the HP StoreVirtual 4730.
Figure 28 Viewing the Disk Setup tab in an HP StoreVirtual 4730 Figure 29 Diagram of the drive bays in an HP StoreVirtual 4730 Replacing a disk The correct procedure for replacing a disk in a storage system depends upon a number of factors, including the RAID configuration, the data protection level of volumes and snapshots, and the number of disks being replaced. Replacing a disk in a storage system that is in a cluster requires rebuilding data on the replaced disk.
Table 11 Disk replacement requirements
Storage system or configuration | Requirements
Hot-swap storage systems configured for RAID 1+0, 5, or 6 | RAID is normal and Safe to Remove status is yes. See "Replacing disks in hot-swap storage systems" (page 45).
VSA | Replace disk on host server according to manufacturer's instructions.
update before checking it again. If the status indicates the second drive is safe to remove, then it can be replaced. For example, if an array is Rebuilding, no other drives in the array (except for unused hot-spare drives) are safe to remove. However, if the configuration includes two or more arrays and those arrays are Normal, the Safe To Remove status indicates that drives in those other arrays may be replaced.
Replacing a disk in a hot-swap storage system The hot-swap storage systems are: • HP P4300 G2 • HP P4500 G2 • P4800 G2 • P4900 G2 • HP StoreVirtual 4130 • HP StoreVirtual 4330 • HP StoreVirtual 4330 FC • HP StoreVirtual 4530 • HP StoreVirtual 4630 • HP StoreVirtual 4730 • HP StoreVirtual 4730 FC Complete the checklist for replacing a disk in RAID 1+0, RAID 5, or RAID 6. Then follow the appropriate procedures for the storage system.
Figure 31 Disk rebuilding on the Disk Setup tab Troubleshooting Disk drive carrier with older firmware is not detected and drive status is displayed inconsistently in different tools If a disk drive carrier is running an older firmware version that the RAID controller in the storage system does not detect, the controller incorrectly reports the drive as unauthorized and does not turn on the LEDs.
4 Managing the network
Correctly setting up the network for HP StoreVirtual Storage ensures data availability and reliability.
IMPORTANT: The network settings must be the same for the switches, clients, and storage systems. Set up the end-to-end network before creating storage volumes.
Network best practices
• Isolate the SAN, including CMC traffic, on a separate network. If the SAN must run on a public network, use a VPN to secure data and CMC traffic.
Changing network configurations Changing the network configuration of a storage system may affect connectivity with the network and application servers. Consequently, we recommend that you configure network characteristics on individual storage systems before creating a management group or adding them to existing clusters.
Table 12 Network interface status and information (continued) Column Description • NICSlot:Port1 • bond0—The bonded interface(s) (appears only if storage system is configured for bonding) Description Describes each interface listed. For example, the bond0 is the Logical Failover Device. Speed/Method Lists the actual operating speed reported by the interface. Duplex/Method Lists duplex as reported by the interface. Status Describes the state of the interface.
To change the speed and duplex
1. In the navigation window, select the storage system and log in.
2. Open the tree, and select Network.
3. Click the TCP Status tab.
4. Select the interface to edit.
5. Click TCP Status Tasks, and select Edit.
6. Select the combination of speed and duplex that you want.
7. Click OK.
A series of status messages appears. Then the changed setting appears in the TCP status report.
NOTE: You can also use the Configuration Interface to edit the speed and duplex.
configure jumbo frames on each client and each network switch may result in data unavailability or performance degradation. Jumbo frames can co-exist with 1500 byte frames on the same subnet if the following conditions are met: • Every device downstream of the storage system on the subnet must support jumbo frames. • If you are using 802.1q virtual LANs, jumbo frames and nonjumbo frames must be segregated into separate VLANs.
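As a hedged, out-of-band sanity check (not part of the CMC), a do-not-fragment ping from a client can confirm that jumbo frames pass end to end. The sketch below assumes a Linux client with iputils ping and a 9000-byte frame size; the storage system address is a placeholder.

```python
# Sketch: verify that jumbo frames traverse the path to a storage system.
# Assumes a Linux host with iputils ping; adjust the payload if your frame
# size differs (payload = MTU - 28 bytes of IP/ICMP headers).
import subprocess

STORAGE_IP = "10.0.10.50"   # hypothetical storage system address
PAYLOAD = 9000 - 28         # 8972-byte payload for a 9000-byte MTU

result = subprocess.run(
    ["ping", "-M", "do", "-s", str(PAYLOAD), "-c", "3", STORAGE_IP],
    capture_output=True, text=True,
)
if result.returncode == 0:
    print("Jumbo frames pass end to end.")
else:
    print("Large frames were dropped or fragmented; check switch and client MTU.")
    print(result.stdout or result.stderr)
```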
6. Change the flow control setting on the Edit window.
7. Click OK.
8. Repeat these steps for all the NICs you want to change.
On the TCP Status tab window, for bonded NICs, the NIC flow control column shows the flow control settings for the physical NICs, and the bond0 as blank. Flow control is enabled and working in this case.
The TCP/IP tab
Lists the network interfaces on the storage system.
To ping an IP address
1. Select a storage system, and open the tree below it.
2. Select Network.
3. Click TCP/IP Tasks, and select Ping.
4. Select which network interface to ping from, if you have more than one enabled.
A bonded interface has only one interface from which to ping.
5. Enter the IP address to ping, and click Ping.
If the server is available, the ping is returned in the Ping Results window. If the server is not available, the ping fails in the Ping Results window.
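You can also check reachability in the other direction. The following sketch, run from an application server, simply tests whether a TCP connection to the standard iSCSI port (3260) on the cluster virtual IP succeeds; the address shown is a placeholder.

```python
# Sketch: confirm an application server can reach the iSCSI listener (TCP 3260)
# on the cluster virtual IP. The IP address below is a placeholder.
import socket

VIP = "10.0.10.100"   # hypothetical cluster virtual IP
ISCSI_PORT = 3260

try:
    with socket.create_connection((VIP, ISCSI_PORT), timeout=5):
        print(f"TCP {ISCSI_PORT} on {VIP} is reachable.")
except OSError as exc:
    print(f"Cannot reach {VIP}:{ISCSI_PORT} - {exc}")
```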
Configuring network interface bonds To ensure consistent failover characteristics and traffic distribution, use the same network bond type in all the storage systems in a cluster. Network interface bonding provides high availability, fault tolerance, load balancing and/or bandwidth aggregation for the network interface cards in the storage system. Bonds are created by joining physical NICs into a single logical interface.
category, TCP/IP tab window. Table 15 (page 57) lists the names of these interfaces. These interfaces can be bonded a number of ways. Note that not all the bond configurations which are supported by HP StoreVirtual Storage are supported with 10 GbE NICs.
Table 16 Supported bonding configurations (continued) Number of ports x NIC type Active-Passive 802.3ad ALB 1 x 1 GbE + 1 x 10 GbE in Yes single bond No No Multiple bonds of same type1 Yes Yes Yes Multiple bonds of different type2 Yes Yes Yes No Yes HP StoreVirtual 4630 storage systems 2 x 10 GbE No 1 Both bonded interfaces are the same type. 2 The bonds, bond0 and bond1, are not the same type, but each type of bond may be combined with the other two types.
How Active-Passive bonding works Bonding NICs for Active-Passive allows you to specify a preferred interface that will be used for data transfer. This is the active interface. The other interface acts as a backup, and its status is “Passive (Ready).” Physical and logical interfaces The two NICs in the storage system are labeled as listed in Table 18 (page 59). If both interfaces are bonded for failover, the logical interface is labeled bond0 and acts as the master interface.
Which physical interface is preferred When the Active-Passive bond is created, if both NICs are plugged in, the LeftHand OS interface becomes the active interface. The other interface is Passive (Ready). For example, if N:Port1 is the preferred interface, it will be active and N:Port2 will be Passive (Ready). Then, if N:Port1 fails, N:Port2 changes from Passive (Ready) to active. Interface:Port1 changes to Passive (Failed).
Figure 32 Active-Passive in a two-switch topology with server failover 1. Servers 2. HP StoreVirtual Storage systems 3. Storage cluster 4. GigE trunk 5. Active path 6. Passive path The two-switch scenario in Figure 32 (page 61) is a basic, yet effective, method for ensuring high availability. If either switch fails, or a cable or NIC on one of the storage systems fails, the Active-Passive bond causes the secondary connection to become active and take over.
Figure 33 Active-Passive failover in a four-switch topology 1. Servers 2. HP StoreVirtual Storage systems 3. Storage cluster 4. GigE trunk 5. Active path 6. Passive path Figure 33 (page 62) illustrates the Active-Passive configuration in a four-switch topology. How link aggregation dynamic mode bonding works Link Aggregation Dynamic Mode allows the storage system to use both interfaces simultaneously for data transfer. Both interfaces have an active status.
Table 22 Link aggregation dynamic mode failover scenario and corresponding NIC status Example failover scenario NIC status 1. Link Aggregation Dynamic Mode bond0 is created. Interface:Port1 and Interface:Port2 are both active. • Bond0 is the master logical interface. • Interface:Port1 is Active. • Interface:Port2 is Active. 2. Interface:Port1 interface fails. Because Link Aggregation • Interface:Port1 status becomes Passive (Failed).
Figure 34 Link aggregation dynamic mode in a single-switch topology 1. Servers 2. HP StoreVirtual Storage systems 3. Storage cluster How Adaptive Load Balancing works Adaptive Load Balancing allows the storage system to use both interfaces simultaneously for data transfer. Both interfaces have an active status. If the interface link to one NIC goes offline, the other interface continues operating. Using both NICs also increases network bandwidth.
Table 24 Example Adaptive Load Balancing failover scenario and corresponding NIC status Example failover scenario NIC status 1. Adaptive Load Balancing bond0 is created. Interface:Port1 and Interface:Port2 are both active. • Bond0 is the master logical interface. • Interface:Port1 is Active. • Interface:Port2 is Active. 2. Interface:Port1 interface fails. Because Adaptive Load Balancing is configured, Interface:Port2 continues operating. • Interface:Port1 status becomes Passive (Failed). 3.
Figure 35 Adaptive Load Balancing in a two-switch topology 1. Servers 2. HP StoreVirtual Storage systems 3. Storage cluster 4. GigE trunk Creating a NIC bond Follow these guidelines when creating NIC bonds: Prerequisites Verify that the speed, duplex, flow control, and frame size are all set properly on both interfaces that are being bonded. These settings cannot be changed on a bonded interface or on either of the supporting interfaces.
• Ensure that the bond has a static IP address for the logical bond interface. The default values for the IP address, subnet mask and default gateway are those of one of the physical interfaces. • Verify on the Communication tab that the LeftHand OS interface is communicating with the bonded interface. CAUTION: To ensure that the bond works correctly, you should configure it as follows: • Create the bond on the storage system before you add it to a management group.
NOTE: Because it can take a few minutes for the storage system to set the network address, the search may fail the first time. If the search fails, wait a minute or two and select Try Again on the Network Search Failed message. 12. Verify the new bond interface. Figure 37 Viewing a new Active-Passive bond 1. Bonded logical network interface 2. Physical interfaces shown as slaves The bond interface shows as “bond0” and has a static IP address. The two physical NICs now show as slaves in the Mode column.
Figure 39 (page 69) shows the status of interfaces in an Active-Passive bond. Figure 40 (page 69) shows the status of interfaces in a Link Aggregation Dynamic Mode bond. Figure 39 Viewing the status of an Active-Passive bond 1. Preferred interface Figure 40 Viewing the status of a link aggregation dynamic mode bond 1.
Deleting a NIC bond When you delete an Active-Passive bond, the preferred interface assumes the IP address and configuration of the deleted logical interface. The other NIC is disabled, and its IP address is set to 0.0.0.0. When you delete either a Link Aggregation Dynamic Mode or an Adaptive Load Balancing bond, one of the active interfaces in the bond retains the IP address of the deleted logical interface. The other NIC is disabled, and its IP address is set to 0.0.0.0. 1.
Figure 42 Verifying interface used for LeftHand OS communication 5. Verify that the LeftHand OS communication port is correct. Disabling a network interface When disabling a network interface, consider the following: • You can only disable top-level interfaces. This includes bonded interfaces and NICs that are not part of bonded interfaces. • To ensure that you always have access to the storage system, do not disable the last interface.
Configuring a disabled interface If one interface is still connected to the storage system but another interface is disconnected, you can reconnect to the second interface using the CMC. See “Configuring the IP address manually” (page 55). If both interfaces to the storage system are disconnected, you must attach a terminal, or PC or laptop to the storage system with a null modem cable and configure at least one interface using the Configuration Interface. See “Configuring a network connection” (page 245).
Adding or changing domain names to the DNS suffix list Add up to six domain names to the DNS suffix list (also known as the look-up zone). The storage system searches the suffixes first and then uses the DNS server to resolve host names. You can also change or remove the suffixes used. 1. On the management group DNS tab, select the DNS suffix to edit. 2. Click DNS Tasks and select Edit DNS Suffixes. 3. Using the Add, Edit, and Remove buttons, make the desired changes to the DNS suffixes in the list. 4.
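A quick way to confirm that the DNS server and suffix list behave as expected is to resolve a short (unqualified) host name from a client configured with the same DNS server and search suffixes. This sketch runs on that client, not on the storage system; the host names are placeholders.

```python
# Sketch: check that an unqualified host name resolves once the DNS suffix
# (search domain) is in place. The names shown are placeholders.
import socket

SHORT_NAME = "ntp1"                # unqualified name, completed by the suffix list
FQDN = "ntp1.example.com"          # fully qualified equivalent

for name in (SHORT_NAME, FQDN):
    try:
        print(f"{name} -> {socket.gethostbyname(name)}")
    except socket.gaierror as exc:
        print(f"{name} did not resolve: {exc}")
```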
Deleting routing information
You can only delete user-added routes.
1. In the navigation window, select a storage system, and log in.
2. Open the tree, and select Network.
3. Click the Routing tab.
4. On the Routing tab, select the optional route to delete.
5. Click Routing Tasks, and select Edit Routing Information.
6. Select the routing information row to delete.
7. Click Delete.
8. Click OK on the confirmation message.
Figure 43 Selecting the LeftHand OS network interface and updating the list of managers
4. Select an IP address from the list of manager IP addresses.
5. Click Communication Tasks, and select Select LeftHand OS Interface.
6. Select an Ethernet port for this address.
7. Click OK.
The storage system connects to the IP address through the selected Ethernet port.
Figure 44 Viewing the list of manager IP addresses 4. Click Communication Tasks, and select Update Communications List. The list is updated with the current storage system in the management group and a list of IPs with every manager’s enabled network interfaces. A window opens which displays the manager IP addresses in the management group and a reminder to reconfigure the application servers that are affected by the update.
5 Setting the date and time
The storage systems within management groups use the date and time settings to create a time stamp when data is stored. You set the time zone and the date and time in the management group, and the storage systems inherit those management group settings.
• Using network time protocol: Configure the storage system to use a time service, either external or internal to your network.
• Setting the time zone: Set the time zone for the storage system.
NOTE: When using a Windows server as an external time source for a storage system, you must configure W32Time (the Windows Time service) to also use an external time source. The storage system does not recognize the Windows server as an NTP server if W32Time is configured to use an internal hardware clock.
1. Click Time Tasks, and select Add NTP Server.
2. Enter the IP address of the NTP server you want to use.
3. Decide whether you want this NTP server to be designated preferred or not preferred.
The server you added first is the one accessed first when time needs to be established. If this NTP server is not available for some reason, the next NTP server that was added, and is preferred, is used for time serving. To change the order of access for time servers 1. Delete the server whose place in the list you want to change. 2. Add that same server back into the list. It is placed at the bottom of the list, and is the last to be accessed.
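Before adding (or re-adding) an NTP server to the list, you may want to confirm that it answers time queries from your network. The following is a minimal SNTP query sketch using only the Python standard library; the server address is a placeholder.

```python
# Sketch: minimal SNTP query to confirm an NTP server responds before it is
# added in the CMC. The server address is a placeholder.
import socket
import struct
import time

NTP_SERVER = "10.0.0.5"        # hypothetical internal NTP server
NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

packet = b"\x1b" + 47 * b"\0"  # LI=0, VN=3, Mode=3 (client request)
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(5)
    sock.sendto(packet, (NTP_SERVER, 123))
    data, _ = sock.recvfrom(512)

transmit = struct.unpack("!12I", data[:48])[10]  # transmit timestamp (seconds)
print("Server time:", time.ctime(transmit - NTP_EPOCH_OFFSET))
```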
6 Managing authentication
Manage authentication to the HP StoreVirtual Storage using administrative users and groups.
Managing administrative users
When you create a management group, one default administrative user is created. The default user automatically becomes a member of the Full Administrator group. Use the default user and/or create new ones to provide access to the management functions of the LeftHand OS software.
Adding an administrative user
4. In the Member Groups section, select the group from which to remove the user.
5. Click Remove.
6. Click OK to finish.
Deleting an administrative user
1. Log in to the management group, and select the Administration category.
2. Select a user in the Users table.
3. Click Administration Tasks in the tab window, and select Delete User.
4. Click OK.
NOTE: If you delete an administrative user, that user is automatically removed from any administrative groups.
Adding administrative groups When you create a group, you also set the permission level for the users assigned to that group. 1. Log in to the management group, and select the Administration category. 2. Click Administration Tasks in the tab window, and select New Group. 3. Enter a group name and an optional description. 4. Select the permission level for each management function for the group you are creating. See Table 27 (page 81) for more information. 5. To add a user to the group: a.
3. Click OK on the confirmation window.
4. Click OK to finish.
Using Active Directory for external authentication
Use Active Directory to simplify management of user authentication with HP StoreVirtual Storage. Configuring Active Directory allows Microsoft Windows domain users to authenticate to HP StoreVirtual Storage using their Windows credentials, avoiding the necessity of adding and maintaining individual users in the LeftHand OS software.
Best practices • Create a unique group in the CMC for the Active Directory association. Use a name and description that signifies the Active Directory association. See “Adding administrative groups” (page 82). • Create a separate LeftHand OS ‘administrator’ group in Active Directory. • Create a unique user in Active Directory to use as the Bind user for the management group to allow for communication between storage and Active Directory.
a. Click Find External Group.
b. Enter the user name in the Enter AD User Name box and click OK.
c. Select the correct group from the list of Active Directory groups that opens, and click OK.
5. Click OK when you have finished editing the group.
6. Log out of the management group and log back in using your UPN login (e.g., name@company.com) to verify the configuration.
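If the association does not work as expected, it can help to verify the bind user's credentials and the group lookup outside the CMC. The following sketch uses the third-party ldap3 package; the server, bind DN, search base, and group name are placeholders, and the exact attributes will differ in your domain.

```python
# Sketch: verify the bind user can authenticate to Active Directory and that
# the LeftHand OS administrator group exists. Uses the third-party ldap3
# package; all names below are placeholders for your environment.
from ldap3 import Server, Connection, ALL

AD_SERVER = "dc1.example.com"
BIND_USER = "CN=lefthand-bind,OU=Service Accounts,DC=example,DC=com"
BIND_PASSWORD = "********"
SEARCH_BASE = "DC=example,DC=com"
GROUP_NAME = "LeftHand-Admins"

server = Server(AD_SERVER, get_info=ALL)
conn = Connection(server, user=BIND_USER, password=BIND_PASSWORD)
if not conn.bind():
    raise SystemExit(f"Bind failed: {conn.result}")

conn.search(SEARCH_BASE, f"(&(objectClass=group)(cn={GROUP_NAME}))",
            attributes=["distinguishedName", "member"])
for entry in conn.entries:
    print(entry.distinguishedName)
conn.unbind()
```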
7 Monitoring the SAN
Monitor the SAN to:
• Track usage.
• Ensure that best practices are followed when changes are made, such as adding additional storage systems to clusters.
• Maintain the overall health of the SAN.
Tools for monitoring the SAN include the SAN Status Page, the Configuration Summary and the Best Practice table, the Alarms and Events features, including customized notification methods, and diagnostic tests and log files available for the storage systems.
The best practices displayed in this content pane are the same as those displayed on the Configuration Summary page. • Configuration Summary—Monitor SAN configurations to ensure optimum capacity management, performance, availability, and ease of management. • Volume Data Protection Level—Ensure that the SAN is configured for ongoing maximum data protection while scaling capacity and performing system maintenance.
Customizing the SAN Status Page layout Customize the layout of the SAN Status Page to highlight the information most important to you. All customizations are retained when the CMC is closed and restarted. Drag-and-drop content panes to change their position on the page. The layout is three columns by default. To rearrange content panes, drag a content pane and drop it on another content pane. The two panes switch places.
require taking action and are available only from the Events window for each management group. • Warning—Provides important information about a system component that may require taking action. These types of events are visible in both the Alarms window (for all management groups) and the Events window (for the management group where the alarm occurred). • Critical—Provides vital information about a system component that requires user action.
NOTE: Except for the P4800 G2, alarms and events information is not available for storage systems listed under Available Systems in the CMC, because they are not currently in use on the SAN. Table 30 (page 90) defines the alarms and events columns that appear in the CMC. Table 30 Alarms and events column descriptions Column Description Severity Severity of the event or alarm: informational, warning, or critical. Date/Time Date and time the event or alarm occurred.
3. Click Filter.
The list of alarms changes to display only those that contain the filter text.
4. To display all alarms, click Clear to remove the filter.
Viewing and copying alarm details
1. In the navigation window, log in to the management group.
2. In the Alarms window, double-click an alarm.
3. For assistance with resolving the alarm, click the link in either the Event field or the Resolution field.
Configuring remote log destinations Use remote log destinations to automatically write all events for the management group to a computer other than the storage system. For example, direct the event data to a single log server in a remote location. You must also configure the destination computer to receive the log files by configuring syslog on the destination computer. The syslog facility to use is local5, and the syslog levels are LOG_INFO, LOG_WARNING, LOG_CRIT.
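After the destination computer's syslog daemon is configured, you can confirm it accepts messages on the expected facility with a short test sent from any Python-capable host. This sketch is independent of the storage system; it assumes UDP syslog on port 514 and uses the local5 facility noted above, and the destination address is a placeholder.

```python
# Sketch: send a test message to the remote log destination using the local5
# facility, to confirm the syslog daemon is listening. Assumes UDP port 514;
# the destination address is a placeholder.
import logging
import logging.handlers

REMOTE_LOG_HOST = "10.0.20.15"   # hypothetical remote log server

handler = logging.handlers.SysLogHandler(
    address=(REMOTE_LOG_HOST, 514),
    facility=logging.handlers.SysLogHandler.LOG_LOCAL5,
)
logger = logging.getLogger("lefthand-syslog-test")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("Test message: remote log destination check")
logger.warning("Test message: warning-level check")
```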
1. From the Filters list, select an option to filter on. Options in bold are predefined filters you cannot change. Options that are not bold are custom filters that you have saved from the filters panel, described in “Saving filter views” (page 93). 2. Click Apply. To remove the filter, click Reset. To change the date range: 1. In the From list, select Choose From, and select the date. 2. Click OK. 3. In the To list, select Choose To, and select the date. 4. Click OK. 5. Click Update.
6. Optional: To paste the event details into a document or email message, click Copy to copy the details to the clipboard.
7. Click Close to finish.
Copying events to the clipboard
1. In the navigation window, log in to the management group.
2. Select Events in the tree.
3. Do one of the following:
• Select one or more events, click Event Tasks, and select Copy Selected to Clipboard.
• Click Event Tasks, and select Copy All to Clipboard.
Exporting event data to a .csv or .txt file
6. In the Sender Address field, enter the email address, including the domain name, to use as the sender for notifications. The system automatically adds the host name of the storage system in the email From field, which appears in many email systems. This host name helps identify where the event occurred. 7. Do one of the following: • To save your changes and close the window, click Apply. • To save your changes, close the window, and send a test email message, click Apply and Test.
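Independently of the Apply and Test button, you can confirm that the SMTP server relays mail from the configured sender address with a brief smtplib sketch. The host name, port, and addresses below are placeholders, and the TLS and authentication settings must match your mail server.

```python
# Sketch: confirm the SMTP server accepts a message from the configured sender
# address. Host, port, and addresses are placeholders; adjust TLS/auth to
# match your mail server's requirements.
import smtplib
from email.message import EmailMessage

SMTP_HOST = "smtp.example.com"
SMTP_PORT = 25
SENDER = "san-alerts@example.com"        # same sender address configured in the CMC
RECIPIENT = "storage-admins@example.com"

msg = EmailMessage()
msg["From"] = SENDER
msg["To"] = RECIPIENT
msg["Subject"] = "SAN event notification test"
msg.set_content("Test message to verify SMTP relay for event notification.")

with smtplib.SMTP(SMTP_HOST, SMTP_PORT, timeout=10) as smtp:
    smtp.send_message(msg)
print("Test message accepted by the SMTP server.")
```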
community string must be set to public. To receive notification of events, you must configure SNMP traps. If using the HP System Management Homepage, view the SNMP settings there. You can also start SNMP and send test v1 traps. Enabling SNMP agents Most storage systems allow enabling and disabling SNMP agents. After installing version 9.0, SNMP will be enabled on the storage system by default.
For HP remote support, add the Central Management Server for HP Insight Remote Support. 5. Do one of the following: • Select By Address and enter the IP address, then select an IP Netmask from the list. Select Single Host if adding only one SNMP client. After entering the information, the dialog box displays acceptable and unacceptable combinations of IP addresses and IP netmasks so you can correct issues immediately. • Select By Name and enter a host name.
1. In the navigation window, log in to the management group.
2. In the tree, select Events→SNMP.
3. Click SNMP Tasks and select Edit SNMP Traps Settings.
4. Enter the Trap Community String. The trap community string does not have to be the same as the community string used for access control, but it can be.
5. Click Add.
6. Enter the IP address or host name for the SNMP client that is receiving the traps. For HP remote support, add the CMS for HP Insight Remote Support.
7. Select the Trap Version.
6. Clear the selected severities checkboxes.
7. Click OK to confirm.
Using the SNMP MIBs
The SNMP MIBs provide read-only access to the storage system. The SNMP implementation in the storage system supports MIB-II compliant objects. These files, when loaded in the SNMP client, allow you to see storage system-specific information such as model number, serial number, hard disk capacity, network characteristics, RAID configuration, DNS server configuration details, and more.
NOTE: With version 8.
• NETWORK-SERVICES-MIB
• NOTIFICATION-LOG-MIB
• RFC1213-MIB
• SNMP-TARGET-MIB
• SNMP-VIEW-BASED-ACM-MIB
• SNMPv2-MIB
• UCD-DLMOD-MIB
• UCD-SNMP-MIB
Troubleshooting SNMP
Table 31 SNMP troubleshooting
Issue | Solution
SNMP queries are timing out | Ensure that the timeout value is long enough for your environment. In complex configurations, SNMP queries should have longer timeouts. SNMP data gathering is not instantaneous, and scales in time with the complexity of the configuration.
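To experiment with query timeouts outside your monitoring tool, a simple GET of sysDescr.0 can be issued with the third-party pysnmp package. This is a sketch only, written against the pysnmp 4.x synchronous API (later versions changed the interface); the agent address is a placeholder, and it assumes SNMP v2c with the public community string described above.

```python
# Sketch: SNMPv2c GET of sysDescr.0 with a generous timeout, using the
# third-party pysnmp package (4.x synchronous hlapi). The agent address is a
# placeholder; the community string must match the storage system setting.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

AGENT_IP = "10.0.10.50"   # hypothetical storage system address

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),                       # SNMP v2c
    UdpTransportTarget((AGENT_IP, 161), timeout=10, retries=1),
    ContextData(),
    ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
))

if error_indication:
    print("Query failed:", error_indication)
else:
    for name, value in var_binds:
        print(f"{name} = {value}")
```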
• A description of the test
• Pass / fail criteria
NOTE: Available diagnostic tests depend on the storage system. For VSA, only the Disk Status Test is available.
Table 32 Example list of hardware diagnostic tests and pass/fail criteria
Diagnostic test | Description | Pass criteria | Fail criteria
Fan Test | Checks the status of all fans. | Fan is normal | Fan is faulty or missing
Power Test | Checks the status of all power supplies. | Power supply is normal | Power supply is faulty or missing
Figure 47 Viewing the hardware information for a storage system
Saving a hardware information report
1. Click Hardware Information Tasks and select Save to File to download a text file of the reported statistics.
2. Choose the location and name for the report.
3. Click Save.
The report is saved with an .html extension.
Hardware information report details
Available hardware report statistics vary depending on the storage system.
Table 33 Selected details of the hardware report (continued) This term means this • Driver name • Driver version DNS data Information about DNS, if a DNS server is being used, providing the IP address of the DNS servers. IP address of the DNS servers. Memory Information about RAM in the storage system, including values for total memory and free memory in GB. CPU Details about the CPU, including model name or manufacturer of the CPU, clock speed of the CPU, and cache size.
Using log files If HP Support requests that you send a copy of a log file, use the Log Files tab to save that log file as a text file. The Log Files tab lists two types of logs: • Log files that are stored locally on the storage system (displayed on the left side of the tab). • Log files that are written to a remote log server (displayed on the right side of the tab). This list is empty until you configure remote log files and the remote log target computer. Saving log files locally 1. 2. 3. 4. 5.
1. Select a storage system in the navigation window.
2. Open the tree below the storage system and select Diagnostics.
3. Select the Log Files tab.
4. Select the log in the Remote logs list.
5. Click Log File Tasks and select Edit Remote Log Destination.
6. Change the log type or destination and click OK.
7. Ensure that the remote computer has the proper syslog configuration.
Deleting remote logs
1. Select a storage system in the navigation window.
2. Open the tree below the storage system and select Diagnostics.
3. Select the Log Files tab.
4. Select the log in the Remote logs list.
5. Click Log File Tasks and select Edit Remote Log Destination.
Exporting the System Summary The System Summary has information about all of the storage systems on the network. Export the summary to a .csv file for use in a spreadsheet or database.
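Once exported, the .csv file can be consumed by any spreadsheet or script. The following sketch simply loads it with Python's csv module and prints each row; the file name is a placeholder, and the column headers depend on the columns shown in the summary, so no specific header names are assumed.

```python
# Sketch: read an exported System Summary .csv file. The file name is a
# placeholder, and the column headers depend on what the summary displays,
# so rows are printed generically rather than by assumed column names.
import csv

EXPORT_FILE = "system_summary.csv"   # hypothetical export file name

with open(EXPORT_FILE, newline="") as f:
    reader = csv.DictReader(f)
    print("Columns:", reader.fieldnames)
    for row in reader:
        print(row)
```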
8 Working with management groups
A management group is a collection of one or more storage systems. It is the container within which you cluster storage systems and create volumes for storage. Creating a management group is the first step in creating HP StoreVirtual Storage.
Functions of management groups
• Provide the highest administrative domain for the SAN. Typically, storage administrators will configure at least one management group within their data center.
Table 34 Management group components (continued) Component Description method you will use. See “Setting the date and time” (page 77). DNS configuration You can configure DNS at the management group level for all storage systems in the management group. The storage system can use a DNS server to resolve host names. You will need the DNS domain name, suffix, and server IP address. See “Using a DNS server” (page 72).
NOTE: This name cannot be changed later without destroying the management group. When naming a management group, ensure that you do not use the name of an existing management group. Doing so causes the stores to be initialized and any data on those stores to be permanently deleted. 2. Select the storage system(s) to add to the management group. Use Ctrl+Click to select more than one.
5. Add the VIP and subnet mask.
6. Click Next.
Create a volume and finish creating management group
Optional: If you want to create volumes later, select Skip Volume Creation and click Finish.
1. Enter a name, description, data protection level, size, and provisioning type for the volume.
2. Click Finish.
NOTE: A message opens notifying you to register and receive license keys.
3. Click OK.
4. Review the details on the Summary window and click Close.
Figure 48 Configuration Summary Reading the configuration summary As items are added to the management group, the Summary graph fills in and the count is displayed in the graph. The Summary graph fills in proportionally to the optimum number for that item in a management group, as described in “Configuration guidance” (page 112). Optimal configurations Optimal configurations are indicated in green. For example, in Figure 49 (page 111), there are 15 storage systems in the management group “CJS1.
Figure 50 Warning when items in the management group are reaching optimum limits 1. Volumes and snapshots are nearing the optimum limit. One cluster is nearing the optimum limit for storage systems. Configuration errors When any item exceeds a recommended maximum, it turns red, and remains red until the number is reduced. See Figure 51 (page 112). Figure 51 Error when some item in the management group has reached its limit 1. Volumes and snapshots have exceeded recommended maximums.
Table 36 iSCSI sessions guidance
Number of sessions | Guidance
Up to 4,000 | Green
4,001 – 5,000 | Orange
5,001 or more | Red
Table 37 Storage systems in the management group
Number | Guidance
Up to 20 | Green
21 – 30 | Orange
30 or more | Red
Table 38 Storage systems in the cluster
Number | Guidance
Up to 10 | Green
11 – 16 | Orange
17 or more | Red
Best Practice summary overview
The Best Practice summary provides a reference about best practices that can increase the reliability and/or performance of your
Figure 52 Best Practice Summary for well-configured SAN Expand the management group in the summary to see the individual categories that have recommended best practices. The summary displays the status of each category and identifies any conditions that fall outside the best practice. Click on a row to see details about that item's best practice. Disk level data protection Disk level data protection indicates whether the storage system has an appropriate disk RAID level set.
Volume-level data protection Use a data protection level greater than Network RAID-0 to ensure optimum data availability if a storage system fails. For information about data protection, see “Planning data protection” (page 145). Volume access Use iSCSI load balancing to ensure better performance and better utilization of cluster resources. For more information about iSCSI load balancing, see “iSCSI load balancing” (page 238).
1. In the navigation window, select a management group and log in by any of the following methods:
• Double-click the management group.
• Open the Management Group Tasks menu, and select Log in to Management Group. You can also open this menu by right-clicking on the management group.
• Click any of the Log in to view links on the Details tab.
2. Enter the user name and password, and click Log In.
Stopping managers Under normal circumstances, you stop a manager when you are removing a storage system from a management group. Stopping a manager that will compromise fault tolerance generates an alarm. You cannot stop the last manager in a management group. The only way to stop the last manager is to delete the management group, which permanently deletes all data stored on volumes in the management group. Implications of stopping managers • Quorum of the storage systems may be decreased.
Saving the management group configuration information
1. From the Tasks menu, select Management Group→View Management Group Configuration.
2. If there are multiple management groups, select the management group from the List of Management Groups and click Continue.
3. Click Save in the Management Group Configuration window to save the configuration details in a .txt file.
Shutting down a management group
Safely shut down a management group to ensure the safety of your data.
1. Stop server or host access to the volumes in the list.
2. Click Shut Down Group.
Restarting the management group
When you are ready to restart the management group, simply power on the storage systems for that group:
1. Power on the storage systems that were shut down.
2. Click Find→Find Systems in the CMC to discover the storage systems.
When the storage systems are all operating properly, the volumes become available and can be reconnected with the hosts or servers.
Figure 54 Manually setting management group to normal mode 3. Click Set To Normal. Removing a storage system from a management group When a storage system needs to be repaired or upgraded, remove it from the management group before beginning the repair or upgrade. Also remove a storage system from a management group if you are replacing it with another system. Prerequisites • Stop the manager on the storage system if it is running a manager.
Prerequisites
• Log in to the management group.
• Remove all volumes and snapshots.
• Delete all clusters.
1. In the navigation window, log in to the management group.
2. Click Management Group Tasks on the Details tab, and select Delete Management Group.
3. In the Delete Management Window, enter the management group name, and click OK.
After the management group is deleted, the storage systems return to the Available Systems pool.
9 Working with managers and quorum
When a management group is created using release 10.0, it will be created with the correct number of managers started. Older management groups upgraded to release 10.0 may require additional managers or a Failover Manager started before the upgrade to 10.0 can be completed. See Table 39 (page 123) for the optimum number of managers required for each configuration.
For more information about managers, see "Managers overview" (page 122).
Table 39 Default number of managers added when a management group is created
Number of storage systems | Manager configuration
1 | 1 manager
2 | 2 managers and a virtual manager, if a Failover Manager is not available
3 | 3 managers
4 | 3 managers
5 or more | 5 managers
See "Failover Managers" (page 124) and "Virtual Managers" (page 124) for more information about virtual managers and Failover Managers.
Failover Managers The Failover Manager is a specialized version of the LeftHand OS software designed to operate as a manager and provide automated failover capability. It runs as a virtual appliance in either a VMware vSphere or Microsoft Hyper-V Server environment, and must be installed on network hardware other than the storage systems in the SAN. The Failover Manager participates in the management group as a manager; however, it performs quorum operations only, not data movement operations.
Figure 56 Virtual manager added to a management group Using the Failover Manager Adding a Failover Manager to the management group enables the SAN to have automated failover using a manager installed on network hardware other than the storage systems in the HP StoreVirtual Storage. Once installed and configured on network hardware, the Failover Manager is added to a management group where it serves solely as a quorum tie-breaking manager.
The installer for the Failover Manager for Hyper-V Server includes a wizard that guides you through configuring the virtual machine on the network and powering on the Failover Manager. CAUTION: Do not install the Failover Manager on a volume that is served from HP StoreVirtual Storage, since this would defeat the purpose of the Failover Manager.
Using the Failover Manager for VMware vSphere Install the Failover Manager from the DVD, or from the DVD .iso image downloaded from the website: http://www.hp.com/go/StoreVirtualDownloads The installer offers two choices for installing the Failover Manager for VMware: • Failover Manager for VMware vSphere—The installer for the Failover Manager for VMware vSphere includes a wizard that guides you through configuring the virtual machine on the network and powering on the Failover Manager.
14. If this is the only Failover manager you are installing, select No, I am done and click Next. NOTE: If you want to install another Failover Manager, the wizard repeats the steps, using information you already entered, as appropriate. 15. Finish the installation, reviewing the configuration summary, and click Deploy. When the installer is finished, the Failover Manager is ready to be used in the HP StoreVirtual Storage.
Installing the Failover Manager using the OVF files with the VI Client
1. Download the .OVF files from the following website: http://www.hp.com/go/StoreVirtualDownloads
2. Click Agree to accept the terms of the License Agreement.
3. Click the link for OVF files to open a window from which you can copy the files to the ESX Server.
Configure the IP address and host name
1. In the inventory panel, select the new Failover Manager and power it on.
Table 41 Troubleshooting for VMware vSphere installation
Issue: General Installation. You want to reinstall the Failover Manager.
Solution:
1. Close your CMC session.
2. In the VI Client, power off the Failover Manager.
3. Right-click and select Delete from Disk.
4. Copy fresh files into the virtual machine folder from the downloaded zip file or distribution media.
5. Open the VI Client, and begin again.
Issue: You cannot find the Failover Manager in the CMC, and cannot recall its IP address.
You should only use a virtual manager if you cannot use a Failover Manager or if manual failover is preferred for a specific reason. See “Managers and quorum” (page 123) for detailed information about quorum, fault tolerance, and the number of managers. Because a virtual manager is available to maintain quorum in a management group when a storage system goes offline, it can also be used for maintaining quorum during maintenance procedures.
Figure 57 Two-site failure scenarios that are correctly using a virtual manager Scenario 1—Communication between the sites is lost In this scenario, the sites are both operating independently. On the appropriate site, depending upon your configuration, select one of the storage systems, and start the virtual manager on it. That site then recovers quorum and operates as the primary site.
TIP: Always use a Failover Manager for a two-system management group.
1. Select the management group in the navigation window and log in.
2. Click Management Group Tasks on the Details tab, and select Add virtual manager.
3. Click OK to confirm the action.
The virtual manager is added to the management group. The Details tab lists the virtual manager as being added, and the virtual manager appears in the management group (1, Figure 58 (page 133)).
Figure 59 Starting a virtual manager when the storage system running a manager becomes unavailable 1. Unavailable storage system 2. Virtual manager started on storage system running a regular manager NOTE: If you attempt to start a virtual manager on a storage system that appears to be normal in the CMC, and you receive a message that the storage system is unavailable, start the virtual manager on a different storage system.
Removing a virtual manager from a management group
1. Log into the management group from which you want to remove the virtual manager.
2. Click Management Group Tasks on the Details tab, and select Delete Virtual Manager.
3. Click OK to confirm the action.
NOTE: The CMC does not allow you to delete a virtual manager if that deletion causes a loss of quorum.
10 Working with clusters Clusters are groups of storage systems created in a management group. Clusters create a pool of storage from which to create volumes. The volumes seamlessly span the storage systems in the cluster. Expand the capacity of the storage pool by adding storage systems to the cluster.
8. (Optional) Enter information for creating the first volume in the cluster, or select Skip Volume Creation. NOTE: The size listed in the Cluster Available Space box is an estimate because the actual size of the cluster once it is created can vary. Therefore, you may notice that, after creating a cluster and viewing the Details tab of the cluster, the size listed in the Total Available Space box is different. However, the size in the Total Available Space box is accurate. 9. Click Finish. 10.
Editing iSNS servers from the Cluster Tasks menu
1. Quiesce any applications that are accessing volumes in the cluster.
2. Log off the active sessions in the iSCSI initiator for those volumes.
3. Edit iSNS servers using either of the following methods:
From the Cluster Tasks menu:
a. Right-click the cluster or click Cluster Tasks.
b. Select Edit Cluster→Edit iSNS Servers.
c. In the Edit iSNS Servers window, select the VIP to change or delete, or click Add to add a new VIP.
Maintaining storage systems in clusters Use the Edit Cluster menu to perform cluster maintenance tasks. Adding a storage system to a cluster Add a storage system to an existing cluster to expand the storage for that cluster. If the cluster contains a single storage system, adding a second storage system causes the volumes in the cluster to change from Network RAID-0 to Network RAID-10, which offers better data protection and volume availability.
Figure 60 Swapping storage systems in the cluster
6. Repeat the process for each storage system to be swapped.
7. Click Swap Storage Systems when you are finished.
The swap operation may take some time, depending upon the number of storage systems swapped and the amount of data being restriped.
Reordering storage systems in a cluster
Reorder the systems in a cluster to control the stripe patterns, especially in a multi-site cluster.
1. Select the cluster in the navigation window.
1. In the Edit Cluster window, select a storage system from the list.
2. Click Remove Systems.
The storage system moves out of the cluster, but remains in the management group.
3. Click OK when you are finished.
NOTE: Removing a storage system causes a full cluster restripe.
Troubleshooting a cluster
Auto Performance Protection monitors individual storage system health related to performance issues that affect the volumes in the cluster.
1. Select the affected storage system in the navigation window.
The storage system icon blinks in the tree.
2. Check the Status line on the Details tab.
• If the status is Storage System Overloaded, wait up to 10 minutes and check the status again. The status may return to Normal while the storage system is resyncing.
• If the status is Storage System Inoperable, reboot the storage system and check whether the status returns to Normal when it comes back up.
3. From the Repair Storage System window, select the item that describes the problem to solve. Click More for more detail about each selection. • Repair a disk problem If the storage system has a bad disk, be sure to read “Replacing a disk” (page 44) before beginning the process. • Storage system problem Select this choice if you have verified that the storage system must be removed from the management group to fix the problem.
11 Provisioning storage The LeftHand OS software uses volumes, including SmartClone volumes, and snapshots to provision storage to application servers and to back up data for recovery or other uses. Before you create volumes or configure schedules to snapshot a volume, plan the configuration you want for the volumes and snapshots.
Full provisioning Full provisioning reserves the same amount of space on the SAN as is presented to application servers. Full provisioning ensures that the application server will not fail a write. When a fully provisioned volume approaches capacity, you receive a warning that the disk is nearly full. Thin provisioning Thin provisioning reserves less space on the SAN than is presented to application servers. The LeftHand OS software allocates space as needed when data is written to the volume.
Data protection level
Seven data protection levels are available, depending upon the number of available storage systems in the cluster.
Table 44 Setting a data protection level for a volume
With this number of available storage systems in the cluster    Select any of these data protection levels
1    • Network RAID-0 (None): One copy of data in the cluster.
2    • Network RAID-0 (None): One copy of data in the cluster.
     • Network RAID-10 (2-Way Mirror): Two copies of data in the cluster.
How data protection levels work The system calculates the actual amount of storage resources needed for all data protection levels. When you choose Network RAID-10, Network RAID-10+1, or Network RAID-10+2, data is striped and mirrored across either two, three, or four adjacent storage systems in the cluster. When you choose Network RAID-5 or Network RAID-6, the layout of the data stripe, including parity, depends on both the Network RAID mode and cluster size.
Best applications for Network RAID-10+1 are those that require data availability even if two storage systems in a cluster become unavailable. Figure 63 (page 148) illustrates the write patterns on a cluster with four storage systems configured for Network RAID-10+1. Figure 63 Write patterns in Network RAID-10+1 (3-Way Mirror) Network RAID-10+2 (4-Way Mirror) Network RAID-10+2 data is striped and mirrored across four or more storage systems.
Figure 65 (page 149) illustrates the write patterns on a cluster with four storage systems configured for Network RAID-5. Figure 65 Write patterns and parity in Network RAID-5 (Single Parity) 1. Parity for data blocks A, B, C 2. Parity for data blocks D, E, F 3. Parity for data blocks G, H, I 4. Parity for data blocks J, K, L Network RAID-6 (Dual Parity) Network RAID-6 divides the data into stripes and adds parity.
Figure 66 Write patterns and parity in Network RAID-6 (Dual Parity) 1. P1 is parity for data blocks A, B, C, D 2. P2 is parity for data blocks E, F, G, H 3. P3 is parity for data blocks I, J, K, L 4. P4 is parity for data blocks M, N, O, P Provisioning snapshots Snapshots provide a copy of a volume for use with backup and other applications. You create snapshots from a volume on the cluster. Snapshots are always thin provisioned.
Plan how you intend to use snapshots, and the schedule and retention policy for schedules to snapshot a volume. Snapshots record changes in data on the volume, so calculating the rate of changed data in the client applications is important for planning schedules to snapshot a volume. NOTE: Volume size, provisioning, and using snapshots should be planned together. If you intend to use snapshots, review “Using snapshots” (page 165).
Figure 67 Cluster tab view Cluster use summary The Use Summary window presents information about the storage space available in the cluster. Figure 68 Reviewing the Use Summary tab In the Use Summary window, the Storage Space section lists the space available on the storage systems in the cluster. Saved space lists the space saved in the cluster by using thin provisioning and the SmartClone feature.
Table 45 Information on the Use Summary tab (continued)
…snapshots are created, or as thinly provisioned volumes grow.
Saved Space
• Thin Provisioning: The space saved by thin provisioning volumes. This space is calculated by the system.
• SmartClone Feature: Space saved by using SmartClone volumes is calculated using the amount of data in the clone point and any snapshots below the clone point. Only as data is added to an individual SmartClone volume does it consume space on the SAN.
Table 46 Information on the Volume Use tab (continued)
…to see the space saved number decrease as data on the volume increases.
• Full provisioning allocates the full amount of space for the size of the volume. Reclaimable space is the amount of space that you can get back if this fully provisioned volume is changed to thinly provisioned.
Consumed Space: Amount of space used by actual data volumes or snapshots.
Table 47 Information on the System Use tab
Name: Host name of the storage system.
Raw space: Total amount of disk capacity on the storage system. Note: Storage systems with greater capacity will only operate to the capacity of the lowest capacity storage system in the cluster.
RAID configuration: RAID level configured on the storage system.
Usable space: Space available for storage after RAID has been configured.
However, the file system does not inform the block device underneath (the LeftHand OS volume) that there is freed-up space. In fact, no mechanism exists to transmit that information. There is no SCSI command which says “Block 198646 can be safely forgotten.” At the block device level, there are only reads and writes. So, to ensure that our iSCSI block devices work correctly with file systems, any time a block is written to, that block is forever marked as allocated.
Changing configuration characteristics to manage space Options for managing space on the cluster include • Changing snapshot retention—retaining fewer snapshots requires less space • Changing schedules to snapshot a volume—taking snapshots less frequently requires less space • Deleting volumes or moving them to a different cluster NOTE: Deleting files on a file system does not free up space on the SAN volume. For more information, see “Block systems and file systems” (page 155).
12 Using volumes A volume is a logical entity that is made up of storage on one or more storage systems. It can be used as raw data storage or it can be formatted with a file system and used by a host or file server. Create volumes on clusters that contain one or more storage systems.
Types of volumes • Primary volumes are volumes used for data storage. • Remote volumes are used as targets for Remote Copy for business continuance, backup and recovery, and data mining/migration configurations. See the HP StoreVirtual Storage Remote Copy User Guide for detailed information about remote volumes. • A SmartClone volume is a type of volume that is created from an existing volume or snapshot. SmartClone volumes are described in “SmartClone volumes” (page 183).
Table 49 Characteristics for new volumes (continued)
(Data protection level, continued) The default value = Network RAID-10. For information about the data protection levels, see “Planning data protection” (page 145).
Type (configurable for both primary and remote volumes)
• Primary volumes are used for data storage.
• Remote volumes are used for configuring Remote Copy for business continuance, backup and recovery, or data mining/migration.
4. (Optional) Assign a server to the volume.
5. Click OK.
The LeftHand OS software creates the volume. The volume is selected in the navigation window and the Volume tab view displays the Details tab.
NOTE: The system automatically factors data protection levels into the settings. For example, if you create a fully provisioned 500 GB volume and the data protection level is Network RAID-10 (2–Way Mirror), the system automatically allocates 1000 GB for the volume.
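To see how the allocation scales with the mirroring-based data protection levels, the short sketch below multiplies the volume size by the number of mirrored copies described in “How data protection levels work” (page 147). The shell arithmetic is illustrative only; Network RAID-5 and Network RAID-6 are omitted because their parity overhead depends on the stripe layout.
# Illustrative sketch: raw cluster space allocated for a fully provisioned volume.
VOLUME_GB=500
echo "Network RAID-0 (None):            $(( VOLUME_GB * 1 )) GB"
echo "Network RAID-10 (2-Way Mirror):   $(( VOLUME_GB * 2 )) GB"   # matches the 500 GB -> 1000 GB example above
echo "Network RAID-10+1 (3-Way Mirror): $(( VOLUME_GB * 3 )) GB"
echo "Network RAID-10+2 (4-Way Mirror): $(( VOLUME_GB * 4 )) GB"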
Table 50 Requirements for changing volume characteristics (continued)
Data protection level: The cluster must have sufficient storage systems and unallocated space to support the new data protection level. For example, you just added more storage to a cluster and have more capacity. You decide to change the data protection level for a volume from Network RAID-0 to Network RAID-10 to ensure you have redundancy for your data.
iSCSI sessions and volume migration iSCSI sessions are rebalanced during volume migration. While data is being migrated the volume is still accessible and fully functional. The rebalancing affects systems using the DSM for MPIO differently than systems that are not using the DSM for MPIO. • Using DSM for MPIO—Administrative sessions are rebalanced to the new cluster immediately upon volume migration.
Restrictions on deleting volumes You cannot delete a volume when the volume has a schedule that creates remote copies. You must delete the remote copy schedule first. CAUTION: Typically, you do not want to delete individual volumes that are part of a volume set. For example, you may set up Exchange to use two volumes to support a StorageGroup: one for mailbox data and one for logs. Those two volumes make a volume set. Typically, you want to keep or delete all volumes in a volume set.
13 Using snapshots
A snapshot is a copy of a volume for use with backup and other applications.
Types of snapshots
Snapshots are one of the following types:
• Regular or point-in-time—A snapshot that is taken at a specific point in time. However, an application writing to that volume may not be quiesced. Thus, data may be in flight or cached, and the actual data on the volume may not be consistent with the application's view of the data.
would run weekly and retain 5 copies. A third schedule would run monthly and keep 4 copies. • File-level restore without tape or backup software • Source volumes for data mining, test and development, and other data use. Best Practice—Use SmartClone volumes. See “SmartClone volumes” (page 183). Planning snapshots When planning to use snapshots, consider their purpose and size.
Table 52 Snapshot characteristics (continued) Snapshot parameter What it means vCenter Server is installed. See the HP StoreVirtual Storage Application Aware Snapshot Manager Deployment Guide for more information about the controlling server IP address. Prerequisites for application-managed snapshots Creating an application-managed snapshot using the LeftHand OS software is the same as creating any other snapshot. However, you must select the Application-Managed Snapshot option in the New Snapshot window.
Creating snapshots Create a snapshot to preserve a version of a volume at a specific point in time. For information about snapshot characteristics, see “Configuring snapshots” (page 166). Creating an application-managed snapshot, with or without volume sets, requires the use of the Application Aware Snapshot Manager. The application-managed snapshot option quiesces Windows and VMware applications on the server before creating the snapshot.
6. (Optional) Edit the Snapshot Name and Description for each snapshot. NOTE: Be sure to leave the Application-Managed Snapshots check box selected. This option maintains the association of the volumes and snapshots and quiesces the application before creating the snapshots. If you clear the check box, the system creates a point-in-time snapshot of each volume listed. 7. Click Create Snapshots to create a snapshot of each volume. The snapshots appear in the CMC.
Table 54 Planning the scheduling for snapshots (continued) Requirement What it means If there is not sufficient room in the cluster for both snapshots, the scheduled snapshot will not be created, and the snapshot schedule will not continue until an existing snapshot is deleted or space is otherwise made available. Plan scheduling and retention policies The minimum recurrence you can set for snapshots is 30 minutes.
volumes. If it is not, select a volume that is aware of all associated volumes, and create the schedule there. Updating schedule for volume sets When you first create the schedule, the system stores information about the volume set as it exists at that time. If you add volumes to or remove volumes from the volume set using the application, you must update the schedule. To update it, you only need to edit the schedule and click OK.
Editing scheduled snapshots You can edit everything in the scheduled snapshot window except the name. If the snapshot is part of a snapshot set, you can also verify that the volumes included in the schedule are the current volumes in the volume set. For more information, see “Scheduling snapshots for volume sets” (page 170). 1. In the navigation window, select the volume for which you want to edit the scheduled snapshot. 2. In the tab window, click the Schedules tab to bring it to the front. 3.
Deleting schedules to snapshot a volume
NOTE: After you delete a snapshot schedule, if you want to delete snapshots created by that schedule, you must do so manually.
1. In the navigation window, select the volume for which you want to delete the snapshot schedule.
2. Click the Schedules tab to bring it to the front.
3. Select the schedule you want to delete.
4. Click Schedule Tasks on the Details tab, and select Delete Schedule.
5. To confirm the deletion, click OK.
3. Configure server access to the snapshot.
4. If you mount a Windows application-managed snapshot as a volume, use diskpart.exe to change the resulting volume's attributes, as described in “Making a Windows application-managed snapshot available” (page 174).
13. Exit diskpart by typing exit. 14. Reboot the server. 15. Verify that the disk is available by launching Windows Logical Disk Manager. You may need to assign a drive letter, but the disk should be online and available for use. 16.
13. Display the volume's attributes by typing att vol. The volume will show that it is hidden, read-only, and shadow copy.
14. Change these attributes by typing att vol clear readonly hidden shadowcopy.
15. Exit diskpart by typing exit.
16. Reboot the server.
17. Verify that the disk is available by launching Windows Logical Disk Manager. You may need to assign a drive letter, but the disk should be online and available for use.
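For reference, the diskpart commands from the steps above can be collected into a single session. This is a sketch only; the volume number shown (1) is a placeholder that you must replace with the number reported by list volume on your server.
diskpart
rem The volume number below is a placeholder; use the number shown by list volume.
list volume
select volume 1
att vol
att vol clear readonly hidden shadowcopy
exit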
1. In the navigation window, select the snapshot for which you want to delete the temporary space.
2. Right-click, and select Delete Temporary Space.
A warning message opens.
3. Click OK to confirm the deletion.
Rolling back a volume to a snapshot or clone point
Rolling back a volume to a snapshot or a clone point replaces the original volume with a read/write copy of the selected snapshot.
1. Log in to the management group that contains the volume that you want to roll back.
2. In the navigation window, select the snapshot to which you want to roll back.
Review the snapshot Details tab to ensure you have selected the correct snapshot.
3. Click Snapshot Tasks on the Details tab, and select Roll Back Volume.
A warning message opens that illustrates the possible consequences of performing a rollback, including
• Existing iSCSI sessions present a risk of data inconsistencies.
1. Click New SmartClone Volume.
2. Enter a name, and configure the additional settings.
For more information about characteristics of SmartClone volumes, see “Defining SmartClone volume characteristics” (page 186).
3. Click OK when you have finished setting up the SmartClone volume and updated the table.
The new volume appears in the navigation window, with the snapshot now a designated clone point for both volumes.
4. Assign a server, and configure hosts to access the new volume, if desired.
Cancel the rollback operation If you need to log off iSCSI sessions, stop application servers, or other actions, cancel the operation, perform the necessary tasks, and then do the rollback. 1. Click Cancel. 2. Perform necessary actions. 3. Start the rollback again. Deleting a snapshot When you delete a snapshot, the data necessary to maintain volume consistency are moved up to the next snapshot or to the volume (if it is a primary volume), and the snapshot is removed from the navigation window.
Troubleshooting snapshots
Table 56 Troubleshooting snapshot issues
Issue: Snapshots fail with error “Cannot create a quiesced snapshot because the snapshot operation exceeded the time limit for holding off I/O in the frozen virtual machine.”
Description: When taking managed snapshots via the CMC or the CLI on a volume that contains a large number of virtual machines, some virtual machines may fail due to a failure to quiesce. The cause could be either the VMware Tools sync driver or MS VSS.
Table 56 Troubleshooting snapshot issues (continued) Issue Description An error occurs when an application-managed snapshot is created during NIC failover on an application server. If an application-managed snapshot is created while a NIC failover is in progress on an application server, the following error may display in the Windows event log: App-Managed SS Failed: Could not get list of volumes on server Wait until the NIC failover has completed.
14 SmartClone volumes SmartClone volumes are space-efficient copies of existing volumes or snapshots. They appear as multiple volumes that share a common snapshot, called a clone point, and they share this snapshot data on the SAN. SmartClone volumes can be used to duplicate configurations or environments for widespread use, quickly and without consuming disk space for duplicated data. Use the SmartClone process to create up to 25 volumes in a single operation.
Table 57 Terms used for SmartClone features (continued) Term Definition (page 184), the snapshots Volume_1_SS_1 and Volume_1_SS_2 are shared snapshots. Map view Tab that displays the relationships between clone points and SmartClone volumes. See the map view in Figure 86 (page 196) and Figure 87 (page 197). In Figure 74 (page 184) you can see on the left a regular volume with three snapshots and on the right, a regular volume with one SmartClone volume, one clone point, and two shared snapshots.
Safely use production data for test, development, and data mining Use SmartClone volumes to safely work with your production environment in a test and development environment, before going live with new applications or upgrades to current applications. Or, clone copies of your production data for data mining and analysis. Test and development Using the SmartClone process, you can instantly clone copies of your production LUNs and mount them in another environment.
Naming convention for SmartClone volumes A well-planned naming convention helps when you have many SmartClone volumes. Plan the naming ahead of time, since you cannot change volume or snapshot names after they have been created. You can design a custom naming convention when you create SmartClone volumes. Naming and multiple identical disks in a server Mounting multiple identical disks to servers typically requires that servers write new disk signatures to them.
Table 58 Characteristics for new SmartClone volumes (continued)
…more information, see “Assigning iSCSI server connections access to volumes” (page 211).
Permission: Type of access to the volume: Read, Read/Write, None
Naming SmartClone volumes
Because you may create dozens or even hundreds of SmartClone volumes, you need to plan the naming convention for them.
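As an aside, if you script volume creation, you can generate the numbered names from a base name up front and review them before creating anything. The sketch below is illustrative only; the base name and count are arbitrary examples borrowed from the C#class_n naming used later in this chapter.
# Illustrative only: preview numbered SmartClone names built from a base name.
BASE_NAME="C#class_"
COUNT=10
for i in $(seq 1 $COUNT); do
    echo "${BASE_NAME}${i}"
done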
Figure 77 Rename SmartClone volume from base name 1. Rename SmartClone volume in list Shared versus individual characteristics Characteristics for SmartClone volumes are the same as for regular volumes. However, certain characteristics are shared among all the SmartClone volumes and snapshots created from a common clone point.
Figure 78 Programming cluster with SmartClone volumes, clone point, and the source volume 1. Source volume 2. Clone point 3. SmartClone volumes (5) In this example, you edit the SmartClone volume, and on the Advanced tab you change the cluster to SysAdm. The confirmation message lists all the volumes and snapshots that will change clusters as a result of changing the edited volume.
Figure 80 SysAdm cluster now has the SmartClone volumes, clone point, and the source volume Table 59 (page 190) shows the shared and individual characteristics of SmartClone volumes. Note that if you change the cluster or the data protection level of one SmartClone volume, the cluster and data protection level of all the related volumes and snapshots will change.
Figure 81 Navigation window with clone point 1. Original volume 2. Clone point 3. SmartClone volume In Figure 81 (page 191), the original volume is “C#.” • Creating a SmartClone volume of C# first creates a snapshot, C#_SCsnap. • After the snapshot is created, you create at least one SmartClone volume, C#class_1.
Figure 82 Clone point appears under each SmartClone volume 1. Clone point appears multiple times. Note that it is exactly the same in each spot NOTE: Remember that a clone point only takes up space on the SAN once. Shared snapshot Shared snapshots occur when a clone point is created from a newer snapshot that has older snapshots below it in the tree. They are designated in the navigation window with the icon shown here. Figure 83 Navigation window with shared snapshots 1. Original volume 2.
In Figure 83 (page 192), the original volume is C#. Three snapshots were created from C#: • C#_snap1 • C#_snap2 • C#_SCsnap Then a SmartClone volume was created from the latest snapshot, C#_SCsnap. That volume has a base name of C#_class. The older two snapshots, C#_snap1 and C#_snap2, become shared snapshots, because the SmartClone volume depends on the shared data in both those snapshots.
Figure 84 Setting characteristics for SmartClone volumes 1. Set characteristics for multiples here 2. Edit individual clones here For details about the characteristics of SmartClone volumes, see “Defining SmartClone volume characteristics” (page 186). 1. Log in to the management group in which you want to create a SmartClone volume. 2. Select the volume or snapshot from which to create a SmartClone volume. • From the main menu you can select Tasks→Volume→New SmartClone or Tasks→Snapshot→New SmartClone.
8. If you want to modify any individual characteristic, do it in the list before you click OK to create the SmartClone volumes. For example, you might want to change the assigned server of some of the SmartClone volumes. In the list you can change individual volumes’ server assignments. 9. Click OK to create the volumes. The new SmartClone volumes appear in the navigation window under the volume folder. Figure 85 New SmartClone volumes in Navigation window 1. Clone point 2.
Figure 86 Viewing SmartClone volumes and snapshots as a tree in the Map View Using views The default view is the tree layout, displayed in Figure 86 (page 196). The tree layout is the most effective view for smaller, more complex hierarchies with multiple clone points, such as clones of clones, or shared snapshots. You may also display the Map view in the organic layout.
Figure 87 Viewing the organic layout of SmartClone volumes and related snapshots in the Map View Viewing clone points, volumes, and snapshots The navigation window view of SmartClone volumes, clone points, and snapshots includes highlighting that shows the relationship between related items. For example, in Figure 88 (page 198), the clone point is selected in the tree. The clone point supports the SmartClone volumes, so it is displayed under those volumes.
Figure 88 Highlighting all related clone points in navigation window 1. Selected clone point 2. Clone point repeated under SmartClone volumes Editing SmartClone volumes Use the Edit Volume window to change the characteristics of a SmartClone volume. Table 62 Requirements for changing SmartClone volume characteristics Item Shared or Individual Requirements for Changing Description Individual May be up to 127 characters. Size Individual Sets available space on cluster.
Table 62 Requirements for changing SmartClone volume characteristics (continued) Item Shared or Individual Requirements for Changing Type Individual Determines whether the volume is primary or remote. Provisioning Individual Determines whether the volume is fully provisioned or thinly provisioned. To edit the SmartClone volumes 1. 2. In the navigation window, select the SmartClone volume for which you want to make changes. Click Volume Tasks, and select Edit Volume.
Deleting multiple SmartClone volumes
Delete multiple SmartClone volumes in a single operation from the Volume and Snapshots node of the cluster. First you must stop any application servers that are using the volumes, and log off any iSCSI sessions.
1. Select the Volumes and Snapshots node to display the list of SmartClone volumes in the cluster.
Figure 90 List of SmartClone volumes in cluster
2. Use Shift+Click to select the SmartClone volumes to delete.
3. Right-click, and select Delete Volumes.
15 Working with scripting
The HP StoreVirtual LeftHand OS Command Line Interface (CLI) is built upon the LeftHand OS API. Use the CLI to develop automation and scripting and perform storage management. Install the CLI from the HP StoreVirtual Management Software DVD or download the software from http://www.hp.com/go/StoreVirtualDownloads
Documentation
You can also download sample scripts that illustrate common uses for the CLI.
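As a rough illustration of the kind of automation the CLI enables, the commands below create a volume, take a snapshot of it, and then query the volume. They are a sketch only: the command and parameter names follow the key=value style used by the HP CLIQ command line, but the exact names, the management group IP address, credentials, and object names shown here are assumptions that you should verify against the CLI documentation and sample scripts before use.
# Sketch only; verify command and parameter names against the CLI documentation.
cliq createVolume volumeName=Vol_1 clusterName=Cluster_1 size=10GB login=10.0.1.3 userName=admin passWord=secret
cliq createSnapshot volumeName=Vol_1 snapshotName=Vol_1_SS_1 login=10.0.1.3 userName=admin passWord=secret
cliq getVolumeInfo volumeName=Vol_1 login=10.0.1.3 userName=admin passWord=secret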
16 Controlling server access to volumes Application servers (servers), also called clients or hosts, access storage volumes on HP StoreVirtual Storage using either Fibre Channel or iSCSI connectivity. You set up each server that needs to connect to volumes in a management group in the CMC. We refer to this setup as a “server connection.
Planning server connections to management groups Add each server connection that needs access to a volume to the management group containing the volume. After you add a server connection to a management group, you can assign the server connection to one or more volumes or snapshots.
Characteristics of iSCSI server connections
For detailed information about using iSCSI with HP StoreVirtual Storage, including load balancing and CHAP authentication, see “HP StoreVirtual Storage using iSCSI and Fibre Channel” (page 238).
Table 65 Characteristics of iSCSI server connections
Name: Name of the server that is displayed in the CMC. The server name is case sensitive and cannot be changed after the server is created.
Table 66 Entering CHAP information in a new server
For this CHAP mode, complete these fields:
1-way CHAP
• CHAP name
• Target secret—minimum of 12 characters
2-way CHAP
• CHAP name
• Target secret—minimum of 12 characters
• Initiator secret—minimum of 12 characters; must be alphanumeric
11. Click OK.
The iSCSI server connection appears in the management group in the navigation window. You can now assign this server connection to volumes, giving the server access to the volumes.
Completing the iSCSI Initiator and disk setup After you have assigned a server connection to one or more volumes, you must configure the appropriate iSCSI settings on the server. For information about iSCSI, see “HP StoreVirtual Storage using iSCSI and Fibre Channel” (page 238). Persistent targets or favorite targets After you discover targets in the iSCSI initiator, you can connect to the volumes. When you connect, select Add this connection to the list of Favorite Targets.
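How you discover and log in to targets depends on the initiator. As a hedged illustration, on a Linux host running the open-iscsi initiator the equivalent of discovering targets, logging in, and making the connection persistent across reboots might look like the following; the VIP address and target IQN shown are placeholders to replace with your own values.
# Placeholders: replace 10.0.1.50 with the cluster VIP and the IQN with your volume's target name.
iscsiadm -m discovery -t sendtargets -p 10.0.1.50:3260
iscsiadm -m node -T iqn.2003-10.com.lefthandnetworks:mgmt-group:71:vol-1 -p 10.0.1.50:3260 --login
# Equivalent of a "favorite target": reconnect automatically at startup.
iscsiadm -m node -T iqn.2003-10.com.lefthandnetworks:mgmt-group:71:vol-1 -p 10.0.1.50:3260 --op update -n node.startup -v automatic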
NOTE: If you want to leave iSCSI access allowed, you must add the initiator node name to the iSCSI tab. 4. 5. 6. 7. Click the Fibre Channel tab. Enter a name and optional description for the server connection. If you are taking application-managed snapshots, enter the Controlling Server IP Address. Select the port names to assign to the server from the Unassigned Initiator WWPN list. NOTE: Use the ports from the Fibre Channel host initiator. You should only use the ports from the same initiator. 8.
4. Change the appropriate information.
5. Click OK when you are finished.
6. If you have changed the Initiator WWPN assignments, ensure that you update the LUN assignments as well.
Deleting a Fibre Channel server connection
Deleting a Fibre Channel server connection stops access to volumes by servers using that server connection. Access to the same volume by other servers continues.
1. In the navigation window, select the Fibre Channel server connection you want to delete.
2. Click the Details tab.
• For iSCSI: same load balancing setting for all servers
• For Fibre Channel: one volume assigned to multiple servers in a cluster should have the same LUN ID for each server
Creating a server cluster
1. In the navigation window, select the Servers category.
2. Right-click and select New Server Cluster.
3. Enter a name and description (optional) for the server cluster.
4. Do one of the following:
• Click Add Server and select the server from the list of available servers that opens.
Viewing the relationship between storage systems, volumes, and servers After you create a server cluster and connect volumes, use the Map View tab for viewing the relationships between systems, volumes and servers. For more information on using the map view tools, see “Using the display tools” (page 17).
Assigning iSCSI server connections access to volumes After you add an iSCSI server connection to your management group, you can assign one or more volumes or snapshots to the server connection, giving the server access to those volumes or snapshots.
4. From the Permission list, select the permission the server should have.
5. Click OK.
You can now connect to the volume from the server’s iSCSI initiator. See “Completing the iSCSI Initiator and disk setup” (page 206).
Assigning volumes to Fibre Channel servers
After configuring the Fibre Channel servers in the CMC, you assign LUNs to the servers.
Assigning a boot volume to a Fibre Channel server Assign a boot volume to any Fibre Channel server connection. 1. In the navigation window, right-click the server connection you want to assign to a boot volume. 2. Select Assign and Unassign Boot Volume. 3. Select Assign as Boot Volume for the volume. 4. Set the LUN number. 5. Click OK.
17 Monitoring performance The Performance Monitor provides performance statistics for iSCSI and storage system I/Os to help you and HP support and engineering staff understand the load that the SAN is servicing. The Performance Monitor presents real-time performance data in both tabular and graphical form as an integrated feature in the CMC. The CMC can also log the data for short periods of time (hours or days) to get a longer view of activity.
Generally, the Performance Monitor can help you determine: • Current SAN activities • Workload characterization • Fault isolation Current SAN activities example This example shows that the Denver cluster is handling an average of more than 747 IOPS with an average throughput of more than 6 million bytes per second and an average queue depth of 31.76.
Figure 95 Example showing fault isolation What can I learn about my volumes? If you have questions such as these about your volumes, the Performance Monitor can help: • Which volumes are accessed the most? • What is the load being generated on a specific volume? The Performance Monitor can let you see the following: • Most active volumes • Activity generated by a specific server Most active volumes examples This example shows two volumes (DB1 and Log1) and compares their total IOPS.
Figure 97 Example showing throughput of two volumes Activity generated by a specific server example This example shows the total IOPS and throughput generated by the server (ExchServer-1) on two volumes.
Figure 99 Example showing network utilization of three storage systems Load comparison of two clusters example This example illustrates the total IOPS, throughput, and queue depth of two different clusters (Denver and Boulder), letting you compare the usage of those clusters. You can also monitor one cluster in a separate window while doing other tasks in the CMC.
Figure 101 Example comparing two volumes Accessing and understanding the Performance Monitor window The Performance Monitor is available as a tree system below each cluster. To display the Performance Monitor window: 1. In the navigation window, log in to the management group. 2. Select the Performance Monitor system for the cluster you want. The Performance Monitor window opens. By default, it displays the cluster total IOPS, cluster total throughput, and cluster total queue depth.
For more information about the performance monitor window, see the following: • “Performance Monitor toolbar” (page 220) • “Performance monitor graph” (page 220) • “Performance monitor table” (page 221) Performance Monitor toolbar The toolbar lets you change some settings and export data. Figure 103 Performance Monitor toolbar Button or Status Definition 1. Performance Monitor status • Normal—Performance monitoring for the cluster is OK.
Figure 104 Performance Monitor graph The graph shows the last 100 data samples and updates the samples based on the sample interval setting. The vertical axis uses a scale of 0 to 100. Graph data is automatically adjusted to fit the scale. For example, if a statistic value was larger than 100, say 4,000.0, the system would scale it down to 40.0 using a scaling factor of 0.01. If the statistic value is smaller than 10.0, for example 7.5, the system would scale it up to 75 using a scaling factor of 10.
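Put differently, the plotted value is simply the sample value multiplied by the scaling factor shown in the Scale column; the values in the table are not scaled. The two lines below reproduce the examples above and are illustrative only, not part of the CMC.
# Illustrative only: plotted value = sample value x scaling factor.
echo "4000.0 * 0.01 = $(awk 'BEGIN { print 4000.0 * 0.01 }')"   # plotted as 40 on the 0-100 axis
echo "   7.5 * 10   = $(awk 'BEGIN { print 7.5 * 10 }')"        # plotted as 75 on the 0-100 axis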
Table 69 Performance Monitor table columns (continued)
Units: Unit of measure for the statistic.
Value: Current sample value for the statistic.
Minimum: Lowest recorded sample value of the last 100 samples.
Maximum: Highest recorded sample value of the last 100 samples.
Average: Average of the last 100 recorded sample values.
Scale: Scaling factor used to fit the data on the graph’s 0 to 100 scale. Only the line on the graph is scaled; the values in the table are not scaled.
Table 70 Performance Monitor statistics
The following statistics are available for the cluster, for volumes or snapshots, and for each storage system (NSM):
• IOPS Reads: Average read requests per second for the sample interval.
• IOPS Writes: Average write requests per second for the sample interval.
• IOPS Total: Average read+write requests per second for the sample interval.
• Throughput Reads: Average read bytes per second for the sample interval.
• Throughput Writes: Average write bytes per second for the sample interval.
Table 70 Performance Monitor statistics (continued)
The following statistics are reported per storage system (NSM):
• …system for the sample interval.
• Memory Utilization: Percent of total memory used on this storage system for the sample interval.
• Network Utilization: Percent of bidirectional network capacity used on this network interface on this storage system for the sample interval.
• Network Bytes Read: Bytes read from the network for the sample interval.
Access size The size of a read or write operation. As this size increases, throughput usually increases because a disk access consists of a seek and a data transfer. With more data to transfer, the relative cost of the seek decreases. Some applications allow tuning the size of read and write buffers, but there are practical limits to this. Access pattern Disk accesses can be sequential or random.
3. Click the add statistics button on the toolbar.
Figure 107 Add Statistics window
4. From the Select Object list, select the cluster, volumes, and storage systems you want to monitor. Use the CTRL key to select multiple objects from the list.
5. From the Select Statistics options, select the option you want.
• Add All—Adds all available statistics for each selected object.
• Add—Lets you add individual statistics from the list. The list is populated with the statistics that relate to the selected objects.
Removing and clearing statistics
Remove or clear statistics in any of the following ways:
• Remove one or more statistics from the table and graph
• Clear the sample data, but retain the statistics in the table
• Clear the graph display, but retain the statistics in the table
• Reset to the default statistics
Removing a statistic
You can remove one or more statistics from the table and graph.
1. In the navigation window, log in to the management group.
1. From the Performance Monitor window, click the pause button to pause the monitoring.
All data remain as they were when you paused.
2. To restart the monitoring, click the resume button.
Data updates when the next sample interval elapses. The graph will have a gap in the timeline.
Changing the scaling factor The vertical axis uses a scale of 0 to 100. Graph data is automatically adjusted to fit the scale. For example, if a statistic value was larger than 100, say 4,000.0, the system would scale it down to 40.0 using a scaling factor of 0.01. If the statistic value is smaller than 10.0, for example 7.5, the system would scale it up to 75 using a scaling factor of 10. The Scale column of the statistics table shows the current scaling factor.
10. Click OK.
The File Size field displays an estimated file size, based on the sample interval, duration, and selected statistics.
11. When the export information is set the way you want it, click OK to start the export.
The export progress appears in the Performance Monitor window, based on the duration and elapsed time.
To pause the export, click the pause button, and then click the resume button to continue the export.
To stop the export, click the stop button. Data already exported is saved in the CSV file.
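Once the export completes, the CSV file can be post-processed with ordinary tools. The one-liner below averages the values in the second column as an example; the column layout of the exported file is not documented here, so treat the column number, the skipped header row, and the file name as assumptions to adjust after inspecting your own export.
# Assumes a one-line header and that column 2 holds the statistic of interest; adjust as needed.
awk -F, 'NR > 1 { sum += $2; n++ } END { if (n) print "average:", sum / n }' performance_export.csv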
18 Registering advanced features Advanced features expand the capabilities of the LeftHand OS software and are enabled by licensing the storage systems through the HP License Key Delivery Service website, using the license entitlement certificate that is packaged with each storage system. However, you can use the advanced features immediately by agreeing to enter an evaluation period when you begin using the LeftHand OS software for clustered storage.
Identifying licensing status You can check the status of licensing on individual advanced features by the icons displayed. The violation icon appears throughout the evaluation period. Figure 108 Identifying the license status for advanced features Backing out of Remote Copy evaluation If you decide not to use Remote Copy and you have not obtained license keys by the end of the evaluation period, you must delete any remote volumes and snapshots you have configured.
5. Read the text, and select the box to enable the use of scripts during a license evaluation period.
6. Click OK.
Turn off scripting evaluation
Turn off the scripting evaluation period when you take either one of these actions:
• You obtain license keys for the feature you were evaluating.
• You complete the evaluation and decide not to license any advanced features.
Turning off the scripting evaluation ensures that no scripts continue to push the evaluation clock.
Submitting storage system feature keys
1. In the navigation window, select the storage system from the Available Systems pool for which you want to register advanced features.
2. Select the Feature Registration tab.
3. Select the Feature Key.
4. Right-click, and select Copy.
5. Use Ctrl+V to paste the feature key into a text editing program, such as Notepad.
6. Register and generate the license key at the Webware website: https://webware.hp.
The Registration tab displays the following information: • The license status of all the advanced features, including the progress of the evaluation period and which advanced features are in use and not licensed • Version information about software components of the operating system • Customer information (optional) Submitting storage system feature keys Submit the feature keys of all the storage systems in the management group. 1.
NOTE: Record the host name or IP address of the storage system along with the feature key. This record will make it easier to add the license key to the correct storage system when you receive it. Entering license keys When you receive the license keys, add them to the storage systems in the Feature Registration window. 1. In the navigation window, select the management group. 2. Select the Registration tab. 3. Click Registration Tasks, and select Feature Registration from the menu. 4.
Make a customer information file for each management group in your SAN. • Create or edit your customer profile. • Save the customer profile to a computer that is not part of your SAN. Editing your customer information file Occasionally, you may want to change some of the information in your customer profile. For example, if your company moves, or contact information changes. 1. In the navigation window, select a management group. 2. Click the Registration tab. 3.
19 HP StoreVirtual Storage using iSCSI and Fibre Channel iSCSI and HP StoreVirtual Storage The LeftHand OS software uses the iSCSI protocol to let servers access volumes. For fault tolerance and improved performance, use a VIP and iSCSI load balancing when configuring server access to volumes. Number of iSCSI sessions For information about the recommended maximum number of iSCSI sessions that can be created in a management group, see “Configuration Summary overview” (page 110).
Requirements • Cluster configured with a virtual IP address. See “VIPs” (page 238). • A compliant iSCSI initiator that supports iSCSI Login-Redirect and has passed HP's test criteria for iSCSI failover in a load balanced configuration. To determine which iSCSI initiators are compliant, view the HP StoreVirtual 4000 Storage Compatibility Matrix at http://www.hp.com/ go/StoreVirtualcompatibility. If your initiator is not listed, do not enable load balancing.
Table 74 Requirements for configuring CHAP
CHAP not required
• What to configure for the server in the LeftHand OS software: Initiator node name only
• What to configure in the iSCSI initiator: No configuration requirements
1-way CHAP
• What to configure for the server in the LeftHand OS software: CHAP name*, Target secret
• What to configure in the iSCSI initiator: Enter the target secret (12-character minimum) when logging on to available target.
2-way CHAP
• What to configure for the server in the LeftHand OS software: CHAP name*, Target secret, Initiator secret
• What to configure in the iSCSI initiator: Enter the initiator secret (12-character minimum); enter the target secret (12-character minimum).
Figure 113 Viewing the initiator to copy the initiator node name Figure 114 (page 241) illustrates the configuration for a single host authentication with 1-way CHAP required. Figure 114 Configuring iSCSI for a single host with CHAP Figure 115 (page 242) illustrates the configuration for a single host authentication with 2-way CHAP required.
Figure 115 Adding an initiator secret for 2-way CHAP CAUTION: Without the use of shared storage access (host clustering or clustered file system) technology, allowing more than one iSCSI application server to connect to a volume concurrently without cluster-aware applications and/or file systems in read/write mode could result in data corruption. NOTE: If you enable CHAP on a server, it will apply to all volumes for that server.
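The figures above show the Microsoft iSCSI initiator. As a hedged illustration of the same settings on a Linux host running the open-iscsi initiator, the commands below configure 1-way CHAP for one target; the target IQN, CHAP name, and secrets are placeholders, and the mutual (2-way) secret lines are commented out. Verify the parameter names against your initiator's documentation.
# Placeholders throughout; secrets must be at least 12 characters.
TARGET=iqn.2003-10.com.lefthandnetworks:mgmt-group:71:vol-1
iscsiadm -m node -T "$TARGET" --op update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T "$TARGET" --op update -n node.session.auth.username -v chapuser
iscsiadm -m node -T "$TARGET" --op update -n node.session.auth.password -v targetsecret12
# For 2-way CHAP, also set the initiator-side (mutual) secret:
# iscsiadm -m node -T "$TARGET" --op update -n node.session.auth.username_in -v chapuser
# iscsiadm -m node -T "$TARGET" --op update -n node.session.auth.password_in -v initiatorsecret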
systems is reported differently and zoning is uniquely handled, as described in “Zoning” (page 243). For all other Fibre Channel configuration standards, see the HP SAN Design Reference Guide. Creating Fibre Channel connectivity Two or more storage systems enabled for Fibre Channel must be added to a management group to use Fibre Channel connectivity. A 10 GbE network connection is required.
20 Using the Configuration Interface The Configuration Interface is the command line interface that uses a direct connection with the storage system. You may need to access the Configuration Interface if all network connections to the storage system are disabled. Use the Configuration Interface to perform the following tasks.
$ xterm
3. In the xterm window, start minicom as follows:
$ minicom -c on -l NSM
Opening the Configuration Interface from the terminal emulation session
1. Press Enter when the terminal emulation session is established.
2. Enter start, and press Enter at the log in prompt.
3. When the session is connected to the storage system, the Configuration Interface window opens.
Table 77 Identifying Ethernet interfaces on the storage system (continued)
Ethernet interfaces: Motherboard:Port1, Motherboard:Port2
• Where labeled: Configuration Interface. What the label says: Intel Gigabit Ethernet or Broadcom Gigabit Ethernet
• Where labeled: Label on the back of the storage system. What the label says: Eth0, Eth1, or a graphical symbol
Once you have established a connection to the storage system using a terminal emulation program, you can configure an interface connection using the Configuration Interface.
TCP speed and duplex. You can change the speed and duplex of an interface. If you change these settings, you must ensure that both sides of the NIC cable are configured in the same manner. For example, if the storage system is set for Auto/Auto, the switch must be set the same. For more information about TCP speed and duplex settings, see “Managing settings on network interfaces” (page 50). Frame size. The frame size specifies the size of data packets that are transferred over the network.
21 Replacing hardware This chapter describes the disk replacement procedures for cases in which you do not know which disk to replace and/or you must rebuild RAID on the entire storage system. For example, if RAID has gone off unexpectedly, you need HP Support to help determine the cause, and if it is a disk failure, to identify which disk must be replaced. It also describes how to identify and replace the RAID controller in the P4900 G2 storage system.
Verify the storage system is not running a manager Verify that the storage system that needs the disk replacement is not running a manager. 1. Log in to the management group. 2. Select the storage system in the navigation window, and review the Details tab information. If the Storage System Status shows Manager Normal, and the Management Group Manager shows Normal, then a manager is running and needs to be stopped. To stop a manager: 1.
NOTE: If there are Network RAID-0 volumes that are offline, the message shown in Figure 116 (page 250) is displayed. You must either replicate or delete these volumes before you can proceed.
Figure 116 Warning if volumes are Network RAID-0
Right-click the storage system in the navigation window, and select Repair Storage System. A “ghost” image takes the place of the storage system in the cluster, with the IP address serving as a placeholder.
Reconfigure RAID
1. Select the Storage category, and select the RAID Setup tab.
2. Click RAID Setup Tasks, and select Reconfigure RAID.
The RAID Status changes from Off to Normal.
NOTE: If RAID reconfigure reports an error, reboot the storage system, and try reconfiguring the RAID again. If this second attempt is not successful, call HP Support.
Checking the progress of the RAID reconfiguration
Use the Hardware Information report to check the status of the RAID rebuild.
If necessary, ensure that after the repair you have the appropriate configuration of managers. If there was a manager running on the storage system before you began the repair process, you may start a manager on the repaired storage system as necessary to finish with the correct number of managers in the management group. If you added a virtual manager to the management group, you must first delete the virtual manager before you can start a regular manager. 1.
Controlling server access Use the Local Bandwidth Priority setting to control server access to data during the rebuild process: • When the data is being rebuilt, the servers that are accessing the data on the volumes might experience slowness. Reduce the Local Bandwidth Priority to half of its current value for immediate results. • Alternatively, if server access performance is not a concern, raise the Local Bandwidth Priority to increase the data rebuild speed. To change local bandwidth priority: 1.
Verifying component failure Look at the system health LED for the controller cards (2, Figure 118 (page 254)) to determine if there is a problem. Figure 118 Storage server LEDs 1. Front UID/LED switch 2. System health LED 3. NIC 1 activity LED 4. NIC 2 activity LED 5.
Figure 119 Card 1 location Figure 120 Card 2 location A cache module is attached to each RAID controller and each cache module is connected to a battery. The unit is called a backup battery with cache (BBWC). BBWC 1 connects to Card 1 and BBWC 2 connects to Card 2. Removing the RAID controller 1. Power off the storage system: a. Use the CMC to power off the system controller as described in “Powering off the storage system” (page 24). b. Manually power off the disk enclosure. 2. 3.
4. Remove the top cover (Figure 121 (page 256)):
a. Loosen the screw on the top cover with the T-10 wrench.
b. Press the latch on the top cover.
c. Slide the cover toward the rear of the server, and then lift the top cover away from the chassis.
Figure 121 Removing the cover
5. Locate and remove the PCI cage:
6. The cache module is attached to the RAID controller and must be removed before removing the RAID controller. Each cache module is connected to a battery; observe the BBWC status LED (4, Figure 123 (page 257)) on both batteries before removing a cache module: • If the LED is flashing every two seconds, data is trapped in the cache. Reassemble the unit, restore system power, and repeat this procedure. • If the LED is not lit, continue with the next step of removing the RAID controller.
Figure 125 Removing the cache module
9. Remove the RAID controller from its slot.
Installing the RAID controller
IMPORTANT: The replacement RAID controller contains a new cache module. You must remove the new cache module from the replacement controller board, attach the existing cache module to the replacement controller board, and reconnect the cache module to the battery cable.
1. Slide the RAID controller into the slot, aligning the controller with its matching connector.
Figure 127 Installing Card 2
3. Reinstall the PCI cage (Figure 128 (page 259)):
a. Align the PCI cage assembly to the system board expansion slot, and then press it down to ensure full connection to the system board.
b. Tighten the thumbscrews to secure the PCI cage assembly to the system board and secure the screw on the rear panel of the chassis.
Figure 128 Reinstalling the PCI cage
4. Place the cover back on the unit.
22 LeftHand OS TCP and UDP port usage
Table 80 (page 260) lists the TCP and UDP ports that enable communication with LeftHand OS. The “management applications” listed in the Description column include the HP StoreVirtual Centralized Management Console and the scripting interface.
Table 80 TCP/UDP ports used for normal SAN operations with LeftHand OS
• TCP 22 (SSH): Secure Shell access for LeftHand OS Support only. Not required for normal day-to-day operations.
TCP | 13847 | LeftHand OS Internal | Used for Virtual Manager communication.
TCP | 13848 | LeftHand OS Internal | Used for internal data distribution and resynchronization.
TCP | 13849 | iSCSI | iSCSI initiators connect to this port when using the HP StoreVirtual DSM for Microsoft MPIO.
TCP, HTTP/HTTPS | 2003, 5988, 5989 | CIM Server | Used for HTTP requests to the CIM gateway.
TCP | 13840, 13850, 13851 | LeftHand OS Internal | Outgoing from management applications. Incoming to storage systems. Used for management and control.
TCP, UDP | 27491 | Console Discovery | Outgoing from management applications. Incoming to storage systems. Used by management applications for node discovery.
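When a firewall sits between the management applications and the storage systems, it can be useful to confirm that the ports in Table 80 are reachable before troubleshooting further. The following is a minimal sketch and not part of LeftHand OS; the storage system address and the selection of ports are assumptions you would replace with your own values.

# Minimal TCP reachability check for a few of the ports listed in Table 80.
# The address and port selection below are examples only.
import socket

STORAGE_SYSTEM = "10.0.1.25"   # assumed address of a storage system
TCP_PORTS = {
    22: "SSH (support access)",
    5989: "CIM Server (HTTPS)",
    13840: "LeftHand OS management and control",
    13849: "iSCSI (DSM for MPIO)",
}

for port, name in TCP_PORTS.items():
    try:
        # The timeout keeps the check short if a firewall silently drops the connection.
        with socket.create_connection((STORAGE_SYSTEM, port), timeout=3):
            print(f"Port {port} ({name}): reachable")
    except OSError as exc:
        print(f"Port {port} ({name}): NOT reachable ({exc})")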
23 Third-party licenses
The software distributed to you by HP includes certain software packages indicated to be subject to one of the following open source software licenses: GNU General Public License (“GPL”), the GNU Lesser General Public License (“LGPL”), or the BSD License (each, an “OSS Package”).
24 Support and other resources
Contacting HP
For worldwide technical support information, see the HP support website: http://www.hp.
initiate a fast and accurate resolution, based on your product’s service level. Notifications may be sent to your authorized HP Channel Partner for on-site service, if configured and available in your country. The software is available in two variants:
• HP Insight Remote Support Standard: This software supports server and storage devices and is optimized for environments with 1-50 servers.
25 Documentation feedback
HP is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback (docsfeedback@hp.com). Include the document title and part number, version number, or the URL when submitting your feedback.
Glossary
The following glossary provides definitions of terms used in the LeftHand OS software and the HP StoreVirtual Storage.
acting primary volume  The remote volume, when it assumes the role of the primary volume in a failover scenario.
Active-Passive  A type of network bonding which, in the event of a NIC failure, causes the logical interface to use another NIC in the bond until the preferred NIC resumes operation. At that point, data transfer resumes on the preferred NIC.
DSM  Device Specific Module.
DSM for MPIO  The HP StoreVirtual DSM for Microsoft MPIO, a vendor-specific DSM that interfaces with the Microsoft MPIO framework.
failback  After failover, the process by which you restore the primary volume and turn the acting primary back into a remote volume.
failover  The process by which the user transfers operation of the application server over to the remote volume. This can be a manual operation, a scripted operation, or VMware-enabled.
MIB  Management Information Base. A database of managed objects accessed by network management protocols. An SNMP MIB is a set of parameters that an SNMP management station can query or set in the SNMP agent of a network device (for example, a router).
Multi-Site cluster  A cluster of storage that spans multiple sites (up to three).
RAID rebuild rate  The rate at which the RAID configuration rebuilds if a disk is replaced.
RAID status  Condition of RAID on the storage system:
• Normal - RAID is synchronized and running. No action is required.
• Rebuild - A new disk has been inserted in a drive bay and RAID is currently rebuilding. No action is required.
• Degraded - RAID is not functioning properly. Either a disk needs to be replaced or a replacement disk has been inserted in a drive.
site  A user-designated location in which storage systems are installed. Multi-Site SAN configurations have multiple sites with storage systems in each site, and each site has its own subnet. A site can be a logical configuration, such as a subnet within the same data center, department, or application.
SmartClone volume  SmartClone volumes are space-efficient copies of existing volumes or snapshots. They appear as multiple volumes that share a common snapshot, called a clone point.
volume size  The size of the virtual device communicated to the operating system and the applications.
VSS Provider  HP StoreVirtual VSS Provider is the hardware provider that supports the Volume Shadow Copy Service on the HP StoreVirtual Storage.
VSS  Volume Shadow Copy Service.
writable space  See temporary space.
Index Symbols 10 GbE identifying 10 GbE interface names in CMC, 56 1000BASE T interface, 54 4630 powering off the system controller and disk enclosure, correct order, 23 powering on the system controller and disk enclosure, correct order, 23 802.
converting temporary space from, 176 creating, 168, 171 creating for volume sets, 168 creating schedules for volume sets, 170 creating SmartClone volumes from, 179 defined, 165 deleting, 180 making available, 175, 176 requirements for, 167 rolling back from, 178, 179 assigning Fibre Channel servers to volumes and snapshots, 212 assigning iSCSI servers to volumes and snapshots, 211 assigning servers to volumes and snapshots, 211, 213 boot volume, 213 auto performance protection, 141 storage system inoperable
log into management group from one CMC only, 110 minimum configuration for fault tolerance, 147 plugging NICs into same switch for Link Aggregation Dynamic Mode, 56 resetting storage system to factory defaults erases all data and configuration information, 247 stop applications and log off iSCSI sessions before deleting volumes or snapshots, 199 verify that NIC bond works properly, 67 virtual manager requires manual activation, 131 volumes with Network RAID-0 not protected from system failure or reboot, 147
disabled network interface, 72 frame size in Configuration Interface, 246 IP address manually, 55 iSCSI single host, 240 network connection in Configuration Interface, 245 network interface bonds, 66 network interfaces, 55 NIC speed and duplex, 51 RAID, 28 SAN Status Page, 87 split network, 49 storage systems, 15 TCP speed and duplex in Configuration Interface, 246 virtual IP address, 136 virtual manager, 132 connecting to the Configuration Interface, 244 consumed space by volume, 153 contacting HP, 264 con
snapshots, 180 snapshots, and capacity management, 151 volumes, 163 descriptions changing for clusters, 137 changing for volumes, 162 details, viewing for statistics, 226 DHCP using, 55 warnings when using, 55 diagnostics hardware, 100 list of diagnostic tests, 101 viewing reports, 100 disabled network interface, configuring, 72 disabling Fibre Channel, 76 network interfaces, 71 SNMP agent, 97 SNMP traps, 98 disassociating management groups, 117 see also HP StoreVirtual Storage Remote Copy User Guide disast
snapshot schedules, 172 snapshots, 169 SNMP trap recipient, 98 volumes, 161 email setting up for event notification, 94, 95 email, setting up for event notification, 94, 95 enabling NIC flow control, 53 SNMP traps, 97 establishing network interfaces, 54 ESX Server see VMware Ethernet interfaces, 54 see also network interfaces evaluating backing out of Remote Copy, 232 backing out of scripting, 233 Remote Copy, 231 scripting, 232 event notification setting up email for, 94, 95 SNMP, configuring access contro
administrative, 81 default administrative groups, 81 deleting administrative, 82 H hardware diagnostics, 100 list of diagnostic tests, 101 tab window, 100 hardware information report details in, 103 saving to a file, 101 help obtaining, 264 high availability configuring data protection for, 30 example network cabling topology using Active-Passive, 60 example network cabling topology using Adaptive Load Balancing, 65 manager configuration for, 123 network interface bonding, 56 planning data protection for,
LED drive ID LED display, 36 for locating system in rack, 22 for removing drive safely, by storage system model, 45 hardware-specific information for, 28 information about backplane LEDs, 103 license keys, 233 licensing status, 232 lines changing look of in the Performance Monitor, 228 displaying or hiding in the Performance Monitor, 228 highlighting, 228 Link Aggregation Dynamic Mode bond active interface, 62 deleting, 70, 246 during failover, 63 example configurations, 63 guidelines for creating, 66 prefe
best practices, 49 change configuration of, 50 ping IP address, 54 split configurations for, 49 network bond consistency, Best Practice Summary, 115 network bonding Best Practice Summary, 115 network bonding, Best Practice Summary, 115 network flow control consistency, Best Practice Summary, 115 network frame size consistency, Best Practice Summary, 115 network interface bonds, 56 active-passive, 59 Adaptive Load Balancing, 64 best practices, 66 communication after deleting, 70 configuring, 66 creating, 67
overview, 214 pausing, 227 planning for SAN improvements, 217 prerequisites, 214 restarting, 227 statistics, defined, 222 understanding and using, 214 workload characterization example, 215 Performance Monitor graph changing, 228 changing line display, 228 changing the scaling factor for, 229 displaying a line, 228 hiding, 228 hiding a line, 228 showing, 228 Performance Monitor window accessing, 219 graph, 220 parts defined, 219 saving to an image file, 230 table, 221 toolbar, 220 permissions as root for Li
status and data reads and writes, 34 RAID (virtual), devices, 30 RAID and single disk replacement, 46 RAID consistency, Best Practice Summary, 114 RAID controller identifying for replacement, 254 installing replacement, 258 removing, 255 replacing, 253 verify proper operation of replacement , 259 verifying failure, 254 RAID levels defined, 248 raw space in cluster, 154 raw storage, 155 read only volumes, 173 rebooting storage systems, 24 rebuild volume data, 252 rebuilding RAID, 47 rate for RAID, 32 rebuild
S safe to remove status, 45 sample interval, changing for the Performance Monitor, 225 SAN capacity of, 144 comparing the load of two clusters, 218, 224 comparing the load of two volumes, 218 current activity performance example, Performance Monitor, 215 determining if NIC bonding would improve performance, 217 fault isolation example, Performance Monitor, 215 monitoring performance, 214 planning for SAN improvements using Performance Monitor, 217 using Performance Monitor, 214 volumes on the SAN, Perfomanc
creating from application-managed snapshots, 179 definition of, 183 deleting, 199 deleting multiple, 200 editing, 199 examples for using, 184 making application-managed snapshot available after creating, 175, 176 overview, 183 planning, 185 planning naming convention, 186 planning space requirements, 185 requirements for changing, 198 terminology, 183 uses for, 185 viewing with Map View, 196 snapshots adding, 168 application-managed, 165 as opposed to backups, 150 assigning to Fibre Channel servers, 212 ass
upgrading in a cluster, 139 storage pool, 107 storage space and raw space, 155 storage system inoperable, 141 storage system overloaded, 141 storage system status and VSA, 141 storage systems adding first one, 108 adding to existing cluster, 139 adding to management group, 116 and raw space, 154 and space provisioned in cluster, 154 configuring, 15 exchanging in a cluster, 140 identifying HP platform, 21 locating in a rack, 22 not found, 19 powering off, 24 prerequisites for removing from management group,
user adding a group to a user, 80 administrative, 80 changing a user name, 80 deleting administrative, 81 editing, 80 password, 80 utilization see capacity V verifying NIC bond, 68 verifying proper operation of replacement RAID controller, 259 viewing clone points, volumes, and snapshots, 197 disk report, 35 disk setup report, 35 RAID setup report, 29 SmartClone volumes, 195 statistics details, 226 virtual IP address, 238 and Fibre Channel, 136 and iSCSI, 238 configuring for iSCSI for, 136 gateway session
W warning rack stability, 264 warning events defined, 88 warnings all storage systems in a cluster operate at a capacity equal to that of the smallest capacity, 136 changing RAID erases data, 33 check Safe to Remove status, 45 configure network correctly, 67 deleting management group causes data loss, 120 DHCP static IP addresses, 55 DHCP unicast communication, 55 disabling network interface, 71 plugging NICs into same switch for Link Aggregation Dynamic Mode, 56 return repaired system to same place, 143 we