HP IBRIX 9300/9320 Storage Administrator Guide
Abstract
This guide describes tasks related to cluster configuration and monitoring, system upgrade and recovery, hardware component replacement, and troubleshooting for the HP 9300 Storage Gateway and the HP 9320 Storage. It does not document IBRIX file system features or standard Linux administrative tools and commands. For information about configuring and using IBRIX software file system features, see the HP IBRIX 9000 Storage File System User Guide.
© Copyright 2010, 2012 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
1 Product description
This guide provides information about configuring, monitoring, and maintaining HP IBRIX 9300 Storage Gateways and 9320 Storage systems.
IMPORTANT: It is important to keep regular backups of the cluster configuration.
9300 Storage Gateway
The 9300 Storage Gateway is a flexible, scale-out solution that brings gateway file services to HP MSA, EVA, P4000, or third-party arrays or SANs.
IBRIX software is designed to operate with high-performance computing applications that require high I/O bandwidth, high IOPS throughput, and scalable configurations. Some of the key features and benefits are as follows:
• Scalable configuration. You can add servers to scale performance and add storage devices to scale capacity.
• Single namespace. All directories and files are contained in the same namespace.
• Multiple environments. Operates in both the SAN and DAS environments.
2 Getting started
IMPORTANT: Follow these guidelines when using your system:
• Do not modify any parameters of the operating system or kernel, or update any part of the 9320 Storage unless instructed to do so by HP; otherwise, the system could fail to operate properly.
• File serving nodes are tuned for file serving operations. With the exception of supported backup programs, do not run other applications directly on the nodes.
File systems. Set up the following features as needed:
• NFS, SMB (Server Message Block), FTP, or HTTP. Configure the methods you will use to access file system data.
• Quotas. Configure user, group, and directory tree quotas as needed.
• Remote replication. Use this feature to replicate changes in a source file system on one cluster to a target file system on either the same cluster or a second cluster.
• Data retention and validation. Use this feature to manage WORM and retained files.
• Configuration database consistency (ibrix_dbck)
• Shell task management (ibrix_shell)
The following operations can be performed only from the GUI:
• Scheduling recurring data validation scans
• Scheduling recurring software snapshots
• Scheduling recurring block snapshots
Using the GUI
The GUI is a browser-based interface to the Fusion Manager.
System Status
The System Status section lists the number of cluster events that have occurred in the last 24 hours. There are three types of events:
Alerts. Disruptive events that can result in loss of access to file system data. Examples are a segment that is unavailable or a server that cannot be accessed.
Warnings. Potentially disruptive conditions where file system access is not lost, but if the situation is not addressed, it can escalate to an alert condition.
Information. Normal events that change the cluster.
Services
Whether the specified file system services are currently running:
One or more tasks are running.
No tasks are running.
Statistics
Historical performance graphs for the following items:
• Network I/O (MB/s)
• Disk I/O (MB/s)
• CPU usage (%)
• Memory usage (%)
On each graph, the X-axis represents time and the Y-axis represents performance.
NOTE: When you perform an operation on the GUI, a spinning finger is displayed until the operation is complete. However, if you use Windows Remote Desktop to access the GUI, the spinning finger is not displayed. Customizing the GUI For most tables in the GUI, you can specify the columns that you want to display and the sort order of each column. When this feature is available, mousing over a column causes the label to change color and a pointer to appear. Click the pointer to see the available options.
group. These groups are created when IBRIX software is installed. The following entries in the /etc/group file show the default users in these groups:
ibrix-admin:x:501:root,ibrix
ibrix-user:x:502:ibrix,ibrixUser,ibrixuser
You can add other users to these groups as needed, using Linux procedures. For example:
adduser -G ibrix-
When using the adduser command, be sure to include the -G option.
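For instance, a hypothetical invocation that creates a new account directly in the ibrix-user group shown above (alice is a placeholder user name):

# Create user "alice" with ibrix-user as a supplementary group
adduser -G ibrix-user alice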
For more information, see the client GUI online help.
IBRIX software manpages
IBRIX software provides manpages for most of its commands. To view the manpages, set the MANPATH variable to include the path to the manpages and then export it. The manpages are in the $IBRIXHOME/man directory.
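For example, in a bash session (a sketch assuming $IBRIXHOME points at the installation directory):

# Make the IBRIX manpages visible to man, then test with any IBRIX command
MANPATH=$MANPATH:$IBRIXHOME/man
export MANPATH
man ibrix_server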
Port                  Description
9009/tcp
9200/tcp
Between file serving nodes and NFS clients (user network):
2049/tcp, 2049/udp    NFS
111/tcp, 111/udp      RPC
875/tcp, 875/udp      quota
32803/tcp             lockmanager
32769/udp             lockmanager
892/tcp, 892/udp      mount daemon
662/tcp, 662/udp      stat
2020/tcp, 2020/udp    stat outgoing
4000:4003/tcp         reserved for use by a custom application (CMU) and can be disabled if not used
Between file serving nodes and SMB clients (user network):
137/udp
138/udp
139/tcp
445/tcp
9000:9002/tcp         Bet
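To verify from a node that one of the listed services is actually listening on its port, standard Linux tools can be used; a sketch (not an IBRIX command; the NFS port 2049 is shown as an example):

# List listening TCP/UDP sockets and filter for the NFS port
netstat -lntu | grep 2049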
Configuring HP Insight Remote Support on IBRIX 9000 systems
IMPORTANT: In the IBRIX software 6.1 release, the default port for the IBRIX SNMP agent changed from 5061 to 161. This port number cannot be changed.
Prerequisites
The required components for supporting IBRIX systems are preinstalled on the file serving nodes. You must install HP Insight Remote Support on a separate Windows system termed the Central Management Server (CMS):
• HP Systems Insight Manager (HP SIM).
Limitations
Note the following:
• For IBRIX systems, the HP Insight Remote Support implementation is limited to hardware events.
Configuring the IBRIX cluster for Insight Remote Support
To enable 9300/9320 systems for remote support, first register MSA disk arrays and then configure Phone Home settings. All nodes in the cluster should be up when you perform this step.
The time required to enable Phone Home depends on the number of devices in the cluster, with larger clusters requiring more time. To configure Phone Home settings from the CLI, use the following command:
ibrix_phonehome -c -i <IP Address of the CMS> [-z <Software Entitlement Id>] [-r <Read Community>] [-w <Write Community>] [-t <System Contact>] [-n <System Name>] [-o <System Location>]
For example:
ibrix_phonehome -c -i 99.2.4.75 -P US -r public -w private -t Admin -n SYS01.
To configure Entitlements, select a device and click Modify to open the dialog box for that type of device. The following example shows the Server Entitlement dialog box. The customer-entered serial number and product number are used for warranty checks at HP Support. Use the following commands to entitle devices from the CLI. The commands must be run for each device present in the cluster.
Enter the read community string on the Credentials > SNMP tab. This string should match the Phone Home read community string. If the strings are not identical, the Fusion Manager IP might be discovered as “Unknown.” Devices are discovered as described in the following table.
File serving nodes and MSA arrays are associated with the Fusion Manager IP address. In HP SIM, select Fusion Manager and open the Systems tab. Then select Associations to view the devices. You can view all IBRIX devices under Systems by Type > Storage System > Scalable Storage Solutions > All 9000 Systems.
Configuring Insight Remote Support for HP SIM 6.3 and IRS 5.6
Discovering devices in HP SIM
HP Systems Insight Manager (SIM) uses the SNMP protocol to discover and identify IBRIX systems automatically.
The following example shows discovered devices on HP SIM 6.3. File serving nodes are discovered as ProLiant servers.
Configuring device Entitlements
Configure the CMS software to enable remote support for IBRIX systems. For more information, see "Using the Remote Support Setting Tab to Update Your Client and CMS Information” and “Adding Individual Managed Systems” in the HP Insight Remote Support Advanced A.05.50 Operations Guide.
A Modular Storage Array (MSA) unit should be discovered with its IP address.
The devices you entitled should be displayed as green in the ENT column on the Remote Support System List dialog box. If a device is red, verify that the customer-entered serial number and part number are correct and then rediscover the devices.
Testing the Insight Remote Support configuration
To determine whether the traps are working properly, send a generic test trap with the following command, where <CMS IP> is the address of the CMS and <node IP> is the address of the sending file serving node:
snmptrap -v1 -c public <CMS IP> .1.3.6.1.4.1.232 <node IP> 6 11003 1234 .1.3.6.1.2.1.1.5.0 s test
The maximum number of SNMP trap hosts has already been configured
If this error is reported when you configure Phone Home, the maximum number of trapsink IP addresses has already been configured. For MSA devices, the maximum number of trapsink IP addresses is 3. Manually remove a trapsink IP address from the device and then rerun the Phone Home configuration to allow Phone Home to add the CMS IP address as a trapsink IP address.
3 Configuring virtual interfaces for client access
IBRIX software uses a cluster network interface to carry Fusion Manager traffic and traffic between file serving nodes. This network is configured as bond0 when the cluster is installed. To provide failover support for the Fusion Manager, a virtual interface is created for the cluster network interface.
3. To assign the IFNAME a default route for the parent cluster bond and the user VIFs assigned to FSNs for use with SMB/NFS, enter the following ibrix_nic command at the command prompt:
# ibrix_nic -r -n IFNAME -h HOSTNAME -A -R ROUTE
4. Configure backup monitoring, as described in “Configuring backup servers” (page 32).
Creating a bonded VIF
NOTE: The examples in this chapter use the unified network and create a bonded VIF on bond0.
For example:
# ibrix_nic -m -h node1 -A node2/bond0:1
# ibrix_nic -m -h node2 -A node1/bond0:1
# ibrix_nic -m -h node3 -A node4/bond0:1
# ibrix_nic -m -h node4 -A node3/bond0:1
Configuring automated failover
To enable automated failover for your file serving nodes, execute the following command:
ibrix_server -m [-h SERVERNAME]
Example configuration
This example uses two nodes, ib50-81 and ib50-82. These nodes are backups for each other, forming a backup pair.
NOTE: Because the backup NIC cannot be used as a preferred network interface for 9000 clients, add one or more user network interfaces to ensure that HA and client communication work together. Configuring VLAN tagging VLAN capabilities provide hardware support for running multiple logical networks over the same physical networking hardware. To allow multiple packets for different VLANs to traverse the same physical interface, each packet must have a field added that contains the VLAN tag.
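As general background, a tagged sub-interface on a standard Linux system can be created with the 8021q tools; a sketch only (bond0 and VLAN ID 51 are assumed values, and your site's deployment procedure may differ):

# Load the VLAN module and create the tagged interface bond0.51
modprobe 8021q
vconfig add bond0 51
ifconfig bond0.51 10.10.51.10 netmask 255.255.255.0 up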
To determine whether link state monitoring is enabled on an iSCSI interface, run the following command: ibrix_nic -l Next, check the LINKMON column in the output. The value yes means that link state monitoring is enabled; no means that it is not enabled.
4 Configuring failover
This chapter describes how to configure failover for agile management consoles, file serving nodes, network interfaces, and HBAs.
Agile management consoles
The agile Fusion Manager maintains the cluster configuration and provides graphical and command-line user interfaces for managing and monitoring the cluster. The agile Fusion Manager is installed on all file serving nodes when the cluster is installed. The Fusion Manager is active on one node, and is passive on the other nodes.
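To check which role a given node currently holds, the ibrix_fm -i command used elsewhere in this guide reports the mode; for example (output format as shown in the upgrade chapter):

ibrix_fm -i
FusionServer: ib51-102 (active, quorum is running)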
console. This Fusion Manager rebuilds the cluster virtual interface, starts Fusion Manager services locally, transitions into active mode, and takes over Fusion Manager operation. Failover of the active Fusion Manager affects the following features:
• User networks. The virtual interface used by clients will also fail over. Users may notice a brief reconnect while the newly active Fusion Manager takes over management of the virtual interface.
• GUI.
What happens during a failover The following actions occur when a server is failed over to its backup: 1. The Fusion Manager verifies that the backup server is powered on and accessible. 2. The Fusion Manager migrates ownership of the server’s segments to the backup and notifies all servers and 9000 clients about the migration. This is a persistent change. If the server is hosting the active FM, it transitions to another server. 3.
Use the NIC HA Setup dialog box to configure NICs that will be used for data services such as SMB or NFS. You can also designate NIC HA pairs on the server and its backup and enable monitoring of these NICs.
For example, you can create a user VIF that clients will use to access an SMB share serviced by server ib69s1. The user VIF is based on an active physical network on that server. To do this, click Add NIC in the section of the dialog box for ib69s1. On the Add NIC dialog box, enter a NIC name. In our example, the cluster uses the unified network and has only bond0, the active cluster FM/IP. We cannot use bond0:0, which is the management IP/VIF. We will create the VIF bond0:1, using bond0 as the base.
Next, enable NIC monitoring on the VIF. Select the new user NIC and click NIC HA. On the NIC HA Config dialog box, check Enable NIC Monitoring. In the Standby NIC field, select New Standby NIC to create the standby on backup server ib69s2. The standby you specify must be available and valid. To keep the organization simple, we specified bond0:1 as the Name; this matches the name assigned to the NIC on server ib69s1. When you click OK, the NIC HA configuration is complete.
You can create additional user VIFs and assign standby NICs as needed. For example, you might want to add a user VIF for another share on server ib69s2 and assign a standby NIC on server ib69s1. You can also specify a physical interface such as eth4 and create a standby NIC on the backup server for it. The NICs panel on the GUI shows the NICs on the selected server.
Changing the HA configuration
To change the configuration of a NIC, select the server on the Servers panel, and then select NICs from the lower Navigator. Click Modify on the NICs panel. The General tab on the Modify NIC Properties dialog box allows you to change the IP address and other NIC properties. The NIC HA tab allows you to enable or disable HA monitoring and failover on the NIC and to change or remove the standby NIC. You can also enable link state monitoring if it is supported on your cluster.
1. Add the VIF:
# ibrix_nic -a -n bond0:2 -h node1,node2,node3,node4
2. Set up a standby server for each VIF:
# ibrix_nic -b -H node1/bond0:1,node2/bond0:2
# ibrix_nic -b -H node2/bond0:1,node1/bond0:2
# ibrix_nic -b -H node3/bond0:1,node4/bond0:2
# ibrix_nic -b -H node4/bond0:1,node3/bond0:2
Turn on automated failover:
ibrix_server -m [-h SERVERNAME]
Changing the HA configuration manually
Update a power source:
If you change the IP address or password for a power source, you must update the configuration database with the changes. The user name and password options are needed only for remotely managed power sources. Include the -s option to have the Fusion Manager skip BMC.
ibrix_server -l The STATE field indicates the status of the failover. If the field persistently shows Down-InFailover or Up-InFailover, the failover did not complete; contact HP Support for assistance. For information about the values that can appear in the STATE field, see “What happens during a failover” (page 38).
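To follow the STATE field while a failover is in progress, the standard Linux watch utility is a convenient sketch (not an IBRIX command):

# Rerun ibrix_server -l every 10 seconds until the state settles
watch -n 10 ibrix_server -l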
When both HBA monitoring and automated failover for file serving nodes are configured, the Fusion Manager will fail over a server in two situations: • Both ports in a monitored set of standby-paired ports fail. Because all standby pairs were identified in the configuration database, the Fusion Manager knows that failover is required only when both ports fail. • A monitored single-port HBA fails.
ibrix_hba -d -h HOSTNAME -w WWNN
Displaying HBA information
Use the following command to view information about the HBAs in the cluster. To view information for all hosts, omit the -h HOSTLIST argument.
ibrix_hba -l [-h HOSTLIST]
The output includes the following fields:
Field        Description
Host         Server on which the HBA is installed.
Node WWN     This HBA’s WWNN.
Port WWN     This HBA’s WWPN.
Port State   Operational state of the port.
ibrix_haconfig -l -h xs01.hp.com,xs02.hp.com
Host         HA Configuration  Power Sources  Backup Servers  Auto Failover  Nics Monitored  Standby Nics  HBAs Monitored
xs01.hp.com  FAILED            PASSED         PASSED          PASSED         FAILED          PASSED        FAILED
xs02.hp.com  FAILED            PASSED         FAILED          FAILED         FAILED          WARNED        WARNED
Viewing a detailed report
Execute the ibrix_haconfig -i command to view the detailed report:
ibrix_haconfig -i [-h HOSTLIST] [-f] [-b] [-s] [-v]
The -h HOSTLIST option lists the nodes to check.
IMPORTANT: Complete the steps in “Prerequisites for setting up the crash capture” (page 50) before setting up the crash capture.
Prerequisites for setting up the crash capture
The following parameters must be configured in the ROM-based setup utility (RBSU) before a crash can be captured automatically on a file serving node in a failed condition.
1. Start RBSU: reboot the server, and then press the F9 key.
2. Highlight the System Options option in the main menu, and then press the Enter key.
2. Tune Fusion Manager to set the DUMPING status timeout by entering the following command: ibrix_fm_tune -S -o dumpingStatusTimeout=240 This command is required to delay the failover until the crash kernel is loaded; otherwise, Fusion Manager will bring down the failed node.
5 Configuring cluster event notification
Cluster events
There are three categories for cluster events:
Alerts. Disruptive events that can result in loss of access to file system data.
Warnings. Potentially disruptive conditions where file system access is not lost, but if the situation is not addressed, it can escalate to an alert condition.
Information. Normal events that change the cluster.
The following table lists examples of events included in each category.
Associating events and email addresses You can associate any combination of cluster events with email addresses: all Alert, Warning, or Info events, all events of one type plus a subset of another type, or a subset of all types. The notification threshold for Alert events is 90% of capacity. Threshold-triggered notifications are sent when a monitored system resource exceeds the threshold and are reset when the resource utilization dips 10% below the threshold.
Viewing email notification settings
The ibrix_event -L command provides comprehensive information about email settings and configured notifications.
ibrix_event -L
Email Notification : Enabled
SMTP Server        : mail.hp.com
From               : FM@hp.com
Reply To           : MIS@hp.com

EVENT                LEVEL  TYPE   DESTINATION
-----                -----  ----   -----------
asyncrep.completed   ALERT  EMAIL  admin@hp.com
asyncrep.failed      ALERT  EMAIL  admin@hp.com
Some SNMP parameters and the SNMP default port are the same, regardless of SNMP version. The default agent port is 161. SYSCONTACT, SYSNAME, and SYSLOCATION are optional MIB-II agent parameters that have no default values. NOTE: The default SNMP agent port was changed from 5061 to 161 in the IBRIX 6.1 release. This port number cannot be changed. The -c and -s options are also common to all SNMP versions. The -c option turns the encryption of community names and passwords on or off.
ibrix_snmptrap -c -h lab13-114 -v 3 -n trapsender -k auth-passwd -z priv-passwd Associating events and trapsinks Associating events with trapsinks is similar to associating events with email recipients, except that you specify the host name or IP address of the trapsink instead of an email address. Use the ibrix_event command to associate SNMP events with trapsinks.
ibrix_snmpgroup -c -g GROUPNAME [-s {noAuthNoPriv|authNoPriv|authPriv}] [-r READVIEW] [-w WRITEVIEW] For example, to create the group group2 to require authorization, no encryption, and read access to the hp view, enter: ibrix_snmpgroup -c -g group2 -s authNoPriv -r hp The format to create a user and add that user to a group follows: ibrix_snmpuser -c -n USERNAME -g GROUPNAME [-j {MD5|SHA}] [-k AUTHORIZATION_PASSWORD] [-y {DES|AES}] [-z PRIVACY_PASSWORD] Authentication and privacy settings are optional.
• Sender Domain. The domain name that is joined with an @ symbol to the sender name to form the “from” address for remote notification. The domain name can have a maximum of 31 bytes. Because this name is used as part of an email address, do not include spaces. For example: MyDomain.com. If the domain name is not valid, some email servers will not process the mail. • Email Address fields. Up to four email addresses that the system should send notifications to.
6 Configuring system backups
Backing up the Fusion Manager configuration
The Fusion Manager configuration is automatically backed up whenever the cluster configuration changes. The backup occurs on the node hosting the active Fusion Manager. The backup file is stored at /tmp/fmbackup.zip on that node. The active Fusion Manager notifies the passive Fusion Manager when a new backup file is available. The passive Fusion Manager then copies the file to /tmp/fmbackup.zip
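You may also want to keep a copy of the latest archive off the cluster; a minimal sketch using standard scp (the destination host and path are hypothetical):

# Copy the current Fusion Manager backup to an external host
scp /tmp/fmbackup.zip admin@backup-host:/srv/ibrix-backups/fmbackup-$(date +%Y%m%d).zip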
hard quota limit for the directory tree has been exceeded, NDMP cannot create a temporary file and the restore operation fails. Configuring NDMP parameters on the cluster Certain NDMP parameters must be configured to enable communications between the DMA and the NDMP Servers in the cluster. To configure the parameters on the GUI, select Cluster Configuration from the Navigator, and then select NDMP Backup. The NDMP Configuration Summary shows the default values for the parameters.
status of the session (backing up data, restoring data, or idle), the start time, and the IP address used by the DMA. To cancel a session, select that session and click Cancel Session. Canceling a session kills all spawned sessions processes and frees their resources if necessary. To see similar information for completed sessions, select NDMP Backup > Session History.
To rescan for devices, use the following command:
ibrix_tape -r
NDMP events
An NDMP Server can generate three types of events: INFO, WARN, and ALERT. These events are displayed on the GUI and can be viewed with the ibrix_event command.
INFO events. Identify when major NDMP operations start and finish, and also report progress. For example:
7012:Level 3 backup of /mnt/ibfs7 finished at Sat Nov 7 21:20:58 PST 2011
7013:Total Bytes = 38274665923, Average throughput = 236600391 bytes/sec.
WARN events.
7 Creating host groups for 9000 clients
A host group is a named set of 9000 clients. Host groups provide a convenient way to centrally manage clients. You can put different sets of clients into host groups and then perform the following operations on all members of the group:
• Create and delete mount points
• Mount file systems
• Prefer a network interface
• Tune host parameters
• Set allocation policies
Host groups are optional.
To create one level of host groups beneath the root, simply create the new host groups. You do not need to declare that the root node is the parent. To create lower levels of host groups, declare a parent element for the host groups. Do not use a host name as a group name.
To create a host group tree using the CLI:
1. Create the first level of the tree:
ibrix_hostgroup -c -g GROUPNAME
2. Create each lower level by declaring a parent for the new group.
To force the reassigned 9000 clients to implement the mounts, tunings, network interface preferences, and allocation policies that have been set on their new host group, either restart IBRIX software services on the clients or execute the following commands locally: • ibrix_lwmount -a to force the client to pick up mounts or allocation policies • ibrix_lwhost --a to force the client to pick up host tunings To delete a host group using the CLI: ibrix_hostgroup -d -g GROUPNAME Other host group operations
8 Monitoring cluster operations
This chapter describes how to monitor the operational state of the cluster and how to monitor cluster health.
Monitoring 9300/9320 hardware
The GUI displays status, firmware versions, and device information for the servers, virtual chassis, and system storage included in 9300 and 9320 systems.
Monitoring servers
To view information about the servers and chassis included in your system:
1. Select Servers from the Navigator tree.
Select the server component that you want to view from the lower Navigator panel, such as NICs.
The following are the top-level options provided for the server:
NOTE: Information about the Hardware node can be found in “Monitoring hardware components” (page 70).
• HBAs. The HBAs panel displays the following information:
  ◦ Monitoring
  ◦ State
• NICs. The NICs panel shows all NICs on the server, including offline NICs. The NICs panel displays the following information:
  ◦ Name
  ◦ IP
  ◦ Type
  ◦ State
  ◦ Route
  ◦ Standby Server
  ◦ Standby Interface
• Mountpoints. The Mountpoints panel displays the following information:
  ◦ Mountpoint
  ◦ Filesystem
  ◦ Access
• NFS. The NFS panel displays the following information:
  ◦ Host
  ◦ Path
  ◦ Options
• CIFS.
• Events. The Events panel displays the following information:
  ◦ Level
  ◦ Time
  ◦ Event
• Hardware. The Hardware panel displays the following information:
  ◦ The name of the hardware component
  ◦ The information gathered in regards to that hardware component.
See “Monitoring hardware components” (page 70) for detailed information about the Hardware panel.
Monitoring hardware components
The Management Console provides information about the server hardware and its components.
• Message¹
• Diagnostic Message¹
¹ Column dynamically appears depending on the situation.
Obtain detailed information for hardware components in the server by clicking the nodes under the Server node.
Table 1 Obtaining detailed information about a server

Panel name                    Information provided
CPU                           Status, Type, Name, UUID, Model, Location
iLO Module                    Status, Type, Name, UUID, Serial Number, Model, Firmware Version, Properties
Memory DIMM                   Status, Type, Name, UUID, Location, Properties
NIC                           Status, Type, Name, UUID, Properties
Power Management Controller   Status, Type, Name, UUID, Firmware Version
Storage Cluster               Status, Type, Name, UUID
                              Drive: Displays information for each drive, including Location and Properties.
Storage Controller            Status, Type, Name, UUID, Serial Number, Model, Firmware Version, Location,
(displayed for a server)      Message, Diagnostic message
                              Volume: Displays volume information for each server.
Monitoring storage and storage components Select Vendor Storage from the Navigator tree to display status and device information for storage and storage components. The Summary panel shows details for a selected vendor storage, as shown in the following image: The Management Console provides a wide-range of information in regards to vendor storage. Drill down into the following components in the lower Navigator tree to obtain additional details: 74 • Servers.
Managing LUNs in a storage cluster The LUNs panel provides information about the LUNs in a storage cluster. The following information is provided in the LUNs panel: • LUN ID • Physical Volume Name • Physical Volume UUID In the following image, the LUNs panel displays the LUNs for a storage cluster. Monitoring the status of file serving nodes The dashboard on the GUI displays information about the operational status of file serving nodes, including CPU, I/O, and network performance information.
State   Description
Down    Server is powered down or inaccessible to the Fusion Manager, and no standby server is providing access to the server’s segments.
The STATE field also reports the status of monitored NICs and HBAs. If you have multiple HBAs and NICs and some of them are down, the state is reported as HBAsDown or NicsDown.
Monitoring cluster events
IBRIX software events are assigned to one of the following categories, based on the level of severity:
• Alerts.
Event:
=======
EVENT ID  : 1980
TIMESTAMP : Feb 14 15:08:14
LEVEL     : ALERT
TEXT      : category:CHASSIS, name: 9730_ch1, overallStatus:DEGRADED, component:OAmodule, uuid:09USE038187WOAModule2, status:MISSING, Message: The Onboard Administrator module is missing or has failed., Diagnostic message: Reseat the Onboard Administrator module. If reseating the module does not resolve the issue, replace the Onboard Administrator module.
all tested file serving nodes are included when the overall result is determined. The results will be one of the following: • Passed. All tested hosts and standby servers passed every health check. • Failed. One or more tested hosts failed a health check. The health status of standby servers is not included when this result is calculated. • Warning. A suboptimal condition that might require your attention was found on one or more tested hosts or standby servers.
CPU Information
===============
Cpu(System,User,Util,Nice)  Load(1,3,15 min)  Network(Bps)  Disk(Bps)
--------------------------  ----------------  ------------  ---------
0, 0, 0, 0                  0.09, 0.05, 0.
Viewing operating statistics for file serving nodes
Periodically, the file serving nodes report the following statistics to the Fusion Manager:
• Summary. General operational statistics including CPU usage, disk throughput, network throughput, and operational state. For information about the operational states, see “Monitoring the status of file serving nodes” (page 75).
• IO. Aggregate statistics about reads and writes.
• Network. Aggregate statistics about network inputs and outputs.
• Memory.
9 Using the Statistics tool
The Statistics tool reports historical performance data for the cluster or for an individual file serving node. You can view data for the network, the operating system, and the file systems, including the data for NFS, memory, and block devices. Statistical data is transmitted from each file serving node to the Fusion Manager, which controls processing and report generation.
Upgrading the Statistics tool from IBRIX software 6.0 The statistics history is retained when you upgrade to version 6.1 or later. The Statstool software is upgraded when the IBRIX software is upgraded using the ibrix_upgrade and auto_ibrixupgrade scripts. Note the following: • If statistics processes were running before the upgrade started, those processes will automatically restart after the upgrade completes successfully.
The Time View lists the reports in chronological order, and the Table View lists the reports by cluster or server. Click a report to view it. Generating reports To generate a new report, click Request New Report on the IBRIX Management Console Historical Reports GUI.
To generate a report, enter the necessary specifications and click Submit. The completed report appears in the list of reports on the statistics home page. When generating reports, be aware of the following: • A report can be generated only from statistics that have been gathered. For example, if you start the tool at 9:40 a.m. and ask for a report from 9:00 a.m. to 9:30 a.m., the report cannot be generated because data was not gathered for that period. • Reports are generated on an hourly basis.
Updating the Statistics tool configuration When you first configure the Statistics tool, the configuration includes information for all file systems configured on the cluster. If you add a new node or a new file system, or make other additions to the cluster, you must update the Statistics tool configuration. Complete the following steps: 1. If you are adding a new file serving node to the cluster, enable synchronization for the node.
NOTE: If the old active Fusion Manager is not available (pingable) for more than two days, the historical statistics database is not transferred to the current active Fusion Manager. • If configurable parameters were set before the failover, the parameters are retained after the failover. Check the /usr/local/ibrix/log/statstool/stats.log for any errors. NOTE: The reports generated before failover will not be available on the current active Fusion Manager.
Other conditions • Data is not collected. If data is not being gathered in the common directory for the Statistics Manager (/usr/local/statstool/histstats/ by default), restart the Statistics tool processes on all nodes. See “Controlling Statistics tool processes” (page 86). • Installation issues. Check the /tmp/stats-install.log and try to fix the condition, or send the /tmp/stats-install.log to HP Support. • Missing reports for file serving nodes.
10 Maintaining the system
Shutting down the system
To shut down the system completely, first shut down the IBRIX software, and then power off the system hardware.
Shutting down the IBRIX software
Use the following procedure to shut down the IBRIX software. Unless noted otherwise, run the commands from the node hosting the active Fusion Manager.
1. Stop any active Remote Replication, data tiering, or rebalancer tasks.
7. Unmount all file systems on the cluster nodes:
ibrix_umount -f FSNAME
To unmount file systems from the GUI, select Filesystems > unmount.
8. Verify that all file systems are unmounted:
ibrix_fs -l
If a file system fails to unmount on a particular node, continue with this procedure. The file system will be forcibly unmounted during the node shutdown.
9. Shut down all IBRIX Server services and verify the operation:
# pdsh -a /etc/init.d/ibrix_server stop | dshbak
# pdsh -a /etc/init.
1. Power on the node hosting the active Fusion Manager.
2. Power on the file serving nodes (*root segment = segment 1; power on owner first, if possible).
3. Monitor the nodes on the GUI and wait for them all to report UP in the output from the following command:
ibrix_server -l
4. Mount file systems and verify their content.
Starting and stopping processes You can start, stop, and restart processes and can display status for the processes that perform internal IBRIX software functions. The following commands also control the operation of PostgreSQL on the machine. The PostgreSQL service is available at /usr/local/ibrix/init/. To start and stop processes and view process status on the Fusion Manager, use the following command: /etc/init.
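For example, using the service names that appear in the init scripts referenced elsewhere in this guide (a sketch; run as root on the relevant node):

# Fusion Manager daemon
service ibrix_fusionmanager status
service ibrix_fusionmanager restart
# File serving node services
/etc/init.d/ibrix_server status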
The IAD Tunings dialog box configures the IBRIX administrative daemon. The Module Tunings dialog box adjusts various advanced parameters that affect server operations.
On the Servers dialog box, select the servers to which the tunings should be applied.
Tuning file serving nodes from the CLI All Fusion Manager commands for tuning hosts include the -h HOSTLIST option, which supplies one or more host groups. Setting host tunings on a host group is a convenient way to tune a set of clients all at once. To set the same host tunings on all clients, specify the clients host group. CAUTION: Changing host tuning settings alters file system performance. Contact HP Support before changing host tuning settings.
ibrix_lwhost --list See the ibrix_lwhost command description in the HP IBRIX 9000 Storage CLI Reference Guide for other available options. Windows clients. Click the Tune Host tab on the Windows 9000 client GUI. Tunable parameters include the NIC to prefer (the default is the cluster interface), the communications protocol (UDP or TCP), and the number of server threads to use. See the online help for the client if necessary.
The Change Ownership dialog box reports the status of the servers in the cluster and lists the segments owned by each server. In the Segment Properties section of the dialog box, select the segment whose ownership you are transferring, and click Change Owner.
The new owner of the segment must be able to see the same storage as the original owner. The Change Segment Owner dialog box lists the servers that can see the segment you selected. Select one of these servers to be the new owner. The Summary dialog box shows the segment migration you specified. Click Back to make any changes, or click Finish to complete the operation. To migrate ownership of segments from the CLI, use the following commands.
1. Identify the segment residing on the physical volume to be removed. Select Storage from the Navigator on the GUI. Note the file system and segment number on the affected physical volume.
2. Locate other segments on the file system that can accommodate the data being evacuated from the affected segment. Select the file system on the GUI and then select Segments from the lower Navigator. If segments with adequate space are not available, add segments to the file system.
3. Evacuate the segment.
The Summary dialog box lists the source and destination segments for the evacuation. Click Back to make any changes, or click Finish to start the evacuation. The Active Tasks panel reports the status of the evacuation task. When the task is complete, it will be added to the Inactive Tasks panel. 4. When the evacuation is complete, run the following command to retire the segment from the file system: ibrix_fs -B -f FSNAME -n BADSEGNUMLIST The segment number associated with the storage is not reused.
# ./inum2name --fsname=ibfs 500000017
ibfs:/sliced_dir/file3.bin
After obtaining the name of the file, use a command such as cp to move the file manually. Then run the segment evacuation process again. The analyzer log lists the chunks that were left on segments.
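One hedged way to perform the manual move, assuming the file system is mounted at /mnt/ibfs as in the example above (rewriting the file reallocates its blocks to the remaining segments):

# Rewrite the file, verify the copy, then replace the original
cp -p /mnt/ibfs/sliced_dir/file3.bin /mnt/ibfs/sliced_dir/file3.bin.new
cmp /mnt/ibfs/sliced_dir/file3.bin /mnt/ibfs/sliced_dir/file3.bin.new
mv /mnt/ibfs/sliced_dir/file3.bin.new /mnt/ibfs/sliced_dir/file3.bin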
The cluster network interface was created for you when your cluster was installed. (A virtual interface is used for the cluster network interface.) One or more user network interfaces may also have been created, depending on your site's requirements. You can add user network interfaces as necessary.
When you identify a user network interface for a file serving node, the Fusion Manager queries the node for its IP address, netmask, and MAC address and imports the values into the configuration database. You can modify these values later if necessary. If you identify a VIF, the Fusion Manager does not automatically query the node.
Preferring a network interface for a host group You can prefer an interface for multiple 9000 clients at one time by specifying a host group. To prefer a user network interface for all 9000 clients, specify the clients host group. After preferring a network interface for a host group, you can locally override the preference on individual 9000 clients with the command ibrix_lwhost.
1. Unmount the file system from the client.
2. Change the client’s IP address.
3. Reboot the client or restart the network interface card.
4. Delete the old IP address from the configuration database:
ibrix_client -d -h CLIENT
5. Re-register the client with the Fusion Manager:
register_client -p console_IPAddress -c clusterIF -n ClientName
6. Remount the file system on the client.
The following command deletes all routing table entries for virtual interface eth0:1 on file serving node s2.hp.com: ibrix_nic -r -n eth0:1 -h s2.hp.com -D Deleting a network interface Before deleting the interface used as the cluster interface on a file serving node, you must assign a new interface as the cluster interface. See “Changing the cluster interface” (page 104).
11 Migrating to an agile Fusion Manager configuration
The agile Fusion Manager configuration provides one active Fusion Manager and one passive Fusion Manager installed on different nodes in the cluster. The migration procedure configures the current Management Server machine as a host for an agile Fusion Manager and installs another instance of the agile Fusion Manager on a file serving node.
/etc/init.d/network restart
service network restart
Verify that you can ping the new local IP address.
4. Configure the agile management console:
ibrix_fm -c <cluster_VIF_IP> -d <cluster_VIF_device> -n <netmask> -v cluster -I <local_cluster_IP>
In the command, <cluster_VIF_IP> is the old cluster IP address for the original management console and <local_cluster_IP> is the new IP address you acquired. For example:
[root@x109s1 ~]# ibrix_fm -c 172.16.3.1 -d bond0:1 -n 255.255.248.
[root@x109s1 ~]# ibrix_fm -f
NAME    IP ADDRESS
------  ----------
X109s1  172.16.3.100
Command succeeded!
11. Install a passive agile management console on a second file serving node. In the command, the -F option forces the overwrite of the new_lvm2_uuid file that was installed with the IBRIX software.
1. On the node hosting the active Fusion Manager, place the Fusion Manager into maintenance mode. This step fails over the active Fusion Manager role to the node currently hosting the passive agile Fusion Manager.
/bin/ibrix_fm -m nofmfailover
2.
3. Uninstall the management console from the Management Server machine:
/ibrix/ibrixinit -tm -U
4. Verify that the uninstalled management console is no longer registered. Run the following command from the file serving node hosting the newly active management console:
ibrix_fm -f
The command should now report only the agile management console on the file serving node.
[root@x109s3 ibrix]# ibrix_fm -f
NAME    IP ADDRESS
------  ----------
x109s3  172.16.3.3
Command succeeded!
5.
12 Upgrading the IBRIX software to the 6.2 release
This chapter describes how to upgrade to the 6.2 IBRIX software release.
IMPORTANT: Print the following table and check off each step as you complete it.

Table 2 Prerequisites checklist for all upgrades
Step  Description                                                                Step completed?
1     Verify that the entire cluster is currently running IBRIX 6.0 or later
      by entering the following command:
      ibrix_version -l
      IMPORTANT: All the IBRIX nodes must be at the same release.
Table 2 Prerequisites checklist for all upgrades (continued)
Step  Description
7     If your FSN network bonded interfaces are currently configured for mode 6, configure them for mode 4 bonding (LACP). Make sure your Network Administrator reconfigures the network switch for LACP support on all affected ports. Mode 4 has been found to outperform mode 6. This finding has resulted in changing the recommendation from mode 6 to mode 4.
Preparing for the upgrade
To prepare for the upgrade, first ensure that high availability is enabled on each node in the cluster by running the following command:
ibrix_haconfig -l
If the command displays an Overall HA Configuration Checker Results - PASSED status, high availability is enabled on each node in the cluster. If the command returns Overall HA Configuration Checker Results - FAILED, complete the following list items based on the result returned for each component:
1.
a file system, use the upgrade60.sh utility. For more information, see “Upgrading pre-6.0 file systems for software snapshots” (page 153).
• Data retention. Files used for data retention (including WORM and auto-commit) must be created on IBRIX software 6.1.1 or later, or the pre-6.1.1 file system containing the files must be upgraded for retention features. To upgrade a file system, use the ibrix_reten_adm -u -f FSNAME command.
5.
2. Stop all client I/O to the cluster or file systems. On the Linux client, use lsof to show open files belonging to active processes.
3. Verify that all IBRIX file systems can be successfully unmounted from all FSN servers:
ibrix_umount -f fsname
Performing the upgrade
This upgrade method is supported only for upgrades from IBRIX software 6.x to the 6.2 release. Complete the following steps:
1. To obtain the latest HP IBRIX 6.2.1 (pkg-full.
5. Review the file /etc/hosts on every IBRIX node (file serving nodes and management nodes) to ensure the hosts file contains two lines similar to the following:
127.0.0.1 localhost.localdomain localhost <hostname>
::1 localhost6.localdomain6 localhost6 <hostname>
In this instance, <hostname> is the name of the IBRIX node as returned by the hostname command.
3. Verify that all FSN servers have separate file systems mounted on the following partitions by using the df command:
• /
• /local
• /stage
• /alt
4. Verify that all FSN servers have a minimum of 4 GB of free/available storage on the /local partition by using the df command.
5. Verify that all FSN servers are not reporting any partition as 100% full (at least 5% free space) by using the df command.
6. Note any custom tuning parameters, such as file system mount options.
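A quick way to check steps 3 through 5 on each node is standard df (a sketch):

# Each path must be a separate mounted file system; /local needs at least
# 4 GB free, and no partition should be more than 95% full
df -h / /local /stage /alt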
13. Unmount each file system manually:
ibrix_umount -f FSNAME
Wait up to 15 minutes for the file systems to unmount. Troubleshoot any issues with unmounting file systems before proceeding with the upgrade. See “File system unmount issues” (page 123).
Performing the upgrade manually
This upgrade method is supported only for upgrades from IBRIX software 6.x to the 6.2 release. Complete the following steps:
1. To obtain the latest HP IBRIX 6.2.1 (pkg-full.
8. If you are using SMB, set the following parameters to synchronize the SMB software and the Fusion Manager database:
• smb signing enabled
• smb signing required
• ignore_writethru
Use ibrix_cifsconfig to set the parameters, specifying the value appropriate for your cluster (1=enabled, 0=disabled). The following examples set the parameters to the default values for the 6.
1. Restart all passive Fusion Managers.
a. Determine if the Fusion Manager is in passive mode by entering the following command:
ibrix_fm -i
b. If the command returns “passive” (regardless of whether failover is disabled), enter the following command to restart the Fusion Manager:
service ibrix_fusionmanager restart
c. Repeat steps a and b for each Fusion Manager.
2. Restart the active Fusion Manager by issuing the following commands on the active FM server:
a.
client services start automatically. Use the ibrix_version -l -C command to verify the kernel version on the client. NOTE: To use the verify_client command, the 9000 client software must be installed. Upgrading Windows 9000 clients Complete the following steps on each client: 1. Remove the old Windows 9000 client software using the Add or Remove Programs utility in the Control Panel. 2. Copy the Windows 9000 client MSI file for the upgrade to the machine. 3.
Manual upgrade Check the following: • If the restore script fails, check /usr/local/ibrix/setup/logs/restore.log for details. • If configuration restore fails, look at /usr/local/ibrix/autocfg/logs/appliance.log to determine which feature restore failed. Look at the specific feature log file under /usr/ local/ibrix/setup/logs/ for more detailed information.
[root@ib51-102 ~]# ibrix_fm -f
NAME      IP ADDRESS
--------  ----------
ib51-101  10.10.51.101
ib51-102  10.10.51.102
[root@ib51-102 ~]# ibrix_fm -i
FusionServer: ib51-102 (active, quorum is running)
==================================================
File system unmount issues
If a file system does not unmount successfully, perform the following steps on all servers:
1. Run the following commands:
chkconfig ibrix_server off
chkconfig ibrix_ndmp off
chkconfig ibrix_fusionmanager off
2. Reboot all servers.
3.
13 Licensing
This chapter describes how to view your current license terms and how to obtain and install new IBRIX software product license keys.
NOTE: For licensing features such as block snapshots on the HP P2000 G3 MSA Array System or HP 2000 Modular Smart Array, see the array documentation.
Viewing license terms
The IBRIX software license file is stored in the installation directory. To view the license from the GUI, select Cluster Configuration in the Navigator and then select License.
14 Upgrading firmware
The Firmware Management Tool (FMT) is a utility that scans the IBRIX system for outdated firmware and provides a comprehensive report with the following information:
• Device found
• Active firmware found on the discovered device
• Qualified firmware for the discovered device
• Proposed action — whether an upgrade is recommended
• Severity — how urgently the upgrade is required
• Reboot required on flash
• Device information
• Parent device ID Compo
Steps for upgrading the firmware IMPORTANT: The 10Gb NIC driver is updated during the IBRIX v6.2.X software upgrade. However, the new driver is not utilized/loaded until the server had been rebooted. If you run the upgrade firmware tool (hpsp_fmt) before you reboot the server, the tool detects that the old driver is still being used.
2. Do the following based on the Proposed Action and Severity:
• Proposed Action UPGRADE with Severity MANDATORY: go to step 3.
• Proposed Action UPGRADE with Severity RECOMMENDED: step 3 is optional. However, it is recommended to perform step 3 for system stability and to avoid any known issues.
• Proposed Action NONE or DOWNGRADE with Severity MANDATORY: go to step 4.
• Proposed Action NONE or DOWNGRADE with Severity RECOMMENDED: step 4 is optional. However, it is recommended to perform step 4 for system stability and to avoid any known issues.
a. Determine whether the node to be flashed is the active Fusion Manager by entering the following command:
ibrix_fm -i
b. Perform a manual FM failover on the local node by entering the following command from the active Fusion Manager:
ibrix_fm -m nofmfailover server1
The FM failover will take approximately one minute.
c.
d. If server1 is not the active Fusion Manager, proceed to step e to fail over server1 to server2.
6. If you are upgrading to 6.2, you must complete the steps provided in the “After the upgrade” section for your type of upgrade, as shown in the following table:

Type of upgrade             Complete the steps in this section
Online upgrades             “After the upgrade” (page 113)
Automated offline upgrades  “After the upgrade” (page 115)
Manual offline upgrades     “After the upgrade” (page 118)

Finding additional information on FMT
You can find additional information on FMT as follows:
• Online help for FMT.
15 Troubleshooting
Collecting information for HP Support with Ibrix Collect
Ibrix Collect is a log collection utility that allows you to collect relevant information for diagnosis by HP Support when system issues occur. The collection can be triggered manually using the GUI or CLI, or automatically during a system crash.
4. Click Okay.
To collect logs and command results using the CLI, use the following command:
ibrix_collect -c -n NAME
NOTE: Only one manual collection of data is allowed at a time.
NOTE: When a node restores from a system crash, the vmcore under the /var/crash/<timestamp> directory is processed. Once processed, the directory will be renamed /var/crash/<timestamp>_PROCESSED. HP Support may request that you send this information to assist in resolving the system crash.
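For example, to start a manual collection named for a hypothetical support case:

ibrix_collect -c -n case_4600123456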
To specify more than one collection to be deleted at a time from the CLI, provide the names separated by a semicolon.
To delete all data collections manually from the CLI, use the following command:
ibrix_collect -F
Configuring Ibrix Collect
You can configure data collection to occur automatically upon a system crash. This collection will include additional crash digester output. The archive filename of the system crash-triggered collection will be in the format <hostname>_crash_<timestamp>.zip.
1.
ibrix_collect -C -m <SMTP server> [-s <sender>] [-f <from email>] [-t <to email>]
NOTE: More than one email ID can be specified for the -t option, separated by a semicolon.
The “From” and “To” settings for this SMTP server are Ibrix Collect specific.
Troubleshooting specific issues
Software services
Cannot start services on a file serving node or Linux 9000 client
SELinux might be enabled. To determine the current state of SELinux, use the getenforce command. If it returns Enforcing, disable SELinux using either of these commands:
setenforce Permissive
setenforce 0
To permanently disable SELinux, edit its configuration file (/etc/selinux/config) and set the SELINUX parameter to either permissive or disabled. SELinux will be stopped at the next boot.
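For example, the relevant line in /etc/selinux/config would read as follows after the edit (choose permissive or disabled):

# /etc/selinux/config
SELINUX=disabled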
Windows 9000 clients Logged in but getting a “Permission Denied” message The 9000 client cannot access the Active Directory server because the domain name was not specified. Reconfigure the Active Directory settings, specifying the domain name. See the HP IBRIX 9000 Storage Installation Guide for more information. Verify button in the Active Directory Settings tab does not work This issue has the same cause as the above issue.
of the Express Query, they are typically due to another unrelated event in the cluster or the file system. Therefore, most of the work to recover from an Express Query MIF is to check the health of the cluster and the file system and take corrective actions to fix the issues caused by these events. Once the cluster and file system have an OK status, the MIF status can be cleared since the Express Query service will be recovering and restarting automatically.
6. Cluster and file system health checks have an OK status but Express Query is still in a MIF condition for one or several specific file systems. This unlikely situation occurs when some data has been corrupted and cannot be recovered. To resolve this situation:
a. If there is a full backup of the file system involved, do a restore.
b. If there is no full backup:
1. Disable Express Query for the file system by entering the following command:
ibrix_fs -T -D -f FSNAME
2.
16 Recovering a file serving node
Use the following procedure to recover a failed file serving node. You will need to create a QuickRestore DVD or USB key, as described later, and then install it on the affected node. This step installs the operating system and IBRIX software on the node and launches a configuration wizard.
CAUTION: The Quick Restore DVD or USB key restores the file serving node to its original factory state.
3. Enter the information for the node being restored on the Network Configuration dialog box and click OK. 4. Confirm that the information displayed in the Configuration Summary dialog box is correct and click Commit.
5. On the X9000 Installation — Network Setup Complete dialog box, select Join this IBRIX server to an existing cluster and click OK.
6. The wizard scans the network for existing clusters. On the Join Cluster dialog box, select the management console (Fusion Manager) for your cluster, and then click OK. If your cluster does not appear in the list of choices, click Cancel so that you can provide the IP address of the Fusion Manager with which this node must be registered.
7. If you clicked the Cancel button in the previous dialog box, enter the management console IP of the desired cluster on the Management Console IP dialog box and click OK. 8. On the Replace Existing Server dialog box, click Yes when you are asked if you want to replace the existing server.
Completing the restore on a file serving node Complete the following steps: 1. Ensure that you have root access to the node. The restore process sets the root password to hpinvent, the factory default. 2. Verify information about the node you restored: ibrix_server -f [-p] [-M] [-N] -h SERVERNAME 3. If you disabled NIC monitoring before using the QuickRestore, re-enable the monitor: ibrix_nic -m -h MONITORHOST -A DESTHOST/IFNAME For example: ibrix_nic -m -h titan16 -A titan15/eth2 4. 5.
1. Take the appropriate actions: • If Active Directory authentication is used, join the restored node to the AD domain manually. • If Local user authentication is used, create a temporary local user on the GUI and apply the settings to all servers. This step resynchronizes the local user database. Then remove the temporary user. 2. Run the following command: ibrix_httpconfig -R -h HOSTNAME 3. Verify that HTTP services have been restored.
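For example, a hypothetical run of the step 2 command, reusing the node name from the NIC monitoring example earlier in this chapter: ibrix_httpconfig -R -h titan16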
17 Support and other resources Contacting HP For worldwide technical support information, see the HP support website: http://www.hp.com/support Before contacting HP, collect the following information: • Product model names and numbers • Technical support registration number (if applicable) • Product serial numbers • Error messages • Operating system type and revision level • Detailed questions Related information
Using HP MSA Disk Arrays • HP 2000 G2 Modular Smart Array Reference Guide • HP 2000 G2 Modular Smart Array CLI Reference Guide • HP P2000 G3 MSA System CLI Reference Guide • Online help for HP Storage Management Utility (SMU) and Command Line Interface (CLI) To find these documents, go to the Manuals page (http://www.hp.com/support/manuals) and select storage > Disk Storage Systems > MSA Disk Arrays > HP 2000sa G2 Modular Smart Array or HP P2000 G3 MSA Array Systems.
18 Documentation feedback HP is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback (docsfeedback@hp.com). Include the document title and part number, version number, or the URL when submitting your feedback.
A Cascading Upgrades If you are running an IBRIX version earlier than 5.6, do incremental upgrades as described in the following table. If you are running IBRIX 5.6, upgrade to 6.1 before upgrading to 6.2.
If you are upgrading from / Upgrade to / Where to find additional information
IBRIX version 5.4 / IBRIX version 5.5 / "Upgrading the IBRIX software to the 5.5 release" (page 160)
IBRIX version 5.5 / IBRIX version 5.6 / "Upgrading the IBRIX software to the 5.6 release" (page 157)
IBRIX version 5.6 / IBRIX version 6.1
1. Ensure that all nodes are up and running. To determine the status of your cluster nodes, check the dashboard on the GUI or use the ibrix_health command.
2. Ensure that High Availability is enabled on each node in the cluster.
3. Verify that ssh shared keys have been set up.
a file system, use the upgrade60.sh utility. For more information, see “Upgrading pre-6.0 file systems for software snapshots” (page 153). ◦ Data retention. Files used for data retention (including WORM and auto-commit) must be created on IBRIX software 6.1.1 or later, or the pre-6.1.1 file system containing the files must be upgraded for retention features. To upgrade a file system, use the ibrix_reten_adm -u -f FSNAME command.
If you are using NFS, verify that all NFS processes are stopped: ps -ef | grep nfs If necessary, use the following command to stop NFS services: /etc/init.d/nfs stop Use kill -9 to stop any NFS processes that are still running. If necessary, run the following command on all nodes to find any open file handles for the mounted file systems: lsof Use kill -9 to stop any processes that still have open file handles on the file systems. 12.
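As an illustration of the kill -9 guidance above, a minimal one-liner built from standard Linux tools (an assumption, not a documented IBRIX procedure) that terminates any remaining NFS processes by PID: ps -ef | grep '[n]fs' | awk '{print $2}' | xargs -r kill -9 The bracketed grep pattern keeps grep from matching its own process; review the PID list first (drop the final xargs stage) before running this on a production node.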
6. If you are using SMB, set the following parameters to synchronize the SMB software and the Fusion Manager database: • smb signing enabled • smb signing required • ignore_writethru Use ibrix_cifsconfig to set the parameters, specifying the value appropriate for your cluster (1=enabled, 0=disabled). The following examples set the parameters to the default values for the 6.
NOTE: To use the verify_client command, the 9000 client software must be installed. Upgrading Windows 9000 clients Complete the following steps on each client: 1. Remove the old Windows 9000 client software using the Add or Remove Programs utility in the Control Panel. 2. Copy the Windows 9000 client MSI file for the upgrade to the machine. 3. Launch the Windows Installer and follow the instructions to complete the upgrade. 4.
Restarting the utility If the upgrade is stopped or the system shuts down, you can restart the upgrade utility and it will continue the operation. (To stop an upgrade, press Ctrl-C on the command line or send an interrupt signal to the process.) There should be no adverse effects on the file system; however, any blocks that the file system had newly allocated at the time of the interruption are lost. Running ibrix_fsck in corrective mode recovers those blocks. NOTE: The upgrade60.
Automatic upgrade Check the following: • If the initial execution of /usr/local/ibrix/setup/upgrade fails, check /usr/local/ibrix/setup/upgrade.log for errors. It is imperative that all servers are up and running the IBRIX software before you execute the upgrade script. • If the install of the new OS fails, reboot the node. If the install does not begin after the reboot, power cycle the machine and select the upgrade line from the GRUB boot menu.
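For example, the log can be scanned for recent failures with standard tools (a sketch; the path is the one cited above): grep -Ei 'error|fail' /usr/local/ibrix/setup/upgrade.log | tail -n 20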
==================================================
[root@ib51-101 ibrix]# ibrix_fm -f
NAME IP ADDRESS
-------- ----------
ib51-101 15.226.51.101
ib51-102 10.10.51.102
1. If the node is hosting the active Fusion Manager, as in this example, stop the Fusion Manager on that node:
[root@ib51-101 ibrix]# /etc/init.d/ibrix_fusionmanager stop
Stopping Fusion Manager Daemon [ OK ]
[root@ib51-101 ibrix]# 2.
Upgrading the IBRIX software to the 5.6 release This section describes how to upgrade to the latest IBRIX software release. The management console and all file serving nodes must be upgraded to the new release at the same time. Note the following: • Upgrades to the IBRIX software 5.6 release are supported for systems currently running IBRIX software 5.5.x. If your system is running an earlier release, first upgrade to the 5.5 release, and then upgrade to 5.6.
FAILED message appears on the active management console, see the specified log file for details. Manual upgrades The manual upgrade process requires external storage that will be used to save the cluster configuration. Each server must be able to access this media directly, not through a network, as the network configuration is part of the saved configuration. HP recommends that you use a USB stick or DVD. NOTE: Be sure to read all instructions before starting the upgrade procedure.
5. When the IBRIX 9000 Network Storage System screen appears, enter qr to install the IBRIX software on the file serving node. The server reboots automatically after the software is installed. Remove the DVD from the DVD-ROM drive. Restoring the node configuration Complete the following steps on each node, starting with the previous active management console: 1. Log in to the node. The configuration wizard should pop up. Escape out of the configuration wizard. 2.
Automatic upgrade Check the following: • If the initial execution of /usr/local/ibrix/setup/upgrade fails, check /usr/local/ibrix/setup/upgrade.log for errors. It is imperative that all servers are up and running the IBRIX software before you execute the upgrade script. • If the install of the new OS fails, reboot the node. If the install does not begin after the reboot, power cycle the machine and select the upgrade line from the GRUB boot menu.
NOTE: If you are upgrading from an IBRIX 5.x release, any support tickets collected with the ibrix_supportticket command will be deleted during the upgrade. Download a copy of the archive files (.tgz) from the /admin/platform/diag/supporttickets directory. Upgrades can be run either online or offline: • Online upgrades. This procedure upgrades the software while file systems remain mounted.
To determine whether you have an agile management console configuration, run the ibrix_fm -i command. If the output reports the status as quorum is not configured, your cluster does not have an agile configuration. Be sure to use the upgrade procedure corresponding to your management console configuration: • For standard upgrades, use the procedure on page 162. • For agile upgrades, use the procedure on page 166.
Upgrading file serving nodes After the management console has been upgraded, complete the following steps on each file serving node: 1. From the management console, manually fail over the file serving node: /bin/ibrix_server -f -p -h HOSTNAME The node reboots automatically. 2. Move the /ibrix directory used in the previous release installation to ibrix.old.
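For example, the step 1 failover command with a hypothetical node name: /bin/ibrix_server -f -p -h titan15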
indicators match. If you followed all instructions and the version indicators do not match, contact HP Support. 4. Propagate a new segment map for the cluster: /bin/ibrix_dbck -I -f FSNAME 5. Verify the health of the cluster: /bin/ibrix_health -l The output should specify Passed / on. Standard offline upgrade This upgrade procedure is appropriate for major upgrades. The management console must be upgraded first. You can then upgrade file serving nodes in any order.
Upgrading the file serving nodes After the management console has been upgraded, complete the following steps on each file serving node: 1. Move the /ibrix directory used in the previous release installation to ibrix.old. For example, if you expanded the tarball in /root during the previous IBRIX installation on this node, the installer is in /root/ibrix. 2. Expand the distribution tarball or mount the distribution DVD in a directory of your choice.
The output should show Passed / on. Agile upgrade for clusters with an agile management console configuration Use these procedures if your cluster has an agile management console configuration. The IBRIX software 5.4.x to 5.5 upgrade can be performed either online or offline. Future releases may require offline upgrades. NOTE: Be sure to read all instructions before starting the upgrade procedure.
This step fails back the active management console role to the node currently hosting the passive agile management console (the node that originally was active). 9. Wait approximately 90 seconds for the failover to complete, and then run the following command on the node that was the target for the failover: /bin/ibrix_fm -i The command should report that the agile management console is now Active on this node. 10.
20. Change to the installer directory if necessary and run the upgrade: ./ibrixupgrade -f The installer upgrades both the management console software and the file serving node software on the node. 21. Verify the status of the management console: /etc/init.d/ibrix_fusionmanager status The status command confirms whether the correct services are running. Output will be similar to the following: Fusion Manager Daemon (pid 18748) running...
6. Verify that the ibrix and ipfs services are running:
lsmod | grep ibrix
ibrix 2323332 0 (unused)
lsmod | grep ipfs
ipfs1 102592 0 (unused)
If either grep command returns empty, contact HP Support.
7. From the management console, verify that the new version of IBRIX software FS/IAS has been installed on the file serving node: /bin/ibrix_version -l -S
8. If the upgrade was successful, fail back the file serving node: /bin/ibrix_server -f -U -h HOSTNAME
9.
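As a convenience, the step 6 check can be scripted (a sketch assuming a bash shell; not a documented IBRIX procedure):
lsmod | grep -q ibrix || echo "ibrix module not loaded; contact HP Support"
lsmod | grep -q ipfs || echo "ipfs module not loaded; contact HP Support"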
Preparing for the upgrade 1. On the active management console node, disable automated failover on all file serving nodes: /bin/ibrix_server -m -U 2. Verify that automated failover is off. In the output, the HA column should display off. /bin/ibrix_server -l 3. On the active management console node, stop the NFS and SMB services on all file serving nodes to prevent NFS and SMB clients from timing out.
9. Change to the installer directory if necessary and run the upgrade: ./ibrixupgrade -f The installer upgrades both the management console software and the file serving node software on the node. 10. On the node that was just upgraded and has its management console in maintenance mode, move the management console back to passive mode: /bin/ibrix_fm -m passive The node now resumes its normal backup operation for the active management console.
5. Verify that all version indicators match for file serving nodes. Run the following command from the active management console: /bin/ibrix_version -l If there is a version mismatch, run the /ibrix/ibrixupgrade -f script again on the affected node, and then recheck the versions. The installation is successful when all version indicators match. If you followed all instructions and the version indicators do not match, contact HP Support. 6.
B Component diagrams for 9300 systems
Front view of file serving node
Item Description
1 Quick-release levers (2)
2 HP Systems Insight Manager display
3 Hard drive bays
4 SATA optical drive bay
5 Video connector
6 USB connectors (2)
Rear view of file serving node
Item Description
1 PCI slot 5
2 PCI slot 6
3 PCI slot 4
4 PCI slot 2
5 PCI slot 3
6 PCI slot 1
7 Power supply 2 (PS2)
8 Power supply 1 (PS1)
9 USB connectors (2)
10 Video connector
11 NIC 1 connector
12 NIC 2 connector
13 Mouse connector
14 Keyboard connector
15 Serial connector
16 iLO 2 connector
17 NIC 3 connector
18 NIC 4 connector
Server PCIe card PCI slot
HP SC08Ge 3Gb SAS Host Bus Adapter 1
NC364T Quad 1Gb NIC 2
empty 3
empty 4
empty 5
empty 6
HP SC08Ge 3Gb SAS Host Bus Adapter 1
empty 2
empty 3
NC522SFP dual 10Gb NIC 4
empty 5
empty 6
HP SC08Ge 3Gb SAS Host Bus Adapter 1
NC364T Quad 1Gb NIC 2
empty 3
HP SC08Ge 3Gb SAS Host Bus Adapter 4
empty 5
empty 6
HP SC08Ge 3Gb SAS Host Bus Adapter 1
HP SC08Ge 3Gb SAS Host Bus Adapter 2
empty 3
NC522SFP dual 10Gb NIC 4
empty 5
empty 6
SATA 1G
C Spare parts list for 9300 systems This appendix lists spare parts (both customer replaceable and non-customer replaceable) for the 9300 Network Storage Gateway components. The spare parts information is current as of the publication date of this document. For the latest spare parts information, go to http://partsurfer.hp.com. Spare parts are categorized as follows: • Mandatory. Parts for which customer self repair is mandatory.
Description Spare part number Customer self repair
SPS-CAGE, HD, SFF 496074-001 Mandatory
SPS-CAGE, DVD OPT DRIVE 496076-001 Mandatory
SPS-BD, PCIX 496077-001 Optional
SPS-BD, PCIE 496078-001 Optional
SPS-BEZEL 496080-001 Mandatory
SPS-DIMM,8GB PC3-10600R,512MX4,ROHS 501536-001 Optional
SPS-DRV,HD,146GB,15K 2.
Description Spare part number Customer self repair
SPS-BD, SID 496073-001 Mandatory
SPS-CAGE, HD, SFF 496074-001 Mandatory
SPS-CAGE, DVD OPT DRIVE 496076-001 Mandatory
SPS-BD, PCIX 496077-001 Optional
SPS-BD, PCIE 496078-001 Optional
SPS-BEZEL 496080-001 Mandatory
SPS-DIMM,4GB PC3-10600R,256MX4,ROHS 501534-001 Mandatory
SPS - HW PLASTICS KIT DL180 G6 507260-001 Mandatory
SPS-DRV,HD,146GB,10K 2.
Description Spare part number
SPS-CA KIT, MISC 532393-001
SPS-CA ASSY,SATA,PWR/DATA 536398-001
SPS-DIMM 4GB PC3 10600R 512Mx4 595424-001
SPS-SYS IO BD DL380 G7 599038-001
SPS-AIR BAFFLE 599039-001
SPS-HOOD 602505-001
SPS-PROC E5620 2.40 12MB/1066 4C 614732-001
SPS-CAGE DL38X PCI 614778-001
SPS-RISER CAGE CONV KIT 617516-001
SPS-DRV HD 300GB 10K 2.
Description Spare part number Customer self repair
SPS-CAGE, DVD OPT DRIVE 496076-001 Mandatory
SPS-BD, PCIX 496077-001 Optional
SPS-BD, PCIE 496078-001 Optional
SPS-BEZEL 496080-001 Mandatory
SPS-DIMM,8GB PC3-10600R,512MX4,ROHS 501536-001 Optional
SPS-DRV,HD,146GB,15K 2.
Description Spare part number Customer self repair
SPS-BD, PCIX 496077-001 Optional
SPS-BD, PCIE 496078-001 Optional
SPS-BEZEL 496080-001 Mandatory
SPS-DIMM,4GB PC3-10600R,256MX4,ROHS 501534-001 Mandatory
SPS - HW PLASTICS KIT DL180 G6 507260-001 Mandatory
SPS-DRV,HD,146GB,10K 2.
Description Spare part number
SPS-AIR BAFFLE 599039-001
SPS-HOOD 602505-001
SPS-PROC E5620 2.40 12MB/1066 4C 614732-001
SPS-CAGE DL38X PCI 614778-001
SPS-RISER CAGE CONV KIT 617516-001
SPS-DRV HD 300GB 10K 2.5 HP 6G SAS 618518-001
SPS-UPS R/T3KVA 2U DTC HV INTL G2 638842-001
IB Network Storage Gateway (AW541A)
Description Spare part number Customer self repair
SPS-CORD,AC PWR IEC/IEC 6 FT 142258-001 Mandatory
SPS-CORD,AC PWR IEC/IEC 8.
Description Spare part number Customer self repair
SPS - HW PLASTICS KIT DL180 G6 507260-001 Mandatory
SPS-BACKPLANE,SAS 507690-001 Optional
SPS-POWER SUPPLY, 750W 511778-001 Optional
SPS-BD,4X QDR,PCIE,G2,DUAL PORT 519132-001 Optional
SPS-TRAY, DVD 532390-001 Mandatory
SPS-HARDWARE MTG KIT 574765-001 Mandatory
Description Spare part number Customer self repair
HUB/SWITCH ACCESSORY KIT 5069-5705 Mandatory
CABLE, CONSOLE D-SUB9 - RJ45 L250 5188-6699 Mandatory
PWR-CORD OPT-918 3-COND 2.
Description Spare part number Customer self repair
SPS-PANEL,SIDE,10642,10KG2 385971-001 Mandatory
SPS-STABLIZER,600MM,10GK2 385973-001 Mandatory
SPS-SHOCK PALLET,600MM,10KG2 385976-001 Mandatory
SPS-HARDWARE KIT,10KG2 385978-001 Mandatory
SPS-SWITCH,SVR CNSL,KVM,0X1X8 396630-001 Optional
SPS- CA,SRL/DWNLD,9PIN M/F 6' 397641-001 No
SPS-SWITCH,SVR CNSL,KVM,0X2X16,USB 410529-001 Mandatory
SPS-RACK,BUS BAR & WIRE TRAY 457015-001 Optional
SPS-STICK,4X FIXED,C-13,OFFSET,WW 483915-001
D System component and cabling diagrams for 9320 systems System component diagrams Front view of 9300c array controller or 9300cx 3.
Rear view of 9300c array controller
Item Description
1 Power supplies
2 Power switches
3 Host ports
4 CLI port
5 Network port
6 Service port (used by service personnel only)
7 Expansion port (connects to drive enclosure)
Rear view of 9300cx 3.
Front view of file serving node
Item Description
1 Quick-release levers (2)
2 HP Systems Insight Manager display
3 Hard drive bays
4 SATA optical drive bay
5 Video connector
6 USB connectors (2)
Rear view of file serving node
Item Description
1 PCI slot 5
2 PCI slot 6
3 PCI slot 4
4 PCI slot 2
5 PCI slot 3
6 PCI slot 1
7 Power supply 2 (PS2)
8 Power supply 1 (PS1)
9 USB connectors (2)
10 Video connector
11 NIC 1 connector
12 NIC 2 connector
13 Mouse connector
14 Keyboard connector
15 Serial connector
16 iLO 2 connector
17 NIC 3 connector
18 NIC 4 connector
Server PCIe card PCI slot
HP SC08Ge 3Gb SAS Host Bus Adapter 1
NC364T Quad 1Gb NIC 2
empty 3
empty 4
empty 5
empty 6
HP SC08Ge 3Gb SAS Host Bus Adapter 1
empty 2
empty 3
NC522SFP dual 10Gb NIC 4
empty 5
empty 6
HP SC08Ge 3Gb SAS Host Bus Adapter 1
NC364T Quad 1Gb NIC 2
empty 3
HP SC08Ge 3Gb SAS Host Bus Adapter 4
empty 5
empty 6
HP SC08Ge 3Gb SAS Host Bus Adapter 1
HP SC08Ge 3Gb SAS Host Bus Adapter 2
empty 3
NC522SFP dual 10Gb NIC 4
empty 5
empty 6
SATA 1G
Cabling diagrams
Cluster network cabling diagram
SATA option cabling
Line Description
SAS I/O path: Controller A
SAS I/O path: Controller B
SAS option cabling
Line Description
SAS I/O path: Array 1, Controller A
SAS I/O path: Array 1, Controller B
SAS I/O path: Array 2, Controller A
SAS I/O path: Array 2, Controller B
Drive enclosure cabling
Item Description
1 SAS controller in 9300c controller enclosure
2 I/O modules in four 9300cx drive enclosures
E Spare parts list for 9320 systems This appendix lists spare parts (both customer replaceable and non-customer replaceable) for the 9320 Network Storage System components. The spare parts information is current as of the publication date of this document. For the latest spare parts information, go to http://partsurfer.hp.com. Spare parts are categorized as follows: • Mandatory. Parts for which customer self repair is mandatory.
Description Spare part number CSR type
SPS-DRV,ODD, SLIM SATA DVD RW 481429-001 Optional
SPS-BD,8 PORT EXT,SAS,HBA 489103-001 Optional
SPS-PROC,NEHALEM EP 2.
Description Spare part number CSR type
SPS-POWER SUPPLY 481320-001 Optional
SPS-CHASSIS, W/MIDPLANE 481321-001 Optional
SPS-CABLE KIT 481322-001 Optional
SPS-ENCLOSURE,I/O MODULE 481342-001 Optional
SPS-BLANK,CNTRL 481343-001 Mandatory
SPS-BLANK,HDD 481344-001 Mandatory
SPS-SFP,XCVR 481345-001 Mandatory
SPS-DRV,ODD, SLIM SATA DVD RW 481429-001 Optional
SPS-BD,8 PORT EXT,SAS,HBA 489103-001 Optional
SPS-PROC,NEHALEM EP 2.
Description Spare part number CSR type
SPS-TRAY, DVD 532390-001 Mandatory
SPS-HARDWARE MTG KIT 574765-001 Mandatory
SPS-CHASSIS, 2U12 6G W/MIDPLANE 582938-001 Mandatory
SPS-CHASSIS, 2U24 6G W/MIDPLANE 582939-001 Mandatory
SPS-BD MEZZ 4X QDR IB CX-2 G2 DUAL PORT 593412-001 Optional
SPS-DRV HD 450GB 6G 15K 3.
9320 Capacity Block
HP IBRIX 9320 24 TB Capacity Block Starter Kit (QP333A)
Description Spare part number CSR type
SPS-CORD,AC PWR IEC/IEC 6 FT 142258-001 Mandatory
SPS-BD,NIC,X4 PCI-E,4 PORT,1000 BASE-T 436431-001 Mandatory
SPS-DRV, HD 146G SAS 2.
Description Spare part number CSR type
SPS-PWR SUPPLY 595W 592267-001 Optional
SPS-DRV HD 1TB 6G 7.2K 3.
9320 Storage
HP IBRIX 9320 72 TB LFF ML Storage Starter Kit (QZ722A)
Description Spare part number CSR type
SPS-CA,EXT MINI SAS, 2M 408767-001 Mandatory
SPS-CONTROLLER-P2000 G3 SAS 582934-001 Optional
SPS-CHASSIS 2U12 6G w/MIDPLANE 582938-001 Mandatory
SPS-CONTROLLER BD IO 6GB 592262-001 Optional
SPS-SIDE BEZEL 2U12/2U24 592263-001 Mandatory
SPS-CABLE CLI USB 592266-001 Mandatory
SPS-PWR SUPPLY 595W 592267-001 Optional
SPS-DRV HD 3TB SAS 7.2K 6G 3.
Description Spare part number CSR type
SPS-PWR SUPPLY 595W 592267-001 Optional
SPS-DRV HD 3TB SAS 7.2K 6G 3.5 656102-001 Mandatory
HP IBRIX 9320 7.2 TB SFF Ent Storage Starter Kit (QZ724B)
Description Spare part number CSR type
CABLE-CAT5E, RJ45, male/male 5183-2685 Mandatory
SPS-CA,EXT MINI SAS, 2M 408767-001 Mandatory
SPS-DRV HD 300GB 10K SFF M6625 SAS 583711-001 Mandatory
HP IBRIX 9320 7.
Description Spare part number CSR type
SPS-ASSY,CHASSIS, M6412 DISK SHELF 530834-001 No
SPS-POWER SUPPLY, 460W 536404-001 Mandatory
SPS-DRV HD 300GB 10K SFF M6625 SAS 583711-001 Mandatory
HP IBRIX 9320 21.
Description Spare part number CSR type
SPS-FAN ASSY,SAS,2600/2700 519325-001 Mandatory
SPS-ASSY,CHASSIS, M6412 DISK SHELF 530834-001 No
SPS-POWER SUPPLY, 460W 536404-001 Mandatory
SPS-DRV HD 900GB 6G 10K SFF M6625 SAS 665749-001 Mandatory
F Warnings and precautions Electrostatic discharge information To prevent damage to the system, be aware of the precautions you need to follow when setting up the system or handling parts. A discharge of static electricity from a finger or other conductor could damage system boards or other static-sensitive devices. This type of damage could reduce the life expectancy of the device.
Equipment symbols If the following symbols are located on equipment, hazardous conditions could exist. WARNING! Any enclosed surface or area of the equipment marked with these symbols indicates the presence of electrical shock hazards. Enclosed area contains no operator serviceable parts. To reduce the risk of injury from electrical shock hazards, do not open this enclosure. WARNING! Any RJ-45 receptacle marked with these symbols indicates a network interface connection.
WARNING! Verify that the AC power supply branch circuit that provides power to the rack is not overloaded. Overloading AC power to the rack power supply circuit increases the risk of personal injury, fire, or damage to the equipment. The total rack load should not exceed 80 percent of the branch circuit rating. Consult the electrical authority having jurisdiction over your facility wiring and installation requirements.
CAUTION: Protect the installed solution from power fluctuations and temporary interruptions with a regulating Uninterruptible Power Supply (UPS). This device protects the hardware from damage caused by power surges and voltage spikes, and keeps the system in operation during a power failure. CAUTION: To properly ventilate the system, you must provide at least 7.6 centimeters (3.0 inches) of clearance at the front and back of the device.
G Regulatory compliance notices Regulatory compliance identification numbers For the purpose of regulatory compliance certifications and identification, this product has been assigned a unique regulatory model number. The regulatory model number can be found on the product nameplate label, along with all required approval markings and information. When requesting compliance information for this product, always refer to this regulatory model number.
off and on, the user is encouraged to try to correct the interference by one or more of the following measures: • Reorient or relocate the receiving antenna. • Increase the separation between the equipment and receiver. • Connect the equipment into an outlet on a circuit that is different from that to which the receiver is connected. • Consult the dealer or an experienced radio or television technician for help.
Japanese notices Japanese VCCI-A notice Japanese VCCI-B notice Japanese VCCI marking Japanese power cord statement Korean notices Class A equipment Class B equipment Japanese notices 239
Taiwanese notices BSMI Class A notice Taiwan battery recycle statement Turkish recycling notice Türkiye Cumhuriyeti: EEE Yönetmeliğine Uygundur Vietnamese Information Technology and Communications compliance marking Laser compliance notices English laser notice This device may contain a laser that is classified as a Class 1 Laser Product in accordance with U.S. FDA regulations and the IEC 60825-1. The product does not emit hazardous laser radiation.
The Center for Devices and Radiological Health (CDRH) of the U.S. Food and Drug Administration implemented regulations for laser products on August 2, 1976. These regulations apply to laser products manufactured from August 1, 1976. Compliance is mandatory for products marketed in the United States.
Italian laser notice Japanese laser notice Spanish laser notice 242 Regulatory compliance notices
Recycling notices English recycling notice Disposal of waste equipment by users in private household in the European Union This symbol means do not dispose of your product with your other household waste. Instead, you should protect human health and the environment by handing over your waste equipment to a designated collection point for the recycling of waste electrical and electronic equipment.
Estonian recycling notice Äravisatavate seadmete likvideerimine Euroopa Liidu eramajapidamistes See märk näitab, et seadet ei tohi visata olmeprügi hulka. Inimeste tervise ja keskkonna säästmise nimel tuleb äravisatav toode tuua elektriliste ja elektrooniliste seadmete käitlemisega tegelevasse kogumispunkti. Küsimuste korral pöörduge kohaliku prügikäitlusettevõtte poole.
Italian recycling notice Smaltimento di apparecchiature usate da parte di utenti privati nell'Unione Europea Questo simbolo avvisa di non smaltire il prodotto con i normali rifiuti domestici. Rispettare la salute umana e l'ambiente conferendo l'apparecchiatura dismessa a un centro di raccolta designato per il riciclo di apparecchiature elettroniche ed elettriche. Per ulteriori informazioni, rivolgersi al servizio per lo smaltimento dei rifiuti domestici.
Romanian recycling notice Casarea echipamentului uzat de către utilizatorii casnici din Uniunea Europeană Acest simbol înseamnă să nu se arunce produsul cu alte deşeuri menajere. În schimb, trebuie să protejaţi sănătatea umană şi mediul predând echipamentul uzat la un punct de colectare desemnat pentru reciclarea echipamentelor electrice şi electronice uzate. Pentru informaţii suplimentare, vă rugăm să contactaţi serviciul de eliminare a deşeurilor menajere local.
Battery replacement notices Dutch battery notice French battery notice Battery replacement notices 247
German battery notice Italian battery notice 248 Regulatory compliance notices
Japanese battery notice Spanish battery notice Battery replacement notices 249
Glossary
ACE Access control entry.
ACL Access control list.
ADS Active Directory Service.
ALB Advanced load balancing.
BMC Baseboard Management Configuration.
CIFS Common Internet File System. The protocol used in Windows environments for shared folders.
CLI Command-line interface. An interface comprised of various commands which are used to control operating system responses.
CSR Customer self repair.
DAS Direct attach storage.
SELinux Security-Enhanced Linux.
SFU Microsoft Services for UNIX.
SID Secondary controller identifier number.
SNMP Simple Network Management Protocol.
TCP/IP Transmission Control Protocol/Internet Protocol.
UDP User Datagram Protocol.
UID Unit identification.
VACM SNMP View Access Control Model.
VC HP Virtual Connect.
VIF Virtual interface.
WINS Windows Internet Naming Service.
WWN World Wide Name. A unique identifier assigned to a Fibre Channel device.
WWNN World wide node name.
Index Symbols /etc/sysconfig/i18n file, 13 9000 clients add to host group, 64 change IP address, 103 identify a user network interface, 101 monitor status, 75 prefer a user network interface, 102 start or stop processes, 91 troubleshooting, 134 tune, 91 tune locally, 94 user interface, 18 view process status, 91 9300 system components, 10 configuration, 12 features, 10 management interfaces, 13 shut down, 88 software, 10 start, 89 9320 system components, 10 configuration, 12 features, 10 management interfac
operational states, 75 power management, 90 prefer a user network interface, 102 remove from cluster, 100 rolling reboot, 90 run health check, 135 start or stop processes, 91 statistics, 80 troubleshooting, 134 tune, 91 view process status, 91 file system migrate segments, 95 firewall configuration, 19 firmware, upgrade, 125 Fusion Manager agile, 36 back up configuration, 59 failover, 36 G grounding methods, 233 GUI add users, 17 change password, 19 customize, 17 Details panel, 16 Navigator, 16 open, 14 vi
add routing table entries, 104 bonded and virtual interfaces, 101 defined, 100 delete, 105 delete routing table entries, 104 guidelines, 31 viewing, 105 Network Storage System configuration, 12 management interfaces, 13 NIC failover, 32 NTP servers, 20 P passwords, change GUI password, 19 Phone Home, 22 ports, open, 19 power sources, server, 44 install, 81 log files, 87 maintain configuration, 85 processes, 86 reports, 83 space requirements, 84 troubleshooting, 86 uninstall, 87 upgrade, 82 Storage softwar
Windows 9000 clients, upgrade, 121, 153 255