Dell Compellent Storage Center Fibre Channel Storage Arrays With Microsoft Windows Server Failover Clusters Hardware Installation and Troubleshooting Guide
Notes, Cautions, and Warnings

NOTE: A NOTE indicates important information that helps you make better use of your computer.

CAUTION: A CAUTION indicates potential damage to hardware or loss of data if instructions are not followed.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

Information in this publication is subject to change without notice.
© 2011 Dell Inc. All rights reserved.
Contents

1 Introduction
    Cluster Solution . . . 6
        Cluster Storage . . . 7
    Other Documents You May Need . . . 9

2 Cabling Your Cluster Hardware . . . 11
    Cabling the Mouse, Keyboard, and Monitor . . . 11
    Cabling the Power Supplies . . . 11

3 Preparing Your Systems for Clustering
    Installing the Fibre Channel HBA Drivers . . . 28
    Implementing Zoning on a Fibre Channel Switched Fabric . . . 28
    Installing and Configuring the Shared Storage System . . . 29
        Setting Up the Controllers . . . 30
        Create a Server . . . 31
        Create a Server Cluster
        Create a Volume for the Server Cluster
1 Introduction

A Dell Failover Cluster combines specific hardware and software components to provide enhanced availability for applications and services that run on the cluster. A Failover Cluster is designed to reduce the possibility of a single point of failure within the system causing the clustered applications or services to become unavailable.
Implementing Fibre Channel technology in a cluster provides the following advantages: • Flexibility—Fibre Channel allows a distance of up to 10 km between switches without degrading the signal. • Availability—Fibre Channel components use redundant connections providing multiple data paths and greater availability for clients. • Connectivity—Fibre Channel allows more device connections than Small Computer System Interface (SCSI).
Table 1-1. Cluster Node Requirements (continued)
Component: Internal disk controller
Minimum Requirement: One controller connected to at least two internal hard drives for each node. Use any supported RAID controller or disk controller. Two hard drives are required for mirroring (RAID 1) and at least three are required for disk striping with parity (RAID 5).
NOTE: It is highly recommended that you use hardware-based RAID or software-based disk-fault tolerance for the internal drives.
Table 1-2. Cluster Storage Requirements (continued)
Hardware Component: Fibre Channel switch
Requirement: At least two 8 Gbps Fibre Channel switches. The switches must support NPIV in order to support Virtual Port mode.
Hardware Component: Multiple clusters and stand-alone systems
Requirement: Can share a storage system. See "Installing and Configuring the Shared Storage System" on page 29.
NOTE: NPIV allows multiple port IDs to share a single physical port.
Optional software for the shared storage system includes: • Data Progression—leverages cost and performance differences between storage tiers, allowing the maximum use of lower-cost drives for stored data, while maintaining high performance drives for frequently-accessed data. • Data Instant Replay—A Replay is a point-in-time copy of one or more volumes. Once an initial Replay of a volume is taken, subsequent Replays preserve pointers to data that has changed since the previous Replay.
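As a rough illustration of the pointer-preserving behavior of Replays, the following is a minimal copy-on-write sketch in Python. It is conceptual only; the class and method names are invented for illustration and do not reflect Storage Center's actual implementation:

```python
# Conceptual copy-on-write sketch: a Replay preserves a view of the
# volume's blocks at a point in time; later writes rebind blocks to
# new data, leaving the Replay's view intact.
class Volume:
    def __init__(self):
        self.blocks = {}      # block number -> data
        self.replays = []     # saved point-in-time views

    def write(self, block: int, data: bytes) -> None:
        # Rebuilding the mapping leaves older Replay views untouched.
        self.blocks = {**self.blocks, block: data}

    def take_replay(self) -> dict:
        view = dict(self.blocks)  # references to current blocks, not data copies
        self.replays.append(view)
        return view

vol = Volume()
vol.write(0, b"original")
replay = vol.take_replay()
vol.write(0, b"changed")
assert replay[0] == b"original"    # the Replay still sees the old data
assert vol.blocks[0] == b"changed"
```

Because a Replay only keeps references rather than full copies, taking one is fast and consumes space only as subsequent writes diverge from the preserved view.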
• Operating system documentation describes how to install (if necessary), configure, and use the operating system software. • Documentation for any components you purchased separately provides information to configure and install those options. • The Dell PowerVault tape library documentation provides information for installing, troubleshooting, and upgrading the tape library. • Any other documentation that came with your server or storage system.
2 Cabling Your Cluster Hardware

Cabling the Mouse, Keyboard, and Monitor

When installing a cluster configuration in a rack, you must include a switch box to connect the mouse, keyboard, and monitor to the nodes. See the documentation included with your rack for instructions on cabling each node to the switch box.

Cabling the Power Supplies

See the documentation for each component in your cluster solution and ensure that the specific power requirements are satisfied.
Figure 2-1. Power Cabling Example With Two Power Supplies in PowerEdge Systems
(Figure shows cluster nodes 1 and 2 and Compellent Storage Center controllers 1 and 2, with the primary power supplies on one AC power strip or AC PDU and the redundant power supplies on another.)

Cabling Your Cluster for Public and Private Networks

The network adapters in the cluster nodes provide at least two network connections for each node. See Table 2-1.
Figure 2-2 shows an example of cabling in which dedicated network adapters in each node are connected to each other (for the private network) and the remaining network adapters are connected to the public network.

Figure 2-2.
Table 2-2. Private Network Hardware Components and Connections
Method: Network switch
Hardware Components: Gigabit or 10 Gigabit Ethernet network adapters and switches
Connection: Depending on the hardware, connect the CAT5e or CAT6 cables, the multimode optical cables with Local Connectors (LCs), or the twinax cables from the network adapters in the nodes to a switch.
Cabling the Storage System

This section provides information on cabling your cluster to a storage system in a SAN-attached configuration.

Cabling a Cluster to a Compellent Storage Center Storage System

A SAN-attached cluster is a cluster configuration in which all cluster nodes attach to the storage system through a SAN using a redundant switch fabric. SAN-attached cluster configurations provide flexibility, expandability, and performance.
Figure 2-3.
Figure 2-4.
Cabling the Compellent Storage Center Back-End

For information on how to cable the Compellent Storage Center back-end, see the Compellent Storage Center documentation. The following are two examples of how to connect the back-end cables.

Figure 2-5.
Figure 2-6. Back-End Cabling With Multiple SAS Chains
(Figure shows Storage Center Controllers 1 and 2 connected through I/O cards to Storage Center SAS storage enclosures.)

Cabling the Cluster Nodes and the Compellent Storage Center Front-End

The cluster nodes attach to the storage system using a redundant switch fabric and fibre optic cables with duplex LC multimode connectors.
Each HBA port is cabled to a port on a Fibre Channel switch. One or more cables connect from the outgoing ports on a switch to a storage controller on a Compellent storage system. A sketch that checks the resulting cabling plan follows these steps.

1 Connect cluster node 1 to the SAN:
   a Connect a cable from HBA port 0 to Fibre Channel switch 0 (sw0).
   b Connect a cable from HBA port 1 to Fibre Channel switch 1 (sw1).
2 Repeat step 1 for each additional cluster node.
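The redundancy rule behind these steps is that every node keeps one path through each of the two switch fabrics, so losing a single switch never removes all paths to storage. The following Python sketch expresses that check; the node names are hypothetical, while sw0 and sw1 mirror the switch names used above:

```python
# Minimal sketch (not a Dell tool): verify that every cluster node has
# one HBA port cabled to each of the two redundant switch fabrics.
cabling = {
    "node1": {"hba_port_0": "sw0", "hba_port_1": "sw1"},
    "node2": {"hba_port_0": "sw0", "hba_port_1": "sw1"},
}
fabrics = {"sw0", "sw1"}

for node, ports in cabling.items():
    connected = set(ports.values())
    missing = fabrics - connected
    if missing:
        raise ValueError(f"{node} has no path through fabric(s): {missing}")
print("Every node has a path through both fabrics.")
```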
Figure 2-7. Cabling a SAN-Attached Cluster to the Compellent Storage System
(Figure shows Cluster Nodes 1 and 2 connected through Fibre Channel Switch 0 (sw0) in Domain 1 and Fibre Channel Switch 1 (sw1) in Domain 2 to Storage Center Controllers 1 and 2.)

NOTE: Additional cables can be connected from the Fibre Channel switches to the storage system if there are available front-end Fibre Channel ports on the storage processors.
3 For each additional cluster, repeat step 1 and step 2.
4 Connect the storage system to the SAN:
   a Connect a cable from Fibre Channel switch 0 (sw0) to the first front-end Fibre Channel port on Storage Center Controller 1.
   b Connect a cable from Fibre Channel switch 0 (sw0) to the first front-end Fibre Channel port on Storage Center Controller 2.
   c Connect a cable from Fibre Channel switch 1 (sw1) to the second front-end Fibre Channel port on Storage Center Controller 1.
Figure 2-8. Cabling a Storage System and a Tape Library
(Figure shows two cluster nodes on a private network, connected through two Fibre Channel switches to the storage system and a tape library.)

Obtaining More Information

See the storage and tape backup documentation for more information on configuring these components.

Configuring Your Cluster With SAN Backup

You can provide centralized backup for your clusters by sharing your SAN with multiple clusters, storage systems, and a tape library.
Figure 2-9.
3 Preparing Your Systems for Clustering

WARNING: Only trained service technicians are authorized to remove and access any of the components inside the system. See your safety information for complete information about safety precautions, working inside the computer, and protecting against electrostatic discharge.

Cluster Configuration Overview

1 Ensure that your site can handle the cluster's power requirements. Contact your sales representative for information about your region's power requirements.
5 Configure each cluster node as a member of the same Microsoft Windows Active Directory domain.

NOTE: You can configure the cluster nodes as Domain Controllers. For more information, see the "Selecting a Domain Model" section of the Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide at support.dell.com/manuals.
13 Configure client systems to access the highly available applications and services that are hosted on your failover cluster.

Installation Overview

Each node in your Dell Failover Cluster must be installed with the same release, edition, service pack, and processor architecture of the Windows Server operating system. For example, all nodes in your cluster may be configured with Windows Server 2008 R2, Enterprise x64 Edition.
Installing the Fibre Channel HBA Drivers

For more information about installing and configuring HBAs, see the following:
• Compellent HBAs—The Compellent documentation that is included with your HBA kit.
• Emulex HBAs—Emulex support at emulex.com or Dell Support at support.dell.com.
• QLogic HBAs—QLogic support at qlogic.com or Dell Support at support.dell.com.

For more information about supported HBA controllers and drivers, see the Dell Cluster Configuration Support Matrices at dell.com/ha.
A WWN is a unique numeric identifier assigned to Fibre Channel interfaces, such as HBA ports, storage controller ports, and Fibre Channel to SCSI bridges or storage network controllers (SNCs). A WWN consists of an 8-byte hexadecimal number with each byte separated by a colon. For example, 10:00:00:60:69:00:00:8a is a valid WWN. WWN port name zoning allows you to move cables between switch ports within the fabric without having to update the zones.
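As an aside, the colon-separated WWN format described above is straightforward to validate programmatically. The following Python sketch is illustrative only; the names are invented and it is not part of any Dell or Compellent tooling:

```python
import re

# Matches an 8-byte WWN written as colon-separated hex pairs,
# for example "10:00:00:60:69:00:00:8a".
WWN_PATTERN = re.compile(r"^([0-9a-fA-F]{2}:){7}[0-9a-fA-F]{2}$")

def is_valid_wwn(wwn: str) -> bool:
    """Return True if the string is a well-formed 8-byte WWN."""
    return WWN_PATTERN.match(wwn) is not None

assert is_valid_wwn("10:00:00:60:69:00:00:8a")
assert not is_valid_wwn("10:00:00:60:69:00:8a")  # only 7 bytes
```

A check like this is useful when transcribing WWPNs into zoning configuration forms, where a dropped byte is easy to miss by eye.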
Installing and Configuring the Shared Storage System

The following pre-installation documentation is provided by your Storage Architect or Business Partner:
• A list of the hardware needed to support storage requirements
• Optional connectivity diagrams to illustrate cabling between the controllers, enclosures, network, and servers
• Optional network information, such as IP addresses, subnet masks, and gateways

These documents provide information about site-specific settings used to configure the controllers.

Setting Up the Controllers

1 Turn on each controller.
   c Check for Storage Center updates.

Create a Server

1 From the system tree in the Storage Management window, select the Servers node.
2 From the shortcut menu, select Create Server. The Create Server Wizard appears. The wizard lists the Host Bus Adapters (HBAs) recognized by the Storage Center.
3 Select one or more HBAs belonging to the server.
4 Click Continue. A window allowing you to name the server is displayed.
5 Enter a name for the server or accept the default.
Create a Server Cluster

A server cluster is a collection of servers. A server that is a member of a server cluster is referred to as a cluster node. Volumes can be mapped directly to a server cluster. All volumes mapped to a server cluster are automatically mapped to all nodes in the cluster, as illustrated in the sketch after these steps.

1 From the system tree in the Storage Management window, select the Servers node.
2 From the shortcut menu, select Create Server Cluster.
3 Choose Add Existing Server.
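The following is a minimal conceptual sketch in Python of the cluster-level mapping behavior described above. The names and data structures are invented for illustration and do not reflect Storage Center's actual model:

```python
# Conceptual sketch: a volume mapped at the cluster level becomes
# visible to every node, so all nodes share the same storage.
cluster_nodes = ["node1", "node2"]

def map_volume_to_cluster(volume: str) -> dict:
    """Return the per-node mappings implied by one cluster-level mapping."""
    return {node: volume for node in cluster_nodes}

print(map_volume_to_cluster("quorum"))
# {'node1': 'quorum', 'node2': 'quorum'}
```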
Create a Volume for the Server Cluster

Volumes are configured through the Configure Volume Defaults window. To create a volume for your cluster:

1 On the Storage Management window, select Create Volume. The Create Volume Wizard asks you to enter a volume size.
2 Enter a volume size in GB, TB, or PB. The maximum size of a volume is 10 PB (a validation sketch follows step 3).
NOTE: If your User Volume Defaults allow you to modify cache settings or Storage Profiles, an Advanced button appears.
3 Click Continue.
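As a small illustration of the size limit in step 2, this Python sketch validates a size entry against the 10 PB maximum. The helper is hypothetical and not part of Storage Center:

```python
# Hypothetical helper: validate a volume size entry such as "500 GB",
# "2 TB", or "10 PB" against the 10 PB maximum volume size.
UNITS = {"GB": 1, "TB": 1024, "PB": 1024 ** 2}   # expressed in GB
MAX_SIZE_GB = 10 * UNITS["PB"]                   # 10 PB

def validate_volume_size(entry: str) -> float:
    """Return the size in GB, or raise if it exceeds the maximum."""
    amount, unit = entry.split()
    size_gb = float(amount) * UNITS[unit.upper()]
    if size_gb > MAX_SIZE_GB:
        raise ValueError(f"{entry} exceeds the 10 PB maximum volume size")
    return size_gb

print(validate_volume_size("2 TB"))   # 2048.0 (GB)
try:
    validate_volume_size("11 PB")
except ValueError as exc:
    print(exc)                        # 11 PB exceeds the 10 PB maximum volume size
```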
13 Click Create Now. The volume is mapped.
14 If you want to create another volume for the server cluster, select Create Volume and repeat step 2 through step 14. Otherwise, click Close to exit the wizard.

Optional Storage Features

Your Compellent Storage Center storage array may be configured to provide optional features that can be used in conjunction with your cluster.
• Enterprise Manager—A separately licensed application that manages and monitors multiple Storage Center systems.
This allows for more efficient link utilization and data transfer optimization. It also means that in the event of a local failure, writes present on the source system may not be present on the remote system. Remote Instant Replay can be initiated through either Storage Center or Enterprise Manager.
4 Troubleshooting

This appendix provides troubleshooting information for your cluster configuration. Table 4-1 describes general cluster problems you may encounter and the probable causes and solutions for each problem.

Table 4-1. General Cluster Troubleshooting
Problem: The nodes cannot access the storage system, or the cluster software is not functioning with the storage system.
Table 4-1. General Cluster Troubleshooting (continued)
Problem: One of the nodes takes a long time to join the cluster, or one of the nodes fails to join the cluster.
Probable Cause: The node-to-node network has failed due to a cabling or hardware failure.
Corrective Action: Check the network cabling. Ensure that the node-to-node interconnection and the public network are connected to the correct NICs.
Probable Cause: One or more nodes may have the Internet Connection Firewall enabled, blocking Remote Procedure Call (RPC) communications between the nodes.
Corrective Action: Configure the Internet Connection Firewall to allow communications that are required by MSCS and the clustered applications or services.
Table 4-1. General Cluster Troubleshooting (continued)
Problem: Attempts to connect to a cluster using Cluster Administrator fail.
Probable Cause: The Cluster Service has not been started.
Corrective Action: Verify that the Cluster Service is running and that the cluster is formed.
Probable Cause: A cluster has not been formed on the system.
Corrective Action: Use the Event Viewer and look for events logged by the Cluster Service indicating that the cluster was formed or joined.
Table 4-1. General Cluster Troubleshooting (continued)
Problem: You are prompted to configure one network instead of two during MSCS installation.
Probable Cause: The TCP/IP configuration is incorrect.
Corrective Action: The node-to-node network and public network must be assigned static IP addresses on different subnets.
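As an illustration of this corrective action, the different-subnet requirement can be checked with Python's standard ipaddress module. The addresses below are examples only:

```python
import ipaddress

# The private (node-to-node) and public networks must be on different
# subnets; overlapping subnets cause MSCS to detect only one network.
private = ipaddress.ip_interface("10.0.0.1/24")      # example private address
public = ipaddress.ip_interface("192.168.1.1/24")    # example public address

if private.network.overlaps(public.network):
    print("Misconfigured: both adapters are on the same subnet.")
else:
    print(f"OK: {private.network} and {public.network} are distinct subnets.")
```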
Table 4-1. General Cluster Troubleshooting (continued)
Problem: Public network clients cannot access the applications or services that are provided by the cluster.
Probable Cause: One or more nodes may have the Internet Connection Firewall enabled, blocking RPC communications between the nodes.
Corrective Action: Configure the Internet Connection Firewall to allow communications that are required by MSCS and the clustered applications or services.
5 Zoning Configuration Form

(Form columns: Node HBA WWPNs or Alias Names | Storage WWPNs or Alias Names | Zone Name | Zone Set for Configuration Name)
6 Cluster Data Form

You can attach the following form in a convenient location near each cluster node or rack to record information about the cluster. Use the form when you call for technical support.

Table 6-1. Cluster Information
(Form fields, to be filled in for your cluster solution: Cluster name and IP address; Server type; Installer; Date installed; Applications; Location; Notes)

Table 6-2.
Additional Networks

Table 6-3.
Index

C
cable configurations: cluster interconnect, 13; for client networks, 13; for mouse, keyboard, and monitor, 11; for power supplies, 11
cabling: multiple SAN-attached clusters to a Compellent storage system, 21
cluster storage: requirements, 7
clustering: overview, 5

D
drivers: installing and configuring Emulex, 28

H
HBA drivers: installing and configuring, 28
host bus adapter: configuring the Fibre Channel HBA, 27

K
keyboard: cabling, 11

M
monitor: cabling, 11
mouse: cabling, 11
MSCS: installing a

O
overview: installation, 27

P
power supplies: cabling, 11
private network: cabling, 12-13; hardware components, 14; hardware components and connections, 13
public network: cabling, 12

S
SAN: configuring SAN backup in your cluster, 23
single initiator zoning: about, 29

T
tape library: connecting to a PowerEdge cluster, 22
troubleshooting: connecting to a cluster, 39; shared storage subsystem, 37

W
warranty, 9

Z
zones: implementing on a Fibre Channel switched fabric, 28