Dell/EMC CX4-series Fibre Channel Storage Arrays With Microsoft® Windows Server® Failover Clusters Hardware Installation and Troubleshooting Guide
Notes, Cautions, and Warnings

NOTE: A NOTE indicates important information that helps you make better use of your computer.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

Information in this document is subject to change without notice. © 2008 Dell Inc. All rights reserved.
Contents

1 Introduction
   Cluster Solution
   Cluster Hardware Requirements
      Cluster Nodes
      Cluster Storage
   Supported Cluster Configurations
      Direct-Attached Cluster
   Other Documents You May Need

2 Cabling Your Cluster Hardware

3 Preparing Your Systems for Clustering
   Cluster Configuration Overview
   Installation Overview
   Installing the Fibre Channel HBAs
   Installing the Fibre Channel HBA Drivers
   Implementing Zoning on a Fibre Channel Switched Fabric
   Using Zoning in SAN Configurations Containing Multiple Hosts

A Troubleshooting
Introduction A Dell™ Failover Cluster combines specific hardware and software components to provide enhanced availability for applications and services that are run on the cluster. A Failover Cluster is designed to reduce the possibility of any single point of failure within the system that can cause the clustered applications or services to become unavailable.
Cluster Solution Your cluster supports a minimum of two nodes and a maximum of either eight nodes (for Windows Server 2003) or sixteen nodes (for Windows Server 2008), and provides the following features:
• 8-Gbps and 4-Gbps Fibre Channel technology
• High availability of resources to network clients
• Redundant paths to the shared storage
• Failure recovery for applications and services
• Flexible maintenance capabilities, allowing you to repair, maintain, or upgrade a node or storage system without taking the entire cluster offline
Cluster Nodes Table 1-1 lists the hardware requirements for the cluster nodes.

Table 1-1. Cluster Node Requirements

Component: Cluster nodes. Minimum Requirement: A minimum of two identical PowerEdge servers. The maximum number of supported nodes depends on the variant of the Windows Server operating system used in your cluster and on the physical topology in which the storage system and nodes are interconnected.
Cluster Storage Table 1-2 lists supported storage systems and the configuration requirements for the cluster nodes and stand-alone systems connected to the storage systems.

Table 1-2. Cluster Storage Requirements

Hardware Component: Supported storage systems. Requirement: One to four supported Dell/EMC storage systems. See Table 1-3 for specific storage system requirements.
Each storage system in the cluster is centrally managed by one host system (also called a management station) running EMC Navisphere® Manager—a centralized storage management application used to configure Dell/EMC storage systems. Using a graphical user interface (GUI), you can select a specific view of your storage arrays, as shown in Table 1-4.
Supported Cluster Configurations The following sections describe the supported cluster configurations. Direct-Attached Cluster In a direct-attached cluster, all the nodes of the cluster are directly attached to a single storage system. In this configuration, the RAID controllers (or storage processors) on the storage system are connected by cables directly to the Fibre Channel HBA ports in the nodes. Figure 1-1 shows a basic direct-attached, single-cluster configuration.

Figure 1-1. Direct-Attached, Single-Cluster Configuration
SAN-Attached Cluster In a SAN-attached cluster, all nodes are attached to a single storage system or to multiple storage systems through a SAN using redundant switch fabrics. SAN-attached clusters are superior to direct-attached clusters in configuration flexibility, expandability, and performance. Figure 1-2 shows a SAN-attached cluster.

Figure 1-2. SAN-Attached Cluster
• The Getting Started Guide provides an overview of initially setting up your system. • For more information on deploying your cluster with Windows Server 2003 operating systems, see the Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide. • For more information on deploying your cluster with Windows Server 2008 operating systems, see the Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide.
Cabling Your Cluster Hardware NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com. Cabling the Mouse, Keyboard, and Monitor When installing a cluster configuration in a rack, you must include a switch box to connect the mouse, keyboard, and monitor to the nodes.
Figure 2-1. Power Cabling Example With One Power Supply in the PowerEdge Systems (primary power supplies on one AC power strip or AC Power Distribution Unit [PDU]; redundant power supplies on a separate AC power strip or PDU)

NOTE: This illustration is intended only to demonstrate the power distribution of the components.

Figure 2-2. Power Cabling Example With Two Power Supplies in the PowerEdge Systems (primary power supplies on one AC power strip or PDU; redundant power supplies on a separate AC power strip or PDU)

NOTE: This illustration is intended only to demonstrate the power distribution of the components.
Table 2-1. Network Connections

Network Connection: Public network. Description: All connections to the client LAN. At least one public network must be configured for Mixed mode for private network failover.
Network Connection: Private network. Description: A dedicated connection for sharing cluster health and status information only.
Cabling the Private Network The private network connection to the nodes is provided by a different network adapter in each node. This network is used for intra-cluster communications. Table 2-2 describes three possible private network configurations.

Table 2-2. Private Network Hardware Components and Connections

Method: Network switch. Hardware Components: Gigabit Ethernet network adapters and switches. Connection: Connect standard Ethernet cables from the network adapters in the nodes to a Gigabit Ethernet switch.
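After the private network is cabled, the private adapter in each node is typically assigned a static address on a dedicated subnet, separate from the public LAN. The following is a minimal sketch only; the connection name and addresses are illustrative assumptions, not values from this guide:

rem Run on node 1; node 2 would use 10.0.0.2, and so on
netsh interface ip set address name="Private" static 10.0.0.1 255.255.255.0

Keeping the private subnet distinct from the public subnet is what allows the cluster software to distinguish the two networks.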
Cabling Storage for Your Direct-Attached Cluster A direct-attached cluster configuration consists of redundant Fibre Channel host bus adapter (HBA) ports cabled directly to a Dell/EMC storage system. Figure 2-4 shows an example of a direct-attached, single-cluster configuration with redundant HBA ports installed in each cluster node.

Figure 2-4. Direct-Attached Cluster Configuration
Cabling a Cluster to a Dell/EMC Storage System Each cluster node attaches to the storage system using two fiber-optic cables with duplex local connector (LC) multimode connectors that attach to the HBA ports in the cluster nodes and the storage processor (SP) ports in the Dell/EMC storage system. These connectors consist of two individual fiber-optic connectors with indexed tabs that must be inserted and aligned properly in the HBA ports and SP ports.
Figure 2-5. Cabling a Two-Node Cluster to a CX4-120 or CX4-240 Storage System (cluster node 1 and cluster node 2, each with two HBA ports, cabled to SP-A and SP-B)

Figure 2-6. Cabling a Two-Node Cluster to a CX4-480 Storage System

Figure 2-7. Cabling a Two-Node Cluster to a CX4-960 Storage System (cluster node 1 and cluster node 2, each with two HBA ports, cabled to SP-A and SP-B)

Cabling a Multi-Node Cluster to a Dell/EMC Storage System You can configure a cluster with more than two nodes in a direct-attached configuration using a Dell/EMC storage system, depending on the availability of front-end Fibre Channel ports.
2 Connect cluster node 2 to the storage system: a Install a cable from cluster node 2 HBA port 0 to the second front-end Fibre Channel port on SP-A. b Install a cable from cluster node 2 HBA port 1 to the second front-end Fibre Channel port on SP-B. 3 Connect cluster node 3 to the storage system: a Install a cable from cluster node 3 HBA port 0 to the third front-end Fibre Channel port on SP-A. b Install a cable from cluster node 3 HBA port 1 to the third front-end Fibre Channel port on SP-B.
Cabling Two Two-Node Clusters to a Dell/EMC Storage System The following steps are an example of how to cable two two-node clusters. The Dell/EMC storage system must have at least four front-end Fibre Channel ports available on each storage processor. 1 In the first cluster, connect cluster node 1 to the storage system: a Install a cable from cluster node 1 HBA port 0 to the first front-end Fibre Channel port on SP-A.
Figure 2-8 shows an example of a two-node SAN-attached cluster. Figure 2-9 shows an example of an eight-node SAN-attached cluster. Similar cabling concepts can be applied to clusters that contain a different number of nodes. NOTE: The connections listed in this section are representative of one proven method of ensuring redundancy in the connections between the cluster nodes and the storage system. Other methods that achieve the same type of redundant connectivity may be acceptable.

Figure 2-8. Two-Node SAN-Attached Cluster
Figure 2-9. Eight-Node SAN-Attached Cluster
Cabling a SAN-Attached Cluster to a Dell/EMC Storage System The cluster nodes attach to the storage system using a redundant switch fabric and fiber-optic cables with duplex LC multimode connectors. The switches, the HBA ports in the cluster nodes, and the SP ports in the storage system use duplex LC multimode connectors.
Cabling a SAN-Attached Cluster to a Dell/EMC CX4-120 or CX4-240 Storage System 1 Connect cluster node 1 to the SAN: a Connect a cable from HBA port 0 to Fibre Channel switch 0 (sw0). b Connect a cable from HBA port 1 to Fibre Channel switch 1 (sw1). 2 Repeat step 1 for each additional cluster node. 3 Connect the storage system to the SAN: a Connect a cable from Fibre Channel switch 0 (sw0) to the first front-end Fibre Channel port on SP-A. b Connect a cable from Fibre Channel switch 0 (sw0) to the first front-end Fibre Channel port on SP-B. c Connect a cable from Fibre Channel switch 1 (sw1) to the second front-end Fibre Channel port on SP-A. d Connect a cable from Fibre Channel switch 1 (sw1) to the second front-end Fibre Channel port on SP-B.
Figure 2-10. Cabling a SAN-Attached Cluster to the Dell/EMC CX4-120 or CX4-240 (cluster nodes 1 and 2 connect through switches sw0 and sw1 to SP-A and SP-B)

Cabling a SAN-Attached Cluster to the Dell/EMC CX4-480 or CX4-960 Storage System 1 Connect cluster node 1 to the SAN: a Connect a cable from HBA port 0 to Fibre Channel switch 0 (sw0). b Connect a cable from HBA port 1 to Fibre Channel switch 1 (sw1).
2 Repeat step 1 for each additional cluster node. 3 Connect the storage system to the SAN: a Connect a cable from Fibre Channel switch 0 (sw0) to the first front-end Fibre Channel port on SP-A. b Connect a cable from Fibre Channel switch 0 (sw0) to the first front-end Fibre Channel port on SP-B. c Connect a cable from Fibre Channel switch 0 (sw0) to the second front-end Fibre Channel port on SP-A. d Connect a cable from Fibre Channel switch 0 (sw0) to the second front-end Fibre Channel port on SP-B. e Connect a cable from Fibre Channel switch 1 (sw1) to the third front-end Fibre Channel port on SP-A. f Connect a cable from Fibre Channel switch 1 (sw1) to the third front-end Fibre Channel port on SP-B. g Connect a cable from Fibre Channel switch 1 (sw1) to the fourth front-end Fibre Channel port on SP-A. h Connect a cable from Fibre Channel switch 1 (sw1) to the fourth front-end Fibre Channel port on SP-B.
Figure 2-12. Cabling a SAN-Attached Cluster to the Dell/EMC CX4-960
Cabling Multiple SAN-Attached Clusters to the CX4-120 or CX4-240 Storage System 1 In the first cluster, connect cluster node 1 to the SAN: a Connect a cable from HBA port 0 to Fibre Channel switch 0 (sw0). b Connect a cable from HBA port 1 to Fibre Channel switch 1 (sw1). 2 In the first cluster, repeat step 1 for each additional cluster node. 3 For each additional cluster, repeat step 1 and step 2. 4 Connect the storage system to the SAN: a Connect a cable from Fibre Channel switch 0 (sw0) to the first front-end Fibre Channel port on SP-A. b Connect a cable from Fibre Channel switch 0 (sw0) to the first front-end Fibre Channel port on SP-B.
c Connect a cable from Fibre Channel switch 0 (sw0) to the second front-end Fibre Channel port on SP-A. d Connect a cable from Fibre Channel switch 0 (sw0) to the second front-end Fibre Channel port on SP-B. e Connect a cable from Fibre Channel switch 1 (sw1) to the third front-end Fibre Channel port on SP-A. f Connect a cable from Fibre Channel switch 1 (sw1) to the third front-end Fibre Channel port on SP-B.
• MSCS is limited to 22 drive letters. Because drive letters A through D are reserved for local disks, a maximum of 22 drive letters (E to Z) can be used for your storage system disks.
• Windows Server 2003 and 2008 support mount points, allowing more than 22 drives per cluster (see the mountvol sketch after Figure 2-13).
Figure 2-13 provides an example of cabling the cluster nodes to four Dell/EMC storage systems. See "Implementing Zoning on a Fibre Channel Switched Fabric" on page 42 for more information.

Figure 2-13. Cabling the Cluster Nodes to Four Storage Systems
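A mount point attaches a volume to an empty folder on an NTFS volume instead of consuming a drive letter. As a minimal sketch using the standard mountvol utility (the folder path and volume GUID below are illustrative placeholders, not values from this guide):

rem List the volume GUID paths the node can see
mountvol

rem Mount a shared volume under an empty NTFS folder instead of a drive letter
mountvol E:\MountPoints\Disk1 \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\

In a cluster, the disk that hosts the mount-point folder and the mounted disk must belong to the same cluster resource group so they fail over together.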
NOTE: While tape libraries can be connected to multiple fabrics, they do not provide path failover.

Figure 2-14. Cabling a Storage System and a Tape Library (two cluster nodes on a private network, two Fibre Channel switches, a tape library, and a storage system)

Obtaining More Information See the storage and tape backup documentation for more information on configuring these components.
Figure 2-15. Cluster Configuration Using SAN-Based Backup
Preparing Your Systems for Clustering WARNING: Only trained service technicians are authorized to remove and access any of the components inside the system. See your safety information for complete information about safety precautions, working inside the computer, and protecting against electrostatic discharge. Cluster Configuration Overview 1 Ensure that your site can handle the cluster’s power requirements. Contact your sales representative for information about your region's power requirements.
5 Configure each cluster node as a member in the same Windows Active Directory Domain. NOTE: You can configure the cluster nodes as Domain Controllers. For more information, see the “Selecting a Domain Model” section of Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide or Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide located on the Dell Support website at support.dell.com.
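As a hedged command-line sketch of step 5 (netdom ships with the Windows Support Tools on Windows Server 2003; the node, domain, and account names are illustrative assumptions, not values from this guide):

rem Join the node to the cluster's Active Directory domain, then reboot
netdom join NODE1 /domain:domain1.com /userd:domain1\admin /passwordd:*

The same join can also be performed from System Properties; every node's domain membership must match before the cluster is formed.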
12 Configure highly-available applications and services on your Failover Cluster. Depending on your configuration, this may also require providing additional LUNs to the cluster or creating new cluster resource groups. Test the failover capabilities of the new resources (a command-line sketch follows this list). 13 Configure client systems to access the highly-available applications and services that are hosted on your failover cluster.
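Failover of a resource group can be exercised from any node with the cluster.exe utility included with the Windows Server 2003/2008 cluster tools. A minimal sketch; the group and node names are illustrative assumptions:

rem Show the cluster nodes and their state
cluster node

rem Move a resource group to another node to verify failover
cluster group "Cluster Group" /move:NODE2

After the move, confirm in Cluster Administrator (or Failover Cluster Management on Windows Server 2008) that all resources in the group come online on the destination node.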
Installing the Fibre Channel HBAs For dual-HBA configurations, it is recommended that you install the Fibre Channel HBAs on separate peripheral component interconnect (PCI) buses. Placing the adapters on separate buses improves availability and performance. For more information about your system's PCI bus configuration and supported HBAs, see the Dell Cluster Configuration Support Matrix on the Dell High Availability website at www.dell.com/ha.
Zoning automatically and transparently restricts access to the devices within a zone. More than one PowerEdge cluster configuration can share Dell/EMC storage system(s) in a switched fabric by using Fibre Channel switch zoning with Access Control enabled. By using Fibre Channel switches to implement zoning, you can segment the SANs to isolate heterogeneous servers and storage systems from each other.
Table 3-1. Port Worldwide Names in a SAN Environment (continued)

Identifier | Description
xx:xx:xx:60:45:xx:xx:xx | PowerVault 132T and 136T tape libraries
xx:xx:xx:E0:02:xx:xx:xx | PowerVault 128T tape autoloader
xx:xx:xx:C0:01:xx:xx:xx | PowerVault 160T tape library and Fibre Channel tape drives
xx:xx:xx:C0:97:xx:xx:xx | PowerVault ML6000 Fibre Channel tape drives

CAUTION: When you replace a Fibre Channel HBA in a PowerEdge server, reconfigure your zones to provide continuous client data access.
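Zone definitions are entered on the switch itself, and the syntax is vendor-specific. As a hedged sketch for a Brocade Fabric OS switch (zonecreate, cfgcreate, cfgadd, and cfgenable are standard Fabric OS commands, but the zone name, configuration name, and WWPNs below are illustrative placeholders, not values from this guide):

zonecreate "node1_hba0_spa", "10:00:00:00:c9:xx:xx:xx; 50:06:01:60:xx:xx:xx:xx"
cfgcreate "cluster_cfg", "node1_hba0_spa"
cfgenable "cluster_cfg"

Each zone pairs a single HBA port (the initiator) with the SP ports it needs, which is the single-initiator zoning practice this section describes; repeat zonecreate for each HBA port and add each new zone with cfgadd before re-enabling the configuration.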
Installing and Configuring the Shared Storage System See "Cluster Hardware Requirements" on page 8 for a list of supported Dell/EMC storage systems. To install and configure the Dell/EMC storage system in your cluster: 1 Update the core software on your storage system, enable Access Control (optional), and install any additional software options, including EMC SnapView™, EMC MirrorView™, and SAN Copy™. See your EMC Navisphere® documentation for more information.
Access Control is enabled using Navisphere Manager. After you enable Access Control and connect to the storage system from a management station, Access Control appears in the Storage System Properties window of Navisphere Manager. After you enable Access Control, the host system can only read from and write to specific LUNs on the storage system. This organized group of LUNs and hosts is called a storage group.
Table 3-2. Storage Group Properties

Property | Description
Unique ID | A unique identifier that is automatically assigned to the storage group and that cannot be changed.
Storage group name | The name of the storage group. The default storage group name is formatted as Storage Group n, where n equals the existing number of storage groups plus one.
Connected hosts | Lists the host systems connected to the storage group.
Navisphere Manager Navisphere Manager provides centralized storage management and configuration from a single management console. Using a graphical user interface (GUI), Navisphere Manager allows you to configure and manage the disks and components in one or more shared storage systems. You can access Navisphere Manager through a web browser. Using Navisphere Manager, you can manage a Dell/EMC storage system either locally on the same LAN or through an Internet connection.
2 Add the following two separate lines to the agentID.txt file, with no special formatting: • First line: Fully qualified hostname. For example, enter node1.domain1.com, if the host name is node1 and the domain name is domain1. • Second line: IP address that you want the agent to register and use to communicate with the storage system.
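For example, the agentID.txt file for the host node1.domain1.com used above would contain exactly these two lines (the IP address here is an illustrative assumption, not a value from this guide):

node1.domain1.com
192.168.1.100

Add no blank lines, comments, or extra whitespace; the Navisphere Agent reads the file exactly as written.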
3 Enter the IP address of the storage management server on your storage system and then press <Enter>. NOTE: The storage management server is usually one of the SPs on your storage system. 4 In the Enterprise Storage window, click the Storage tab. 5 Right-click the icon of your storage system. 6 In the drop-down menu, click Properties. The Storage Systems Properties window appears. 7 Click the Storage Access tab. 8 Select the Access Control Enabled check box.
d Repeat step b and step c to add additional hosts. e Click Apply. 16 Click OK to exit the Storage Group Properties dialog box. Configuring the Hard Drives on the Shared Storage System(s) The hard drives in the shared storage system(s) must be configured before use. The following sections provide information on these configurations.
Assigning LUNs to Hosts If you have Access Control enabled in Navisphere Manager, you must create storage groups and assign LUNs to the proper host systems (a command-line sketch follows below). Optional Storage Features Your Dell/EMC CX4-series storage array may be configured to provide optional features that can be used in conjunction with your cluster. These features include MirrorView, SnapView, and SAN Copy.
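The storage-group operations that Navisphere Manager exposes in the GUI are also available through the Navisphere CLI. A hedged sketch only; the SP address, group and host names, and LUN numbers are illustrative assumptions, not values from this guide:

rem Create a storage group, connect a host, and present array LUN 10 as host LUN 0
naviseccli -h 192.168.1.100 storagegroup -create -gname Cluster1_SG
naviseccli -h 192.168.1.100 storagegroup -connecthost -host node1 -gname Cluster1_SG
naviseccli -h 192.168.1.100 storagegroup -addhlu -gname Cluster1_SG -hlu 0 -alu 10

In a cluster, every node that shares the disks must be connected to the same storage group so that all nodes see the same LUNs.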
Updating a Dell/EMC Storage System for Clustering If you are updating an existing Dell/EMC storage system to meet the cluster requirements for the shared storage subsystem, you may need to install additional Fibre Channel disk drives in the shared storage system. The size and number of drives you add depend on the RAID level you want to use and the number of Fibre Channel disk drives currently in your system.
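As a rough worked example under stated assumptions (illustrative 146-GB drives): a RAID 5 group of six drives yields about 5 x 146 GB = 730 GB of usable capacity, because one drive's worth of space holds parity, while a RAID 1/0 group of the same six drives yields about 3 x 146 GB = 438 GB, because every drive is mirrored. The RAID level you choose therefore changes how many drives you must add to reach a target capacity.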
Troubleshooting This appendix provides troubleshooting information for your cluster configuration. Table A-1 describes general cluster problems you may encounter and the probable causes and solutions for each problem.

Table A-1. General Cluster Troubleshooting

Problem: The nodes cannot access the storage system, or the cluster software is not functioning with the storage system.
Problem: One of the nodes takes a long time to join the cluster, or the node fails to join the cluster.
Probable Cause: The node-to-node network has failed due to a cabling or hardware failure, or one or more nodes may have the Internet Connection Firewall enabled, blocking Remote Procedure Call (RPC) communications between the nodes.
Corrective Action: Check the network cabling. Ensure that the node-to-node interconnection and the public network are connected to the correct NICs.
Problem: Attempts to connect to a cluster using Cluster Administrator fail.
Probable Cause: The Cluster Service has not been started, or a cluster has not been formed on the system.
Corrective Action: Verify that the Cluster Service is running and that a cluster has been formed. Use the Event Viewer and look for events logged by the Cluster Service indicating that it successfully formed or joined a cluster on the node.
Problem: You are prompted to configure one network instead of two during MSCS installation.
Probable Cause: The TCP/IP configuration is incorrect.
Corrective Action: The node-to-node network and the public network must be assigned static IP addresses on different subnets.
Problem: Unable to add a node to the cluster.
Probable Cause: The new node cannot access the shared disks, or the shared disks are enumerated by the operating system differently on the cluster nodes.
Corrective Action: Ensure that the new cluster node can enumerate the cluster disks using Windows Disk Administration. If the disks do not appear in Disk Administration, check the cable connections, the zone configurations, and the Access Control settings (a diskpart sketch for enumerating disks follows the table).
Problem: Cluster Services does not operate correctly on a cluster running Windows Server 2003 with the Internet Connection Firewall enabled.
Probable Cause: The Windows Internet Connection Firewall is enabled, which may conflict with Cluster Services.
Corrective Action: Perform the following steps: 1 On the Windows desktop, right-click My Computer and click Manage. 2 In the Computer Management window, double-click Services.
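Two of the corrective actions above can also be performed from a command prompt with built-in Windows tools; both sketches below use only standard commands. To enumerate the disks and volumes a node can see (useful when a new node cannot be added), run diskpart:

diskpart
rem At the DISKPART> prompt:
list disk
list volume

On Windows Server 2003, the Internet Connection Firewall state can be checked and, if your security policy permits, disabled with the netsh firewall context:

netsh firewall show opmode
netsh firewall set opmode disable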
Zoning Configuration Form

Node | HBA WWPNs or Alias Names | Storage WWPNs or Alias Names | Zone Name | Zone Set for Configuration Name
Cluster Data Form You can attach the following form in a convenient location near each cluster node or rack to record information about the cluster. Use the form when you call for technical support.

Table C-1. Cluster Information

Cluster Information | Cluster Solution
Cluster name and IP address:
Server type:
Installer:
Date installed:
Applications:
Location:
Notes:

Table C-2. Cluster Node Information
Additional Networks

Table C-3. Storage Array Information
Index

A
Access Control: about, 45

C
cable configurations: cluster interconnect, 19; for client networks, 18; for mouse, keyboard, and monitor, 15; for power supplies, 15
cluster: optional configurations, 12
cluster configurations: connecting to multiple storage systems

D
Dell/EMC CX4-series, 7: cabling a multi-node cluster, 23; cabling a two-node cluster, 21; cabling description, 28; cabling multiple clusters, 24; cabling multiple SAN-attached clusters, 32; cabling storage, 25; CX4-120, 10; CX4-240, 10; CX4-480, 10; CX4-960, 10; zoning, 34

H
HBA drivers: installing and configuring, 42
host bus adapter: configuring the Fibre Channel HBA, 42

K
keyboard: cabling, 15

L
LUNs: assigning to hosts, 52; configuring and managing, 51

M
MirrorView: about, 11
mouse: cabling, 15

N
Navisphere Manager: about, 11, 48; hardware view, 11; storage view, 11
network adapters: cabling the private network, 18-19; cabling the public network, 18

O
operating system: Windows Server 2003, Enterprise Edition, installing, 41

P
power supplies: cabling, 15
PowerPath: about, 49
private network: cabling, 18-19

S
SAN: configuring SAN backup in your cluster, 36
SAN-attached cluster: about, 13, 25; configurations, 12
shared storage: assigning LUNs to hosts, 52
single initiator zoning: about, 44
SnapView: about, 11
storage groups: about, 46
storage management software

T
troubleshooting: connecting to a cluster, 57; shared storage subsystem, 55

W
warranty, 13
worldwide port name zoning, 43

Z
zones: implementing on a Fibre Channel switched fabric, 42; in SAN configurations, 43; using worldwide port names, 43