Dell™ PowerEdge™ Cluster FE100/FL100 Datacenter Server
USER'S GUIDE
www.dell.com | support.dell.com
Notes, Notices, Cautions, and Warnings

Throughout this guide, blocks of text may be accompanied by an icon and printed in bold type or in italic type. These blocks are notes, notices, cautions, and warnings, and they are used as follows:

NOTE: A NOTE indicates important information that helps you make better use of your computer system.

NOTICE: A NOTICE indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.
Preface

This guide provides information about the Dell PowerEdge Cluster FE100/FL100 Datacenter Server solution, including procedures for installing, configuring, and troubleshooting the hardware and software components of PowerEdge Cluster FE100/FL100 Datacenter Server configurations. The chapters and appendixes in this guide are summarized as follows:

• Chapter 1, "Getting Started," provides an overview of the PowerEdge Cluster FE100/FL100 Datacenter Server.
• Chapter 2, "Installation Overview," summarizes the procedure for installing and implementing a cluster configuration.
• Chapter 3, "Preparing PowerEdge and PowerVault Systems for Clustering," describes adding peripherals and configuring host bus adapters and enclosure addresses.
• Chapter 4, "Cabling the Cluster Hardware," describes cabling the storage systems, networks, and power.
• Chapter 5, "Configuring Storage Systems (Low-Level Configuration)," describes configuring the shared storage hard-disk drives.
• Chapter 6, "Configuring the System Software," describes installing and configuring the operating system and the Cluster Service.
• Chapter 7, "Installing Cluster Management and Systems Management Software," describes the available cluster management tools.
• Chapter 8, "Upgrading Your PowerEdge System to a Cluster Configuration," describes the upgrade procedure.
• Chapter 9, "Maintaining the Cluster," describes routine cluster maintenance procedures.
• Chapter 10, "SAN Components," provides an overview of the PowerVault storage area network (SAN) components.
• Appendix A, "Troubleshooting," provides information to help you troubleshoot problems with installing and configuring clusters.
• Appendix B, "Cluster Data Sheets," provides worksheets on which to record your specific configurations.

Warranty and Return Policy Information

Dell Computer Corporation ("Dell") manufactures its hardware products from parts and components that are new or equivalent to new in accordance with industry-standard practices.
• Dell PowerEdge Expandable RAID Controller Battery Backup Module User's Guide.
• The Microsoft Cluster Server Administrator's Guide for the Windows 2000 Cluster Service documentation describes the clustering software used on the PowerEdge Cluster FE100/FL100 Datacenter Server.
• The Microsoft Windows 2000 Datacenter Server documentation describes how to install (if necessary), configure, and use the Windows 2000 Datacenter Server operating system.
• Filenames and directory names are presented in lowercase bold. Examples: autoexec.bat and c:\windows
• Syntax lines consist of a command and all its possible parameters. Commands are presented in lowercase bold; variable parameters (those for which you substitute a value) are presented in lowercase italics; constant parameters are presented in lowercase bold. Brackets indicate items that are optional.
Contents

Chapter 1  Getting Started  1-1
    Overview of Microsoft Windows 2000 Datacenter Server  1-1
    Overview of a Dell PowerEdge Cluster FE100/FL100 Datacenter Server Configuration  1-2
        SAN-Attached Cluster Configuration  1-3
    PowerEdge Cluster FE100/FL100 Identification

Chapter 2  Installation Overview  2-1

Chapter 3  Preparing PowerEdge and PowerVault Systems for Clustering  3-1
    Adding Peripherals to Your Cluster  3-1
    Configuring Fibre Channel HBAs on Separate PCI Buses  3-2
    Configuring PowerVault DPE and DAE Enclosure Addresses

Chapter 4  Cabling the Cluster Hardware  4-1

Chapter 5  Configuring Storage Systems (Low-Level Configuration)  5-1

Chapter 6  Configuring the System Software  6-1
    Installing and Configuring Your Windows 2000 Datacenter Server Network  6-6
        Overview of a Windows 2000 Datacenter Server Network Installation  6-6
        Updating the Host Bus Adapter Driver  6-8
        Installing the Dell OpenManage Storage Management Software for the PowerVault Storage System  6-9
    Configuring Shared Drives Using the Windows 2000 Disk Management Tool

Chapter 7  Installing Cluster Management and Systems Management Software  7-1

Chapter 8  Upgrading Your PowerEdge System to a Cluster Configuration  8-1

Chapter 9  Maintaining the Cluster  9-1
    Connecting to Your PowerVault Storage Systems Using Dell OpenManage Storage Management Software
        Connecting to the PowerVault Shared Storage Systems Using Data Agent
        Connecting to Data Agent Using Data Administrator
    Identifying the Cluster Name

Chapter 10  SAN Components  10-1

Appendix A  Troubleshooting  A-1

Appendix B  Cluster Data Sheets  B-1
    PowerEdge Cluster FE100/FL100 Datacenter Server Configuration Matrix  B-1
    Cluster Data Sheets  B-3

Index

Figures
    Figure 1-1  SAN-Attached Cluster Configuration
    (The remaining figure and table titles appear with the figures and tables in the text.)
CHAPTER 1
Getting Started

This chapter provides an overview of the following information for the Dell™ PowerEdge™ Cluster FE100/FL100 Datacenter Server configuration:

• Microsoft® Windows® 2000 Datacenter Server operating system
• Configuration and operation
• Cluster identification
• Failover options
• Minimum system requirements
• Support configuration requirements

Overview of Microsoft Windows 2000 Datacenter Server

Windows 2000 Datacenter Server is geared specifically for organizations implementing large-scale, business-critical applications that require high levels of availability and scalability.
NOTE: Since Datacenter Server is one of four operating systems in the Windows 2000 platform, some of the core services incorporated within Datacenter Server are common to all Windows 2000 platforms. In the following sections, “Windows 2000” is used to identify the services common to all Windows 2000 platforms and “Windows 2000 Datacenter Server” is used to identify services and components specific to the Datacenter Server platform.
NOTE: For more information on failover, failback, and groups, see “Configuring Failover and Failback Support” in Chapter 6, “Configuring the System Software.” SAN-Attached Cluster Configuration A PowerEdge Cluster FE100/FL100 Datacenter Server configuration is a SAN-attached cluster configuration where all four cluster nodes are attached to a single PowerVault™ storage system or to multiple PowerVault storage systems through a Dell PowerVault SAN using a redundant Fibre Channel switch fabric.
[Figure 1-1. SAN-Attached Cluster Configuration: four PowerEdge servers connected to a LAN/WAN and a private network switch, with two Fibre Channel switches attaching three PowerVault storage systems]
Table 1-1 provides an overview of the differences between PowerEdge Cluster FE100 and FL100 Datacenter Server configurations.
However, this configuration is appropriate for business-critical systems because the application can use the full power of another cluster node if one cluster node fails.

NOTE: For clarity, later references to active/active and active/passive configurations use "n" to indicate the number of active cluster nodes. For example, an active/active/active/active configuration consisting of four active cluster nodes is referred to as an active4 configuration.
[Figure 1-2. N+1 Failover: cluster nodes 1 through 3 with cluster node 4 as the backup]

Table 1-3 provides an N+1 failover configuration for the cluster shown in Figure 1-2. For each cluster resource group, the Preferred Owners list provides the order in which you want that resource group to fail over. If that resource group or its cluster node fails, the cluster tries to fail the resource group over to the first available node in the list.
Disadvantage:

• You must ensure that the failover cluster nodes have ample resources available to handle the additional workload.

Figure 1-3 shows an example of a multiway failover configuration.

[Figure 1-3. Example of a 4-Node Multiway Failover: Applications A, B, and C distributed across four cluster nodes]

Table 1-4 provides an example of a multiway failover configuration for the cluster shown in Figure 1-3.
Advantage:

• High resource availability to users.

Disadvantage:

• The cluster node next in line for failover may not have ample resources available to handle the additional workload of the failed node.

Figure 1-4 shows an example of a cascading failover configuration.

[Figure 1-4. Cascading Failover: applications from failed cluster node 1 cascade through cluster nodes 2, 3, and 4]
[Figure 1-5. Example of a 4-Node N-Way Migration Solution: Application A migrating among cluster nodes 1 through 4]

Table 1-5 provides an overview of the failover types implemented with Datacenter Server.
PowerEdge Cluster FE100/FL100 Datacenter Server Minimum System Requirements

Dell PowerEdge Cluster FE100/FL100 Datacenter Server configurations require the following hardware and software components:

• Cluster nodes
• Cluster storage
• Cluster interconnects
• Operating system and system management software

Cluster Nodes

Cluster nodes require the following hardware resources:

• Two to four supported Dell PowerEdge systems, each with at least two microprocessors.
• For each cluster, a network switch or Giganet cLAN cluster switch to connect the cluster nodes.

NOTE: If you have a two-node PowerEdge Cluster FE100/FL100 Datacenter Server configuration that will not be expanded to a three- or four-node cluster, a crossover cable or cLAN cable can be used to connect the nodes rather than a private network switch.
PowerEdge Cluster FE100/FL100 Datacenter Server Support Configuration Requirements

The following tables provide configuration information for the following cluster components and configurations:

• Cluster nodes
• Shared storage systems
• SAN-attached clusters

Configuration Requirements for the PowerEdge Cluster FE100/FL100 Datacenter Server

Table 1-6 provides the cluster component requirements for a PowerEdge Cluster FE100/FL100 Datacenter Server configuration.

Table 1-6. Cluster Node Requirements
Table 1-6. Cluster Node Requirements (continued)

RAID controller: One PowerEdge Expandable RAID Controller 2/DC (PERC 2/DC) with firmware version 1.01 and driver version 2.62
Cluster management (optional): Dell OpenManage™ Cluster Assistant With ClusterX®, version 3.0.1 with Service Pack 2 or later
Remote server management (optional): Dell OpenManage Remote Assistant Card (DRAC) with firmware version 2.3 and driver version 2.3.0.
SAN-Attached Cluster Requirements

Table 1-8 provides the requirements for a SAN-attached cluster configuration.

Table 1-8. SAN-Attached Cluster Requirements

SAN version: SAN 3.0
HBA: QLogic QLA2200/66 with firmware version 1.45 and driver version 7.04.08.02
HBA failover driver: Dell OpenManage ATF version 2.3.2.5
Fibre Channel switch: PowerVault 51F Fibre Channel switch with firmware version 2.1.7; PowerVault 56F Fibre Channel switch with firmware version 2.1.
CHAPTER 2
Installation Overview

This chapter provides an overview of the procedures for installing and implementing a Dell PowerEdge Cluster FE100/FL100 Datacenter Server configuration. More detailed instructions are provided later in this document.

NOTICE: Before installing the cluster, ensure that your site can handle the power requirements of the cluster equipment. Contact your Dell sales representative for information about your region's power requirements.
6. During the installation, check the appropriate box to install the Cluster Service files when prompted. NOTICE: Do not configure the Cluster Service in this step. 7. Configure the public and private networks in each node, and place each network on separate subnets with static Internet protocol (IP) addresses. NOTE: The public network refers to the NIC used for client connections. The private network refers to the cluster interconnect that connects the cluster nodes together. 8.
CHAPTER 3
Preparing PowerEdge and PowerVault Systems for Clustering

This chapter provides the necessary steps for performing the following procedures:

• Adding peripherals to your cluster
• Configuring Fibre Channel host bus adapters (HBAs) on separate peripheral component interconnect (PCI) buses
• Configuring disk processor enclosure (DPE) and disk array enclosure (DAE) addresses

Adding Peripherals to Your Cluster

WARNING: Hardware installation should be performed only by trained service technicians.
For instructions on installing expansion cards or hard-disk drives in your node, see the Installation and Troubleshooting Guide for your PowerEdge system. Configuring Fibre Channel HBAs on Separate PCI Buses Dell recommends configuring Fibre Channel HBAs on separate PCI buses. While configuring the adapters on separate buses improves availability and performance, this recommendation is not a requirement.
CHAPTER 4
Cabling the Cluster Hardware

This chapter provides information on the following components and procedures:

• Cluster cabling components
• Fibre Channel copper connectors
• Cabling your public network
• Cabling your private network
• Protecting your cluster from power failure
• Cabling your mouse, keyboard, and monitor in a Dell rack

Cluster Cabling Components

Dell PowerEdge Cluster FE100/FL100 Datacenter Server configurations require cabling for the Fibre Channel storage systems, cluster interconnects, client networks, and power.
• Power connection - Provides a connection between the power supplies in your system and the power source. By using power strips or power distribution units (PDUs) and separate AC circuits, the cluster can fully utilize the redundant power supplies. Fibre Channel Copper Connectors To connect a PowerVault storage system to a PowerEdge system (cluster node), Dell uses the DB-9 connector and the high-speed serial data connector (HSSDC).
Fibre Channel devices using HSSDC connections must not be connected directly to Giganet devices using HSSDC connections. Cabling Your Public Network The NICs in the PowerEdge systems (cluster nodes) provide at least two network connections for each cluster node—a dedicated private network (cluster interconnect) between the nodes and a public network connection to the local area network (LAN).
Figure 4-3 shows a cluster configuration that implements Broadcom NetExtreme Gigabit NICs for the private network.

[Figure 4-3. Private Network Using Broadcom Gigabit NICs: four PowerEdge servers connected to the LAN/WAN and to a Gigabit network switch]
[Figure 4-4. Private Network Using a Giganet cLAN Cluster Switch: four PowerEdge servers connected to the LAN/WAN and to a Giganet cLAN Cluster Switch]
[Figure 4-5. Cable Configuration of PowerVault 65xF Power Supplies: the power supplies for storage processor A and storage processor B connect to SPS 1 and SPS 2]

See your PowerVault documentation for additional information about the standby power supplies.
CHAPTER 5
Configuring Storage Systems (Low-Level Configuration)

This chapter provides the necessary steps for configuring the Dell PowerVault shared storage hard-disk drives attached to the PowerEdge Cluster FE100/FL100 Datacenter Server configuration.

NOTES: Prior to installing the operating system, be sure to make the necessary low-level software configurations (if applicable) to your PowerEdge Cluster FE100/FL100 Datacenter Server.
Configuring the LUNs and RAID Level for the Shared Storage Subsystem The storage system hard-disk drives must be bound into logical unit numbers (LUNs) using the Dell OpenManage Data Supervisor or Dell OpenManage Data Administrator. All LUNs, especially the LUN used for the Microsoft Cluster Server (MSCS) quorum resource, should be bound using a redundant array of independent disks (RAID) level to ensure high availability.
CHAPTER 6
Configuring the System Software

This chapter describes how to perform the following procedures:

• Preparing for Microsoft Windows 2000 Datacenter Server installation
• Configuring the cluster nodes in a Windows 2000 domain
• Configuring the Windows 2000 Cluster Service
• Configuring the public and private networks
• Installing and configuring your Windows 2000 Datacenter Server network (which includes information on Dell OpenManage software and using the Windows 2000 Disk Management tool)
• Configure the storage systems and small computer system interface (SCSI) drives attached to your Dell PowerEdge Cluster FE100/FL100 Datacenter Server system (see Chapter 5, "Configuring Storage Systems (Low-Level Configuration)").

NOTICE: When you install Datacenter Server, do not enable the standby mode or hibernation mode incorporated in Windows 2000 Datacenter Server. These modes are not supported in cluster configurations.
NOTE: Domain controller functions may cause additional logon, authentication, and replication traffic and overhead on the node. Configuring the Windows 2000 Cluster Service To configure the Windows 2000 Cluster Service during the Windows 2000 Datacenter Server installation, perform the following steps: 1. Ensure that you have performed the tasks in the section, “Preparing for Microsoft Windows 2000 Datacenter Server Installation,” found earlier in this chapter. 2.
2. Insert the CD labeled Windows 2000 Datacenter Server CD-ROM into the CD-ROM drive.
3. Click the Start button, and select Settings—> Control Panel.
4. Click Add/Remove Programs.
5. Click Add/Remove Windows Components.
6. Select Cluster Service in the Windows Components Wizard.
7. Click Next to copy the Cluster Service files to the system hard-disk drive. When the files are copied to the system, the Cluster Service Configuration Wizard appears.
8.
both the public and private networks, the minimum number of static IP addresses required for a four-node cluster is nine: one for each NIC in the cluster nodes (four nodes with two NICs each, for a total of eight) and one for the cluster itself. Cluster-aware applications running on the cluster may require additional IP addresses. For example, Microsoft SQL Server requires at least one static IP address for the virtual server (Microsoft SQL Server does not use the cluster's IP address).
will respond to ping commands and will appear online after installing Cluster Service (MSCS) on the cluster nodes. If the IP address resources are not set up correctly, the cluster nodes may not be able to communicate with the domain and the Windows 2000 Cluster Configuration wizard may not allow you to configure all of your networks.
Table 6-2. IP Addresses (continued)

Private network static IP address (cluster interconnect): node 1, 10.0.0.1; node 2, 10.0.0.2; node 3, 10.0.0.3; node 4, 10.0.0.4
Private network subnet mask: 255.255.255.0 on all four cluster nodes
DNS servers: primary 192.168.1.11 and secondary 192.168.1.12 on all four cluster nodes
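As a quick check of the cluster interconnect, you can ping each private network address from a Command Prompt window on one node. This is a minimal sketch that assumes the sample addresses in Table 6-2; substitute your own addresses:

    rem From cluster node 1, verify that each node answers on the private network
    ping 10.0.0.2
    ping 10.0.0.3
    ping 10.0.0.4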
8. If you are configuring WINS servers, click Advanced and choose the WINS tab to enter the IP addresses for the WINS servers. For example, in Table 6-2, the IP addresses for the WINS servers in Cluster Node 1 are 192.168.1.11 and 192.168.1.12. NOTE: Some Windows environments may not use WINS servers. 9. Click OK to return to the Windows Networking Components wizard and repeat steps 2 through 8 to configure the next adapter.
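To confirm that the static IP addresses, subnet masks, and DNS and WINS server entries were applied as you typed them, you can display the TCP/IP configuration of every adapter in the node from a Command Prompt window:

    rem Display the full TCP/IP configuration for all NICs in this node
    ipconfig /all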
See the readme.txt file on the HBA driver diskette for more information on installing and updating the driver.

Installing the Dell OpenManage Storage Management Software for the PowerVault Storage System

To manage and configure the storage systems attached to the Dell PowerEdge Cluster FE100/FL100 cluster nodes, you must install the Dell OpenManage storage management software.
Table 6-4. Dell OpenManage Software (continued) Dell OpenManage Product Description Dell OpenManage Managed Node (Data Agent) Provides connectivity from the cluster node (host) to the PowerVault storage system, thereby allowing Data Supervisor and Data Administrator to send and receive information to and from the PowerVault 65xF connected to a Windows 2000 Datacenter Server host (cluster node).
For more information on installing Dell OpenManage ATF, Dell OpenManage Managed Node Agent, Dell OpenManage Data Supervisor, or Dell OpenManage Data Administrator, see the PowerVault documentation that came with the storage system.

Configuring Shared Drives Using the Windows 2000 Disk Management Tool

For disk configuration, Windows 2000 provides the Disk Management tool, which is located in Computer Management (Administrative Tools—> Computer Management).
The right column shows the shared drives as raw, unformatted drives with unallocated disk capacity. The left column shows Basic disks. If the left column shows Dynamic disks, right-click that box and select Revert to Basic for each disk in the shared storage system.

NOTICE: Reverting disks to Basic destroys all data on the drive.

6. For each shared disk, run the Create Partition wizard by performing the following steps:

a. In the row for the first shared disk, right-click over the right-column box.
b.
Verifying Cluster Readiness

Before you install Cluster Service on the PowerEdge cluster nodes and the PowerVault storage systems, check the system and verify that the cluster meets the following conditions (a command-line sketch follows the list):

• All cluster nodes are able to log in to the domain.
• The shared drives are partitioned, formatted, and named on each node.
• All IP addresses and network names for each NIC and each cluster node can communicate with each other.
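The following sketch checks the last condition from a Command Prompt window. The node names are hypothetical placeholders; use the network names you assigned during installation:

    rem Verify name resolution and connectivity for each cluster node
    ping node1
    ping node2
    ping node3
    ping node4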
Cluster Resource Group

When you create a new cluster, the cluster will contain (by default) a cluster group that contains the default settings and resources for the cluster.
The status of any applications and data using the quorum disk will not affect the status of the Cluster Group. NOTE: For information on moving the quorum files, see Microsoft's online help. Because the quorum disk plays a crucial role in cluster operation, losing the quorum disk will cause the entire cluster to fail.
To verify that the cluster resources are online, perform the following steps on the monitoring cluster node: 1. Click the Start button and select Programs—> Administrative Tools—> Cluster Administrator. 2. Open a connection to the cluster and check the running state of each recovery group. If a group has failed, one or more of its resources may be offline.
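The same check can be performed with the cluster.exe command-line utility that ships with Windows 2000. This is a sketch only; the cluster name MyCluster is a placeholder, and you should confirm the exact switches on your system with cluster /?:

    rem List every group and its current state (Online, Offline, or Failed)
    cluster MyCluster group
    rem List the individual resources and their states
    cluster MyCluster resource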
any one time. Only one Data Agent should be running to ensure that the nodes have a consistent view of the PowerVault storage system. To install the Data Agent as a cluster resource, perform the following steps: 1. Confirm that Dell OpenManage Managed Node (Data Agent) is installed on all of the cluster nodes and is configured to start manually. 2. Open the Cluster Administrator. 3. Right-click Cluster Group, point to New, and click Resource. 4. In the Name field, type Manage Node Agent. 5.
Configuring Failover and Failback Support When an individual application or user resource (also known as a cluster resource) fails on a cluster node, Cluster Service will detect the application failure and try to restart the application on the cluster node. If the restart attempt reaches a preset threshold, Cluster Service brings the running application offline, moves the application and its resources to another cluster node, and restarts the application on the other cluster node(s).
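Before placing the cluster in production, you can exercise failover manually by moving a group between nodes with cluster.exe. The cluster, group, and node names below are hypothetical examples:

    rem Move the group to cluster node 2 (simulates a planned failover)
    cluster MyCluster group "Application A" /moveto:node2
    rem After verifying the application on node 2, move the group back
    cluster MyCluster group "Application A" /moveto:node1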
You can also change the failover order for your resources by modifying the Preferred Owners list in the Modify Preferred Owners window, which is accessed through Cluster Administrator. If some of your cluster nodes lack the resources to handle the additional workload of a failover application, rather than using the cascading failover order (node 1—> node 2—> node 3—> node 4), you can change the failover configuration to a preselected order of cluster nodes.
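The Preferred Owners list can also be set from the command line. The following sketch assumes a hypothetical group named "Application A" and a preselected failover order of node 1, node 3, node 4; verify the switch names on your system with cluster group /?:

    rem Set a preselected failover order for the group
    cluster MyCluster group "Application A" /setowners:node1,node3,node4
    rem Display the resulting Preferred Owners list
    cluster MyCluster group "Application A" /listowners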
CHAPTER 7
Installing Cluster Management and Systems Management Software

This chapter provides information on configuring and administering your cluster using the following cluster management software:

• Microsoft Windows 2000 Cluster Administrator
• Dell OpenManage Cluster Assistant With ClusterX

Microsoft Cluster Administrator

Cluster Administrator is a built-in tool in Windows 2000 Datacenter Server for configuring and administering a cluster.
2. Click the CD icon and locate the \i386 directory. 3. Double-click ADMINPAK.MSI to install Cluster Administrator and Windows 2000 Administrative Tools. 4. Click the Start button and select Administrative Tools. 5. Verify that Cluster Administrator appears in the window.
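If you prefer the command line, the same package can be installed with the Windows Installer engine. The drive letter is an assumption based on a CD-ROM drive mapped to D:

    rem Install Cluster Administrator and the Windows 2000 Administrative Tools
    msiexec /i D:\i386\ADMINPAK.MSI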
Installing Dell OpenManage Cluster Assistant With ClusterX (Optional) After you complete your cluster installation, you can install Cluster Assistant With ClusterX on your systems management console. Cluster Assistant With ClusterX is a cluster management solution that is designed to provide setup, configuration, and management of all Microsoft Cluster Service (MSCS) clusters in your environment from a single-management console.
CHAPTER 8
Upgrading Your PowerEdge System to a Cluster Configuration

This chapter provides information for performing the following procedures:

• Upgrading your PowerEdge system for use in a Dell PowerEdge Cluster FE100/FL100 Datacenter Server configuration
• Installing the appropriate version of Microsoft Windows 2000 on your PowerEdge system

NOTE: There is no upgrade path for customers who want to migrate to Microsoft Windows 2000 Datacenter Server.
hardware components outlined in this guide. Using non-Dell hardware or software components may lead to data loss or corruption. 2. Install the required hardware and network interface controllers (NICs). 3. Set up and cable the system hardware. 4. Install and configure the Windows 2000 Datacenter Server operating system with the latest Service Pack and hotfixes (if applicable). 5. Configure the Cluster Service.
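After step 5, you can confirm from a Command Prompt window that the Cluster Service (service name ClusSvc) is running on the upgraded node; a minimal check:

    rem Start the Cluster Service if it is not already running
    net start clussvc
    rem Confirm that "Cluster Service" appears in the list of running services
    net start | find "Cluster"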
CHAPTER 9
Maintaining the Cluster

This chapter provides information on the following cluster maintenance procedures:

• Connecting to your attached PowerVault storage systems using Dell OpenManage storage management software
• Using the QLogic Fibre Channel Configuration software for PowerVault 65xF storage processor replacement
• Determining the redundant array of independent disks (RAID) levels of the shared disk volumes
• Configuring your cluster nodes using Microsoft Windows 2000 Datacenter Server configuration tools
PowerVault storage systems using one of the following graphical user interfaces (GUIs):

• Data Supervisor—allows you to configure and manage the disks and components in a single PowerVault storage system, as well as bind and unbind logical unit numbers (LUNs), change configuration settings, and create LUNs.
• Data Administrator—provides the same capabilities as Data Supervisor, but also allows you to configure and manage multiple PowerVault storage systems in a single GUI window.
Connecting to Data Agent Using Data Supervisor To ensure that the Dell OpenManage Data Supervisor can connect to the Data Agent regardless of which node is running Data Agent, perform the following steps: 1. Start Data Supervisor. 2. In the Dell OpenManage Data Supervisor Query dialog box, enter the name of the cluster running Data Agent. NOTE: Do not enter the cluster node (server) name.
(ATF) from the failed logical unit number (LUN). ATF will reestablish the communication link between the cluster node and storage device, or reroute the connection through a secondary path. If multiple LUN failures occur in your cluster, run ATF from all of the cluster nodes. To run ATF, perform the following steps: 1. Open a Command Prompt window. 2. Change to the directory where the ATF executable programs are stored.
If you cannot determine the RAID level using Disk Management, you can use the Dell OpenManage Data Agent Configurator to view the RAID configuration of each volume. To view the RAID configuration of a volume using Data Agent Configurator, perform the following steps: 1. Start the Dell OpenManage Data Agent Configurator. 2. From the Main Menu, select Devices and then click Scan Devices. A window appears listing all available disk volumes and their associated RAID levels.
• Cluster Service is installed on all cluster nodes.
• NICs in each cluster node are configured properly.

See Table 6-2 in "IP Addresses," in Chapter 6, "Configuring the System Software," for a sample IP configuration scheme for Windows 2000 Datacenter Server. To install a third NIC into a cluster node, you must transfer the resources from the cluster node where the NIC will be installed to another node in the cluster.
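The transfer can be done in Cluster Administrator or from a Command Prompt window with cluster.exe. The cluster, group, and node names below are placeholders:

    rem Move each group owned by node 1 to another node before powering node 1 down
    cluster MyCluster group "Cluster Group" /moveto:node2
    rem Repeat the command for every remaining group that node 1 owns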
8. Follow the onscreen instructions to install the NIC 3 driver in node 1.
9. Enter the NIC 3 IP address, ensuring that the network ID portion of the IP address is not identical to that of NICs 1 and 2. For example, if NICs 1 and 2 in node 1 have the addresses 192.168.1.101 and 192.168.1.102 with subnet masks of 255.255.255.0, you might enter the following IP address and subnet mask for NIC 3:

IP Address: 192.168.2.111
Subnet Mask: 255.255.255.0
Uninstalling Cluster Service

You may need to uninstall the Cluster Service for cluster node maintenance, such as upgrading or replacing the node. Before you can uninstall MSCS from a node, perform the following steps:

1. Take all resource groups offline or move them to another node.
2. Stop the Cluster Service running on the node that you want to uninstall.
3. Click the Start button and select Settings—> Control Panel—> Add/Remove Programs.
4. Uninstall the Cluster Service.
5.
2. On the remaining cluster nodes, run Cluster Administrator and evict the failed cluster node by right-clicking the failed node and selecting Evict Node. 3. Make sure that the replacement cluster node is physically disconnected from the PowerVault storage system. 4. Power on the replacement server and install Windows 2000 Datacenter Server, along with the latest Service Pack. 5. Reboot the system. 6.
CHAPTER 10
SAN Components

This chapter provides an overview of a PowerVault storage area network (SAN), additional cluster maintenance procedures, and detailed information on the following SAN components for the Dell PowerEdge Cluster FE100/FL100 Datacenter Server configuration:

• SAN-attached clusters
• PowerVault Fibre Channel switches
• PowerVault storage systems attached to a SAN-attached cluster

Overview of a PowerVault SAN

A PowerVault SAN is a configuration of server and storage systems connected through a Fibre Channel switch fabric.
NOTE: See the Dell PowerVault SAN documentation and the appropriate SAN component documentation for configuration information.

SAN-Attached Clusters

SAN-attached clusters are cluster configurations in which redundant Fibre Channel HBAs are cabled to a redundant Fibre Channel switch fabric; the cluster connects to the storage system through that fabric. These SAN-attached configurations can share certain resources with other servers, storage systems, and backup systems on the SAN.
[Figure 10-1. SAN-Attached Cluster Configuration: four PowerEdge servers connected to the LAN/WAN and an interconnect switch, with two Fibre Channel switches attaching a PowerVault storage system]

Fibre Channel Fabrics

A Fibre Channel switch fabric is an active, intelligent, and private connection of one or more Fibre Channel switches that provide high-speed, point-to-point connections between servers and storage devices.
switch fabric can support up to seven hops without performance degradation, a typical PowerVault SAN implementation usually includes fewer than seven hops. Attaching a SAN-Attached Cluster Configuration to a Network SAN-attached clusters are cluster configurations where redundant Fibre Channel HBAs are cabled to a redundant Fibre Channel switch fabric. Connecting the cluster to the storage system is achieved through the switch fabric.
[Figure 10-2. SAN-Attached Clusters Using a Public, Private, and SAN Network: four PowerEdge servers connected to the LAN/WAN and an interconnect switch, with two Fibre Channel switches and a Fibre Channel bridge attaching a PowerVault 130T tape library and a PowerVault storage system]
Using Dell PowerVault Fibre Channel Switches You can connect cluster nodes to the PowerVault shared storage system by using redundant PowerVault Fibre Channel switches. When cluster nodes are connected to the storage system through Fibre Channel switches, the cluster configuration is technically attached to a SAN.
Connecting a PowerVault 130T DLT Library and PowerVault 35F Fibre Channel Bridge to a Cluster-Attached PowerVault SAN

You can add tape backup devices to your PowerVault SAN to provide additional backup capability for your cluster. To implement this configuration, use the PowerVault 35F Fibre Channel SCSI bridge to support the PowerVault 130T DLT library on PowerEdge Cluster FE100 Datacenter Server configurations.
4. Assign drive letters and volume labels to the disks. To assign drive letters and volume labels to the disks, perform the following steps: a. Power down all cluster nodes except node 1. b. Assign drive letters on node 1, using the Windows 2000 Disk Management utility to create the drive letters and volume labels. For example, create volumes labeled “Volume E” for disk E and “Volume F” for disk F. c. Power down the cluster node. d.
APPENDIX A
Troubleshooting

This appendix provides troubleshooting information for Dell PowerEdge Cluster FE100/FL100 Datacenter Server configurations. Table A-1 describes general cluster problems you may encounter, with the probable causes and solutions for each problem. Table A-2 is specific to Windows 2000 cluster configurations.

Table A-1. General Cluster Troubleshooting
Table A-1. General Cluster Troubleshooting (continued)

Problem: Attempts to connect to a cluster using Cluster Administrator fail.
Probable Cause: The Cluster Service has not been started; a cluster has not been formed on the system; or the system has just been booted and services are still starting.
Corrective Action: Verify that the Cluster Service is running and that a cluster has been formed. Use the Event Viewer and look for the events logged by the Cluster Service.
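A quick way to start on the corrective action from a Command Prompt window; the cluster name MyCluster is a placeholder for your cluster's name:

    rem Confirm that the Cluster Service is running on this node
    net start | find "Cluster"
    rem Confirm that a cluster has formed and list the state of each node
    cluster MyCluster node /status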
APPENDIX B Cluster Data Sheets The configuration matrix and data sheets on the following pages are provided for the system installer to record pertinent information about PowerEdge FE100/FL100 Datacenter Server Cluster configurations. The data sheets are for installing Microsoft Windows 2000 Datacenter Server clusters.
Table B-1. PowerEdge Cluster FE100/FL100 Datacenter Server Configuration Matrix
Cluster Data Sheets The data sheets on the following pages are provided for the system installer to record applicable information about the PowerEdge Cluster FE100/FL100 Datacenter configuration. The data sheets are for installing Windows 2000 Datacenter Server clusters. Make a copy of the appropriate data sheet to use for the installation or upgrade. Complete the requested information on the sheet and have the completed sheet available if you need to call Dell for technical assistance.
Windows 2000 Datacenter Server Operating System Installation and Configuration

Install Windows 2000 Datacenter Server, including:

[ ] Network name for node 1: _________________________________
[ ] Network name for node 2: _________________________________
[ ] Network name for node 3: _________________________________
[ ] Network name for node 4: _________________________________
[ ] Select Cluster Service during initial installation.
[ ] Node 1 network IP configuration:
    Public network IP Address: ______.______.______.______
    Subnet Mask: 255.______.______.______
Windows 2000 Datacenter Server Operating System Installation and Configuration (Continued)

[ ] Node 4 network IP configuration:
    Public network IP Address: ______.______.______.______
    Subnet Mask: 255.______.______.______
    Primary DNS Server: ______.______.______.______ (same IP address as node 1)
    Secondary DNS Server: ______.______.______.______ (same IP address as node 1)
    Primary WINS Server: ______.______.______.______ (same IP address as node 1)
    Secondary WINS Server: ______.______.______.______ (same IP address as node 1)
Cluster Service Configuration (Continued)

[ ] Name of network 2 is Private (for node-to-node interconnect).
[ ] Assign a static IP address for management:
    Management IP Address: ______.______.______.______
    Subnet Mask: 255.______.______.______
[ ] Join the cluster.

Post-Microsoft Cluster Service Installation

[ ] Reapply the latest Windows 2000 service pack.
[ ] Install Dell OpenManage Cluster Assistant With ClusterX on the management client (optional).
[ ] Install and configure cluster application programs. (optional)
Dell PowerEdge Cluster FE100/FL100 Installer Data Sheet and Checklist for an Upgrade Installation to Windows 2000 Datacenter Server Instructions: Before configuring the systems for clustering, use this checklist to gather information and prepare your systems for a successful installation. This data sheet assumes that Windows 2000 Datacenter Server was factory or customer installed on each node. If you are installing these systems for the first time, use the complete installation data sheet.
Configuring the Shared Storage System

[ ] Configure and initialize the RAID volumes.
[ ] The format of the PowerVault shared storage is NTFS.
[ ] Drive letters for the PowerVault storage system:
    No. 1 __________  No. 2 __________  No. 3 __________  No. 4 __________
    No. 5 __________  No. 6 __________  No. 7 __________  No. 8 __________
    No. 9 __________  No. 10 _________  No. 11 _________  No. 12 _________
    No. 13 _________  No. 14 _________  No. 15 _________  No. 16 _________
    No. 17 _________  No. 18 _________  No. 19 _________  No. 20 _________