Dell Storage Center SCv2000 and SCv2020 Storage System Deployment Guide
Notes, Cautions, and Warnings

NOTE: A NOTE indicates important information that helps you make better use of your computer.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

Copyright © 2016 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
About this Guide

This guide describes how to install and configure the SCv2000/SCv2020 storage system.
• Dell Storage Center Command Utility Reference Guide Provides instructions for using the Storage Center Command Utility. The Command Utility provides a command-line interface (CLI) to enable management of Storage Center functionality on Windows, Linux, Solaris, and AIX platforms.
1 About the SCv2000/SCv2020 Storage System

The SCv2000/SCv2020 storage system provides the central processing capabilities for the Storage Center Operating System (OS) and management of RAID storage. The SCv2000/SCv2020 storage system holds the physical drives that provide storage for the Storage Center. If additional storage is needed, the SCv2000/SCv2020 also supports SC100/SC120 expansion enclosures.
Storage Center Architecture Options

A Storage Center with an SCv2000/SCv2020 storage system can be deployed in the following configurations:

• An SCv2000/SCv2020 storage system deployed without SC100/SC120 expansion enclosures.

Figure 1. SCv2000/SCv2020 without Expansion Enclosures

• An SCv2000/SCv2020 storage system deployed with one or more SC100/SC120 expansion enclosures.

NOTE: An SCv2000/SCv2020 storage system with a single storage controller cannot be deployed with expansion enclosures.
Front-End Connectivity

Front-end connectivity provides I/O paths from servers to a storage system and replication paths from one Storage Center to another Storage Center. The SCv2000/SCv2020 storage system provides the following types of front-end connectivity:

• Fibre Channel: Hosts, servers, or Network Attached Storage (NAS) appliances access storage by connecting to the storage system Fibre Channel ports through one or more Fibre Channel switches.
Item 4. SC100/SC120 Expansion Enclosures: 6 Gbps per channel (Back End)
Item 5. Remote Storage Center connected via iSCSI for replication: 1 Gbps or 10 Gbps (Front End)
Item 6. Ethernet switch (Management/Replication): 1 Gbps or 10 Gbps (Front End)
Item 7. Management network (computer connected to the storage system through the Ethernet switch): Up to 1 Gbps (System Administration)

SCv2000/SCv2020 Storage System with iSCSI Front-End Connectivity

An SCv2000/SCv2020 storage system with iSCSI front-end connectivity may communicate with the following components of a Storage Center system.
SCv2000/SCv2020 Storage System with Front-End SAS Connectivity

An SCv2000/SCv2020 storage system with front-end SAS connectivity may communicate with the following components of a Storage Center system.

Figure 5.
SCv2000/SCv2020 Storage System Hardware

The SCv2000/SCv2020 storage system ships with Dell Enterprise drives, two redundant power supply/cooling fan modules, and either one storage controller or two redundant storage controllers. Each storage controller contains the front-end, back-end, and management communication ports of the storage system.
SCv2000/SCv2020 Storage System Back-Panel Features and Indicators

The back panel of the SCv2000/SCv2020 contains the storage controller indicators and power supply indicators.

Figure 7. SCv2000/SCv2020 Storage System Back-Panel View

Item 1. Power supply/cooling fan module (PSU) (2) — Contains a 580 W power supply and fans that provide cooling for the storage system.
• Blinking amber: PSU is in programming mode

Item 8. Power socket (2) — Accepts a standard computer power cord.

Item 9. Power switch (2) — Controls power for the storage system. Each PSU has one switch.

SCv2000/SCv2020 Storage System Storage Controller Features and Indicators

The SCv2000/SCv2020 storage system includes up to two storage controllers in two interface slots.
NOTE: To use the REPL port as a front-end connection to host servers, a Flex Port license is required.

Item 5. SAS activity indicators — There are four SAS PHYs per SAS port.
SCv2000/SCv2020 Storage System Storage Controller with iSCSI Front-End Ports

The following figures show the features and indicators on a storage controller with iSCSI front-end ports.

Figure 10. SCv2000/SCv2020 Storage System Storage Controller with Four 1 GbE iSCSI Front-End Ports

Figure 11. SCv2000/SCv2020 Storage System Storage Controller with Two 10 GbE iSCSI Front-End Ports

Item 1. Battery status indicator:
• Blinking green (on 0.5 sec. / off 1.5 sec.)
Item 8. Storage controller fault:
• Off: No faults
• Steady amber: Firmware has detected an error
• Blinking amber: Storage controller is performing POST

Item 9. Recessed reset button: Not currently used

Item 10. Identification LED:
• Off: Identification disabled
• Blinking blue (for 15 sec.): Identification is enabled

Item 11. USB port: One USB 3.0 connector

Item 12. Diagnostic LEDs (8)

Item 13. Serial port (3.5 mm mini jack)

Item 14. Two options:
Item 3. MGMT port (Slot 3/Port 1) — Ethernet/iSCSI port that is typically used for storage system management and access to the BMC.

NOTE: To use the MGMT port as an iSCSI port for replication to another Storage Center, a Flex Port license and a replication license are required. To use the MGMT port as a front-end connection to host servers, a Flex Port license is required.
Figure 13. SCv2000/SCv2020 Storage System Drive Indicators

Item 1. Drive activity indicator:
• Blinking green: Drive activity
• Steady green: Drive is detected and has no faults

Item 2. Drive status indicator:
• Off: Normal operation
• Blinking amber (on 1 sec. / off 1 sec.): Drive identification is enabled
• Blinking amber (on 2 sec. / off 1 sec.)
SC100/SC120 Expansion Enclosure Front-Panel Features and Indicators

The SC100/SC120 front panel shows the expansion enclosure status and power supply status.

Figure 16. SC100 Front-Panel Features and Indicators

Figure 17. SC120 Front-Panel Features and Indicators

Item 1. Expansion enclosure status indicator: Lights when the expansion enclosure power is on.

Item 2. Power supply status indicator: Lights when at least one power supply is supplying power to the expansion enclosure.
SC100/SC120 Expansion Enclosure Back-Panel Features and Indicators

The SC100/SC120 back panel provides controls to power up and reset the expansion enclosure, indicators to show the expansion enclosure status, and connections for back-end cabling.

Figure 18. SC100/SC120 Expansion Enclosure Back-Panel Features and Indicators

Item 1. DC power indicator:
• Green: Normal operation.
Item 1. System status indicator: Not used on SC100/SC120 expansion enclosures.

Item 2. Serial port: Not for customer use.

Item 3. SAS port A (in): Connects to a storage controller or to other SC100/SC120 expansion enclosures. SAS ports A and B can be used for either input or output. However, for cabling consistency, use port A as an input port.

Item 4. Port A link status

Item 5. SAS port B (out): Connects to a storage controller or to other SC100/SC120 expansion enclosures.
• Off: No power to the drive

SC100/SC120 Expansion Enclosure Drive Numbering

In an SC100/SC120 expansion enclosure, the drives are numbered from left to right starting from 0. Dell Storage Manager Client identifies drives as XX-YY, where XX is the unit ID of the expansion enclosure that contains the drive, and YY is the drive position inside the expansion enclosure.

• An SC100 holds up to 12 drives, which are numbered in rows starting from 0 at the top-left drive.

Figure 21.
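As a concrete illustration of the XX-YY scheme, the following minimal Python sketch computes drive labels. The 4-drives-per-row layout assumed for the SC100 and the exact label formatting are assumptions for illustration, not taken from Dell Storage Manager.

```python
# Minimal sketch of the XX-YY drive identification described above.
# Assumption for illustration: an SC100 arranges its 12 drives in rows
# of 4, numbered left to right starting from 0 at the top-left drive.

def drive_label(unit_id: int, row: int, col: int, drives_per_row: int = 4) -> str:
    """Build an XX-YY label: XX is the enclosure unit ID, YY the drive position."""
    position = row * drives_per_row + col
    return f"{unit_id:02d}-{position:02d}"

# Third drive in the second row of the enclosure with unit ID 2:
print(drive_label(unit_id=2, row=1, col=2))  # prints "02-06"
```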
2 Install the Storage Center Hardware

This section describes how to unpack the Storage Center equipment, prepare for the installation, and mount the equipment in a rack.

Unpack and Inventory the Storage Center Equipment

Unpack the storage system and identify the items in your shipment.

Figure 23. SCv2000/SCv2020 Storage System Components

1. Documentation
2. Storage system
3. Rack rails
4.
• Dell recommends that only individuals with rack-mounting experience install the SCv2000/SCv2020 in a rack.
• Make sure the storage system is always fully grounded to prevent damage from electrostatic discharge.
• When handling the storage system hardware, use an electrostatic wrist guard (not included) or a similar form of protection.

The chassis must be mounted in a rack.
• Remove any jewelry or metal objects from your body. These items are excellent metal conductors that can create short circuits and harm you if they come into contact with printed circuit boards or areas where power is present.
• Do not lift the storage system chassis by the handles of the power supply units (PSUs). They are not designed to hold the weight of the entire chassis, and the chassis cover could become bent.
• Before moving the storage system chassis, remove the PSUs to minimize weight.
Figure 25. Insert the Screw into the Rack Mounting Screw Hole

5. Extend the rail to fit the rack and insert the two rail pins into the pin holes at the marked location at the back of the rack.

Figure 26. Extend the Rail

6. Insert a screw into the rack mounting screw hole at the back of the rack and tighten the screw to secure the rail to the rack.
7. Repeat the previous steps for the second rail.
8. Slide the storage system chassis onto the rails.
Figure 27. Mount the SCv2000/SCv2020 Storage System Chassis

9. Secure the storage system chassis to the rack using the mounting screws within each chassis ear.
   a. Lift the latch on each chassis ear to access the screws.
   b. Tighten the screws to secure the chassis to the rack.
   c. Close the latch on each chassis ear.

Figure 28. Secure the Chassis to the Rack

10. If the Storage Center system includes expansion enclosures, mount the expansion enclosures in the rack.
3 Connect the Front-End Cabling

Front-end cabling refers to the connections between the storage system and external devices such as host servers or another Storage Center. Front-end connections can be made using Fibre Channel, iSCSI, or SAS interfaces. Dell recommends connecting the storage system to host servers using the most redundant option available.
Multipath I/O

MPIO allows a server to use multiple paths for I/O if they are available. MPIO software offers redundancy at the path level. MPIO typically operates in a round-robin manner by sending packets first down one path and then the other. If a path becomes unavailable, MPIO software continues to send packets down the functioning path.

MPIO is required to enable redundancy for servers connected to a Storage Center with SAS front-end connectivity.
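As a way to picture the round-robin behavior described above, here is a minimal Python sketch of path selection with failover. The path names are invented, and real MPIO is implemented by the operating system's multipath driver rather than by application code.

```python
from itertools import cycle

class RoundRobinMpio:
    """Toy model of MPIO: rotate I/O across paths, skipping failed ones."""

    def __init__(self, paths):
        self.paths = list(paths)
        self.healthy = set(self.paths)
        self._rotation = cycle(self.paths)

    def fail_path(self, path):
        self.healthy.discard(path)

    def next_path(self):
        # Round-robin over all paths, returning the next healthy one.
        for _ in range(len(self.paths)):
            path = next(self._rotation)
            if path in self.healthy:
                return path
        raise IOError("No healthy paths remain")

mpio = RoundRobinMpio(["controller1:port1", "controller2:port1"])
print(mpio.next_path())  # controller1:port1
print(mpio.next_path())  # controller2:port1
mpio.fail_path("controller1:port1")
print(mpio.next_path())  # controller2:port1 (I/O continues on the surviving path)
```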
• Refer to the Dell Storage Compatibility Matrix for a list of supported Fibre Channel HBAs. Steps 1. Install Fibre Channel HBAs in the host servers. NOTE: Do not install Fibre Channel HBAs from different vendors in the same server. 2. Install supported drivers for the HBAs and make sure that the HBAs have the latest supported firmware. 3. Use the Fibre Channel cabling diagrams to cable the host servers to the switches.
7. Storage controller 2

Next steps

Install or enable MPIO on the host servers.

NOTE: After the Storage Center configuration is complete, run the host access wizard to configure host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage Center Best Practices document on the Dell TechCenter site (http://en.community.dell.com/techcenter/storage/).
Example Figure 30. Storage System with Dual 8 Gb Storage Controllers and Two FC Switches 1. Server 1 2. Server 2 3. FC switch 1 (fault domain 1) 4. FC switch 2 (fault domain 2) 5. Storage system 6. Storage controller 1 7. Storage controller 2 Next steps Install or enable MPIO on the host servers. NOTE: After the Storage Center configuration is complete, run the host access wizard to configure host server access and apply MPIO best practices.
Example Figure 31. Storage System with Dual 16 Gb Storage Controllers and One FC Switch 1. Server 1 2. Server 2 3. FC switch (Fault domain 1) 4. Storage system 5. Storage controller 1 6. Storage controller 2 Next steps Install or enable MPIO on the host servers. NOTE: After the Storage Center configuration is complete, run the host access wizard to configure host server access and apply MPIO best practices.
• Storage controller 1: port 4 to the FC switch • Storage controller 2: port 2 to the FC switch • Storage controller 2: port 4 to the FC switch Example Figure 32. Storage System with Dual 8 Gb Storage Controllers and One FC Switch 1. Server 1 2. Server 2 3. FC switch (fault domain 1 and fault domain 2) 4. Storage system 5. Storage controller 1 6. Storage controller 2 Next steps Install or enable MPIO on the host servers.
Example

Figure 33. Storage System with a Single 16 Gb Storage Controller and Two FC Switches

1. Server 1
2. Server 2
3. FC switch 1 (fault domain 1)
4. FC switch 2 (fault domain 2)
5. Storage system
6. Storage controller

Next steps

Install or enable MPIO on the host servers.

NOTE: After the Storage Center configuration is complete, run the host access wizard to configure host server access and apply MPIO best practices.
Example

Figure 34. Storage System with a Single 8 Gb Storage Controller and Two FC Switches

1. Server 1
2. Server 2
3. FC switch 1 (fault domain 1)
4. FC switch 2 (fault domain 2)
5. Storage system
6. Storage controller

Next steps

Install or enable MPIO on the host servers.

NOTE: After the Storage Center configuration is complete, run the host access wizard to configure host server access and apply MPIO best practices.
• Use only Dell-supported SFP+ transceiver modules with the SCv2000/SCv2020. Other generic SFP+ transceiver modules are not supported and may not work with the SCv2000/SCv2020. • The SFP+ transceiver module housing has an integral guide key that is designed to prevent you from inserting the transceiver module incorrectly. • Use minimal pressure when inserting an SFP+ transceiver module into an FC port. Forcing the SFP+ transceiver module into a port may damage the transceiver module or the port.
CAUTION: Touching the end of a fiber-optic cable damages the cable. Whenever a fiber-optic cable is not connected, replace the protective covers on the ends of the cable. 4. Insert the fiber-optic cable into the transceiver module until the latching mechanism clicks. 5. Insert the other end of the fiber-optic cable into the SFP+ transceiver module of a Fibre Channel switch.
Fibre Channel Zoning

When using Fibre Channel for front-end connectivity, zones must be established to ensure that storage is visible to the servers. Use the zoning concepts discussed in this section to plan the front-end connectivity before starting to cable the storage system.

Dell recommends creating zones using a single initiator host port and multiple Storage Center ports.
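To make the recommendation concrete, the sketch below builds one zone per host initiator port, with each zone containing that single initiator plus the Storage Center ports. The WWNs are fabricated placeholders; actual zones are defined in the Fibre Channel switch's management interface, not in application code.

```python
# Sketch of single-initiator zoning: one zone per host port, each
# containing that initiator and all Storage Center front-end ports.
# Every WWN below is a made-up placeholder.

host_ports = ["10:00:00:90:fa:aa:00:01", "10:00:00:90:fa:aa:00:02"]
storage_center_ports = ["50:00:d3:10:00:5c:20:01", "50:00:d3:10:00:5c:20:02"]

def single_initiator_zones(initiators, targets):
    """Return {zone_name: [one initiator, all targets]}."""
    return {
        f"zone_host_port_{i}": [initiator, *targets]
        for i, initiator in enumerate(initiators, start=1)
    }

for name, members in single_initiator_zones(host_ports, storage_center_ports).items():
    print(name, members)
```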
Figure 39. Wrap Label Around Cable

3. Apply a matching label to the other end of the cable.

Connecting to iSCSI Host Servers

Choose the iSCSI connectivity option that best suits the front-end redundancy requirements and network infrastructure.

Preparing Host Servers

Install the iSCSI host bus adapters (HBAs) or iSCSI network adapters, install the drivers, and make sure that the latest supported firmware is installed.

• Contact your solution provider for a list of supported iSCSI HBAs.
NOTE: If using jumbo frames, they must be enabled and configured on all devices in the data path: adapter ports, switches, and storage system (a simple consistency check is sketched after these steps).

e. If the host uses network adapters for iSCSI traffic, add the VMkernel ports to the iSCSI software initiator.
f. Use the iSCSI cabling diagrams to cable the host servers to the switches. Connecting host servers directly to the storage system without using Ethernet switches is not supported.
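Because one device left at a standard MTU silently breaks jumbo frames end to end, it can help to check the collected MTU values before testing, as sketched below. Device names and values are placeholders for an inventory you gather from your own adapters, switches, and storage system ports.

```python
# Sketch of the jumbo-frame rule above: every device in the data path
# must have jumbo frames enabled. The inventory is illustrative only.

DATA_PATH_MTUS = {
    "server adapter port": 9000,
    "ethernet switch 1": 9000,
    "ethernet switch 2": 1500,   # example of a misconfigured device
    "storage system iSCSI port": 9000,
}

def jumbo_frame_offenders(mtus, required=9000):
    """Return the devices whose MTU breaks the end-to-end jumbo path."""
    return [device for device, mtu in mtus.items() if mtu < required]

offenders = jumbo_frame_offenders(DATA_PATH_MTUS)
if offenders:
    print("Jumbo frames will fail at:", ", ".join(offenders))
```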
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage Center Best Practices document on the Dell TechCenter site (http://en.community.dell.com/techcenter/storage/).
3. Ethernet switch 1 (fault domain 1) 4. Ethernet switch 2 (fault domain 2) 5. Storage system 6. Storage controller 1 7. Storage controller 2 Next steps Install or enable MPIO on the host servers. NOTE: After the Storage Center configuration is complete, run the host access wizard to configure host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage Center Best Practices document on the Dell TechCenter site (http://en.community.dell.
5. Storage controller 1 6. Storage controller 2 Next steps Install or enable MPIO on the host servers. NOTE: After the Storage Center configuration is complete, run the host access wizard to configure host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage Center Best Practices document on the Dell TechCenter site (http://en.community.dell.com/techcenter/storage/).
Example Figure 43. Storage System with Dual 1 GbE Storage Controllers and One Ethernet Switch 1. Server 1 2. Server 2 3. Ethernet switch (fault domain 1 and fault domain 2) 4. Storage system 5. Storage controller 1 6. Storage controller 2 Next steps Install or enable MPIO on the host servers. NOTE: After the Storage Center configuration is complete, run the host access wizard to configure host server access and apply MPIO best practices.
Example Figure 44. Storage System with One 10 GbE Storage Controller and Two Ethernet Switches 1. Server 1 2. Server 2 3. Ethernet switch 1 (fault domain 1) 4. Ethernet switch 2 (fault domain 2) 5. Storage system 6. Storage controller Next steps Install or enable MPIO on the host servers. NOTE: After the Storage Center configuration is complete, run the host access wizard to configure host server access and apply MPIO best practices.
Example Figure 45. Storage System with One 1 GbE Storage Controller and Two Ethernet Switches 1. Server 1 2. Server 2 3. Ethernet switch 1 (fault domain 1) 4. Ethernet switch 2 (fault domain 2) 5. Storage system 6. Storage controller Next steps Install or enable MPIO on the host servers. NOTE: After the Storage Center configuration is complete, run the host access wizard to configure host server access and apply MPIO best practices.
Figure 46. Attach Label to Cable

2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so that it does not obscure the text.

Figure 47. Wrap Label Around Cable

3. Apply a matching label to the other end of the cable.

Cabling Direct-Attached Host Servers

An SCv2000/SCv2020 storage system with front-end SAS ports connects directly to host servers. Each host bus adapter (HBA) can attach to one SAS fault domain.
NOTE: If deploying vSphere hosts, configure only one host at a time.

SAS Virtual Port Mode

To provide redundancy in SAS virtual port mode, the front-end ports on each storage controller must be directly connected to the server. In SAS virtual port mode, a volume is active on only one storage controller, but it is visible to both storage controllers. Asymmetric Logical Unit Access (ALUA) controls the path that a server uses to access a volume.
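The following toy model illustrates the ALUA behavior just described: the server prefers the active/optimized path through the controller where the volume is active and falls back to the non-optimized path through the other controller. Path names and states are illustrative; real path selection is handled by the server's MPIO/ALUA stack.

```python
# Toy ALUA model: the volume is active on controller 1 (optimized path)
# but also visible through controller 2 (non-optimized path).

paths = {
    "controller1:port1": "active/optimized",      # owning controller
    "controller2:port1": "active/non-optimized",  # visible, not preferred
}

def choose_path(path_states):
    """Prefer the optimized path; fall back to a non-optimized one."""
    for state in ("active/optimized", "active/non-optimized"):
        for path, path_state in path_states.items():
            if path_state == state:
                return path
    raise IOError("Volume is unreachable on all paths")

print(choose_path(paths))                   # controller1:port1
paths["controller1:port1"] = "unavailable"  # owning controller goes down
print(choose_path(paths))                   # controller2:port1
```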
Example Figure 48. Storage System with Dual 12 Gb SAS Storage Controllers Connected to Four Host Servers 1. Server 1 2. Server 2 3. Server 3 4. Server 4 5. Storage system 6. Storage controller 1 7. Storage controller 2 Next steps Install or enable MPIO on the host servers. NOTE: After the Storage Center configuration is complete, run the host access wizard to configure host server access and apply MPIO best practices.
b. Connect a SAS cable from storage controller 2: port 1 to the first SAS HBA on host server 1. 2. Connect fault domain 2 (shown in blue) to host server 1. a. Connect a SAS cable from storage controller 1: port 2 to the second SAS HBA on host server 1. b. Connect a SAS cable from storage controller 2: port 2 to the second SAS HBA on host server 1. 3. Connect fault domain 3 (shown in gray) to host server 2. a. Connect a SAS cable from storage controller 1: port 3 to the first SAS HBA on host server 2. b.
Steps 1. Connect fault domain 1 to host server 1 by connecting a SAS cable from storage controller 1: port 1 to host server 1. 2. Connect fault domain 2 to host server 1 by connecting a SAS cable from storage controller 1: port 2 to host server 1. 3. Connect fault domain 3 to host server 2 by connecting a SAS cable from storage controller 1: port 3 to host server 2. 4. Connect fault domain 4 to host server 2 by connecting a SAS cable from storage controller 1: port 4 to host server 2.
Figure 51. Attach Label to Cable

2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so that it does not obscure the text.

Figure 52. Wrap Label Around Cable

3. Apply a matching label to the other end of the cable.

Cabling the Ethernet Management Port

To manage Storage Center, the Ethernet management (MGMT) port of each storage controller must be connected to an Ethernet switch that is part of the management network.
Figure 53. Storage System Connected to a Management Network 1. Corporate/management network 2. Ethernet switch 3. Storage system 4. Storage controller 1 5. Storage controller 2 NOTE: If the Flex Port license is installed, the management port becomes a shared iSCSI port. To use the management port as an iSCSI port, cable the management port to a network switch dedicated to iSCSI traffic. Special considerations must be taken into account when sharing the management port.
Figure 55. Wrap Label Around Cable

3. Apply a matching label to the other end of the cable.

Cabling the Embedded Ports for iSCSI Replication

If the Storage Center is licensed for replication, the replication port can be connected to an Ethernet switch and used for iSCSI replication. If the Storage Center is licensed for replication and the Flex Port license is installed, the management port and replication port can both be connected to an Ethernet switch and used for iSCSI replication.
4. To configure replication, refer to the Dell Enterprise Manager Administrator’s Guide.

Related links
Configure Embedded iSCSI Ports

Cabling the Management Port and Replication Port for iSCSI Replication

If replication is licensed and the Flex Port license is installed, the management (MGMT) port and replication (REPL) port can be used to replicate data to another Storage Center.
Cabling the Embedded Ports for iSCSI Host Connectivity

If the Flex Port license is installed on the Storage Center, the management port and replication port can be connected to an Ethernet switch and used for iSCSI host connectivity. Dell recommends using two switches dedicated for iSCSI traffic. Refer to the iSCSI Settings appendix for a list of recommended and required settings.
Figure 58. Two iSCSI Networks Using the Embedded Ethernet Ports on Dual Fibre Channel Storage Controllers

1. Corporate/management network
2. Server 1 (FC)
3. Server 2 (iSCSI)
4. FC switch 1 (fault domain 1 for FC fabric)
5. FC switch 2 (fault domain 2 for FC fabric)
6. Ethernet switch 1 (fault domain 1)
7. Ethernet switch 2 (fault domain 2)
8. Storage system
9. Storage controller 1
10. Storage controller 2
• If a physical port or Ethernet switch becomes unavailable, the storage system is accessed from the switch in the other fault domain. • If a storage controller becomes unavailable, the virtual ports on the offline storage controller move to the physical ports on the other storage controller. NOTE: The Flex Port feature allows both Storage Center system management traffic and iSCSI traffic to use the same physical network ports.
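The failover behavior in the list above can be sketched as follows: each virtual port normally sits on a preferred physical port, and when a storage controller goes offline its virtual ports move to the surviving controller's physical ports. All port names are invented for illustration.

```python
# Toy model of virtual port failover between the two fault domains.

physical_ports = {
    "controller1": ["c1-p1", "c1-p2"],
    "controller2": ["c2-p1", "c2-p2"],
}
# Each virtual port starts on its preferred (home) physical port.
virtual_ports = {"v1": "c1-p1", "v2": "c1-p2", "v3": "c2-p1", "v4": "c2-p2"}

def fail_over(failed_controller):
    """Move virtual ports off the failed controller's physical ports."""
    failed = set(physical_ports[failed_controller])
    survivors = [
        port
        for controller, ports in physical_ports.items()
        if controller != failed_controller
        for port in ports
    ]
    for i, (vport, home) in enumerate(virtual_ports.items()):
        if home in failed:
            # Spread displaced virtual ports across the surviving ports.
            virtual_ports[vport] = survivors[i % len(survivors)]

fail_over("controller1")
print(virtual_ports)  # v1 and v2 now live on controller2's physical ports
```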
4 Connect the Back-End Cabling and Connect the Power

Back-end cabling refers to the connections between the storage system and expansion enclosures. After the back-end cabling is complete, connect power cables to the storage system components and turn on the hardware.

An SCv2000/SCv2020 storage system can be deployed with or without expansion enclosures.

• When an SCv2000/SCv2020 storage system is deployed without expansion enclosures, the storage controllers must be interconnected using SAS cables.
Back-End Connections for an SCv2000/SCv2020 Storage System Without Expansion Enclosures

When you deploy an SCv2000/SCv2020 storage system without expansion enclosures, you must interconnect the storage controllers using SAS cables. In a single-controller system, do not connect a back-end SAS cable from port A to port B.

NOTE: The top storage controller is storage controller 1 and the bottom storage controller is storage controller 2.

Figure 61. SCv2000/SCv2020 Without Expansion Enclosures

1.
SCv2000/SCv2020 and One SC100/SC120 Expansion Enclosure

This figure shows an SCv2000/SCv2020 storage system cabled to one SC100/SC120 expansion enclosure.

Figure 62. SCv2000/SCv2020 and One SC100/SC120 Expansion Enclosure

1. Storage system
2. Storage controller 1
3. Storage controller 2
4. Expansion enclosure

The following table describes the back-end SAS connections from an SCv2000/SCv2020 storage system to one SC100/SC120 expansion enclosure.

Table 4.
SCv2000/SCv2020 and Two or More SC100/SC120 Expansion Enclosures

This figure shows an SCv2000/SCv2020 storage system cabled to two SC100/SC120 expansion enclosures.

Figure 63. SCv2000/SCv2020 and Two SC100/SC120 Expansion Enclosures

1. Storage system
2. Storage controller 1
3. Storage controller 2
4. Expansion enclosure 1
5. Expansion enclosure 2

The following table describes the back-end SAS connections from an SCv2000/SCv2020 to two SC100/SC120 expansion enclosures.

Table 5.
Figure 64. Attach Label to Cable

2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so that it does not obscure the text.

Figure 65. Wrap Label Around Cable

3. Apply a matching label to the other end of the cable.

Connect Power Cables and Turn On the Storage System

Connect power cables to the storage system components and turn on the hardware.
Figure 66. Connect the Power Cables

3. Plug the other end of the power cables into a grounded electrical outlet or a separate power source such as an uninterruptible power supply (UPS) or a power distribution unit (PDU).
4. Press both power switches on the rear of the storage system chassis to turn on the storage system.

Figure 67. Turn On the Storage System

When the SCv2000/SCv2020 storage system is powered on, a delay occurs while the storage system prepares to start up.
5 Discover and Configure the Storage Center

The Discover and Configure Uninitialized SCv2000 Series Storage Centers wizard allows you to set up a Storage Center to make it ready for volume creation. Use the Dell Storage Manager Client to discover and configure the Storage Center. After configuring a Storage Center, you can set up a localhost, VMware vSphere host, or VMware vCenter host using the host setup wizards.
Subnet mask: ___ . ___ . ___ . ___
Gateway IPv4 address: ___ . ___ . ___ . ___
Domain name: ________________
DNS server address: ___ . ___ . ___ . ___
Secondary DNS server address: ___ . ___ . ___ . ___

Table 7.
Table 10. NTP, SMTP, and Proxy Servers

NTP server IPv4 address: ___ . ___ . ___ . ___
SMTP server IPv4 address: ___ . ___ . ___ . ___
Backup SMTP server IPv4 address: ___ . ___ . ___ . ___
SMTP server login ID: ________________
SMTP server password: ________________
Proxy server IPv4 address: ___ . ___ . ___ . ___
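Before running the wizard, it can save time to sanity-check the worksheet addresses; a common mistake is a management or controller address outside the gateway's subnet. The sketch below uses Python's standard ipaddress module. All addresses are examples, and the per-controller rows are an assumption based on the wizard flow rather than rows shown in the excerpt above.

```python
# Sanity-check worksheet IPv4 values: the management and controller
# addresses should share a subnet with the gateway. Example values only.

import ipaddress

subnet_mask = "255.255.255.0"
addresses = {
    "management": "192.168.10.20",
    "top controller": "192.168.10.21",      # assumed worksheet row
    "bottom controller": "192.168.10.22",   # assumed worksheet row
    "gateway": "192.168.10.1",
}

network = ipaddress.ip_network(f"{addresses['gateway']}/{subnet_mask}", strict=False)
for role, addr in addresses.items():
    in_subnet = ipaddress.ip_address(addr) in network
    print(f"{role}: {addr} {'OK' if in_subnet else 'NOT in ' + str(network)}")
```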
• SUSE Linux Enterprise 12 or later
• Windows Server 2008 R2 or later

Install and Use the Dell Storage Manager Client

You must start the Dell Storage Manager Client as an Administrator to run the Discover and Configure Uninitialized Storage Centers wizard.

1. Go to www.dell.com/support, navigate to the SCv2000/SCv2020 product support page, and download the Windows or Linux version of the Dell Storage Manager Client.
2. Install the Dell Storage Manager Client on the host server.
6. If the Storage Center is partially configured, the Storage Center login pane appears. Enter the management IPv4 address and the Admin password for the Storage Center, then click Next to continue.

Deploy the Storage Center Using the Direct Connect Method

Use the direct connect method to manually deploy the Storage Center when it is not discoverable.

1. Use an Ethernet cable to connect the computer running the Dell Storage Manager Client to the management port of the top controller.
2.
Configure iSCSI Fault Domains

For a Storage Center with iSCSI front-end ports, use the Configure Fault Tolerance page and the Fault Domain pages to enter network information for the fault domains and ports.

1. (Optional) On the Configure Fault Tolerance page, click More information about fault domains or How to set up an iSCSI network to learn more about these topics.
2. Click Next.

NOTE: If any iSCSI ports are down, a dialog box appears that allows you to unconfigure these ports.
Review Fibre Channel Front-End Configuration

For a Storage Center with Fibre Channel front-end ports, the Fault Domains page displays an example of a fault domain topology based on the number of controllers and type of front-end ports. The Review Front-End Configuration page displays information about the fault domains created by the Storage Center.

1. (Optional) On the Fault Tolerance page, click More information about fault domains to learn more about fault domains.
2. Click Next.
3.
• To use SMTP, type the Storage Center fully qualified domain name in the Hello Message (HELO) field.
• To use ESMTP, select the Send Extended Hello (EHLO) check box, then type the Storage Center fully qualified domain name in the Extended Hello Message (EHLO) field.

g. If the SMTP server requires clients to authenticate before sending email, select the Use Authorized Login (AUTH LOGIN) check box, then type a user name and password in the Login ID and Password fields.

3. Click Next.
4. Type a shipping address where replacement Storage Center components can be sent.
5. Click Next.

Update Storage Center

The Storage Center attempts to contact the SupportAssist Update Server to check for updates. If you are not using SupportAssist, you must use the Storage Center Update Utility to update the Storage Center operating system before continuing.

• If no update is available, the Storage Center Up to Date page appears. Click Next.
Set Up a localhost or VMware Host

After configuring a Storage Center, you can set up block-level storage for a localhost, VMware vSphere host, or VMware vCenter.

Set Up a localhost from Initial Setup

Configure a localhost to access block-level storage on the Storage Center. It is recommended that you perform this procedure for each host that is connected to the Storage Center.

Prerequisites

• Client must be running on a system with a 64-bit operating system.
3. Select an available port, and then click Create Server. The server definition is created on the Storage Center.
4. The Host Setup Successful page displays the best practices that were set by the wizard and the best practices that were not set. Make a note of any best practices that were not set by the wizard. It is recommended that these updates be applied manually before starting I/O to the Storage Center.
5. (Optional) Select Create a Volume for this host to create a volume after finishing host setup.
a. Select the server.
b. Click Next. The Volume Summary page is displayed.

7. Click Finish.

Set the Default Storage Profile for New Volumes

The default Storage Profile is used when a new volume is created unless the user selects a different Storage Profile. You can prevent the Storage Profile from being changed during volume creation by clearing the Allow Storage Profile Selection checkbox.

1. Click the Storage view.
2. In the Storage pane, select a Storage Center.
3.
6 Perform Post-Setup Tasks

Perform connectivity and failover tests to make sure that the Storage Center deployment was successful.

NOTE: Before testing failover, set the operation mode of the Storage Center to Maintenance. When you are finished testing failover, set the operation mode of the Storage Center back to Normal.

Verify Connectivity and Failover

This section describes how to verify that the Storage Center is set up properly and performs failover correctly.
Test Basic Connectivity

Verify basic connectivity by copying data to the test volumes.

1. Connect to the server to which the volumes are mapped.
2. Create a folder on the TestVol1 volume, copy at least 2 GB of data to the folder, and verify that the data copied successfully.
3. Create a folder on the TestVol2 volume, copy at least 2 GB of data to the folder, and verify that the data copied successfully.
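The copy-and-verify steps above can also be scripted. The sketch below writes roughly 2 GB of random data, copies it to each test volume, and compares SHA-256 checksums. The mount points are assumptions; substitute the paths where TestVol1 and TestVol2 are actually mounted on the server.

```python
# Sketch of the basic connectivity test: copy ~2 GB to each mapped test
# volume and verify the data arrived intact. Paths are assumptions.

import hashlib, os, shutil

SOURCE = "testdata.bin"
TARGETS = ["/mnt/TestVol1/testdata.bin", "/mnt/TestVol2/testdata.bin"]

def sha256(path, chunk=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            digest.update(block)
    return digest.hexdigest()

# Create roughly 2 GB of random test data (2048 x 1 MiB blocks).
with open(SOURCE, "wb") as f:
    for _ in range(2048):
        f.write(os.urandom(1 << 20))

expected = sha256(SOURCE)
for target in TARGETS:
    shutil.copyfile(SOURCE, target)
    print(target, "OK" if sha256(target) == expected else "MISMATCH")
```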
Clean Up Test Volumes

After testing is complete, delete the volumes used for testing.

1. Connect to the server to which the volumes are mapped and remove the volumes.
2. Use the Dell Storage Manager Client to connect to the Storage Center.
3. Click the Storage tab.
4. From the Storage tab navigation pane, select the Volumes node.
5. Select the volumes to delete.
6. Right-click on the selected volumes and select Delete. The Delete dialog box opens.
7. Click OK.
A Adding or Removing an Expansion Enclosure

This section describes how to add an expansion enclosure to a storage system and how to remove an expansion enclosure from a storage system.

Adding Multiple Expansion Enclosures to a Storage System Deployed without Expansion Enclosures

Use caution when adding expansion enclosures to a live Storage Center system to preserve the integrity of the existing data.
Figure 70. Cable the Expansion Enclosures Together

1. Expansion enclosure 1
2. Expansion enclosure 2

3. Repeat the previous steps to connect additional expansion enclosures to the chain.

Check the Current Disk Count before Adding Expansion Enclosures

Use the Dell Storage Manager Client to determine the number of drives that are currently accessible to the Storage Center.

1. Connect to the Storage Center using the Dell Storage Manager Client.
2. Select the Storage tab.
3.
Figure 72. Connect the A-Side Cables to the Expansion Enclosures

1. Storage system
2. Storage controller 1
3. Storage controller 2
4. Expansion enclosure 1
5. Expansion enclosure 2

Add the SC100/SC120 Expansion Enclosures to the B-Side of the Chain

Connect the expansion enclosures to one side of the chain at a time to maintain drive availability.

1. Remove the B-side cable (shown in blue) that connects storage controller 1: port B to storage controller 2: port A.
2. Expansion enclosure 2

5. Cable the expansion enclosures to the B-side of the chain.
   a. Connect a SAS cable from storage controller 1: port B to expansion enclosure 2: bottom EMM, port B.
   b. Connect a SAS cable from storage controller 2: port A to expansion enclosure 1: bottom EMM, port A.

Figure 74. Connect the B-Side Cables to the Expansion Enclosures

1. Storage system
2. Storage controller 1
3. Storage controller 2
4. Expansion enclosure 1
5. Expansion enclosure 2
Figure 75. Attach Label to Cable

2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so that it does not obscure the text.

Figure 76. Wrap Label Around Cable

3. Apply a matching label to the other end of the cable.

Adding a Single Expansion Enclosure to a Chain Currently in Service

Use caution when adding an expansion enclosure to a live Storage Center system to preserve the integrity of the existing data.
d. Turn on the expansion enclosure. When the drives spin up, make sure that the front panel and power status LEDs show normal operation.
e. Click Next.
f. Add the expansion enclosure to the A-side chain. Click Next to validate the cabling.
g. Add the expansion enclosure to the B-side chain. Click Next to validate the cabling.
h. Click Finish.

5. To manually manage new unassigned disks:
   a. Click the Storage tab.
   b. In the Storage tab navigation pane, select the Disks node.
Figure 77. Disconnect A-Side Cable from the Existing Expansion Enclosure

1. Storage system
2. Storage controller 1
3. Storage controller 2
4. Expansion enclosure 1

3. Use a new SAS cable to connect expansion enclosure 1: top EMM, port B to the new expansion enclosure (2): top EMM, port A.
4. Connect the A-side cable that was disconnected in step 2 to the new expansion enclosure (2): top EMM, port B.

Figure 78. Connect A-Side Cables to the New Expansion Enclosure

1. Storage system
2.
Figure 79. Disconnect B-Side Cable from the Existing Expansion Enclosure

1. Storage system
2. Storage controller 1
3. Storage controller 2
4. Expansion enclosure 1
5. New expansion enclosure (2)

2. Use a new SAS cable to connect expansion enclosure 1: bottom EMM, port B to the new expansion enclosure (2): bottom EMM, port A.
3. Connect the B-side cable that was disconnected in step 1 to the new expansion enclosure (2): bottom EMM, port B.

Figure 80.
Label the Back-End Cables

Label the back-end cables that interconnect the storage controllers or label the back-end cables that connect the storage system to the expansion enclosures.

Prerequisites

Locate the cable labels provided with the expansion enclosures.

About this task

Apply cable labels to both ends of each SAS cable to indicate the chain number and side (A or B).

Steps

1. Starting with the top edge of the label, attach the label to the cable near the connector.

Figure 81. Attach Label to Cable
CAUTION: Make sure that your data is backed up before removing an expansion enclosure. Before physically removing an expansion enclosure, make sure that none of the drives in the enclosure are managed by the Storage Center Operating System.

Steps

1. Connect to the Storage Center using the Dell Storage Manager Client.
2. Use the Dell Storage Manager Client to release the disks in the expansion enclosure.
3. Select the expansion enclosure to remove and click Remove Enclosure.
Figure 83. Disconnecting the A-Side Cables from the Expansion Enclosure

1. Storage system
2. Storage controller 1
3. Storage controller 2
4. Expansion enclosure 1
5. Expansion enclosure 2

3. Connect the A-side cable to expansion enclosure 2: top EMM, port A.

Figure 84. Reconnecting the A-Side Cable to the Remaining Expansion Enclosure

1. Storage system
2. Storage controller 1
3. Storage controller 2
4. Expansion enclosure 1
5.
Disconnect the SC100/SC120 Expansion Enclosure from the B-Side of the Chain

Disconnect the B-side cables from the expansion enclosure that you want to remove.

1. Disconnect the B-side cable (shown in blue) from expansion enclosure 1: bottom EMM, port A. The A-side cables continue to carry I/O while the B-side is disconnected.
2. Remove the B-side cable between expansion enclosure 1: bottom EMM, port B and expansion enclosure 2: bottom EMM, port A.

Figure 85.
Figure 86. Reconnecting the B-Side Cable to the Remaining SC100/SC120 Expansion Enclosure

1. Storage system
2. Storage controller 1
3. Storage controller 2
4. Disconnected expansion enclosure
5.
B Troubleshooting Storage Center Deployment

This section contains troubleshooting steps for common Storage Center deployment issues.

Troubleshooting Storage Controllers

To troubleshoot storage controllers:

1. Check the status of the storage controller using the Dell Storage Manager Client.
2. Check the position of the storage controllers. The storage controller with the lower HSN should be on the top, and the storage controller with the higher HSN should be on the bottom.
3.