Dell Storage Center SCv2080 Storage System Deployment Guide
Notes, Cautions, and Warnings

NOTE: A NOTE indicates important information that helps you make better use of your computer.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

Copyright © 2016 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
About this Guide

This guide describes the features and technical specifications of the SCv2080 storage system.
• Dell Storage Center Command Utility Reference Guide
Provides instructions for using the Storage Center Command Utility. The Command Utility provides a command-line interface (CLI) to enable management of Storage Center functionality on Windows, Linux, Solaris, and AIX platforms.
• Dell Storage Center Command Set for Windows PowerShell
Provides instructions for getting started with Windows PowerShell cmdlets and scripting objects that interact with the Storage Center using the PowerShell interactive shell, scripts, and PowerShell hosting applications.
1 About the SCv2080 Storage System

The SCv2080 storage system provides the central processing capabilities for the Storage Center Operating System (OS) and management of RAID storage. The SCv2080 storage system holds the physical drives that provide storage for the Storage Center. If additional storage is needed, the SCv2080 also supports a single SC180 expansion enclosure.
Figure 1. SCv2080 without an Expansion Enclosure

• An SCv2080 storage system with one SC180 expansion enclosure

Figure 2. SCv2080 with One Expansion Enclosure
1. SC180 expansion enclosure
2. SCv2080 storage system

Storage Center Replication
Storage Center sites can be collocated or remotely connected, and data can be replicated between sites. Storage Center replication can duplicate volume data to another site in support of a disaster recovery plan or to provide local access to a remote data volume.
• iSCSI: Hosts, servers, or Network Attached Storage (NAS) appliances access storage by connecting to the storage system iSCSI ports through one or more Ethernet switches. Connecting host servers directly to the storage system, without using Ethernet switches, is not supported. When replication is licensed, the SCv2080 can use the front-end iSCSI ports to replicate data to another Storage Center.
• SAS: Hosts or servers access storage by connecting directly to the storage system SAS ports.
5. Remote Storage Center connected via iSCSI for replication (Speed: 1 Gbps or 10 Gbps; Communication Type: Front End)
6. Ethernet switch for management/replication (Speed: 1 Gbps or 10 Gbps; Communication Type: Front End)
7. Management network (computer connected to the storage system through the Ethernet switch) (Speed: Up to 1 Gbps; Communication Type: System Administration)

SCv2080 Storage System with iSCSI Front-End Connectivity
An SCv2080 storage system with iSCSI front-end connectivity may communicate with the following components of a Storage Center system.
6. Management network (computer connected to the storage system through the Ethernet switch) (Speed: Up to 1 Gbps; Communication Type: System Administration)

SCv2080 Storage System with Front-End SAS Connectivity
An SCv2080 storage system with front-end SAS connectivity may communicate with the following components of a Storage Center system.

Figure 5.
Back-End Connectivity
Back-end connectivity is strictly between the storage system and the expansion enclosure, which holds the physical drives that provide back-end expansion storage. The SCv2080 supports SAS connectivity to a single SC180 expansion enclosure.

System Administration
To perform system administration, the Storage Center communicates with computers using the Ethernet management (MGMT) port on the storage controllers.
– Amber: Sideplane card fault or drive failure causing loss of availability or redundancy
• Logical Fault
– Amber (steady): Host-indicated drive fault
– Amber (flashing): Arrays in impacted state
• Cable Fault
– Amber: Cable fault
3. Drawer-specific left and right side storage system activity indicators: Activity Bar Graph, six variable-intensity LEDs dynamically displaying access of the drives in that specific drawer
4. Status indicator for storage system: Unit
SCv2080 Storage System Back-Panel Features and Indicators
The SCv2080 back panel contains the storage system power, connectivity, and fault indicators.

Figure 7. SCv2080 Storage System Back-Panel Features and Indicators
1. Optional cable retention positions (4): Locations for optional cable retention brackets.
SCv2080 Storage System Storage Controller Features and Indicators
The SCv2080 storage system includes two storage controllers in two interface slots.

SCv2080 Storage System Storage Controller with Fibre Channel Front-End Ports
The following figures show the features and indicators on a storage controller with Fibre Channel front-end ports.

Figure 8. SCv2080 Storage System Storage Controller with Four 8 Gb Fibre Channel Front-End Ports
Figure 9.
6. Storage controller status
– On: Storage controller completed POST
7. Recessed power off button: Not currently used
8. Storage controller fault
– Off: No faults
– Steady amber: Firmware has detected an error
– Blinking amber: Storage controller is performing POST
9. Recessed reset button: Not currently used
10. Identification LED
– Off: Identification disabled
– Blinking blue (for 15 sec.): Identification enabled
11. USB port: One USB 3.0 connector
12. Diagnostic LEDs (8)
13. Serial port (3.5 mm mini jack)
SCv2080 Storage System Storage Controller with iSCSI Front-End Ports
The following figures show the features and indicators on a storage controller with iSCSI front-end ports.

Figure 10. SCv2080 Storage System Storage Controller with Four 1 GbE iSCSI Front-End Ports
Figure 11. SCv2080 Storage System Storage Controller with Two 10 GbE iSCSI Front-End Ports

1. Battery status indicator
– Blinking green (on 0.5 sec. / off 1.5 sec.): Battery heartbeat
– Fast blinking green (on 0.
8. Storage controller fault
– Off: No faults
– Steady amber: Firmware has detected an error
– Blinking amber: Storage controller is performing POST
9. Recessed reset button: Not currently used
10. Identification LED
– Off: Identification disabled
– Blinking blue (for 15 sec.): Identification enabled
11. USB port: One USB 3.0 connector
12. Diagnostic LEDs (8)
13. Serial port (3.5 mm mini jack)
14. Two options:
3. MGMT port (Slot 3/Port 1): Ethernet/iSCSI port that is typically used for storage system management and access to the BMC
NOTE: To use the MGMT port as an iSCSI port for replication to another Storage Center, a Flex Port license and a replication license are required. To use the MGMT port as a front-end connection to host servers, a Flex Port license is required.
SCv2080 Storage System Cooling Fan Module Features and Indicators
SCv2080 storage systems include five cooling fan modules in five interface slots.

Figure 13. SCv2080 Storage System Cooling Fan Module Features and Indicators
1. Release latch: Press the release latch to remove the cooling fan module.
4. Power OK
– Green (steady): This PSU is providing power
– Green (flashing): AC power is present, but this PSU is in standby mode (the other PSU is providing power)
5. Power outlet: Power outlet for the storage system
6. Power switch: Controls power for the storage system

Separate and unique conditions are indicated if all three LEDs are in the same state:
• If all three LEDs are off, then neither PSU has AC power.
SCv2080 Storage System Drive Numbering
In the SCv2080 storage system, the drive slots are numbered 1–42 in the top drawer and 43–84 in the bottom drawer. Dell Storage Manager Client identifies drives as XX-YY, where XX is the unit ID of the storage system and YY is the drive position inside the storage system.

Figure 16. SCv2080 Storage System Drawers and Drive Numbering
1. Bottom drawer viewed from above
2. Top drawer viewed from above
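To make the drive numbering concrete, the following minimal Python sketch maps a slot number to a physical location. The guide defines the slot ranges and the three-row, 14-column drawer layout; the row-major ordering within a drawer is an assumption, and the function is illustrative rather than a Dell tool.

# Illustrative sketch: map an SCv2080 drive slot (1-84) to a drawer,
# row, and column. Slots 1-42 are in the top drawer and 43-84 in the
# bottom drawer; each drawer holds three rows of 14 drives. Row-major
# ordering within a drawer is an assumption, not stated in this guide.

def locate_drive(slot: int) -> tuple:
    if not 1 <= slot <= 84:
        raise ValueError("SCv2080 drive slots are numbered 1-84")
    drawer = "top" if slot <= 42 else "bottom"
    index = (slot - 1) % 42      # 0-41 position within the drawer
    row = index // 14 + 1        # rows 1-3
    column = index % 14 + 1      # columns 1-14
    return (drawer, row, column)

# Example: drive "01-43" in Storage Manager (unit ID 01, position 43)
print(locate_drive(43))  # ('bottom', 1, 1)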
1. Drawer-specific antitamper locks: Locks the drawer shut using a Torx T20 screwdriver until the red arrows point to the locked icon (away from the center of the chassis).
– Amber: Drive, cable, or sideplane fault has occurred in drawer 1
• Drawer 2 Fault
– Amber: Drive, cable, or sideplane fault has occurred in drawer 2
NOTE: Both drawer fault LEDs (and all contained DDIC LEDs) will flash when the expansion enclosure indicator is set to On in Storage Manager Client.

SC180 Expansion Enclosure Back-Panel Features and Indicators
The SC180 back panel shows power, connectivity, and fault indicators.

Figure 18.
SC180 Expansion Enclosure EMM Features and Indicators
An SC180 includes two enclosure management modules (EMMs) in two Storage Bridge Bay (SBB) interface slots.

Figure 19.
1. Release latch: Press the release latch to remove the fan module.
2. Module OK: Green indicates the module is OK.
3. Fan fault: Amber indicates loss of communication with the fan module, or a reported fan speed that is out of tolerance.

SC180 Expansion Enclosure PSU Features and Indicators
The SC180 includes two power supply units (PSUs) in two interface slots.

Figure 21.
Figure 22. DDIC and Status Indicator
1. DDIC fault indicator
– Amber: Drive fault
– Amber (flashing): Flashes in 1-second intervals when the drive or enclosure indicator is enabled

SC180 Expansion Enclosure Drive Numbering
The drives in an SC180 are installed in a two-drawer, three-row, 14-column configuration. The drive slots are numbered 1–42 in the top drawer and 43–84 in the bottom drawer.
2 Install the Storage Center Hardware

This section describes how to unpack the Storage Center equipment, prepare for the installation, mount the equipment in a rack, and install the drives.

Unpack and Inventory the Storage Center Equipment
Unpack the storage system and identify the items in your shipment.

Figure 24. SCv2080 Storage System Components
1. Hard drives
2. Storage system
3. Rack rails
4.
Installation Safety Precautions
Follow these safety precautions:
• Dell recommends that only individuals with rack-mounting experience install the SCv2080 in a rack.
• You need at least two people to lift the storage system chassis from the shipping box and three people to install it in the rack. The empty chassis weighs approximately 62 kg (137 lbs).
• Make sure the storage system is always fully grounded to prevent damage from electrostatic discharge.
General Safety Precautions
Always follow general safety precautions to avoid injury and damage to Storage Center equipment.
• Keep the area around the storage system chassis clean and free of clutter.
• Place any system components that have been removed away from the storage system chassis or on a table so that they are not in the way of other people.
• While working on the storage system chassis, do not wear loose clothing such as neckties and unbuttoned shirt sleeves.
Figure 25. Inside of Rail
1. Screws to loosen

3. Position one of the rails at the marked location at the front of the rack and insert the four rail pins into the pin holes.
NOTE: Dell recommends using two people to install the rail, one at the front of the rack and one at the back.

Figure 26. Hole Locations for the Rail at the Front of the Rack
1. Chassis mounting screw hole
2. Pin hole
3. Pin hole
4. Rack mounting screw hole
5. Pin hole
6. Pin hole
7.

4. Insert a screw into the rack mounting screw hole at the front of the rack.
Figure 27. Insert the Screw into the Rack Mounting Screw Hole

5. Extend the rail to fit the rack.

Figure 28. Extend the Rail

6. At the rear of the rack, insert the four rail pins into the pin holes.
Figure 29. Hole Locations for the Rail at the Rear of the Rack
1. Rack mounting screw hole
2. Pin hole
3. Pin hole
4. Pin hole
5. Pin hole
6. Rack mounting screw hole

7. Insert screws into the top and bottom rack mounting screw holes at the back of the rack, then tighten the screws to secure the rail to the rack.

Figure 30. Insert the Screws into the Rack Mounting Screw Holes

8. Tighten the four circled screws that were loosened in Step 2.
9. Repeat the previous steps for the second rail.
Figure 31. Mount the SCv2080 Storage System Chassis

11. Secure the storage system chassis to the front of the rack using the mounting screws within each chassis ear.
a. Remove the plastic covers from the chassis ears.
b. Insert screws into the top and bottom rack mounting screw holes, then tighten the screws to secure the chassis to the rack.
c. Replace the plastic covers on the chassis ears.

Figure 32. Secure the Chassis to the Rack

12.
Figure 33. Secure the Chassis to the Rail

13. If the Storage Center system includes an expansion enclosure, mount the expansion enclosure above the storage system. See the instructions included with the expansion enclosure for detailed steps.

Installing the Hard Drives
Hard drives are connected to the backplane of the drawers using Disk Drive in Carrier (DDIC) hard drive carriers.
Figure 34. Opening the Drawer
1. Drawer latches
2. Top drawer

2. Insert each drive into the drawer, one at a time.
CAUTION: To maintain proper airflow, the drawers must be populated with drives in whole rows (each drawer has three rows of 14 drives). The number of populated rows between drawers must not differ by more than one. Populate the rows from the front to the rear of the drawer. (A validation sketch for this rule follows the procedure below.)
a. Hold the drive by the DDIC and slide it most of the way into the slot.
b.
Figure 35. Inserting a Drive into the Drawer

c. While maintaining downward pressure, slide the top plate of the DDIC toward the back of the drawer until it clicks into place.

Figure 36. Securing a Drive in the Drawer

NOTE: It is possible for a drive to appear seated but not be fully locked into position, eventually causing it to dislodge itself. After installing a drive, check the release button in the center of the DDIC.
CAUTION: If the DDIC fails to latch, do not use it and request a replacement from Dell Technical Support. If a faulty DDIC unlatches within a closed drawer, you may not be able to open the drawer.

3. Close the drawer after adding all of the DDICs.
a. Locate the two lock-release buttons situated midway along the runners on each side of the drawer.
b. Press the lock-release buttons inward and use your body to push the drawer toward the chassis until the locks disengage.
c.
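The row-population rule in the caution above can be checked mechanically before drives are inserted. Below is a minimal Python sketch, illustrative only and not a Dell tool, that validates a planned layout against the two constraints: whole rows only, and at most one row of difference between the drawers.

# Illustrative sketch: validate a planned SCv2080 drive population.
# Each drawer has three rows of 14 drives; rows are filled whole, and
# the populated row counts of the two drawers may differ by at most one.

def validate_population(top_rows: int, bottom_rows: int) -> None:
    for name, rows in (("top", top_rows), ("bottom", bottom_rows)):
        if rows not in (0, 1, 2, 3):
            raise ValueError(f"{name} drawer: populated rows must be 0-3, got {rows}")
    if abs(top_rows - bottom_rows) > 1:
        raise ValueError("drawers differ by more than one populated row")

validate_population(top_rows=2, bottom_rows=1)  # OK: 28 + 14 = 42 drives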
3 Connect the Front-End Cabling

Front-end cabling refers to the connections between the storage system and external devices such as host servers or another Storage Center. Front-end connections can be made using Fibre Channel, iSCSI, or SAS interfaces. Dell recommends connecting the storage system to host servers using the most redundant option available.
Fault Domains
Fault domains group front-end ports that are connected to the same network. Ports that belong to the same fault domain can fail over to each other because they have the same connectivity. Dell recommends configuring at least two connections from each storage controller to each fault domain.

Multipath I/O
MPIO allows a server to use multiple paths for I/O if they are available. MPIO software offers redundancy at the path level.
Connecting to Fibre Channel Host Servers
Choose the Fibre Channel connectivity option that best suits the front-end redundancy requirements and network infrastructure.

Preparing Host Servers
Install the Fibre Channel host bus adapters (HBAs), install the drivers, and make sure that the latest supported firmware is installed.

About this task
• Contact your solution provider for a list of supported Fibre Channel HBAs.
Example

Figure 38. Storage System with Dual 16 Gb Storage Controllers and Two FC Switches
1. Server 1
2. Server 2
3. FC switch 1 (fault domain 1)
4. FC switch 2 (fault domain 2)
5. Storage system
6. Storage controller 1
7. Storage controller 2

Next steps
Install or enable MPIO on the host servers.
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure host server access and apply MPIO best practices.
• Storage controller 2: port 3 to FC switch 1
3. Connect fault domain 2 (shown in blue) to fabric 2.
• Storage controller 1: port 2 to FC switch 2
• Storage controller 1: port 4 to FC switch 2
• Storage controller 2: port 2 to FC switch 2
• Storage controller 2: port 4 to FC switch 2

Example

Figure 39. Storage System with Dual 8 Gb Storage Controllers and Two FC Switches
1. Server 1
2. Server 2
3. FC switch 1 (fault domain 1)
4. FC switch 2 (fault domain 2)
5. Storage system
6. Storage controller 1
7. Storage controller 2
NOTE: This configuration is vulnerable to switch unavailability, which results in a loss of connectivity between the host servers and storage system.

Steps
1. Connect each server to the FC fabric.
2. Connect fault domain 1 (shown in orange) to the fabric.
• Storage controller 1: port 1 to the FC switch
• Storage controller 1: port 2 to the FC switch
• Storage controller 2: port 1 to the FC switch
• Storage controller 2: port 2 to the FC switch

Example

Figure 40.
NOTE: This configuration is vulnerable to switch unavailability, which results in a loss of connectivity between the host servers and the storage system.

Steps
1. Connect each server to the FC fabric.
2. Connect fault domain 1 (shown in orange) to the fabric.
3.
Using SFP+ Transceiver Modules
An SCv2080 storage system with 16 Gb Fibre Channel storage controllers uses short-range small-form-factor pluggable (SFP+) transceiver modules.

Figure 42. SFP+ Transceiver Module with a Bail Clasp Latch

The SFP+ transceiver modules are installed into the front-end ports of a storage controller. Fiber-optic cables are connected from the SFP+ transceiver modules in a storage controller to SFP+ transceiver modules in Fibre Channel switches.
Steps
1. Position the transceiver module so that the key is oriented correctly to the port in the storage controller.

Figure 43. Install the SFP+ Transceiver Module
1. SFP+ transceiver module
2. Fiber-optic cable connector

2. Insert the transceiver module into the port until it is firmly seated and the latching mechanism clicks. The transceiver modules are keyed so that they can only be inserted with the correct orientation.
CAUTION: Touching the end of a fiber-optic cable damages the cable. Whenever a fiber-optic cable is not connected, replace the protective covers on the ends of the cables.

2. Open the transceiver module latching mechanism.
3. Grasp the bail clasp latch on the transceiver module and pull the latch out and down to eject the transceiver module from the socket.
4. Slide the transceiver module out of the port.

Figure 44. Remove the SFP+ Transceiver Module
1. SFP+ transceiver module
2. Fiber-optic cable connector
Steps
1. Starting with the top edge of the label, attach the label to the cable near the connector.

Figure 45. Attach Label to Cable

2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so that it does not obscure the text.

Figure 46. Wrap Label Around Cable

3. Apply a matching label to the other end of the cable.
NOTE: Do not install iSCSI HBAs or network adapters from different vendors in the same server.
b. Install supported drivers for the HBAs or network adapters and make sure that the HBAs or network adapters have the latest supported firmware.
c. Use the host operating system to assign IP addresses for each iSCSI port. The IP addresses must match the subnets for each fault domain.
CAUTION: Correctly assign IP addresses to the HBAs or network adapters. (A subnet-check sketch is shown below.)
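Because each iSCSI port address must land in the subnet of its fault domain, it can help to check the planned addressing before cabling. The following Python sketch is illustrative only; the subnets, adapter names, and addresses are placeholder assumptions, not values from this guide.

# Illustrative sketch: confirm each planned host iSCSI port address is
# inside the subnet of its fault domain. All values are placeholders.
import ipaddress

fault_domains = {
    "fault domain 1": ipaddress.ip_network("192.168.10.0/24"),
    "fault domain 2": ipaddress.ip_network("192.168.20.0/24"),
}

planned_ports = [
    ("adapter 1, port 1", "fault domain 1", "192.168.10.101"),
    ("adapter 2, port 1", "fault domain 2", "192.168.20.101"),
]

for port, domain, ip in planned_ports:
    subnet = fault_domains[domain]
    status = "OK" if ipaddress.ip_address(ip) in subnet else "WRONG SUBNET"
    print(f"{port}: {ip} in {domain} ({subnet}): {status}")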
Example

Figure 47. Storage System with Dual 10 GbE Storage Controllers and Two Ethernet Switches
1. Server 1
2. Server 2
3. Ethernet switch 1 (fault domain 1)
4. Ethernet switch 2 (fault domain 2)
5. Storage system
6. Storage controller 1
7. Storage controller 2

Next steps
Install or enable MPIO on the host servers.
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure host server access and apply MPIO best practices.
• Storage controller 2: port 3 to Ethernet switch 1
3. Connect fault domain 2 (shown in blue) to iSCSI network 2.
• Storage controller 1: port 2 to Ethernet switch 2
• Storage controller 2: port 2 to Ethernet switch 2
• Storage controller 1: port 4 to Ethernet switch 2
• Storage controller 2: port 4 to Ethernet switch 2

Example

Figure 48. Storage System with Dual 1 GbE Storage Controllers and Two Ethernet Switches
1. Server 1
2. Server 2
3. Ethernet switch 1 (fault domain 1)
4. Ethernet switch 2 (fault domain 2)
NOTE: This configuration is vulnerable to switch unavailability, which results in a loss of connectivity between the host servers and storage system.

Steps
1. Connect each server to the iSCSI network.
2. Connect fault domain 1 (shown in orange) to the iSCSI network.
• Storage controller 1: port 1 to the Ethernet switch
• Storage controller 1: port 2 to the Ethernet switch
• Storage controller 2: port 1 to the Ethernet switch
• Storage controller 2: port 2 to the Ethernet switch

Example

Figure 49.
NOTE: This configuration is vulnerable to switch unavailability, which results in a loss of connectivity between the host servers and the storage system.

Steps
1. Connect each server to the iSCSI network.
2. Connect fault domain 1 (shown in orange) to the iSCSI network.
3.
Labeling the Front-End Cables
Label the front-end cables to indicate the storage controller and port to which they are connected.

Prerequisites
Locate the pre-made front-end cable labels that shipped with the storage system.

About this task
Apply cable labels to both ends of each cable that connects a storage controller to a front-end fabric or network, or directly to host servers.

Steps
1. Starting with the top edge of the label, attach the label to the cable near the connector.

Figure 51.
Preparing Host Servers
On each host server, install the SAS host bus adapters (HBAs), install the drivers, and make sure that the latest supported firmware is installed.

About this task
NOTE: Refer to the Dell Storage Compatibility Matrix for a list of supported SAS HBAs.

Steps
1. Install the SAS HBAs in the host servers.
NOTE: Do not install SAS HBAs from different vendors in the same server.
2. Install supported drivers for the HBAs and make sure that the HBAs have the latest supported firmware.
a. Connect a SAS cable from storage controller 1: port 3 to the first SAS HBA on host server 2.
b. Connect a SAS cable from storage controller 2: port 3 to the first SAS HBA on host server 2.
4. Connect fault domain 4 (shown in red) to host server 2.
a. Connect a SAS cable from storage controller 1: port 4 to the second SAS HBA on host server 2.
b. Connect a SAS cable from storage controller 2: port 4 to the second SAS HBA on host server 2.

Example

Figure 53.
3. Connect fault domain 3 (shown in gray) to host server 3.
a. Connect a SAS cable from storage controller 1: port 3 to the SAS HBA on host server 3.
b. Connect a SAS cable from storage controller 2: port 3 to the SAS HBA on host server 3.
4. Connect fault domain 4 (shown in red) to host server 4.
a. Connect a SAS cable from storage controller 1: port 4 to the SAS HBA on host server 4.
b. Connect a SAS cable from storage controller 2: port 4 to the SAS HBA on host server 4.

Example

Figure 54.
Steps
1. Starting with the top edge of the label, attach the label to the cable near the connector.

Figure 55. Attach Label to Cable

2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so that it does not obscure the text.

Figure 56. Wrap Label Around Cable

3. Apply a matching label to the other end of the cable.
Figure 57. Storage System Connected to a Management Network
1. Corporate/management network
2. Ethernet switch
3. Storage system
4. Storage controller 1
5. Storage controller 2

NOTE: If the Flex Port license is installed, the management port becomes a shared iSCSI port. To use the management port as an iSCSI port, cable the management port to a network switch dedicated to iSCSI traffic. Special considerations must be taken into account when sharing the management port.
Figure 59. Wrap Label Around Cable

3. Apply a matching label to the other end of the cable.

Cabling the Embedded Ports for iSCSI Replication
If the Storage Center is licensed for replication, the replication port can be connected to an Ethernet switch and used for iSCSI replication. If the Storage Center is licensed for replication and the Flex Port license is installed, the management port and replication port can both be connected to an Ethernet switch and used for iSCSI replication.
3. Ethernet switch 2 (iSCSI network)
4. Storage system
5. Storage controller 1
6. Storage controller 2

3. To configure the fault domains and ports, click the Configure Embedded iSCSI Ports link on the Configuration Complete page of the Discover and Configure Uninitialized SCv2000 Series Storage Centers wizard.
4. To configure replication, refer to the Dell Storage Manager Administrator's Guide.
Related links
Configure Embedded iSCSI Ports

Cabling the Embedded Ports for iSCSI Host Connectivity
If the Flex Port license is installed on the Storage Center, the management port and replication port can be connected to an Ethernet switch and used for iSCSI host connectivity. Dell recommends using two switches dedicated to iSCSI traffic. Refer to the iSCSI Settings appendix for a list of recommended and required settings.
Figure 62. Two iSCSI Networks using the Embedded Ethernet Ports on Dual Fibre Channel Storage Controllers
1. Corporate/management network
2. Server 1 (FC)
3. Server 2 (iSCSI)
4. FC switch 1 (fault domain 1 for FC fabric)
5. FC switch 2 (fault domain 2 for FC fabric)
6. Ethernet switch 1 (fault domain 1)
7. Ethernet switch 2 (fault domain 2)
8. Storage system
9. Storage controller 1
10. Storage controller 2

5.
• If a physical port or Ethernet switch becomes unavailable, the storage system is accessed from the switch in the other fault domain.
• If a storage controller becomes unavailable, the virtual ports on the offline storage controller move to the physical ports on the other storage controller.

NOTE: The Flex Port feature allows both Storage Center system management traffic and iSCSI traffic to use the same physical network ports.
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage Center Best Practices document on the Dell TechCenter site (http://en.community.dell.com/techcenter/storage/).
4 Connect the Back-End Cabling and Connect the Power

Back-end cabling refers to the connections between the storage system and expansion enclosure. After the back-end cabling is complete, connect power cables to the storage system components and turn on the hardware.

An SCv2080 storage system can be deployed with or without an expansion enclosure.
• When an SCv2080 storage system is deployed without an expansion enclosure, the storage controllers must be interconnected using SAS cables.
Back-End Connections for an SCv2080 Storage System Without an Expansion Enclosure
When you deploy an SCv2080 storage system without an expansion enclosure, you must interconnect the storage controllers using SAS cables.
NOTE: The left storage controller is storage controller 1 and the right storage controller is storage controller 2.

Figure 65. SCv2080 Without an Expansion Enclosure
1. SCv2080 storage system
2. Storage controller 1
3. Storage controller 2
Figure 66. SCv2080 and One SC180 Expansion Enclosure
1. Expansion enclosure
2. Left EMM
3. Right EMM
4. Storage system
5. Storage controller 1
6. Storage controller 2

Table 4. Storage System Connected to an Expansion Enclosure
Chain 1: Side A (orange)
1. Storage controller 1: port A to expansion enclosure: left EMM, port C.
2. Storage controller 2: port B to expansion enclosure: left EMM, port B.
Chain 1: Side B (blue)
1. Storage controller 1: port B to expansion enclosure: right EMM, port C.
2. Storage controller 2: port A to expansion enclosure: right EMM, port B.
Figure 67. Attach Label to Cable

2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so that it does not obscure the text.

Figure 68. Wrap Label Around Cable

3. Apply a matching label to the other end of the cable.

Connect Power Cables and Turn on the Storage System
Connect the power cables to the storage system components and turn on the hardware.
Figure 69. Connect the Power Cables

3. Plug the other end of the power cables into a grounded electrical outlet or a separate power source such as an uninterruptible power supply (UPS) or a power distribution unit (PDU).
4. Press both power switches on the rear of the storage system chassis to turn on the storage system.

Figure 70. Turn on the Storage System

When the SCv2080 storage system is powered on, there is a delay while the storage system prepares to start up.
Figure 71.
5 Discover and Configure the Storage Center

The Discover and Configure Uninitialized SCv2000 Series Storage Centers wizard allows you to set up a Storage Center to make it ready for volume creation. Use the Dell Storage Manager Client to discover and configure the Storage Center. After configuring a Storage Center, you can set up a localhost, VMware vSphere host, or VMware vCenter host using the host setup wizards.
Subnet mask: ___ . ___ . ___ . ___
Gateway IPv4 address: ___ . ___ . ___ . ___
Domain name: ________________
DNS server address: ___ . ___ . ___ . ___
Secondary DNS server address: ___ . ___ . ___ . ___

Table 6.
Table 9. NTP, SMTP, and Proxy Servers
NTP server IPv4 address: ___ . ___ . ___ . ___
SMTP server IPv4 address: ___ . ___ . ___ . ___
Backup SMTP server IPv4 address: ___ . ___ . ___ . ___
SMTP server login ID: ________________
SMTP server password: ________________
Proxy server IPv4 address: ___ . ___ . ___ .
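Before running the wizard, it can be useful to sanity-check the worksheet entries. The short Python sketch below, illustrative only and using placeholder addresses, confirms that each recorded value parses as an IPv4 address.

# Illustrative sketch: verify that worksheet entries are valid IPv4
# addresses before running the wizard. All values are placeholders.
import ipaddress

worksheet = {
    "NTP server": "203.0.113.10",
    "SMTP server": "203.0.113.25",
    "Backup SMTP server": "203.0.113.26",
    "Proxy server": "203.0.113.80",
}

for field, value in worksheet.items():
    try:
        ipaddress.IPv4Address(value)
        print(f"{field}: {value} OK")
    except ipaddress.AddressValueError:
        print(f"{field}: {value} is not a valid IPv4 address")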
• SUSE Linux Enterprise 12 or later
• Windows Server 2008 R2 or later

Install and Use the Dell Storage Manager Client
You must start the Dell Storage Manager Client as an Administrator to run the Discover and Configure Uninitialized Storage Centers wizard.
1. Go to www.dell.com/support, navigate to the SCv2080 product support page, and download the Windows or Linux version of the Dell Storage Manager Client.
2. Install the Dell Storage Manager Client on the host server.
6. If the Storage Center is partially configured, the Storage Center login pane appears. Enter the management IPv4 address and the Admin password for the Storage Center, then click Next to continue.

Deploy the Storage Center Using the Direct Connect Method
Use the direct connect method to manually deploy the Storage Center when it is not discoverable.
1. Use an Ethernet cable to connect the computer running the Dell Storage Manager Client to the management port of the top controller.
2.
Configure iSCSI Fault Domains
For a Storage Center with iSCSI front-end ports, use the Configure Fault Tolerance page and the Fault Domain pages to enter network information for the fault domains and ports.
1. (Optional) On the Configure Fault Tolerance page, click More information about fault domains or How to set up an iSCSI network to learn more about these topics.
2. Click Next.
NOTE: If any iSCSI ports are down, a dialog box appears that allows you to unconfigure these ports.
Review Fibre Channel Front-End Configuration
For a Storage Center with Fibre Channel front-end ports, the Fault Domains page displays an example of a fault domain topology based on the number of controllers and the type of front-end ports. The Review Front-End Configuration page displays information about the fault domains created by the Storage Center.
1. (Optional) On the Fault Tolerance page, click More information about fault domains to learn more about fault domains.
2. Click Next.
3.
• To use SMTP, type the Storage Center fully qualified domain name in the Hello Message (HELO) field.
• To use ESMTP, select the Send Extended Hello (EHLO) check box, then type the Storage Center fully qualified domain name in the Extended Hello Message (EHLO) field.
g. If the SMTP server requires clients to authenticate before sending email, select the Use Authorized Login (AUTH LOGIN) check box, then type a user name and password in the Login ID and Password fields.
3. Click Next.
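If test emails do not arrive, you can confirm outside the wizard that the SMTP server is reachable and accepts an extended hello. The Python sketch below uses the standard-library smtplib module; the host name, port, and fully qualified domain name are placeholder assumptions.

# Illustrative sketch: test that an SMTP server accepts EHLO before
# entering it in Storage Center. Host, port, and FQDN are placeholders.
import smtplib

SMTP_HOST = "smtp.example.com"
SMTP_PORT = 25

with smtplib.SMTP(SMTP_HOST, SMTP_PORT, timeout=10) as server:
    code, banner = server.ehlo("storagecenter.example.com")
    print(f"EHLO response {code}: {banner.decode(errors='replace')}")
    # If the server requires AUTH LOGIN, authenticate as well:
    # server.login("login_id", "password")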
4. Type a shipping address where replacement Storage Center components can be sent.
5. Click Next.

Update Storage Center
The Storage Center attempts to contact the SupportAssist Update Server to check for updates. If you are not using SupportAssist, you must use the Storage Center Update Utility to update the Storage Center operating system before continuing.
• If no update is available, the Storage Center Up to Date page appears. Click Next.
Set Up a localhost or VMware Host
After configuring a Storage Center, you can set up block-level storage for a localhost, VMware vSphere host, or VMware vCenter.

Set Up a localhost from Initial Setup
Configure a localhost to access block-level storage on the Storage Center. It is recommended that you perform this procedure for each host that is connected to the Storage Center.

Prerequisites
• Client must be running on a system with a 64-bit operating system.
3. Select an available port, and then click Create Server. The server definition is created on the Storage Center.
4. The Host Setup Successful page displays the best practices that were set by the wizard and the best practices that were not set. Make a note of any best practices that were not set by the wizard. It is recommended that these updates be applied manually before starting I/O to the Storage Center.
5. (Optional) Select Create a Volume for this host to create a volume after finishing host setup.
a. Select the server.
b. Click Next. The Volume Summary page is displayed.
7. Click Finish.

Set the Default Storage Profile for New Volumes
The default Storage Profile is used when a new volume is created unless the user selects a different Storage Profile. You can prevent the Storage Profile from being changed during volume creation by clearing the Allow Storage Profile Selection check box.
1. Click the Storage view.
2. In the Storage pane, select a Storage Center.
3.
6 Perform Post-Setup Tasks

Perform connectivity and failover tests to make sure that the Storage Center deployment was successful.
NOTE: Before testing failover, set the operation mode of the Storage Center to Maintenance. When you are finished testing failover, set the operation mode of the Storage Center back to Normal.

Verify Connectivity and Failover
This section describes how to verify that the Storage Center is set up properly and performs failover correctly.
Test Basic Connectivity
Verify basic connectivity by copying data to the test volumes.
1. Connect to the server to which the volumes are mapped.
2. Create a folder on the TestVol1 volume, copy at least 2 GB of data to the folder, and verify that the data copied successfully.
3. Create a folder on the TestVol2 volume, copy at least 2 GB of data to the folder, and verify that the data copied successfully.
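A scripted copy-and-verify pass gives a repeatable version of this test. The Python sketch below is illustrative only; the source file and mount point are placeholders for a local data set of at least 2 GB and a mapped test volume.

# Illustrative sketch: copy a large file to a mapped test volume and
# verify the copy with a SHA-256 checksum. Paths are placeholders.
import hashlib
import os
import shutil

SOURCE = "testdata.bin"                  # local file of at least 2 GB
DEST_DIR = "/mnt/TestVol1/connectivity"  # mount point of the test volume

def sha256(path, chunk=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            digest.update(block)
    return digest.hexdigest()

os.makedirs(DEST_DIR, exist_ok=True)
dest = shutil.copy(SOURCE, DEST_DIR)
assert sha256(SOURCE) == sha256(dest), "copy verification failed"
print("copy verified:", dest)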
Clean Up Test Volumes
After testing is complete, delete the volumes used for testing.
1. Connect to the server to which the volumes are mapped and remove the volumes.
2. Use the Dell Storage Manager Client to connect to the Storage Center.
3. Click the Storage tab.
4. From the Storage tab navigation pane, select the Volumes node.
5. Select the volumes to delete.
6. Right-click the selected volumes and select Delete. The Delete dialog box opens.
7. Click OK.
A Adding or Removing an Expansion Enclosure

This section describes how to add an expansion enclosure to a storage system and how to remove an expansion enclosure from a storage system.

Adding an Expansion Enclosure to a Storage System
Use caution when adding an expansion enclosure to a live Storage Center system to preserve the integrity of the existing data.

Prerequisites
Install the expansion enclosure in a rack, but do not connect the expansion enclosure to the storage system.
Add the Expansion Enclosure to the A-Side Chain
Connect the expansion enclosure to one chain at a time to maintain drive availability.
1. Disconnect the A-side chain that interconnects the storage controllers. Remove the SAS cable that connects storage controller 1: port A to storage controller 2: port B.

Figure 73. Remove the A-Side Cable from the Storage Controllers
1. Storage system
2. Storage controller 1
3. Storage controller 2

2. Add the expansion enclosure to the A-side chain.
a.
Add the Expansion Enclosure to the B-Side Chain
Connect the expansion enclosure to one chain at a time to maintain drive availability.
1. Disconnect the B-side chain that interconnects the storage controllers. Remove the SAS cable that connects storage controller 1: port B to storage controller 2: port A.

Figure 75. Remove the B-Side Cable from the Storage Controllers
1. Expansion enclosure
2. Storage system
3. Storage controller 1
4. Storage controller 2

2.
Figure 76. Connect the B-Side Cables to the Expansion Enclosure
1. Expansion enclosure
2. Storage system
3. Storage controller 1
4. Storage controller 2

Label the Back-End Cables
Label the back-end cables that interconnect the storage controllers, or label the back-end cables that connect the storage system to the expansion enclosures.

Prerequisites
Locate the cable labels provided with the expansion enclosures.
Figure 77. Attach Label to Cable

2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so that it does not obscure the text.

Figure 78. Wrap Label Around Cable

3. Apply a matching label to the other end of the cable.

Removing an Expansion Enclosure from a Chain Currently in Service
To remove an expansion enclosure, disconnect the expansion enclosure from one side of the chain at a time.
5. Locate the expansion enclosure in the rack. Click Next.
6. Disconnect the A-side chain.
a. Disconnect the A-side cables that connect the expansion enclosure to the storage system. Click Next.
b. Reconnect the A-side cables to exclude the expansion enclosure from the chain. Click Next to validate the cabling.
7. Disconnect the B-side chain.
a. Disconnect the B-side cables that connect the expansion enclosure to the storage system. Click Next.
b.
Figure 79. Disconnecting the SC180 Expansion Enclosure from the A-side Chain
1. Expansion enclosure
2. Storage system
3. Storage controller 1
4. Storage controller 2

4. Connect a SAS cable between storage controller 1: port A and storage controller 2: port B.

Figure 80. Reconnecting the A-side Chain
1. Expansion enclosure
2. Storage system
3. Storage controller 1
4. Storage controller 2
Disconnect the B-Side Chain from the SC180 Expansion Enclosure
Disconnect the B-side chain from the expansion enclosure.
1. Remove the SAS cable between storage controller 2: port A and the expansion enclosure: right EMM, port B.
2. In the Dell Storage Manager Client, verify that port A on storage controller 1 and port B on storage controller 2 are Up. The A-side chain continues to carry I/O while the B-side chain is disconnected.
3.
Figure 82. Expansion Enclosure Disconnected
1. Disconnected expansion enclosure
2. Storage system
3. Storage controller 1
4. Storage controller 2
B Troubleshooting Storage Center Deployment

This section contains troubleshooting steps for common Storage Center deployment issues.

Troubleshooting Storage Controllers
To troubleshoot storage controllers:
1. Check the status of the storage controller using the Dell Storage Manager Client.
2. Check the position of the storage controllers. The lower HSN should be on the left, and the higher HSN should be on the right.
3. Check the pins and reseat the storage controller.
a.