Dell EMC PowerVault ME4 Series Storage System Deployment Guide July 2021 Rev.
Notes, cautions, and warnings NOTE: A NOTE indicates important information that helps you make better use of your product. CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem. WARNING: A WARNING indicates a potential for property damage, personal injury, or death. © 2018 – 2021 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Contents
Chapter 1: Before you begin ... 6
    Unpack the enclosure ... 6
    Safety guidelines ...
    Dual-controller module configurations ... 26
Chapter 5: Connect power cables and power on the storage system ... 30
    Power cable connection ... 30
Chapter 6: Perform system and storage setup ...
    Host I/O ... 86
    Dealing with hardware faults ... 86
Appendix A: Cabling for replication ...
1 Before you begin This document describes the initial hardware setup for Dell EMC PowerVault ME4 Series storage systems. This document might contain third-party content that is not under the control of Dell EMC. The language in the third-party content might be inconsistent with the current guidelines for Dell EMC content. Dell EMC reserves the right to update this document after the content is updated by the relevant third parties.
Figure 2. Unpacking the 5U84 enclosure
1. Storage system enclosure
2. DDICs (Disk Drive in Carriers)
3. Documentation
4. Rackmount left rail (5U84)
5. Rackmount right rail (5U84)
6. Drawers
○ DDICs ship in a separate container and must be installed into the enclosure drawers during product installation. For rackmount installations, DDICs are installed after the enclosure is mounted in the rack. See Populating drawers with DDICs on page 14.
● Fully configured 5U84 enclosures can weigh up to 135 kg (298 lb). An unpopulated enclosure weighs 46 kg (101 lb). ● Use a minimum of two people to lift the 5U84 enclosure from the shipping box and install it in the rack. Before lifting the enclosure: ● Avoid lifting the enclosure using the handles on any of the CRUs because they are not designed to take the weight. ● Do not lift the enclosure higher than 20U. Use mechanical assistance to lift above this height.
Rack system safety precautions The following safety requirements must be considered when the enclosure is mounted in a rack: ● The rack construction must support the total weight of the installed enclosures. The design should incorporate stabilizing features to prevent the rack from tipping or being pushed over during installation or in normal use. ● When loading a rack with enclosures, fill the rack from the bottom up; and empty the rack from the top down.
Table 1. Installation checklist (continued)
Step 9. Perform host setup: attach the host servers and install the required host software. See Host system requirements on page 43, Windows hosts on page 43, Linux hosts on page 50, VMware ESXi hosts on page 57, and Citrix XenServer hosts on page 64.
Step 10. Perform the initial configuration tasks. See Using guided setup on page 33.
● A 5U enclosure, which is delivered without DDICs installed, requires two people to lift it from the box. A mechanical lift is required to hoist the enclosure for positioning in the rack. Make sure that you wear an effective antistatic wrist or ankle strap and follow conventional ESD precautions when touching modules and components. Do not touch the midplane, motherboard, or module connectors.
● Each 2U24 drive slot holds a single low profile 5/8 inch high, 2.5 in. form factor disk drive in its carrier. The disk drives are mounted vertically. The carriers have mounting locations for: ● Direct dock SAS drives. Each drive is held in a sheet steel carrier that provides thermal conduction, radio frequency and electromagnetic induction protection, and physically protects the drive.
Blank drive carrier modules Blank drive carrier modules, also known as drive blanks, are provided in 3.5" (2U12) and 2.5" (2U24) form factors. They must be installed in empty disk slots to create a balanced air flow. Figure 6. Blank drive carrier modules: 3.5" drive slot (left); 2.5" drive slot (right) DDIC in a 5U enclosure Each disk drive is installed in a DDIC that enables secure insertion of the disk drive into the drawer with the appropriate SAS carrier transition card.
Figure 8. 2.5" drive in a 3.5" DDIC with a hybrid drive carrier adapter Populating drawers with DDICs The 5U84 enclosure does not ship with DDICs installed. Before populating drawers with DDICs, ensure that you adhere to the following guidelines: ● The minimum number of disks that are supported by the enclosure is 28, 14 in each drawer. ● DDICs must be added to disk slots in complete rows (14 disks at a time).
2 Mount the enclosures in the rack This section describes how to unpack the ME4 Series Storage System equipment, prepare for installation, and safely mount the enclosures into the rack. Topics: • Rackmount rail kit • Install the 2U enclosure • Install the 5U84 enclosure • Connect optional expansion enclosures Rackmount rail kit Rack mounting rails are available for use in 19-inch rack cabinets. The rails have been designed and tested for the maximum enclosure weight.
Table 3. Install the rail in the rack
1. Front rack post (square hole)
2. Rail pins (two per rail)
3. Left rail
4. Rear rack post (square hole)
5. Clamping screw
6. Clamping screw
7. Enclosure fastening screw
8. 2U Ops panel installation detail (exploded view)
9. Position locking screw
10. Enclosure fastening screw
e. Repeat the previous steps to install the other rail in the rack. 4. Install the enclosure into the rack: a.
c. Extend the rail to fit between the front and rear rack posts and insert the rail pins into the front rack post. NOTE: Ensure that the rail pins are fully inserted in the rack holes in the front and rear rack posts. d. Use the clamping screws to secure the rail to the rack posts and tighten the position locking screws on the rail. e. Ensure the four rear spacer clips (not shown) are fitted to the edge of the rack post. Figure 12.
Connect optional expansion enclosures ME4 Series controller enclosures support 2U12, 2U24, and 5U84 expansion enclosures. 2U12 and 2U24 expansion enclosures can be intermixed, however 2U expansion enclosures cannot be intermixed with 5U84 expansion enclosures in the same storage system. NOTE: To add expansion enclosures to an existing storage system, power down the controller enclosure before connecting the expansion enclosures.
Figure 13. Cabling connections between a 2U controller enclosure and 2U expansion enclosures
1. Controller module A (0A)
2. Controller module B (0B)
3. IOM (1A)
4. IOM (1B)
5. IOM (2A)
6. IOM (2B)
7. IOM (3A)
8. IOM (3B)
9. IOM (9A)
10. IOM (9B)
Cabling connections between a 5U controller enclosure and 5U expansion enclosures on page 19 shows the maximum cabling configuration for a 5U84 controller enclosure with 5U84 expansion enclosures (four enclosures including the controller enclosure). Figure 14.
Cabling connections between a 2U controller enclosure and 5U84 expansion enclosures on page 20 shows the maximum cabling configuration for a 2U controller enclosure with 5U84 expansion enclosures (four enclosures including the controller enclosure). Figure 15. Cabling connections between a 2U controller enclosure and 5U84 expansion enclosures
1. Controller module A (0A)
2. Controller module B (0B)
3. IOM (1A)
4. IOM (1B)
5. IOM (2A)
6. IOM (2B)
7. IOM (3A)
8. IOM (3B)
3 Connect to the management network Perform the following steps to connect a controller enclosure to the management network: 1. Connect an Ethernet cable to the network port on each controller module. 2. Connect the other end of each Ethernet cable to a network that your management host can access, preferably on the same subnet. NOTE: If you connect the iSCSI and management ports to the same physical switches, Dell EMC recommends using separate VLANs. Figure 16.
4 Cable host servers to the storage system This section describes the different ways that host servers can be connected to a storage system. Topics: • Cabling considerations • Connecting the enclosure to hosts • Host connection Cabling considerations Host interface ports on ME4 Series controller enclosures can connect to respective hosts using direct-attach or switch-attach methods. Another important cabling consideration is cabling controller enclosures to enable the replication feature.
CNC ports used for host connection ME4 Series SFP+ based controllers ship with CNC ports that are configured for FC. If you must change the CNC port mode, you can do so using the PowerVault Manager. Alternatively, the ME4 Series enables you to set the CNC ports to use FC and iSCSI protocols in combination. When configuring a combination of host interface protocols, host ports 0 and 1 must be configured for FC, and host ports 2 and 3 must be configured for iSCSI.
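If you prefer to script the change, the port mode can also be set from the CLI. The following is a hedged sketch only; confirm the exact keywords against the ME4 Series CLI Reference Guide before use:

# Set all four CNC ports to iSCSI, or split them between FC (ports 0-1) and iSCSI (ports 2-3)
set host-port-mode iSCSI
set host-port-mode FC-and-iSCSI
show ports    # confirm the resulting port types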
Example iSCSI port address assignments The following figure and the supporting tables provide example iSCSI port address assignments featuring two redundant switches and two IPv4 subnets: NOTE: For each callout number, read across the table row for the addresses in the data path. Figure 18. Two subnet switch example (IPv4)
Table 5. Two subnet switch example
No. Device IP Address Subnet
1 A0 192.68.10.200 10
2 A1 192.68.11.210 11
3 A2 192.68.10.220 10
4 A3 192.68.11.230 11
5 B0 192.
SAS protocol ME4 Series SAS models use 12 Gb/s host interface protocol and qualified cable options for host connection. 12Gb HD mini-SAS host ports ME4 Series 12 Gb SAS controller enclosures support two controller modules. The 12 Gb/s SAS controller module provides four SFF-8644 HD mini-SAS host ports. These host ports support data rates up to 12 Gb/s. HD mini-SAS host ports are used for attachment to SAS hosts directly. The host computer must support SAS and optionally, multipath I/O.
12 Gb HD mini-SAS host connection To connect controller modules supporting HD mini-SAS host interface ports to a server HBA, using the SFF-8644 dual HD mini-SAS host ports on a controller, select a qualified HD mini-SAS cable option. For information about configuring SAS HBAs, see the SAS topics under Perform host setup on page 43. A qualified SFF-8644 to SFF-8644 cable option is used for connecting to a 12Gb/s enabled host. Qualified SFF-8644 to SFF-8644 options support cable lengths of 0.5 m (1.
Dual-controller module configurations – directly attached In the following figures, blue cables show controller module A paths, and green cables show controller module B paths for host connection: Figure 20. Connecting hosts: ME4 Series 2U direct attach – one server, one HBA, dual path 1. Server 2. Controller module in slot A 3. Controller module in slot B Figure 21. Connecting hosts: ME4 Series 5U direct attach – one server, one HBA, dual path 1. Server 2. Controller module in slot A 3.
Figure 24. Connecting hosts: ME4 Series 2U direct attach – four servers, one HBA per server, dual path
1. Server 1
2. Server 2
3. Server 3
4. Server 4
5. Controller module A
6. Controller module B
Figure 25. Connecting hosts: ME4 Series 5U direct attach – four servers, one HBA per server, dual path
1. Server 1
2. Server 2
3. Server 3
4. Server 4
5. Controller module A
6. Controller module B
Figure 26. Connecting hosts: ME4 Series 2U switch-attached – two servers, two switches
1. Server 1
2. Server 2
3. Switch A
4. Switch B
5. Controller module A
6. Controller module B
Figure 27. Connecting hosts: ME4 Series 5U switch-attached – two servers, two switches
1. Server 1
2. Server 2
3. Switch A
4. Switch B
5. Controller module A
6. Controller module B
5 Connect power cables and power on the storage system Before powering on the enclosure system, ensure that all modules are firmly seated in their correct slots. Verify that you have successfully completed the instructions in the Installation checklist on page 9. Once you have completed steps 1–7, you can access the management interfaces using your web browser to complete the system setup.
Testing enclosure connections See Powering on on page 31. Once the power-on sequence succeeds, the storage system is ready to be connected as described in Connecting the enclosure to hosts on page 22. Grounding checks The enclosure system must be connected to a power source that has a safety electrical grounding connection.
● A 5U84 enclosure must be left in a power on state for 30 seconds following resumption from standby before the enclosure can be placed into standby again. ● Although the enclosure supports standby, the expansion module shuts off completely during standby and cannot receive a user command to power back on. An AC power cycle is the only method to return the 5U84 to full power from standby.
6 Perform system and storage setup The following sections describe how to set up a Dell EMC PowerVault ME4 Series storage system: Topics: • Record storage system information • Using guided setup Record storage system information Use the System Information Worksheet on page 100 to record the information that you need to install the ME4 Series storage system. Using guided setup Upon completing the hardware installation, use PowerVault Manager to configure, provision, monitor, and manage the storage system.
● Password: !manage b. Read the Commercial Terms of Sale and End User License Agreement, and click Accept. The storage system displays the Welcome panel. The Welcome panel provides options for setting up and provisioning your storage system. 4. If the storage system is running G280 firmware: a. Click Get Started. b. Read the Commercial Terms of Sale and End User License Agreement, and click Accept. c. Type a new user name for the storage system in the Username field.
NOTE: Tabs with a red asterisk next to them contain required settings. 3. Save your settings and exit System Settings to return to the Welcome panel. 4. Click Storage Setup to access the Storage Setup wizard and follow the prompts to begin provisioning your system by creating disk groups and pools. For more information about using the Storage Setup wizard, see Configuring storage setup on page 40. 5. Save your settings and exit Storage Setup to return to the Welcome panel. 6.
● Controller A IP address: fd6e:23ce:fed3:19d1::1 ● Controller B IP address: fd6e:23ce:fed3:19d1::2 ● Gateway IP address: fd6e:23ce:fed3:19d1::3 CAUTION: Changing IP settings can cause management hosts to lose access to the storage system after the changes are applied in the confirmation step. Set IPv4 addresses for network ports Perform the following steps to set IPv4 addresses for the network ports: 1. In the Welcome panel, select System Settings, and then click the Network tab. 2. Select the IPv4 tab.
8. Sign out and use the new IP address to access PowerVault Manager. Setting up system notifications Dell EMC recommends enabling at least one notification service to monitor the system. Enable email notifications Perform the following steps to enable email notifications: 1. In the Welcome panel, select System Settings, and then click the Notifications tab. 2. Select the Email tab and ensure that the SMTP Server and SMTP Domain options are set. 3.
5. If the storage array does not have direct access to the Internet, you can use a web proxy server to send SupportAssist data to technical support. To use a web proxy, click the Web Proxy tab, select the Web Proxy checkbox, and type the web proxy server settings in the appropriate fields. 6. To enable CloudIQ, click the CloudIQ Settings tab and select the Enable CloudIQ checkbox. NOTE: For more information about CloudIQ, contact technical support or go to the CloudIQ product page. 7.
○ Controller A port 3: 10.11.10.220 ○ Controller B port 2: 10.10.10.210 ○ Controller B port 3: 10.11.10.230 ● Netmask: For IPv4, subnet mask for assigned port IP address. ● Gateway: For IPv4, gateway IP address for assigned port IP address. ● Default Router: For IPv6, default router for assigned port IP address. 3. In the Advanced Settings section of the panel, set the options that apply to all iSCSI ports: Table 6.
Table 7. iSCSI port-specific options IP Address For IPv4 or IPv6, the port IP address. For corresponding ports in each controller, assign one port to one subnet and the other port to a second subnet. Ensure that each iSCSI host port in the storage system is assigned a different IP address. For example, in a system using IPv4: ● Controller A port 2: 10.10.10.100 ● Controller A port 3: 10.11.10.120 ● Controller B port 2: 10.10.10.110 ● Controller B port 3: 10.11.10.
Select the storage type When you first access the wizard, you are prompted to select the type of storage to use for your environment. Read through the options and make your selection, and then click Next to proceed.
Open the guided disk group and pool creation wizard Perform the following steps to open the disk group and pool creation wizard: 1. Access Storage Setup by performing one of the following actions: ● From the Welcome panel, click Storage Setup. ● From the Home topic, click Action > Storage Setup. 2. Follow the on-screen directions to provision your system.
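For scripted or repeatable deployments, equivalent provisioning can be done from the CLI instead of the wizard. A hedged sketch; the disk range, RAID level, and pool below are hypothetical values to adapt to your system:

# Create a virtual disk group in pool A from ten example disks, then verify
add disk-group type virtual disks 0.0-0.9 level raid6 pool a
show disk-groups
show pools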
7 Perform host setup This section describes how to perform host setup for Dell EMC PowerVault ME4 Series storage systems. Dell EMC recommends performing host setup on only one host at a time. For a list of supported HBAs or iSCSI network adapters, see the Dell EMC PowerVault ME4 Series Storage System Support Matrix. For more information, see the topics about initiators, hosts, and host groups, and attaching hosts and volumes in the Dell EMC PowerVault ME4 Series Storage System Administrator’s Guide.
Attach a Windows host with FC HBAs to the storage system Perform the following steps to attach the Windows host with Fibre Channel (FC) HBAs to the storage system: 1. Ensure that all HBAs have the latest supported firmware and drivers as described on Dell.com/support. For a list of supported FC HBAs, see the Dell EMC ME4 Series Storage System Support Matrix on Dell.com/support. 2.
Enable MPIO for the volumes on the Windows host Perform the following steps to enable MPIO for the volumes on the Windows host: 1. Open the Server Manager. 2. Select Tools > MPIO. 3. Click the Discover Multi-Paths tab. 4. Select DellEMC ME4 in the Device Hardware Id list. If DellEMC ME4 is not listed in the Device Hardware Id list: a. Ensure that there is more than one connection to a volume for multipathing. b. Ensure that Dell EMC ME4 is not already listed in the Devices list on the MPIO Devices tab. 5.
Table 8. Example worksheet for host server with dual port iSCSI NICs (continued)
Subnet 2
Server iSCSI NIC 1: 172.2.96.46
ME4024 controller A port 1: 172.2.101.128
ME4024 controller B port 1: 172.2.201.129
ME4024 controller A port 3: 172.2.103.128
ME4024 controller B port 3: 172.2.203.129
Subnet Mask: 255.255.0.0
NOTE: The following instructions document IPv4 configurations with a dual switch subnet for network redundancy and failover. They do not cover IPv6 configuration.
Configure the iSCSI Initiator on the Windows host Perform the following steps to configure the iSCSI Initiator on a Windows host: 1. Open the Server Manager. 2. Select Tools > iSCSI Initiator. The iSCSI Initiator Properties dialog box opens. If you are running the iSCSI initiator for the first time, click Yes when prompted to have it start automatically when the server reboots. 3. Click the Discovery tab, then click Discover Portal. The Discover Target Portal dialog box opens. 4.
Enable MPIO for the volumes on the Windows host Perform the following steps to enable MPIO for the volumes on a Windows host:
1. Open Server Manager.
2. Select Tools > MPIO.
3. Click the Discover Multi-Paths tab.
4. Select DellEMC ME4 in the Device Hardware Id list. If DellEMC ME4 is not listed in the Device Hardware Id list: a. Ensure that there is more than one connection to a volume for multipathing. b. Ensure that Dell EMC ME4 is not already listed in the Devices list on the MPIO Devices tab.
5.
iii. Click Next until you reach the Features page. iv. Select Multipath I/O. v. Click Next, click Install, and click Close. vi. Reboot the Windows server. 4. Identify and document the SAS HBA WWNs: a. Open a Windows PowerShell console. b. Type Get-InitiatorPort and press Enter. c. Locate and record the SAS HBA WWNs. The WWNs are needed to map volumes to the server.
2. Select Tools > Computer Management.
3. Right-click on Disk Management and select Rescan Disks.
4. Right-click on the new disk and select Online.
5. Right-click on the new disk again and select Initialize Disk. The Initialize Disk dialog box opens.
6. Select the partition style for the disk and click OK.
7. Right-click on the unallocated space, select the type of volume to create, and follow the steps in the wizard to create the volume.
3. Confirm that you have met the listed prerequisites, then click Next. 4. Type a hostname in the Host Name field. 5. Using the information from step 3 of Attach a Linux host with FC HBAs to the storage system on page 50, identify the correct initiators and select the FC initiators for the host you are configuring (they can also be read from sysfs, as shown in the sketch after this list), then click Next. 6. Group hosts together with other hosts. a. For cluster configurations, group hosts together so that all hosts within the group share the same storage. ● If this host is the first host in the cluster, select Create a new host group, then provide a name and click Next.
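As noted in step 5, each Linux host's FC initiator WWNs are needed to pick the correct initiators. On most Linux distributions they can be read from sysfs; a minimal sketch, assuming standard FC HBA drivers:

# Print the FC port WWN of every FC HBA port in the host
cat /sys/class/fc_host/host*/port_name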
Configure a Linux host with iSCSI network adapters The following sections describe how to configure a Linux host with iSCSI network adapters: ● Complete the PowerVault Manager guided system and storage setup process. ● Refer to the cabling diagrams within this guide before attaching a host to the storage system; careful planning ensures a successful deployment. ● Administrative or privileged user permissions are required to make system-level changes.
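Once the NIC addresses are assigned (the distribution-specific steps follow), target discovery and login are normally performed with the Open-iSCSI iscsiadm tool. A minimal sketch, assuming the example controller port address used later in this section:

# Discover ME4 iSCSI targets through one controller port, then log in to all discovered portals
iscsiadm -m discovery -t sendtargets -p 172.1.100.128
iscsiadm -m node --login
iscsiadm -m session    # verify the active sessions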
NOTE: If using jumbo frames, they must be enabled and configured on all devices in the data path, adapter ports, switches, and storage system. For RHEL 7 1. From the server terminal or console, run the nmtui command to access the NIC configuration tool (NetworkManager TUI). 2. Select Edit a connection to display a list of the Ethernet interfaces installed. 3. Select the iSCSI NIC that you want to assign an IP address to. 4. Change the IPv4 Configuration option to Manual. 5.
For SLES 12 1. From the server terminal or console, use the yast command to access YaST Control Center. 2. Select Network Service > iSCSI Initiator. 3. On the Service tab, select When Booting. 4. Select the Connected Targets tab. 5. Select Add. The iSCSI Initiator Discovery screen displays. 6. Using the Example worksheet for single host server with dual port iSCSI NICs you created earlier, enter the IP address for port A0 in the IP Address field, then click Next. For example: 172.1.100.128. 7.
a. Run the systemctl enable multipathd command to enable the service to run automatically. b. Run the systemctl start multipathd command to start the service. 4. Run the multipath command to load storage devices in conjunction with the configuration file. 5. Run the multipath -l command to list the Dell EMC PowerVault ME4 Series storage devices as configured under DM Multipath. Create a Linux file system on the volumes Perform the following steps to create and mount an XFS file system: 1.
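A minimal sketch of the create-and-mount steps, assuming the hypothetical multipath device name mpatha; confirm the real name with multipath -l first:

multipath -l                           # identify the ME4 multipath device name
mkfs.xfs /dev/mapper/mpatha            # create the XFS file system (device name is an example)
mkdir -p /mnt/me4vol
mount /dev/mapper/mpatha /mnt/me4vol   # mount the new file system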
a. For cluster configurations, group hosts together so that all hosts within the group share the same storage. ● If this host is the first host in the cluster, select create a new host group, then provide a name and click Next. ● If this host is being added to a host group that exists, select Add to existing host group. Select the group from the drop-down list, then click Next.
VMware ESXi hosts Ensure that the HBAs or network adapters are installed and the latest supported BIOS is installed. Fibre Channel host server configuration for VMware ESXi The following sections describe how to configure Fibre Channel host servers running VMware ESXi: Prerequisites ● Complete the PowerVault Manager guided system and storage setup process. ● Refer to the cabling diagrams within this guide before attaching a host to the storage system; careful planning ensures a successful deployment.
NOTE: The host must be mapped with the same access, port, and LUN settings to the same volumes or volume groups as every other initiator in the host group. b. For stand-alone hosts, select the Do not group this host option, then click Next. 7. On the Attach Volumes page, specify the name, size, and pool for each volume, and click Next. To add a volume, click Add Row. To remove a volume, click Remove. NOTE: Dell EMC recommends that you update the name with the hostname to better identify the volumes. 8.
● Install the required version of the VMware ESXi operating system and configure it on the host. ● Complete a planning worksheet with the iSCSI network IP addresses to be used, per the example in the following table.
Table 10. Example worksheet for single host server with dual port iSCSI NICs
Management
Server Management: 10.10.96.46
ME4024 Controller A Management: 10.10.96.128
ME4024 Controller B Management: 10.10.96.129
Subnet 1
Server iSCSI NIC 1: 172.1.96.46
ME4024 controller A port 0: 172.1.
6. On the Create Standard Switch page, click the plus (+) icon, then select vmnic > OK to connect to the subnet defined in step 4 of the “Attach hosts to the storage system” procedure. 7. Click Next. 8. Provide a network label, then update the port properties. 9. On the IPv4 settings page, select Static IP and assign an IP using your planning worksheet. 10. Click Next. 11. On the Ready to complete page, review the settings and then click Finish. 12. Repeat steps 1–11 for each NIC to use for iSCSI traffic.
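The same VMkernel binding and target discovery can also be scripted with esxcli from the ESXi shell. A hedged sketch; the software iSCSI adapter name vmhba64 is hypothetical (check Storage Adapters for yours), and the target address reuses the example controller port address from this guide:

esxcli iscsi software set --enabled=true                       # enable the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1    # bind an iSCSI VMkernel NIC
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=172.1.100.128:3260
esxcli storage core adapter rescan --adapter=vmhba64           # rescan for ME4 targets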
a. For cluster configurations, group hosts together so that all hosts within the group share the same storage. ● If this host is the first host in the cluster, select Create a new host group, then provide a name and click Next. ● If this host is to be part of a host group that exists, select Add to existing host group. Select the group from the dropdown list, then click Next.
SAS host server configuration for VMware ESXi The following sections describe how to configure SAS host servers running VMware ESXi: Prerequisites ● Complete the PowerVault Manager guided system and storage setup process. ● Refer to the cabling diagrams within this guide before attaching a host to the storage system; careful planning ensures a successful deployment. ● Install the required version of the ESXi operating system and configure it on the host.
9. Click Yes to return to the Introduction page of the wizard, or click No to close the wizard. Enable multipathing on an ESXi host with SAS volumes Perform the following steps to enable multipathing on the ESXi host with SAS volumes: 1. Log in to the VMware vCenter Server, then click the ESXi host. 2. On the Configure tab, select Storage > Storage Adapters. 3. Select the SAS HBA and click Rescan Storage. The Rescan Storage dialog box opens. 4. Click OK. 5.
Citrix XenServer hosts Ensure that the HBAs or network adapters are installed and the latest supported BIOS is installed. Fibre Channel host server configuration for Citrix XenServer The following sections describe how to configure Fibre Channel host servers running Citrix XenServer: Prerequisites ● Complete the PowerVault Manager guided system and storage setup process.
Register a XenServer host with FC HBAs and create volumes Perform the following steps to register a XenServer host with Fibre Channel (FC) HBAs, and create volumes using the PowerVault Manager: 1. Log in to the PowerVault Manager. 2. Access the Host Setup wizard: ● From the Welcome screen, click Host Setup. ● From the Home topic, click Action > Host Setup. 3. Confirm that all the Fibre Channel prerequisites have been met, then click Next. 4. Type the hostname in the Host Name field. 5.
The new SR is displayed in the Resources pane, at the pool level. iSCSI host server configuration for Citrix XenServer The following sections describe how to configure iSCSI host servers running Citrix XenServer: Prerequisites ● Complete the PowerVault Manager guided system and storage setup process. ● See the cabling diagrams within this guide before attaching a host to the storage system; careful planning ensures a successful deployment.
NOTE: Configuring the switches with two different IP address ranges/subnets enables high availability. Configure a software iSCSI adapter on a XenServer host Perform the following steps to configure a software iSCSI adapter on a XenServer host: 1. Log in to XenCenter and select the XenServer host. 2. Select the pool in the Resources pane, and click the Networking tab. 3. Identify and document the network name that is used for iSCSI traffic. 4.
b. Type the iSCSI IQN that was specified for the XenServer host in Configure the iSCSI IQN on a XenServer host on page 67. c. Type a name for the initiator in the Initiator Name field. 3. Select the initiator. 4. Select Action > Add to Host. The Add to Host dialog box is displayed. 5. Type a hostname or select a host from the Host Select field and click OK. 6. Repeat the previous steps for all the XenServer hosts iSCSI IQNs. 7. Group hosts together with other hosts in a cluster. a.
SAS host server configuration for Citrix XenServer The following sections describe how to configure SAS host servers running Citrix XenServer: Prerequisites ● Complete the PowerVault Manager guided system and storage setup process. ● See the cabling diagrams within this guide before attaching a host to the storage system; careful planning ensures a successful deployment. ● Install and configure the required version of the XenServer operating system on the hosts.
6. Group hosts together with other hosts in a cluster. a. For cluster configurations, group hosts together so that all hosts within the group share the same storage. ● If this host is the first host in the cluster, select Create a new host group, type a name for the host group, and click Next. ● If this host is being added to a host group that exists, select Add to existing host group, select the group from the drop-down list, and click Next.
8 Troubleshooting and problem solving These procedures are intended to be used only during initial configuration for verifying that hardware setup is successful. They are not intended to be used as troubleshooting procedures for configured systems using production data and I/O. NOTE: For further troubleshooting help after setup, and when data is present, see Dell.com/support.
Table 12. Ops panel functions—2U enclosure front panel (continued) No. Indicator Status 2 Status/Health Constant blue: system is powered on and controller is ready Blinking blue (2 Hz): Enclosure management is busy Constant amber: module fault present Blinking amber: logical fault (2 s on, 1 s off) 3 Unit identification display Green (seven-segment display: enclosure sequence) 4 Identity Blinking blue (0.
Figure 31. Ops panel LEDs—5U enclosure front panel Table 13. Ops panel functions – 5U enclosure front panel No.
Initial start-up problems The following sections describe how to troubleshoot initial start-up problems: LED colors LED colors are used consistently throughout the enclosure and its components for indicating status: ● Green: good or positive indication ● Blinking green/amber: non-critical condition ● Amber: critical fault Troubleshooting a host-side connection with 10Gbase-T or SAS host ports The following procedure applies to ME4 Series controller enclosures employing external connectors in the host interface ports.
NOTE: Do not perform more than one step at a time. Changing more than one variable at a time can complicate the troubleshooting process. 1. Stop all I/O to the storage system. See “Stopping I/O” in the Dell EMC PowerVault ME4 Series Storage System Owner’s Manual. 2. Check the host activity LED. If there is activity, stop all applications that access the storage system. 3. Check the Cache Status LED to verify that the controller cached data is flushed to the disk drives.
Table 14. PCM LED states (continued) PCM OK (Green) Fan Fail (Amber) AC Fail (Amber) DC Fail (Amber) Status Off On On On PCM fault (over temperature, over voltage, over current) Off Blinking Blinking Blinking PCM firmware download is in progress 2U Ops panel LEDs The Ops panel displays the aggregated status of all the modules. See also 2U enclosure Ops panel on page 71. Table 15.
● In normal operation, the amber LED is:
○ Off if there is no drive present.
○ Off as the drive operates.
○ On if there is a drive fault.
Figure 32. LEDs: Drive carrier LEDs (SFF and LFF modules) used in 2U enclosures
1. Disk Activity LED
2. Disk Fault LED
3. Disk Fault LED
4. Disk Activity LED
5U enclosure LEDs Use the LEDs on the 5U enclosure to help troubleshoot initial start-up problems. NOTE: When the 5U84 enclosure is powered on, all LEDs are lit for a short period to ensure that they are working.
Table 17. FCM LED descriptions LED Status/description Module OK Constant green indicates that the FCM is working correctly. Off indicates that the fan module has failed. Follow the procedure in “Replacing an FCM” in the Dell EMC PowerVault ME4 Series Storage System Owner’s Manual. Fan Fault Amber indicates that the fan module has failed. Follow the procedure in “Replacing an FCM” in the Dell EMC PowerVault ME4 Series Storage System Owner’s Manual.
Figure 33. LEDs: DDIC – 5U enclosure disk slot in drawer 1. Slide latch (slides left) 2. Latch button (shown in the locked position) 3. Drive Fault LED Table 20.
Module LEDs Module LEDs pertain to controller modules and IOMs. Controller module LEDs Use the controller module LEDs on the face plate to monitor the status of a controller module. Table 21.
● If the CRU Fault LED is on, a fault condition is detected.
○ Restart this IOM using the PowerVault Manager or CLI.
○ If the restart does not resolve the fault, remove the IOM and reinsert it.
● If the previous actions do not resolve the fault, contact your supplier for assistance. IOM replacement may be necessary.
Troubleshooting 2U enclosures Common problems that may occur with your 2U enclosure system.
Table 25. Troubleshooting thermal monitoring and control Symptom Cause Recommended action If the ambient air is below 25°C (77°F), and the fans increase in speed, some restriction on airflow may be causing the internal temperature to rise. NOTE: This symptom is not a fault condition. The first stage in the thermal control process is for the fans to automatically increase in speed when a thermal threshold is reached.
Table 27.
● Use the PowerVault Manager
● Use the CLI
● Monitor event notification
● View the enclosure LEDs
Use the PowerVault Manager The PowerVault Manager uses health icons to show OK, Degraded, Fault, or Unknown status for the system and its components. The PowerVault Manager enables you to monitor the health of the system and its components. If any component has a problem, the system health is in a Degraded, Fault, or Unknown state. Use the PowerVault Manager to find each component that has a problem.
Use the PowerVault Manager to verify any faults found while viewing the LEDs. If the LEDs cannot be viewed due to the location of the system, use the PowerVault Manager to determine where the fault is occurring. This web application provides you with a visual representation of the system and where the fault is occurring. The PowerVault Manager also provides more detailed information about CRUs, data, and faults. Review the event logs The event logs record all system events.
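From the CLI, the same health and event information is available in an SSH or serial session; a brief sketch (commands as documented in the ME4 Series CLI Reference Guide):

show system    # overall system health and the reason for any degraded state
show events    # review the event log
show disks     # per-disk status and health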
Host I/O When troubleshooting disk drive and connectivity faults, stop I/O to the affected disk groups from all hosts as a data protection precaution. As an extra data protection precaution, it is helpful to conduct regularly scheduled backups of your data. See “Stopping I/O” in the Dell EMC PowerVault ME4 Series Storage System Owner’s Manual. Dealing with hardware faults Make sure that you have a replacement module of the same type before removing any faulty module.
This step isolates the problem to the external data path (SFP+ transceiver, host cable, and host-side devices) or to the controller module port. Is the host link status/link activity LED on? ● Yes – You now know that the SFP+ transceiver, host cable, and host-side devices are functioning properly. Return the cable to the original port. If the link status LED remains off, you have isolated the fault to the controller module port. Replace the controller module. ● No – Proceed to the next step. 7.
● Yes – You now know that the host cable and host-side devices are functioning properly. Return the cable to the original port. If the link status LED remains off, you have isolated the fault to the controller module port. Replace the controller module. ● No – Proceed to the next step. 7. Verify that the switch, if any, is operating properly. If possible, test with another port. 8. Verify that the HBA is fully seated, and that the PCI slot is powered on and operational. 9.
● Yes – Replace the original cable. The fault has been isolated. ● No – It is likely that the controller module must be replaced.
A Cabling for replication The following sections describe how to cable storage systems for replication: Topics: • Connecting two storage systems to replicate volumes • Host ports and replication • Example cabling for replication • Isolating replication faults Connecting two storage systems to replicate volumes The replication feature performs asynchronous replication of block-level data from a volume in a primary system to a volume in a secondary system.
Host ports and replication ME4 Series Storage System controller modules can use qualified 10Gbase-T connectors or CNC-based ports for replication. CNC ports must use qualified SFP+ transceivers of the same type, or they can use a combination of qualified SFP+ transceivers supporting different interface protocols. To use a combination of different protocols, configure host ports 0 and 1 to use FC, and configure ports 2 and 3 to use iSCSI.
Use multiple switches to avoid a single point of failure inherent to using a single switch, and to physically isolate replication traffic from I/O traffic. Dual-controller module configuration for replication The following figures show how to cable two ME4 Series controller enclosures that are equipped with dual-controller modules for replication.
● Connect two ports from each controller module in the left storage enclosure to the left switch. ● Connect two ports from each controller module in the right storage enclosure to the right switch. ● Connect two ports from the controller modules in each enclosure to the middle switch. Use multiple switches to avoid a single point of failure inherent to using a single switch, and to physically isolate replication traffic from I/O traffic. Figure 37.
Figure 39. Connecting two ME4 Series 2U storage systems for replication – multiple servers, multiple switches, two networks
1. 2U controller enclosures
2. Two switches (I/O)
3. Connection to host servers (network A)
4. Connection to host servers (network B)
5. Ethernet WAN
Figure 40. Connecting two ME4 Series 5U storage systems for replication – multiple servers, multiple switches, two networks
1. 5U controller enclosures
2. Two switches (I/O)
3. Connection to host servers (network A)
4. Connection to host servers (network B)
5. Ethernet WAN
1. Find the port address on the secondary system: Using the CLI, run the show ports command on the secondary system. 2. Verify that ports on the secondary system can be reached from the primary system using either of the following methods: ○ Run the query peer-connection CLI command on the primary system, using a port address obtained from the output of the show ports command. ○ In the PowerVault Manager Replications topic, select Action > Query Peer Connection. 3. Create a peer connection.
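A sketch of the verification commands named above as they are run from the CLI; the address shown is hypothetical, so substitute a port address reported by the secondary system:

show ports                             # on the secondary system: list host port addresses
query peer-connection 192.68.10.200    # on the primary system: verify the remote ports are reachable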
Table 28. Diagnostics for replication setup: Using the replication feature (continued) Answer Possible reasons Action ● Verify cabling paths between replication ports and switches are visible to one another. ● Verify that cable connections are securely fastened. ● Inspect cables for damage and replace if necessary. No A system does not have a pool that is configured. Configure each system to have a storage pool.
Table 30. Diagnostics for replication setup – Replicating a volume (continued) Answer Possible reasons Action No Communication link is down. Review event logs for indicators of a specific fault in a host or replication data path component. Has a replication run successfully? Table 31. Diagnostics for replication setup: Checking for a successful replication Answer Possible reasons Action Yes System functioning properly No action required.
B SFP+ transceiver for FC/iSCSI ports This section describes how to install the small form-factor pluggable (SFP+) transceivers ordered with the ME4 Series FC/iSCSI controller module. Locate the SFP+ transceivers Locate the SFP+ transceivers that shipped with the controller enclosure, which look similar to the generic SFP+ transceiver that is shown in the following figure: Figure 41. Install an SFP+ transceiver into the ME4 Series FC/iSCSI controller module 1. CNC-based controller module face 3.
4. Slide the SFP+ transceiver into the port until it locks securely into place. 5. Flip the actuator closed. 6. Connect a qualified fiber-optic interface cable into the duplex jack of the SFP+ transceiver. If you do not plan to use the SFP+ transceiver immediately, reinsert the plug into the duplex jack of the SFP+ transceiver to keep its optics free of dust. Verify component operation View the port Link Status/Link Activity LED on the controller module face plate.
C System Information Worksheet Use the system information worksheet to record the information that is needed to install the ME4 Series Storage System. ME4 Series Storage System information Gather and record the following information about the ME4 Series storage system network and the administrator user: Table 32. ME4 Series Storage System network Item Information Service tag Management IPv4 address (ME4 Series Storage System management address) _____ . _____ . _____ .
Table 34. iSCSI Subnet 1 (continued) Item Information Gateway IPv4 address _____ . _____ . _____ . _____ IPv4 address for storage controller module A: port 0 _____ . _____ . _____ . _____ IPv4 address for storage controller module B: port 0 _____ . _____ . _____ . _____ IPv4 address for storage controller module A: port 2 _____ . _____ . _____ . _____ IPv4 address for storage controller module B: port 2 _____ . _____ . _____ . _____ Table 35.
Table 37. WWNs in fabric 1 Item FC switch port Information FC switch port Information WWN of storage controller A: port 0 WWN of storage controller B: port 0 WWN of storage controller A: port 2 WWN of storage controller B: port 2 WWNs of server HBAs: Table 38.
D Setting network port IP addresses using the CLI port and serial cable You can manually set the static IP addresses for each controller module. Alternatively, you can specify that IP addresses should be set automatically for both controllers through communication with a Dynamic Host Configuration Protocol (DHCP) server. In DHCP mode, the network port IP address, subnet mask, and gateway are obtained from a DHCP server. If a DHCP server is not available, the current network addresses are not changed.
Figure 42. Connecting a USB cable to the CLI port 3. Start a terminal emulator and configure it to use the display settings in Terminal emulator display settings on page 104 and the connection settings in Terminal emulator connection settings on page 104. Table 39. Terminal emulator display settings Parameter Value Terminal emulation mode VT-100 or ANSI (for color support) Font Terminal Translations None Columns 80 Table 40.
If you are connecting to a storage system with G280 firmware that has been deployed: a. Type the username of a user with the manage role at the login prompt and press Enter. b. Type the password for the user at the Password prompt and press Enter. 7.
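In this procedure, the addresses themselves are assigned with the set network-parameters command. A hedged sketch with example values (parameter names per the ME4 Series CLI Reference Guide; substitute your own addresses):

set network-parameters ip 192.168.0.10 netmask 255.255.255.0 gateway 192.168.0.1 controller a
set network-parameters ip 192.168.0.11 netmask 255.255.255.0 gateway 192.168.0.1 controller b
show network-parameters    # verify the new addresses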
Mini-USB Device Connection The following sections describe the connection to the mini-USB port: Emulated serial port When a computer is connected to a controller module using a mini-USB serial cable, the controller presents an emulated serial port to the computer. The name of the emulated serial port is displayed using a custom vendor ID and product ID. Serial port configuration is unnecessary.
2. Download the ME4 Series Storage Array USB Utility file from the Dell EMC support site. 3. Follow the instructions on the download page to install the ME4 Series USB driver.