Dell EMC PowerEdge MX SmartFabric Configuration and Troubleshooting Guide Abstract This document provides the steps for configuring and troubleshooting the Dell EMC PowerEdge MX networking switches in SmartFabric mode. Configuration examples include Dell EMC Networking, Cisco Nexus, and Cisco ACI environments. This document replaces the Dell EMC PowerEdge MX SmartFabric Mode Deployment Guide, which is now deprecated.
Revisions Date Description May 2019 Initial Release The information in this publication is provided “as is.” Dell Inc. makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any software described in this publication requires an applicable software license. © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.
Table of contents Revisions............................................................................................................................................................................. 2 1 2 Introduction ................................................................................................................................................................... 8 1.1 Typographical conventions .............................................................................................
5 4.3 Create the SmartFabric ....................................................................................................................................27 4.4 Configure uplink port speed or breakout, if needed .........................................................................................28 4.5 Create the Ethernet uplinks ..............................................................................................................................29 4.
8.4.6 show policy-map ...............................................................................................................................................61 8.4.7 show class-map ................................................................................................................................................61 8.4.8 show vlt domain-id ............................................................................................................................................61 8.4.
12.2.1 Verify if STP is enabled on upstream switches ............................................................................................95 12.2.2 Verify if type of STP is the same on MX and upstream switches ................................................................95 12.3 Verify VLT/vPC configuration on upstream switches .......................................................................................96 12.4 Discovery of FEM and compute sleds .......................................
E D.3 Reset chassis using RACADM .......................................................................................................................120 D.4 Reset an OS10EE switch to factory defaults ..................................................................................................121 D.5 Reset Cisco Nexus 3232C to factory defaults ................................................................................................121 Validated components ...............................
1 Introduction The Dell EMC PowerEdge MX is a unified, high-performance data center infrastructure. The PowerEdge MX provides the agility, resiliency, and efficiency to optimize a wide variety of traditional and new, emerging data center workloads and applications. With its kinetic architecture and agile management, PowerEdge MX dynamically configures compute, storage, and fabric, increases team effectiveness, and accelerates operations.
NOTE: For a detailed overview of the PowerEdge MX hardware, see Appendix A. For more information about the PowerEdge MX network architecture, see the Dell EMC PowerEdge MX Networking Architecture Guide. NOTE: The examples in this document assume that the MX7000 chassis are configured in a Multi-Chassis Management group and that no errors have been found. Additionally, this guide assumes the reader has a basic understanding of the PowerEdge MX platform.
1.1 Typographical conventions The CLI and GUI examples in this document use the following conventions:
• Monospace Text: CLI examples
• Underlined Monospace Text: CLI examples that wrap the page
• Italic Monospace Text: Variables in CLI examples
• Bold Monospace Text: Commands entered at the CLI prompt, or to highlight information in CLI output
• Bold text: UI elements and information entered in the GUI
1.2 Attachments This document in .pdf format includes one or more file attachments.
2 SmartFabric Services for PowerEdge MX overview 2.1 Dell EMC OS10 Enterprise Edition The networking market is transitioning from a closed, proprietary stack to open hardware supporting a variety of operating systems. OS10 is designed to allow multi-layered disaggregation of the network functionality.
• show discovered-expanders – displays the MX7116n FEMs attached to the MX9116n FSEs • show unit-provision – displays or configures the unit ID and service tag of a MX7116n FEM attached to a MX9116n FSE NOTE: For more information, see the OS10 Enterprise Edition User Guide for PowerEdge MX I/O Modules on the Support for Dell EMC Networking MX9116n - Manuals and documents web page. 2.2.
• • Automatically detects fabric misconfigurations or link level failure conditions Automatically heals the fabric on failure condition removal NOTE: In SmartFabric mode, MX series switches operate entirely as a Layer 2 network fabric. Layer 3 protocols are not supported.
Full Switch mode vs. SmartFabric mode: configuration commands that remain available in SmartFabric mode include:
• username
• spanning-tree
• vlan
2.3 All switch interfaces are assigned to VLAN 1 by default and are in the same Layer 2 bridge domain. Layer 2 bridging is disabled by default; interfaces must join a bridge domain (VLAN) before being able to forward frames. All configuration changes are saved in the running configuration by default. To display the current configuration, use the show running-configuration command.
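The running configuration can be inspected and, if persistence is desired, copied to the startup configuration from the OS10EE CLI. The commands below use standard OS10EE syntax; verify availability on your release and operating mode:

```
OS10# show running-configuration
OS10# copy running-configuration startup-configuration
```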
If the FSE is in SmartFabric mode, the attached FEM is automatically configured, and virtual ports on the Fabric Expander and a virtual slot ID are created and mapped to 8x25GbE breakout interfaces in FEM mode on the Fabric Engine. A FSE in Full Switch mode automatically discovers the FEM when these conditions are met:
• The FEM is connected to the FSE by attaching a cable between the QSFP28-DD ports on both devices
• The interface for the QSFP28-DD port-group on the FSE that the FEM connects to is in 8x25GbE FEM mode
You can also use the show interface command to display the Fabric Engine physical port-to-Fabric Expander virtual port mapping, and the operational status of the line: OS10# show interface ethernet 1/1/30:3 Ethernet 1/1/30:3 is up, line protocol is dormant Interface is mapped to ethernet1/77/7 NOTE: If you move a FEM by cabling it to a different QSFP28-DD port on the Fabric Engine, all software configurations on virtual ports are maintained.
OME-M provides options for creating templates:
• Most frequently, templates are created by getting the current system configuration from a server that has been configured to the exact specifications required (referred to as a “Reference Server”).
• Templates may be cloned (copied) and edited.
• A template can be created by importing a Server Configuration Profile (SCP) file. The SCP file may be from a server or exported by OpenManage Essentials, OpenManage Enterprise, or OME-M.
2.6.2
Table 4 lists the network types and related settings. The QoS group is the numerical value for the queues available in SmartFabric mode. Available queues include 2 through 5. Queues 1, 6, and 7 are reserved. NOTE: In SmartFabric mode, an administrator cannot change the default weights for the queues. Network types and default QoS settings 2.6.
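The queue constraints above can be captured in a short check. This is an illustrative sketch, not Dell tooling; the function name and the idea of validating assignments in code are assumptions:

```python
# Queue numbers usable by network types in SmartFabric mode (per the
# guide): queues 2 through 5 are assignable; 1, 6, and 7 are reserved.
RESERVED_QUEUES = {1, 6, 7}
ASSIGNABLE_QUEUES = {2, 3, 4, 5}

def is_valid_qos_group(queue: int) -> bool:
    """Return True if a network type may be mapped to this queue."""
    return queue in ASSIGNABLE_QUEUES and queue not in RESERVED_QUEUES

print(is_valid_qos_group(5))  # True
print(is_valid_qos_group(7))  # False
```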
3 SmartFabric mode requirements, guidelines, and restrictions Before deploying a SmartFabric, ensure that the following requirements, guidelines, and restrictions are followed. Failure to do so may impact your network. 3.1 Create multi-chassis management group For a scalable fabric that uses more than one MX chassis, the chassis must be in a Multi-Chassis Management (MCM) Group. See Appendix B.1 for more details.
3.3 Spanning Tree Protocol By default, OS10EE uses Rapid per-VLAN Spanning Tree Plus (RPVST+) across all switching platforms including PowerEdge MX networking IOMs. OS10EE also supports RSTP. MST is not currently supported when using VLT, and therefore is not supported in SmartFabric mode. NOTE: Dell EMC recommends using RSTP when more than 64 VLANs are required in a fabric to avoid performance problems. Caution should be taken when connecting an RPVST+ environment to an existing RSTP environment.
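If RSTP is preferred over the default RPVST+, the mode can be changed from the OS10EE CLI in Full Switch mode. These are standard OS10EE commands; in SmartFabric mode, confirm the supported method for your release before changing STP settings:

```
OS10(config)# spanning-tree mode rstp
OS10(config)# exit
OS10# show spanning-tree brief
```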
Recommended maximum number of VLANs in SmartFabric mode (OS10EE releases 10.4.0.R3S and 10.4.0.R4S):
Parameter / Value
• Used for low priority data traffic: 128
• Used for standard/default priority data traffic: 128
• Used for high priority data traffic: 32
3.5 Configuring port speed and breakout If you need to change the default port speed and/or breakout configuration of an uplink port, you must do that prior to creating the uplink.
3.7 Switch slot placement for SmartFabric mode SmartFabric mode supports three specific switch placement options. Placements other than those described here are not supported and may result in unpredictable behavior and/or data loss. NOTE: The cabling shown in this section, Section 3.7, illustrates the VLTi connections between the MX switches. 3.7.
3.7.3 Two MX9116n Fabric Switching Engines in the same chassis This placement should only be used in environments with a single chassis, with the switches in either slots A1/A2 or slots B1/B2. A SmartFabric cannot include a switch in Fabric A and a switch in Fabric B. IOM placement – 2 x MX9116n in the same chassis 3.8 Switch-to-Switch cabling When operating in SmartFabric mode, each switch pair runs a VLT interconnect (VLTi) between them.
3.9 NIC teaming guidelines While NIC teaming is not required, it is generally suggested for redundancy unless a specific implementation recommends against it. There are two main kinds of NIC teaming: • • Switch dependent: Also referred to as LACP, 802.3ad, or Dynamic Link Aggregation, this teaming method uses the LACP protocol to understand the teaming topology. This teaming method provides Active-Active teaming and requires the switch to support LACP teaming.
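As an illustration of the switch-dependent (LACP) method on a Linux-based compute sled, the iproute2 commands below create an 802.3ad bond. This is a generic Linux sketch, not MX-specific tooling, and the interface names eno1 and eno2 are placeholders:

```
# Create an LACP (802.3ad) bond and enslave the two mezzanine NIC ports
ip link add bond0 type bond mode 802.3ad miimon 100
ip link set eno1 down
ip link set eno1 master bond0
ip link set eno2 down
ip link set eno2 master bond0
ip link set bond0 up
```

The corresponding switch ports must be members of a port channel with LACP active for the bond to negotiate.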
template is deployed to a target device. The same identity pool can be associated with, or used by, any number of templates. Only one identity pool can be associated with a template. Each template will have specific virtual identity needs, based on its configuration. For example, one template may have iSCSI configured, so it will need appropriate virtual identities for iSCSI operations.
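To see how an identity pool maps a starting address and a count onto a contiguous range, the sketch below expands a MAC pool in Python. The starting MAC address is a placeholder, not a Dell-reserved prefix:

```python
# Expand a starting MAC address plus a pool size into the first and
# last addresses the identity pool would cover.
def mac_to_int(mac: str) -> int:
    return int(mac.replace(":", ""), 16)

def int_to_mac(value: int) -> str:
    raw = f"{value:012x}"
    return ":".join(raw[i:i + 2] for i in range(0, 12, 2))

def identity_pool_range(start_mac: str, count: int):
    """Return (first, last) MAC addresses for a pool of `count` identities."""
    start = mac_to_int(start_mac)
    return int_to_mac(start), int_to_mac(start + count - 1)

first, last = identity_pool_range("04:3f:72:00:00:00", 255)
print(first, last)  # 04:3f:72:00:00:00 04:3f:72:00:00:fe
```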
4 Creating a SmartFabric The general steps required to create a SmartFabric are: 1. Physically cable the MX chassis and upstream switches. 2. Define the VLANs. 3. Create the SmartFabric. 4. If needed, configure uplink port speed and breakout. 5. Create the Ethernet uplink. 6. Configure the upstream switch and connect uplink cables. These steps make the following assumptions: • • All MX7000 chassis and management modules are cabled correctly and in a Multi-Chassis Management group.
Defined VLAN list Figure 10 shows VLAN 1 and VLAN 10 after being created using the steps above. 4.3 Create the SmartFabric To create a SmartFabric using the OME-M console, perform the following steps: 1. 2. 3. 4. Open the OME-M console. From the navigation menu, click Devices > Fabric. In the Fabric pane, click Add Fabric. In the Create Fabric window, complete the following: a. Enter a name for the fabric in the Name box. In this example, SmartFabric was entered. b.
SmartFabric deployment design window The SmartFabric deploys. This process can take several minutes to complete. During this time all related switches will be rebooted, and the operating mode changed to SmartFabric mode. NOTE: After the fabric is created, the fabric health will be critical until at least one uplink is created. Figure 12 shows the new SmartFabric object and some basic information about the fabric. SmartFabric post-deployment without defined uplinks 4.
NOTE: Prior to choosing the breakout type, you must change the Breakout Type to HardwareDefault and then select the desired configuration. If the desired breakout type is selected prior to setting HardwareDefault, an error will occur. 6. Choose Configure Breakout. In the Configure Breakout dialog box, select HardwareDefault. 7. Click Finish. First set the breakout type to HardwareDefault 8. Once the job is completed, choose Configure Breakout.
4. In the Fabric Details pane, click Uplinks. 5. Click the Add Uplinks button. 6. In the Add Uplink window, complete the following: a. Enter a name for the uplink in the Name box. In this example, Uplink01 is entered. b. Optionally, enter a description in the Description box. c. From the Uplink Type list, select the desired type of uplink. In this example, Ethernet is selected. d. Click Next. e. From the Switch Ports list, select the uplink ports on both the MX9116n FSEs.
4.6 Configure the upstream switch and connect uplink cables The upstream switch ports must be configured in a single LACP LAG.
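Chapters 9 and 10 provide full upstream configurations. As a minimal OS10EE-style sketch (the interface number and VLAN 10 are placeholders; adjust to your environment):

```
interface port-channel1
 description "To MX SmartFabric uplink"
 switchport mode trunk
 switchport trunk allowed vlan 10
 no shutdown
interface ethernet1/1/1
 channel-group 1 mode active
 no shutdown
```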
5 Deploying a server 5.1 Server preparation The examples in this guide use a reference server of a Dell EMC PowerEdge MX740c compute sled with QLogic (model QL41262HMKR) Converged Network Adapters (CNAs) installed. CNAs are required to achieve FCoE connectivity. Use the steps below to prepare each CNA by setting them to factory defaults (if required) and configuring NIC partitioning (NPAR). NOTE: iDRAC steps in this section may vary depending on hardware, software and browser versions used.
CNA partition 1 configuration
10. Select Partition 2 Configuration and set the NIC Mode to Disabled.
11. Set the FCoE Mode to Enabled, then click Back.
CNA partition 2 configuration
12. If present, select Partition 3 Configuration and set all modes to Disabled, then click Back.
13. If present, select Partition 4 Configuration and set all modes to Disabled, then click Back.
14. Click Back, and then Finish.
15. 16. 17. 18.
5.2
NOTE: In SmartFabric mode, you must use a template to deploy a server and to configure the networking. To create a server template, follow these steps: 1. Open the OME-M console. 2. From the navigation menu, click Configuration, then click Deploy. 3. From the center panel, click Create Template, then click From Reference Device to open the Create Template window. 4. In the Template Name box, enter a name. In this example, “M740c with Intel mezzanine” is entered. Create Template dialog box 5. 6. 7. 8.
Select the elements to clone A job starts, and the new server template displays on the list. When complete, the Completed successfully status displays. 5.3 Create identity pools Dell EMC recommends the use of identity pools. The steps below demonstrate creating an Ethernet identity pool with 255 MAC Addresses. 1. 2. 3. 4. Open the OME-M console. From the navigation menu, click Configuration > Identity Pools. Click Create. In the Create Identity Pool window, complete the following: a.
5.4 Associate server template with networks After successfully creating a new template, associate the template with a network: 1. From the Deploy pane, select the template to be associated with VLANs. In this example, the M740c with Intel mezzanine server template is selected. 2. Click Edit Network. 3. In the Edit Network window, complete the following: a. Optionally, from the Identity Pool list, choose the desired identity pool. In this example, Ethernet ID Pool is selected. b.
6 SmartFabric operations This section describes the operations that can be performed on a SmartFabric after it has been created. 6.1 Viewing the fabric The SmartFabric can be viewed using OME-M. A green check mark adjacent to the fabric name indicates that the fabric is healthy. In this example, the fabric created is named Fabric01. 1. Open the OME-M console. 2. From the navigation menu, click Devices > Fabric. 3. To view the Fabric components, select the fabric.
Uplinks Switches are the I/O Modules that are part of the fabric. In this example, the fabric has two MX9116n switches. NOTE: Fabric Expander Modules are transparent and therefore do not appear on the Fabric Details page. Switches Servers are the compute sleds that are part of the fabric. In this example, two PowerEdge MX740c compute sleds are part of the fabric.
ISL Links are the VLT interconnects between the two switches. The ISL links should be connected on port groups 11 and 12 on MX9116n switches, and ports 9 and 10 on MX5108n switches. This is a requirement and failure to connect the defined ports will result in a fabric validation error. ISL Links 6.2 Editing the fabric A fabric has four components: • • • • Uplinks Switches Servers ISL Links Editing the fabric discussed in this section includes editing the fabric name and description.
Edit fabric name and description 4. In the Edit Fabric dialog box, change the name and description as desired. Click Finish. Edit Fabric dialog box 6.3 Editing uplinks Editing the uplinks on the created fabric is done using the following steps: 1. 2. 3. 4. 5. Open the OME-M console. From the navigation menu, click Devices > Fabric. Select the fabric. Select the Uplink to edit and click Edit. In this example, Uplink1 is selected. In the Edit Uplink dialog box, modify the Name and Description as desired.
Edit Uplink dialog box 6. Click Next. 7. Edit the uplink ports on the MX switches that connect to the upstream switches. In this example, ports 41 and 42 on the MX9116n switches that connect to the upstream switches are displayed. NOTE: Care should be taken to modify the uplink ports on both MX switches. Select the IOM to display the respective uplink switch ports.
Edit uplink ports and VLAN networks 8. If desired, modify the tagged and untagged VLANs. 9. Click Finish. 6.4 Editing VLANs on a deployed server The OME-M Console is used to add/remove VLANs on the deployed servers in a SmartFabric. The following illustrates the steps to add/remove VLANs on the deployed servers. NOTE: Ensure that any new VLANs to be added are first defined in the Networks screen. See Define VLANs for more details. 1. 2. 3. 4. 42 Open OME-M Console.
Add/remove VLANs 5. Choose the desired server. In this example PowerEdge MX740C with service tag 8XQP0T2 is selected. 6. Choose Edit Networks. 7. Modify the VLAN selections as required by defining the tagged and untagged VLANs. 8. Select VLANs on Tagged and Untagged Network for each Mezzanine card port. 9. Click Save. Modify VLANs NOTE: At this time, only one server can be selected at a time in the GUI.
7 Switch operations PowerEdge MX switches can be managed using the OME-M console. From the Switch Management page, you can view activity, health, and alerts, as well as perform operations such as power control, firmware update, and port configuration. Some of these operations can also be performed in Full Switch mode. 7.1 Switch management page overview To get to the switch management page, follow these steps: 1. Open the OME-M console 2. From the navigation menu, click Devices > I/O Modules. 3.
• Environment The Power Control drop-down button provides three options: • • • Power Off: Turns off the IOM Power Cycle: Power cycles the IOM System Reseat: Initiates a cold reboot of the IOM Power Control button The Blink LED drop down button provides an option to turn on or turn off the ID LED on the IOM. To turn on the ID LED, choose: • Blink LED > Turn On This activates a blinking blue LED and provides easy identification.
7.1.
7.1.3 Firmware tab The Firmware tab provides options to manage the firmware on the IOM. The Dell Update Package (DUP) file is used to update the firmware of the IOM. Firmware Tab 7.1.4 Alerts tab The Alert tab provides information on alerts and notifies the administrator. The advanced filter option can be leveraged to quickly filter out alerts.
7.1.5 Settings tab The Settings tab provides options to configure the following settings for the IOMs: • • • • Network Management Monitoring Advanced Settings Settings Tab The Network option includes configuring IPv4, IPv6, DNS Server and Management VLAN settings.
The Management option includes setting the hostname and linuxadmin password. NOTE: Although the GUI has the field name listed as Root Password, it denotes the linuxadmin password. For logging on to the CLI of the MX switch, use default credentials with username as admin and password as admin. Management Settings Monitoring provides options for SNMP settings. Monitoring Settings The Advanced Settings tab offers the option for time configuration replication and alert replication.
Advanced Settings 7.2 Configure Ethernet switch ports from OME-M The MX switches can be accessed using the OME-M console. Various operations, such as configuring port breakout, altering the MTU size, and enabling or disabling auto negotiation, can be performed from this console. Follow the steps below to modify these settings. 1. From the switch management page, choose Hardware > Port Information.
Port information 2. To configure MTU, select the port listed under the respective port-group. 3. Click Configure MTU. Enter MTU size in bytes. 4. Click Finish. Configure MTU 5. To configure Auto Negotiation, select the port listed under the respective port-group. Click Toggle AutoNeg. This will change the Auto Negotiation of the port to Disabled/Enabled. Click Finish. Enable/Disable Auto Negotiation 6.
Toggle Admin State 7.3 Upgrading OS10EE Upgrading the IOMs in the fabric can be done using the OME-M console. The upgrade is carried out using the DUP file. The DUP is available for download from Support for Dell EMC Products. Download DUP file for MX9116n FSE When a single IOM in a fabric is selected for a firmware upgrade, all IOMs that are part of that fabric are updated. NOTE: If an IOM is in SmartFabric mode, selecting it for upgrade updates the firmware of all IOMs that are part of the fabric.
Switch management page Update firmware dialog box 2. Once the file is uploaded, select the check box next to the file and click Next. 3. Select Update Now and then click Finish.
Schedule update The firmware upgrade job can be monitored by navigating to Monitor > Jobs > Select Job > View Details.
8 Validating the SmartFabric deployment 8.1 View the MCM group topology The OME-M console can be used to show the physical cabling of the SmartFabric. 1. Open the OME-M console. 2. In the left pane click View Topology. 3. Click the lead chassis and then click Show Wiring. 4. The icons can be clicked to show cabling. Figure 54 shows the current wiring of the SmartFabric.
8.2 View the SmartFabric status The OME-M console can be used to show the overall health of the SmartFabric. 1. Open the OME-M console. 2. From the navigation menu, click Devices > Fabric. 3. Select SmartFabric1 to expand the details of the fabric. Figure 55 shows the details of the fabric. Fabric status details The Overview tab shows the current inventory, including switches, servers, and interconnects between the MX9116n FSEs in the fabric. Figure 56 shows the SmartFabric switch in a healthy state.
SmartFabric server inventory Figure 58 shows the Topology tab and the VLTi created by the SmartFabric mode. SmartFabric overview fabric diagram Figure 59 displays the wiring diagram table from the Topology tab.
8.3 View port status The OME-M console can be used to show the port status. In this example, the figure displays ports for an MX9116n FSE. 1. Open the OME-M console. 2. From the navigation menu, click Devices > I/O Modules. 3. Select an IOM and click the View Details button to the right of the inventory screen. The IOM overview for that device, displays. 4. From the IOM Overview, click Hardware. 5. Click to select the Port Information tab.
8.4 CLI commands 8.4.1 show switch-operating-mode Use the show switch-operating-mode command to display the current operating mode: C140A1# show switch-operating-mode Switch-Operating-Mode : Smart Fabric Mode 8.4.2 show discovered-expanders The show discovered-expanders command is only available on the MX9116n FSE and displays the service tags of the MX7116n FEMs attached to the MX9116n FSEs and the associated port-group and virtual slot.
Alternatively, the iDRAC MAC information can be obtained from the System Information on the iDRAC Dashboard page. IOM Port Information Subsequently, viewing the LLDP neighbors shows the iDRAC MAC address in addition to the NIC MAC address of the respective mezzanine card.
ethernet1/1/38   C160A2           ethernet1/1/38
ethernet1/1/39   C160A2           ethernet1/1/39
ethernet1/1/40   C160A2           ethernet1/1/40
ethernet1/1/41   Z9100-Leaf1      ethernet1/1/3
ethernet1/1/42   Z9100-Leaf2      ethernet1/1/3
ethernet1/71/1   Not Advertised   24:6e:96:9c:e5:d8      CF52XM2
ethernet1/71/1   iDRAC-CF52XM2    NIC.Mezzanine.1A-1-1
ethernet1/71/2   Not Advertised   24:6e:96:9c:e5:da      1S34MN2
ethernet1/71/2   iDRAC-1S34MN2    NIC.Mezzanine.
8.4.5
Version Local System MAC address VLT MAC address IP address Delay-Restore timer Peer-Routing Peer-Routing-Timeout timer VLTi Link Status port-channel1000 : : : : : : : 1.0 20:04:0f:00:b8:1e 20:04:0f:00:b8:1e fda5:74c8:b79e:1::1 90 seconds Disabled 0 seconds : up VLT Peer Unit ID System MAC Address Status IP Address Version -------------------------------------------------------------------------------2 20:04:0f:00:9d:1e up fda5:74c8:b79e:1::2 1.0 8.4.
9 Scenario 1 - SmartFabric deployment with Dell EMC PowerSwitch Z9100-ON upstream switches Figure 63 shows the production topology using a pair of Dell EMC PowerSwitch Z9100-ONs as upstream switches. This section walks through configuring the Z9100-ONs as well as validating the Z9100-ON configuration.
9.1 Dell EMC PowerSwitch Z9100-ON switch configuration The following section outlines the configuration commands issued to the Dell EMC PowerSwitch Z9100-ON switches. The switches start at their factory default settings per Appendix D.4. NOTE: The MX IOMs run Rapid Per-VLAN Spanning Tree Plus (RPVST+) by default. RPVST+ runs RSTP on each VLAN while RSTP runs a single instance of spanning tree across the default VLAN.
Configure the required VLANs on each switch. In this deployment example, the VLAN used is VLAN 10. Z9100-ON Leaf 1 Z9100-ON Leaf 2 interface vlan10 description “Company A General Purpose” no shutdown interface vlan10 description “Company A General Purpose” no shutdown Configure the port channels that connect to the downstream switches. The LACP protocol is used to create the dynamic LAG. Trunk ports allow tagged VLANs to traverse the trunk link. In this example, the trunk is configured allow VLAN 10.
9.2 Dell EMC PowerSwitch Z9100-ON validation This section contains validation commands for the Dell EMC PowerSwitch Z9100-ON leaf switches. 9.2.1 show vlt The show vlt command validates the VLT configuration status when the VLTi Link Status is up. The role of one switch in the VLT pair is primary, and its peer switch (not shown) is assigned the secondary role.
VLAN 1 Executing IEEE compatible Spanning Tree Protocol Root ID Priority 32768, Address 2004.0f00.a19e Root Bridge hello time 2, max age 20, forward delay 15 Bridge ID Priority 32769, Address 4c76.25e8.
10 Scenario 2 - SmartFabric connected to Cisco Nexus 3232C switches Figure 64 shows the production topology using a pair of Cisco Nexus 3232C as leaf switches. This section configures the Cisco Nexus 3232Cs and creating a SmartFabric with the corresponding uplinks.
10.1 Cisco Nexus 3232C switch configuration The following section outlines the configuration commands issued to the Cisco Nexus 3232C leaf switches. NOTE: While this configuration example is specific to the Cisco Nexus 3232C switch, the same concepts apply to other Cisco Nexus and IOS switches. The switches start at their factory default settings, as described in Appendix D.5. NOTE: The MX IOMs run Rapid per-VLAN Spanning Tree Plus (RPVST+) by default.
Cisco Nexus 3232C Leaf 1 Cisco Nexus 3232C Leaf 2 vpc domain 255 peer-keepalive destination 100.67.162.200 vpc domain 255 peer-keepalive destination 100.67.162.
NOTE: If the connections to the MX switches do not come up, see Section 12.5.1 and Section 12.5.4 for troubleshooting steps. Trunk ports on switches allow tagged traffic to traverse the links. All flooded traffic for a VLAN is sent across trunk ports to every switch, even if a switch has no ports in that VLAN, consuming network bandwidth with unnecessary traffic. VLAN (or VTP) pruning is a feature that eliminates this unnecessary traffic by pruning the VLANs.
vPC status ---------------------------------------------------------------------id Port Status Consistency Reason Active vlans ---------- ----------- ----------------255 Po1 up success success 1,10 10.2.2 show vpc consistency-parameters The show vpc consistency-parameters command displays the configured values on all interfaces in the vPC. The displayed configurations are only those configurations that limit the vPC peer link and vPC from coming up.
10.2.3 show lldp neighbors The show lldp neighbors command provides information about lldp neighbors. In this example, Eth1/1 and Eth1/3 are connected to the two MX9116n FSEs, C160A2 and C140A1. The remaining links, Eth1/29 and Eth1/31, represent the vPC connection.
11 Scenario 3 - SmartFabric connected to Cisco ACI leaf switches This chapter covers deploying a PowerEdge MX SmartFabric connected to a Cisco ACI environment. By integrating PowerEdge MX into an ACI environment, compute resources in the MX environment can use ACI gateways and access ACI resources. The Cisco ACI environment validated includes a pair of Nexus C93180YC-EX switches as leaf switches as shown in Figure 65.
11.1 Validated environment In this scenario, two MX7000 chassis are joined to an existing Cisco ACI environment. The MX chassis environment consists of two MX9116n FSEs, two MX7116n FEMs, and four MX compute sleds. The connections between the ACI environment and the MX chassis are made using a double-sided multichassis link aggregation group (MLAG). The MLAG is called a vPC on the Cisco ACI side and a VLT on the PowerEdge MX side.
NOTE: No peer link is used between the Cisco ACI leaf switches. See the Cisco ACI documentation for more information. Cisco recommends a minimum of three Application Policy Infrastructure Controllers (APICs) in a production environment. For this validation effort, a single APIC, named APIC-1, is used. All Dell EMC PowerEdge R730xd rack servers and MX compute sleds in this example are running VMware ESXi 6.7.0.
11.2 Cisco APIC configuration The Cisco APIC configuration includes the ports connected to the R730xd rack servers (and jump box, if used) and the vPC that connects to the MX9116n VLT port channel. This includes configuration of the ACI fabric interfaces, switches, and application-level elements such as ACI endpoint groups (EPGs) and bridge domains (BDs).
NOTE: APIC configuration steps used in the validated environment are provided in the attachment named Scenario 3 – APIC config steps.pdf. See the Cisco ACI documentation for detailed APIC configuration instructions. 11.3 Deploy a SmartFabric 11.3.1 Define VLANs The VLAN settings used during SmartFabric deployment for this environment are shown in Table 9.
The configured VLANs for this example are shown in Figure 68.
Defined VLANs
11.3.2 Create the SmartFabric
To create a SmartFabric using the OME-M console, perform the following steps:
1. Open the OME-M console.
2. From the navigation menu, click Devices > Fabric.
3. In the Fabric pane, click Add Fabric.
4. In the Create Fabric window, complete the following:
a. Enter a Name, for example, SmartFabric1.
b. Optionally, enter a Description.
c. Click Next.
d. From the Design Type list, select the fabric design for the deployment.
e. From the Chassis-X list, select the first MX7000 chassis.
f. From the Switch-A list, select Slot-IOM-A1.
g. From the Chassis-Y list, select the second MX7000 chassis to join the fabric.
h. From the Switch-B list, select Slot-IOM-A2.
SmartFabric deployment design window
i. Click Next.
j. On the Summary page, verify the proposed configuration and click Finish.
The SmartFabric deploys. This process takes several minutes to complete.
11.3.3 Define uplinks
NOTE: To change the port speed or breakout configuration, see Section 4.4 and make those changes before creating the uplinks.
To define the uplinks from the MX9116n FSEs to the Cisco ACI leafs, follow these steps:
1. Open the OME-M console.
2. From the navigation menu, click Devices > Fabric.
3. Click the fabric name, for example, SmartFabric1.
4. In the left pane on the Fabric Details page, click Uplinks.
5. Click the Add Uplink button.
f. Under Tagged Networks, select the checkbox next to each VLAN in which the uplink will be a tagged member. The uplink is a tagged member of all six VLANs in this example, as shown in Figure 72.
g. If the uplink will be an untagged member of a VLAN, select the VLAN from the drop-down list next to Untagged Network. In this example, this is left at None.
NOTE: If the uplink is an untagged member of a VLAN, see the Cisco ACI documentation for setting the corresponding EPG to access (untagged) mode in ACI.
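After the uplink is created, VLAN tagging can also be spot-checked from the OS10EE CLI on either MX9116n. The following is an illustrative sketch only; the switch name, VLAN IDs, and port channel number are examples and will differ per deployment (output abbreviated):

```text
MX9116n-A1# show vlan
Q: A - Access (Untagged), T - Tagged
    NUM    Status    Description    Q Ports
*   1      Active                   A Eth 1/1/1-1/1/16
    1611   Active    VLAN1611       T Po 1
    1612   Active    VLAN1612       T Po 1
```

Each uplink VLAN should list the uplink port channel as a tagged (T) member.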
11.4 Deploy servers
11.4.1 Create server templates
A server template should be created for each unique server and NIC combination used in the chassis group. If all servers are identical, only one template needs to be created. For the hardware used in this example, three templates were created:
• MX740c with QLogic QL41232HMKR NIC
• MX740c with Intel XXV710 NIC
• MX840c with QLogic QL41232HMKR NIC
NOTE: To create a server template, follow the steps in Section 5.2.
VLANs added to server template
11.4.3 Deploy the server templates
To deploy the server templates, complete the steps in Section 5.
11.5 vCenter configuration overview The existing ACI environment has two PowerEdge R730xd rack servers connected to the ACI leafs. The rack servers are in a vSphere cluster named Management. After the SmartFabric is deployed, MX compute sleds can communicate with the rack servers and the vCenter, mgmtvc01. The MX compute sleds are joined to the vSphere cluster by an administrator as shown in Figure 76.
A VDS named VDS-Mgmt, along with six distributed port groups, one for each VLAN, is used as shown in Figure 77.
VDS and port groups used in the validated environment
NOTE: For each port group in the VDS in this example, both uplinks are active and the load balancing method used is Route based on physical NIC load, as recommended in the VMware Validated Design Documentation. Detailed vCenter configuration is beyond the scope of this document.
11.6 Verify configuration
This section covers methods to verify that the SmartFabric and ACI environments are configured properly. For validating the MX side of the solution, see Section 8.
11.6.1 Cisco ACI validation
11.6.1.1 Verify vPC configuration
Verify that the vPC connection from the Cisco ACI fabric to the Dell MX SmartFabric VLT, shown in Figure 66, is up and properly configured to allow designated VLANs and EPGs. This is done as follows:
1.
4. With the port channel/vPC interface policy group selected in the left pane, click VLANs at the top of the right pane as shown in Figure 79.
Cisco ACI vPC port channel VLANs and EPGs
5. Verify that the port channel includes all required VLANs and that EPGs are mapped to the correct VLANs.
6. Repeat steps 1 through 5 for the remaining leaf switch.
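If CLI access to the ACI leaf switches is available, the vPC state can also be confirmed directly from a leaf. This is a hedged sketch; the switch name and domain ID are examples, and output formatting varies by ACI release:

```text
leaf101# show vpc
Legend:
(*) - local vPC is down, forwarding via vPC peer-link
vPC domain id             : 10
Peer status               : peer adjacency formed ok
vPC keep-alive status     : Disabled
Number of vPCs configured : 1
```

The vPC to the MX9116n VLT should be listed as up, with the expected VLANs allowed.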
11.6.1.2 Verify physical interface configuration
The physical host-connected interfaces in the validated environment are those connected directly to the PowerEdge R730xd servers (and the jump box, if used), as shown in Figure 66. Verify that the physical interfaces from the Cisco ACI fabric to the servers are up and properly configured to allow designated VLANs and EPGs. This is done as follows:
1.
3. With an interface selected in the left navigational panel, click the VLANs tab in the navigation window as shown in Figure 81.
Cisco ACI interface VLANs and EPGs
4. Verify the interface includes all required VLANs and EPGs. Repeat for remaining interfaces as needed.
5. Repeat steps 1 through 4 for the remaining leaf switch.
11.6.1.3 Verify ACI is learning endpoints
To verify ACI is learning endpoints, do the following:
1. In the APIC GUI, go to Tenants > Tenant name > Application Profiles > Application Profile name > Application EPGs > select an Application EPG.
2. Click the Operational tab in the navigation window as shown in Figure 82.
Cisco ACI endpoints in appEPG1
3. All learned endpoints for the selected EPG are displayed along with their VLAN, IP address, and interface.
4.
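With CLI access to a leaf switch, the same endpoint learning can be checked with the show endpoint command. This is a sketch; the VLAN, MAC address, and interface below are placeholders:

```text
leaf101# show endpoint vlan 1611
VLAN/Domain    Encap VLAN    MAC Address       Interface
1611           vlan-1611     0050.56aa.bbcc    po3
```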
11.6.2 Verify connectivity between VMs In ACI, by default, communication flows freely within EPGs, but not between EPGs. To enable inter-EPG communication, contracts are configured on the APIC. This example is configured for unrestricted inter-EPG communication as shown in steps 17 through 19 in the Scenario 3 – APIC config steps.pdf attachment. Connectivity is verified by pinging between the VMs shown in Figure 66.
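A basic reachability check from a guest VM might look like the following; the target IP address is a placeholder for a VM in another EPG, and a successful run reports no packet loss:

```text
vm1$ ping -c 4 172.16.12.20
4 packets transmitted, 4 received, 0% packet loss
```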
12 SmartFabric troubleshooting
This section provides information on errors that might be encountered while working with a SmartFabric. Troubleshooting and remediation actions are also included to assist in resolving errors.
12.1 Troubleshooting errors encountered for port group breakout
The creation of a SmartFabric involves executing specific steps in a recommended order. The SmartFabric deployment consists of four broad steps, all completed using the OME-M console:
1. 2. 3. 4.
Error: I/O Module is not in fabric mode
2. Configuration of the breakout requires you to select the HardwareDefault breakout type first. If the breakout type is directly selected without first selecting HardwareDefault, the following error displays:
Error: interface fanout type is not hardware default
3. Once the uplinks are added, they are most often associated with tagged or untagged VLANs.
12.2 Troubleshooting Spanning Tree Protocol (STP)
Spanning Tree Protocol (STP) prevents loops in the network. Loops can occur when multiple redundant paths exist between switches. To prevent the network from going down due to loops, various flavors of STP are available. Since its initial introduction, STP has evolved into several variants.
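The STP variant running on the MX switches can be confirmed from the OS10EE CLI. This is an illustrative sketch (output abbreviated); OS10EE uses RPVST+ by default, but verify the mode in your environment:

```text
MX9116n-A1# show spanning-tree brief
Spanning tree enabled protocol rapid-pvst
...
```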
12.3 Verify VLT/vPC configuration on upstream switches Configuring a single VLT domain with Dell EMC Networking upstream switches or a single vPC domain with Cisco upstream switches is required. Creating two VLT/vPC domains may cause a network loop. See Scenario 1 and Scenario 2 for the topology used in the deployment example. The following example shows a mismatch of the VLT domain IDs on VLT peer switches. To resolve this issue, ensure that a single VLT domain is used across the VLT peers.
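The domain IDs can be compared on each VLT peer with the OS10EE show vlt command. The domain ID and MAC below are placeholders; in SmartFabric mode the VLT domain is created automatically:

```text
MX9116n-A1# show vlt 127
Domain ID            : 127
Unit ID              : 1
Role                 : primary
Peer Unit ID         : 2
VLT MAC Address      : xx:xx:xx:xx:xx:xx
```

Run the same command on the peer switch and confirm that the Domain ID matches.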
• Verify the Topology LLDP settings. This can be verified by selecting iDRAC Settings > Connectivity in the compute sled's iDRAC GUI. Ensure that this setting is set to Enabled as shown in the figure below.
Ensure Topology LLDP is enabled
12.5 Troubleshooting uplink errors
There might be additional settings to enable or disable after uplinks are added to the fabric.
12.5.
Toggle AutoNeg dialog box
12.5.2 Set uplink ports to administratively up
The uplink ports on the switch might be administratively down. Enabling the uplink ports can be carried out from the OME-M console. The uplink ports can be administratively down when a port group breakout happens, especially for FC breakouts. The OME-M console can be used to disable or enable the ports on MX switches. The following steps illustrate setting the administrative state on ports 41 and 42 of an MX9116n.
1.
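Once the ports are enabled, their state can also be confirmed from the OS10EE CLI. The ports and output below are illustrative only:

```text
MX9116n-A1# show interface status
Port         Description  Status  Speed  Duplex  Mode
Eth 1/1/41                up      100G   full    -
Eth 1/1/42                up      100G   full    -
```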
The following example shows interface ethernet 1/2 with auto-negotiation enabled on the interface:
Nexus-3232C-Leaf1(config-if)# do show int eth 1/2
Ethernet1/2 is down (XCVR not inserted)
admin state is down, Dedicated Interface
Hardware: 40000/100000 Ethernet, address: 00fe.c8ca.f367 (bia 00fe.c8ca.
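If fixed-speed optics or cables require it, auto-negotiation can be turned off on the Nexus interface. This is a sketch; the interface and speed are examples, and command availability depends on the NX-OS release and platform:

```text
Nexus-3232C-Leaf1(config)# interface ethernet 1/2
Nexus-3232C-Leaf1(config-if)# speed 100000
Nexus-3232C-Leaf1(config-if)# no negotiate auto
```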
Fabric details
Fabric topology with no uplinks
The resolution is to add the uplinks and verify that the fabric becomes healthy.
12.6 Troubleshooting FC/FCoE
The following points can be verified while troubleshooting FC or FCoE errors:
• Ensure that the firmware and drivers are up to date on the CNAs.
• Check the storage guide to ensure that the CNAs are supported by the storage used in the deployment. For the qualified support matrix, see the E-Lab Navigator and the Dell EMC Storage Compatibility Matrix for SC Series, PS Series, and FS Series.
• Verify that the port group breakout mode is appropriately configured.
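On the MX switches, FCoE session state can be inspected from the OS10EE CLI. Treat this as a sketch only: command availability and output depend on the OS10EE version and on whether the switch is acting as an FSB or in F_Port mode, and all addresses shown are placeholders:

```text
MX9116n-A1# show fcoe sessions
Enode MAC          Enode Interface  FCF MAC            VLAN  FC-ID
aa:bb:cc:dd:ee:f0  Eth 1/1/1        54:bf:64:xx:xx:xx  30    01:02:03
```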
A Hardware overview This section briefly describes the hardware that is used to validate the deployment examples in this document. Appendix E contains a complete listing of hardware and software validated for this guide.
The MX7000 includes three I/O fabrics: Fabrics A and B for Ethernet I/O Module (IOM) connectivity, and Fabric C for SAS and Fibre Channel (FC) connectivity. Each fabric provides two slots for redundancy.
A.2 Dell EMC PowerEdge MX740c compute sled The PowerEdge MX740c is a two-socket, full-height, single-width sled with impressive performance and scalability. It is ideal for dense virtualization environments and can serve as a foundation for collaborative workloads. An MX7000 chassis supports up to eight MX740c sleds.
A.3 Dell EMC PowerEdge MX840c compute sled
The PowerEdge MX840c, a powerful four-socket, full-height, double-width sled, features dense compute and memory capacity and a highly expandable storage subsystem. It is the ultimate scale-up server that excels at running a wide range of database applications, substantial virtualization, and software-defined storage environments. An MX7000 chassis supports up to four MX840c sleds.
A.4 Dell EMC PowerEdge MX9002m module The Dell EMC MX9002m module controls overall chassis power, cooling, and hosts the OME-M console. Two external Ethernet ports are provided to allow management connectivity and to connect additional MX7000 chassis in a single logical chassis. An MX7000 supports two MX9002m modules for redundancy. Figure 98 shows a single MX9002m module and its components. Dell EMC PowerEdge MX9002m module The following MX9002m module components are labeled in Figure 98. 1. 2. 3. 4. 5.
A.5 Dell EMC Networking MX9116n Fabric Switching Engine The Dell EMC Networking MX9116n FSE is a scalable, high-performance, low latency 25GbE switch purpose-built for the PowerEdge MX platform. The MX9116n FSE provides enhanced capabilities and costeffectiveness for the enterprise, mid-market, Tier2 cloud, and NFV service providers with demanding compute and storage traffic environments.
A.6 Dell EMC Networking MX7116n Fabric Expander Module The Dell EMC Networking MX7116n Fabric Expander Module (FEM) acts as an Ethernet repeater, taking signals from attached compute sleds and repeating them to the associated lanes on the external QSFP28-DD ports. The MX7116n FEM provides eight internal 25GbE connections to the chassis and two external QSFP28-DD interfaces. There is no operating system or switching ASIC on the MX7116n FEM, so it never requires an upgrade.
Dell EMC Networking MX5108n
The following MX5108n components are labeled in Figure 101:
1. Luggage Tag
2. Storage USB Port
3. Micro-B USB console port
4. Power and indicator LEDs
5. Module insertion/removal latch
6. One QSFP+ port
7. Two QSFP28 ports
8. Four 10GbE BASE-T ports
NOTE: While the examples in this guide are specific to the MX9116n FSE and MX7116n FEM, the use of two MX5108n switches in a single chassis is supported for the solutions shown.
Dell EMC PowerEdge MX740c mezzanine cards
Table 10 shows the port mapping for fabric A. The MX9116n FSE in slot A1 maps dual-port mezzanine cards to odd-numbered ports. The MX7116n FEM, connected to the MX9116n FSE, maps to virtual ports with each port representing a compute sled attached to the MX7116n FEM.
Figure 103 shows three (expandable to ten) MX7000 chassis in a single Scalable Fabric Architecture. The first two chassis each contain one MX9116n FSE and one MX7116n FEM. Chassis 3-10 each contain two MX7116n FEMs. All connections in the figure use QSFP28-DD connections.
Scalable Fabric example using Fabric A
In this document, a scalable fabric architecture is deployed across two PowerEdge MX7000 chassis. Both MX9116n FSEs operate in SmartFabric mode.
Figure 104 shows the scalable fabric architecture network and how each of the MX9116n FSEs connect to a pair of leaf switches using QSFP28 cables. The MX9116n FSEs interconnect through a pair of QSFP28-DD ports. MX7116n FEMs connect to the MX9116n FSE in the other chassis as shown.
A.10 QSFP28 double density connectors
Quad Small Form-Factor Pluggable 28 Double Density (QSFP28-DD) connectors expand on the QSFP28 pluggable form factor. By doubling the number of available lanes from four to eight, with each lane operating at 25 Gbps, each connection provides 200 Gbps.
NOTE: A QSFP28-DD transceiver is not compatible with a QSFP28 port due to the specifications required to lengthen the PCB connector to allow for the additional four lanes.
Management network
NOTE: See section 2.2, PowerEdge MX7000 Multi-Chassis Management groups, in the Dell EMC PowerEdge MX Networking Architecture Guide for more information.
B OpenManage Enterprise Modular console The PowerEdge MX9002m module hosts the OME-M console. OME-M is the latest addition to the Dell OpenManage Enterprise suite of tools and provides a centralized management interface for the PowerEdge MX platform. OME-M console features include: • • • • • • B.
When first logging in to the OME-M console, the Chassis Deployment Wizard is displayed. In this document, only MCM group definition settings are initially configured. All settings are optional and can be completed later by selecting Overview > Configure > Initial Configuration on the chassis page. To complete the Chassis Deployment Wizard, complete the following steps:
1. In the Chassis Deployment Wizard window, click the Group Definition listing in the left navigational panel.
2.
B.3 PowerEdge MX Ethernet I/O Module initial deployment All switches running OS10EE form a redundant management cluster that provides a single REST API endpoint to OME-M to manage all switches in a chassis or across all chassis in an MCM group. Figure 108 shows the PowerEdge MX networking IOMs in the MCM group. This page is accessed by selecting Devices > I/O Modules. Each IOM can be configured directly from the OME-M console.
8. Click Apply. 9. Repeat steps 3-7 for the second MX9116n, IOM-A2.
C Rack-mounted switches This section covers the rack-mounted networking switches used in the examples in this guide. C.1 Dell EMC PowerSwitch S3048-ON The Dell EMC PowerSwitch S3048-ON is a 1-Rack Unit (RU) switch with forty-eight 1GbE BASE-T ports and four 10GbE SFP+ ports. In this document, one S3048-ON supports out-of-band (OOB) management traffic for all examples. Dell EMC PowerSwitch S3048-ON C.
D Additional information
D.1 Delete a SmartFabric
To remove the SmartFabric using the OME-M console, perform the following steps:
1. Open the OME-M console.
2. From the navigation menu, click Devices > Fabric.
3. Select the SmartFabric.
4. Click the Delete button.
5. In the delete fabric dialog box, click Yes.
All participating switches reboot to Full Switch mode.
NOTE: Any configuration not completed by the OME-M console is lost when switching between IOM operating modes.
D.
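After the switches finish rebooting following fabric deletion, the operating mode can be confirmed from the OS10EE CLI. A sketch of the expected check:

```text
OS10# show switch-operating-mode
Switch-Operating-Mode : Full Switch Mode
```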
D.4 Reset an OS10EE switch to factory defaults To reset OS10EE switches back to the factory default configuration, enter the following commands: OS10# delete startup-configuration Proceed to delete startup-configuration [yes/no(default)]:yes OS10# reload System configuration has been modified. Save? [yes/no]:no Proceed to reboot the system? [confirm yes/no]:yes The switch reboots with default configuration settings. D.
E Validated components
E.1 Scenarios 1 and 2
The following tables include the hardware, software, and firmware used to configure and validate Scenario 1 and Scenario 2 in this document.
E.1.1 Dell EMC Networking switches
Dell EMC Networking switches and OS versions – Scenarios 1 and 2

Qty  Item                                                  Version
2    Dell EMC PowerSwitch Z9100-ON leaf switches           10.4.0E(R3)
1    Dell EMC PowerSwitch S3048-ON OOB management switch   10.4.

E.1.2
E.1.3 Cisco Nexus switches
Nexus switches and OS versions – Scenarios 1 and 2

Qty  Item               Version
2    Cisco Nexus 3232C  7.0(3)I4(1)

E.2 Scenario 3
The following tables include the hardware, software, and firmware used to configure and validate Scenario 3 in this document:
E.2.1 Dell EMC Networking switches
Dell EMC Networking Switches and OS versions – Scenario 3

Qty  Item                                                  OS Version
1    Dell EMC PowerSwitch S3048-ON OOB management switch   10.4.1.

E.2.2
MX740c sled details – Scenario 3

Qty per sled  Item                                                      Version
2             Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz                -
12            16GB DDR4 DIMMs (192GB total)                             -
1             Boot Optimized Storage Solution (BOSS) S1 Controller
              w/ 1x120GB SATA SSD                                       2.6.13.3011
1             PERC H730P MX                                             25.5.5.0005
2             600GB SAS HDD                                             -
1             Intel(R) Ethernet 2x25GbE XXV710 mezzanine card or
              QLogic 2x25GbE QL41232HMKR mezzanine card                 18.5.17 (Intel) or 14.07.07 (QLogic)
-             BIOS                                                      1.0.2
-             iDRAC with Lifecycle Controller                           3.20.20.
F Technical resources
Dell EMC Networking Guides
Dell EMC PowerEdge MX IO Guide
Dell EMC PowerEdge MX Network Architecture Guide
Dell EMC PowerEdge MX SmartFabric Deployment Video
Dell EMC PowerEdge MX SmartFabric Deployment with Cisco ACI Video
MX Port-Group Configuration Errors Video
MX Port-Group Configuration Video
Dell EMC OpenManage Enterprise-Modular Edition User's Guide v1.00.01
OS10 Enterprise Edition User Guide for PowerEdge MX IO Modules Release 10.4.
G Support and feedback
Contacting Technical Support
Support Contact Information
Web: http://www.dell.com/support
Telephone: USA: 1-800-945-3355
Feedback for this document
Dell EMC encourages readers to provide feedback on the quality and usefulness of this publication by sending an email to Dell_Networking_Solutions@Dell.