Dell EMC PowerEdge MX SmartFabric Deployment Guide

Abstract: This document provides steps for configuring and deploying PowerEdge MX networking switches in SmartFabric mode. Deployment examples include Dell EMC Networking, Cisco Nexus, and Cisco ACI environments.
Revisions

November 2018: Added Scenario 3 – Cisco ACI environment; added MX5108n switch as an option for deployment scenarios
September 2018: Initial release

The information in this publication is provided “as is.” Dell Inc. makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.
1 Introduction

The new Dell EMC PowerEdge MX, a unified, high-performance data center infrastructure, provides the agility, resiliency, and efficiency to optimize a wide variety of traditional and emerging data center workloads and applications. With its kinetic architecture and agile management, PowerEdge MX dynamically configures compute, storage, and fabric; increases team effectiveness; and accelerates operations.
This document provides examples for the deployment of two PowerEdge MX7000 chassis and the setup and configuration of the new switch operating mode, SmartFabric. This guide also demonstrates connectivity with different leaf switch options, including:

• Dell EMC Networking Z9100-ON
• Cisco Nexus 3232C
• Cisco Nexus C93180YC-EX in Application Centric Infrastructure (ACI) mode

Table 1 outlines what this document is and is not.
2 Hardware overview This section briefly describes the hardware that is used to validate the deployment examples in this document. Appendix B contains a complete listing of hardware and software validated for this guide.
The MX7000 includes three I/O fabrics: Fabrics A and B support Ethernet I/O Module (IOM) connectivity, and Fabric C supports SAS and Fibre Channel (FC) connectivity. Each fabric provides two slots for redundancy.
2.1.1 Dell EMC PowerEdge MX740c compute sled The PowerEdge MX740c is a two-socket, full-height, single-width sled with impressive performance and scalability. It is ideal for dense virtualization environments and can serve as a foundation for collaborative workloads. An MX7000 chassis supports up to eight MX740c sleds.
2.1.2 Dell EMC PowerEdge MX840c compute sled The PowerEdge MX840c, a powerful four-socket, full-height, double-width sled features dense compute and memory capacity and a highly expandable storage subsystem. It is the ultimate scale-up server that excels at running a wide range of database applications, substantial virtualization, and software-defined storage environments. An MX7000 chassis supports up to four MX840c sleds.
2.1.3 Dell EMC PowerEdge MX9002m module The Dell EMC MX9002m module controls overall chassis power, cooling, and hosts the OpenManage Enterprise Modular (OME-M) console. Two external Ethernet ports are provided to allow management connectivity and to connect additional MX7000 chassis in a single logical chassis. An MX7000 supports two MX9002m modules for redundancy. Figure 6 shows a single MX9002m module and its components.
2.1.4 Dell EMC Networking MX9116n Fabric Switching Engine The Dell EMC Networking MX9116n Fabric Switching Engine (FSE) is a scalable, high-performance, low latency 25GbE switch purpose-built for the PowerEdge MX platform. The MX9116n FSE provides enhanced capabilities and cost-effectiveness for the enterprise, mid-market, Tier2 cloud, and NFV service providers with demanding compute and storage traffic environments.
2.1.5 Dell EMC Networking MX7116n Fabric Expander Module The Dell EMC Networking MX7116n Fabric Expander Module (FEM) acts as an Ethernet repeater, taking signals from attached compute sleds and repeating them to the associated lanes on the external QSFP28-DD ports. The MX7116n FEM provides eight internal 25GbE connections to the chassis and two external QSFP28-DD interfaces. There is no operating system or switching ASIC on the MX7116n FEM, so it never requires an upgrade.
Dell EMC Networking MX5108n

The following MX5108n components are labeled in Figure 9:

1. Luggage tag
2. Storage USB port
3. Micro-B USB console port
4. Power and indicator LEDs
5. Module insertion/removal latch
6. One QSFP+ port
7. Two QSFP28 ports
8. Four 10GbE BASE-T ports

Note: While the examples in this guide are specific to the MX9116n FSE and MX7116n FEM, the use of two MX5108n switches in a single chassis is supported for the solutions shown.
2.2.2 Dell EMC Networking Z9100-ON

The Dell EMC Networking Z9100-ON is a 1-RU multilayer switch with thirty-two QSFP28 ports supporting 10/25/40/50/100GbE and two 10GbE SFP+ ports. A pair of Z9100-ON switches is used as leaf switches in Scenario 1 in this guide.

Dell EMC Networking Z9100-ON

2.2.3 Cisco Nexus 3232C

The Cisco Nexus 3232C is a 1-RU fixed form-factor 100GbE switch with thirty-two QSFP28 ports supporting 10/25/40/50/100GbE.
3 PowerEdge MX7000 chassis fabrics

The PowerEdge MX7000 chassis includes two general-purpose I/O fabrics, Fabric A and Fabric B. The vertically aligned compute sleds in slots one through eight connect to the horizontally aligned IOMs in slots A1, A2, B1, and B2. This orthogonal connection method results in a midplane-free design and allows the adoption of new I/O technologies without the burden of having to upgrade the midplane. The MX740c supports two mezzanine cards, and the MX840c supports four mezzanine cards.
4 PowerEdge MX IOM overview 4.1 OS10 Enterprise Edition The Dell EMC Networking MX9116n FSE and MX5108n support Dell EMC Networking OS10 Enterprise Edition (OS10EE). OS10EE is a network operating system supporting multiple architectures and environments.
4.3 Full Switch mode

All switch interfaces are assigned to VLAN 1 by default and are in the same Layer 2 bridge domain. Interfaces must join a bridge domain (VLAN) before they can forward frames. All configuration changes are saved in the running configuration by default. To display the current configuration, use the show running-configuration command.
5 Scalable Fabric Architecture overview A new concept with the PowerEdge MX platform is the Scalable Fabric Architecture. A Scalable Fabric spans multiple chassis and allows them to behave like a single chassis from a networking perspective. A Scalable Fabric consists of two main components, a pair of MX9116n FSEs in the first two chassis and additional pairs of MX7116n FEMs in the remaining chassis. Each MX7116n FEM connects to the MX9116n FSE corresponding to its fabric and slot.
5.1 OOB management network

Figure 14 shows a Dell EMC Networking S3048-ON used as an OOB management switch. Management ports from the leaf switches and the MX9002m modules connect to the S3048-ON as shown. Management ports on other equipment in the rack (not shown), such as PowerEdge server iDRACs, are also connected to the S3048-ON. Not shown is the S3048-ON connecting to the management network core.

Note: A pair of Dell EMC Networking Z9100-ON switches is shown for the leaf switch layer.
5.2 Scalable Fabric Architecture network Figure 15 shows the Scalable Fabric Architecture network and how each of the MX9116n FSEs connect to a pair of leaf switches using QSFP28 cables. The MX9116n FSEs interconnect through a pair of QSFP28-DD ports. MX7116n FEMs connect to the MX9116n FSE in the other chassis as shown. Scalable Fabric Architecture topology Note: For more information on QSFP28-DD connectors, see Appendix A.5.
6 OpenManage Enterprise Modular console

The PowerEdge MX9002m module hosts the OpenManage Enterprise Modular (OME-M) console. OME-M is the latest addition to the Dell OpenManage Enterprise suite of tools and provides a centralized management interface for the PowerEdge MX platform.
Dell EMC PowerEdge MX9002m module daisy chain cabling

CxMMx   Port Gb1          Port Gb2
C1MM1   To S3048-ON       C2MM1 Port Gb1
C2MM1   C1MM1 Port Gb2    C1MM2 Port Gb2
C1MM2   C2MM2 Port Gb2    C2MM1 Port Gb2
C2MM2   To S3048-ON       C1MM2 Port Gb1

6.2 PowerEdge MX7000 initial deployment

Initial configuration may be done through the LCD touchscreen. If DHCP is not used, use the LCD touchscreen to assign a static IP address, subnet mask, and gateway to each chassis.
After the window closes, click the Home button on the navigation pane. The group appears in the upper left corner of the page with all participating chassis members. It may take an additional few minutes for the secondary chassis to be added. When complete, both chassis should appear on the Home page with the status icon as shown in Figure 17.
6.3 PowerEdge MX7000 component management All switches running OS10EE form a redundant management cluster that provides a single REST API endpoint to OME-M to manage all switches in a chassis or across all chassis in an MCM group. Figure 18 shows the PowerEdge MX networking IOMs in the MCM group. This page is accessed by selecting Devices > I/O Modules. Each IOM can be configured directly from the OME-M console.
7 Scenario 1 - SmartFabric deployment while connected to Z9100-ON switches Figure 19 shows the production topology using a pair of Z9100-ONs as leaf switches. This section walks through configuring the Z9100-ONs as well as creating a SmartFabric and the corresponding uplinks. SmartFabric with Z9100-ON leaf switches Note: See Appendix A.5 for more information on QSFP28-DD cables.
7.1 Dell EMC Networking Z9100-ON leaf switch configuration The following section outlines the configuration commands issued to the Dell EMC Networking Z9100-ON leaf switches. The switches start at their factory default settings per Appendix A.2. 1. Use the following commands to set the hostname, and to configure the OOB management interface and default gateway.
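As a minimal sketch of this step, the OS10EE commands might resemble the following. The hostname, IP address, and gateway shown here are placeholder values for illustration only; substitute values appropriate for your environment.

```
Z9100-Leaf1# configure terminal
Z9100-Leaf1(config)# hostname Z9100-Leaf1
Z9100-Leaf1(config)# interface mgmt 1/1/1
Z9100-Leaf1(config-if-ma-1/1/1)# no ip address dhcp
Z9100-Leaf1(config-if-ma-1/1/1)# ip address 100.67.166.32/24
Z9100-Leaf1(config-if-ma-1/1/1)# no shutdown
Z9100-Leaf1(config-if-ma-1/1/1)# exit
Z9100-Leaf1(config)# management route 0.0.0.0/0 100.67.166.254
```

The management route command installs a default route for the OOB management interface without affecting the front-panel data plane.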
4. Configure the port channels that connect to the downstream MX9116n FSEs. Then, exit configuration mode and save the configuration.
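The commands below are an illustrative sketch of this step; the port-channel number, member port, and allowed VLAN are assumptions chosen to match the VLAN 10 example used elsewhere in this guide.

```
Z9100-Leaf1(config)# interface port-channel 1
Z9100-Leaf1(config-if-po-1)# description "To MX9116n FSEs"
Z9100-Leaf1(config-if-po-1)# switchport mode trunk
Z9100-Leaf1(config-if-po-1)# switchport trunk allowed vlan 10
Z9100-Leaf1(config-if-po-1)# vlt-port-channel 1
Z9100-Leaf1(config-if-po-1)# exit
Z9100-Leaf1(config)# interface ethernet 1/1/1
Z9100-Leaf1(config-if-eth1/1/1)# channel-group 1 mode active
Z9100-Leaf1(config-if-eth1/1/1)# no shutdown
Z9100-Leaf1(config-if-eth1/1/1)# end
Z9100-Leaf1# copy running-configuration startup-configuration
```

The vlt-port-channel command associates the port channel with the same LACP port channel on the VLT peer, forming the double-sided MLAG toward the MX9116n FSEs.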
7.2 Deploy a SmartFabric

SmartFabric deployment consists of four broad steps, all completed using the OME-M console:

1. Create the VLANs to be used in the fabric.
2. Select switches and create the fabric based on the physical topology desired.
3. Create uplinks from the fabric to the existing network and assign VLANs to those uplinks.
4. Deploy the appropriate server templates to the compute sleds.

7.2.1 Define VLANs

To define VLANs using the OME-M console, perform the following steps:
c. Click Next.
d. From the Design Type list, select 2x MX9116n Fabric Switching Engine in different chassis.
e. From the Chassis-X list, select the first MX7000 chassis.
f. From the Switch-A list, select Slot-IOM-A1.
g. From the Chassis-Y list, select the second MX7000 chassis to join the fabric.
h. From the Switch-B list, select Slot-IOM-A2.
i. Click Next.
j. On the Summary page, verify the proposed configuration and click Finish.
7.2.3 Define uplinks

After initial deployment, the new fabric shows an Uplink Count of zero and displays a warning icon. The lack of a fabric uplink results in a failed health check. To create uplinks, follow these steps:

1. Open the OME-M console.
2. From the navigation menu, click Devices > Fabric.
3. Click the fabric name, SmartFabric.
4. In the Fabric Details pane, click Uplinks.
5. Click the Add Uplinks button.
6. In the Add Uplink window, complete the following:
7.2.4 Server templates

A server template contains the parameters extracted from a server and allows these parameters to be quickly applied to multiple compute sleds. The templates contain settings for the following categories:

• Local access configuration
• Location configuration
• Power configuration
• Chassis network configuration
• Slot configuration
• Setup configuration

Additionally, server templates allow an administrator to associate VLANs with compute sleds.
Server template network settings

7.2.4.3 Deploy a server template

To deploy the server template, complete the following steps:

1. From the Deploy pane, select the MX740c with Intel mezzanine server template.
2. From the Deploy pane, click Deploy Template.
3. In the Deploy Template window, complete the following:
a. Click the Select button to choose which slots or compute sleds to deploy the template to.
b. Select the Do not forcefully reboot the host OS option.
c. Click Next.
d. Choose Run Now.
e. Click Finish.
7.3 Verify configuration

This section covers the validation of the SmartFabric and the Z9100-ON leaf switches.

7.3.1 PowerEdge MX7000 validation

This section covers validation specific to the Dell EMC PowerEdge MX7000.

7.3.1.1 Show the MCM group topology

The OME-M console can be used to show the physical cabling of the SmartFabric.

1. Open the OME-M console.
2. In the left pane, click View Topology.
3. Click the lead chassis and then click Show Wiring.
4. Click the icons to show cabling.
7.3.1.2 Show the SmartFabric status

The OME-M console can be used to show the overall health of the SmartFabric.

1. Open the OME-M console.
2. From the navigation menu, click Devices > Fabric.
3. Select SmartFabric1 to expand the details of the fabric.

Figure 25 shows the details of the fabric.

Fabric status details

The Overview tab shows the current inventory, including switches, servers, and interconnects between the MX9116n FSEs in the fabric. Figure 26 shows the SmartFabric switch in a healthy state.
Figure 28 shows the Topology tab and the VLTi automatically created by SmartFabric mode.

SmartFabric overview fabric diagram

Figure 29 displays the wiring diagram table from the Topology tab.
7.3.1.3 Show port status

The OME-M console can be used to show MX9116n FSE port status.

1. Open the OME-M console.
2. From the navigation menu, click Devices > I/O Modules.
3. Select an IOM and click the View Details button to the right of the inventory screen. The IOM Overview page for that device is displayed.
4. From the IOM Overview page, click Hardware.
5. Click the Port Information tab.

Figure 30 shows ethernet1/1/1, 1/1/3, 1/71/1, and 1/72/1 in the correct operational status (Up).
7.3.1.5 show discovered-expanders

The show discovered-expanders command is only available on the MX9116n FSE and displays the service tags of the MX7116n FEMs attached to the MX9116n FSEs and the associated port group and virtual slot.

C140A1# show discovered-expanders
Service   Model         Type  Chassis      Chassis-  Port-group  Virtual
tag                           service-tag  slot                  Slot-Id
--------------------------------------------------------------------------
D10DXC2   MX7116n FEM   1     SKY002Z      A1        1/1/1       71
7.3.1.8 show qos system

The show qos system command displays the QoS configuration applied to the system. The command is useful to verify the service policy created manually or automatically by a SmartFabric deployment.

C140A1# show qos system
Service-policy (input): PM_VLAN
ETS Mode : off
VLT Peer Unit ID  System MAC Address  Status  IP Address           Version
--------------------------------------------------------------------------------
2                 4c:76:25:e8:e8:40   up      fda5:74c8:b79e:1::2  1.0

7.3.2.2 show lldp neighbors

The show lldp neighbors command provides information about connected devices. In this case, ethernet1/1/1 and ethernet1/1/3 connect to the two MX9116n FSEs, C160A2 and C140A1. The remaining links, ethernet1/1/29 and ethernet1/1/31, represent the VLTi connection.
VLAN 10
Executing IEEE compatible Spanning Tree Protocol
Root ID    Priority 32778, Address 4c76.25e8.e840
Root Bridge hello time 2, max age 20, forward delay 15
Bridge ID  Priority 32778, Address 4c76.25e8.
8 Scenario 2 - SmartFabric deployment while connected to Cisco Nexus 3232C leaf switches

Figure 31 shows the production topology using a pair of Cisco Nexus 3232C switches as leaf switches. This section covers configuring the Cisco Nexus 3232Cs and creating a SmartFabric with the corresponding uplinks.

SmartFabric with Cisco Nexus 3232C leaf switches

Note: See Appendix A.5 for more information on QSFP28-DD cables.
8.1 Cisco Nexus 3232C leaf switch configuration The following section outlines the configuration commands issued to the Cisco Nexus 3232C leaf switches. The switches start at their factory default settings, as described in Appendix A.3. 1. Enter the following commands to set the hostname, enable required features, and enable RPVST spanning tree mode. Configure the management interface and default gateway.
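A minimal NX-OS sketch of this step follows. The hostname, management address, and gateway are placeholder values for illustration; substitute values for your environment.

```
switch# configure terminal
switch(config)# hostname 3232C-Leaf1
3232C-Leaf1(config)# feature lacp
3232C-Leaf1(config)# feature vpc
3232C-Leaf1(config)# feature lldp
3232C-Leaf1(config)# spanning-tree mode rapid-pvst
3232C-Leaf1(config)# interface mgmt0
3232C-Leaf1(config-if)# vrf member management
3232C-Leaf1(config-if)# ip address 100.67.166.34/24
3232C-Leaf1(config-if)# exit
3232C-Leaf1(config)# vrf context management
3232C-Leaf1(config-vrf)# ip route 0.0.0.0/0 100.67.166.254
```

On NX-OS, the management default gateway is configured as a static route inside the dedicated management VRF rather than as a global default route.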
3. Enter the following commands to configure the port channels that connect to the downstream MX9116n FSEs. Then, exit configuration mode and save the configuration.
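As an illustrative sketch of this step (the port-channel number, member ports, and VLAN list are assumptions based on the interfaces and VLANs referenced later in this scenario):

```
3232C-Leaf1(config)# interface port-channel 1
3232C-Leaf1(config-if)# switchport
3232C-Leaf1(config-if)# switchport mode trunk
3232C-Leaf1(config-if)# switchport trunk allowed vlan 1,10
3232C-Leaf1(config-if)# vpc 1
3232C-Leaf1(config-if)# exit
3232C-Leaf1(config)# interface Ethernet1/1, Ethernet1/3
3232C-Leaf1(config-if-range)# channel-group 1 mode active
3232C-Leaf1(config-if-range)# no shutdown
3232C-Leaf1(config-if-range)# end
3232C-Leaf1# copy running-config startup-config
```

The vpc 1 statement binds the port channel to the same vPC number on the peer Nexus switch, forming the double-sided MLAG toward the MX9116n VLT pair.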
Per-vlan consistency status       : success
Type-2 inconsistency reason       : Consistency Check Not Performed
vPC role                          : secondary, operational primary
Number of vPCs configured         : 1
Peer Gateway                      : Disabled
Dual-active excluded VLANs        : -
Graceful Consistency Check        : Enabled
Auto-recovery status              : Disabled
Delay-restore status              : Timer is off.(timeout = 30s)
Delay-restore SVI status          : Timer is off.
Name                   Type  Local Value  Peer Value
----                   ----  -----------  ----------
Dot1q Tunnel           1     no           no
Switchport Isolated    1     0            0
vPC card type          1     N9K TOR      N9K TOR
Allowed VLANs          -     1,10         1,10
Local suspended VLANs  -     -            -

8.3.3 show lldp neighbors

The show lldp neighbors command provides information about LLDP neighbors. In this case, Eth1/1 and Eth1/3 are connected to the two MX9116n FSEs, C160A2 and C140A1. The remaining links, Eth1/29 and Eth1/31, represent the vPC peer link.
9 Scenario 3 - SmartFabric deployment while connected to Cisco ACI leaf switches This chapter covers deploying a PowerEdge MX SmartFabric connected to a Cisco ACI environment. By integrating PowerEdge MX into an ACI environment, compute resources in the MX environment can use ACI gateways and access ACI resources. The Cisco ACI environment validated includes a pair of Nexus C93180YC-EX switches as leaf switches as shown in Figure 32.
9.1 Validated environment In this scenario, two MX7000 chassis are joined to an existing Cisco ACI environment. The MX chassis environment consists of two MX9116n FSEs, two MX7116n FEMs, and four MX compute sleds. The connections between the ACI environment and the MX chassis are made using a double-sided multichassis link aggregation group (MLAG). The MLAG is called a vPC on the Cisco ACI side and a VLT on the PowerEdge MX side.
Note: No peer link is used between the Cisco ACI leaf switches. See the Cisco ACI documentation for more information. Cisco recommends a minimum of three Application Policy Infrastructure Controllers (APICs) in a production environment. For this validation effort, a single APIC, named APIC-1, is used. All PowerEdge R730xd rack servers and MX compute sleds in this example are running VMware ESXi 6.7.0. VMs named “web,” “app,” and “db” on the ESXi hosts are running Ubuntu Linux guest operating systems.
9.2 Cisco APIC configuration The Cisco APIC configuration includes the ports connected to the R730xd rack servers (and jump box, if used) and the vPC that connects to the MX9116n VLT port channel. This includes configuration of the ACI fabric interfaces, switches, and application-level elements such as ACI endpoint groups (EPGs) and bridge domains.
9.3 Deploy a SmartFabric 9.3.1 Define VLANs The VLAN settings used during SmartFabric deployment for this environment are shown in Table 8.
The configured VLANs for this example are shown in Figure 35.

Defined VLANs

9.3.2 Create the SmartFabric

To create a SmartFabric using the OME-M console, perform the following steps:

1. Open the OME-M console.
2. From the navigation menu, click Devices > Fabric.
3. In the Fabric pane, click Add Fabric.
4. In the Create Fabric window, complete the following:
a. Enter a Name, for example, SmartFabric1.
b. Optionally, enter a Description.
c. Click Next.
d. From the Design Type list, select 2x MX9116n Fabric Switching Engine in different chassis.
e. From the Chassis-X list, select the first MX7000 chassis.
f. From the Switch-A list, select Slot-IOM-A1.
g. From the Chassis-Y list, select the second MX7000 chassis to join the fabric.
h. From the Switch-B list, select Slot-IOM-A2.

SmartFabric deployment design window

i. Click Next.
j. On the Summary page, verify the proposed configuration and click Finish.

The SmartFabric deploys. This process takes several minutes to complete.
9.3.3 Define uplinks

To define the uplinks from the MX9116n FSEs to the Cisco ACI leafs, follow these steps:

1. Open the OME-M console.
2. From the navigation menu, click Devices > Fabric.
3. Click the fabric name, for example, SmartFabric1.
4. In the left pane on the Fabric Details page, click Uplinks.
5. Click the Add Uplink button.
6. In the Add Uplink window, complete the following:
a. Enter a Name, for example, VLT01.
b. Optionally, enter a description in the Description box.
f. Under Tagged Networks, select the checkbox next to each VLAN with which the uplink will be tagged. The uplink is a tagged member of all six VLANs in this example, as shown in Figure 39.
g. If the uplink will be an untagged member of a VLAN, select the VLAN from the drop-down list next to Untagged Network. In this example, this is left at None.

Note: If the uplink is an untagged member of a VLAN, see the Cisco ACI documentation for setting the corresponding EPG to access (untagged) mode in ACI.
9.3.4 Server templates

A server template contains the parameters that are extracted from a compute sled and enables these parameters to be quickly applied to multiple compute sleds. The templates contain settings for the following categories:

• Local access configuration
• Location configuration
• Power configuration
• Chassis network configuration
• Slot configuration
• Setup configuration

Also, server templates enable an administrator to associate VLANs with compute sleds.
Server templates created

9.3.4.2 Add VLANs to the server templates

After successfully creating server templates, associate each template with VLANs as follows:

1. On the Configuration > Deploy page, select a server template previously created, such as MX740c with QLogic QL41232HMKR NIC.
2. Click the Edit Network button.
3. In the Edit Network window, complete the following:
a. For both ports, if they will be untagged members of a VLAN, select the VLAN from the drop-down box under Untagged Network.
c. Click Finish.

9.3.4.3 Deploy the server templates

To deploy the server templates, complete the following steps:

1. On the Configuration > Deploy page, select a server template such as MX740c with QLogic QL41232HMKR NIC.
2. Click the Deploy Template button. Click Yes if prompted to use the physical identities.
3. In the Deploy Template window, complete the following:
a. Click the Select button to choose which sleds to deploy the template to. After sleds are selected, click Finish.
9.4 vCenter configuration overview The existing ACI environment has two PowerEdge R730xd rack servers connected to the ACI leafs. The rack servers are in a vSphere cluster named Management. After the SmartFabric is deployed, MX compute sleds can communicate with the rack servers and the vCenter, mgmtvc01. The MX compute sleds are joined to the vSphere cluster by an administrator as shown in Figure 43.
A VDS named VDS-Mgmt, along with six distributed port groups, one for each VLAN, are used as shown in Figure 44. VDS and port groups used in the validated environment Note: For each port group in the VDS in this example, both uplinks are active and the load balancing method used is Route based on physical NIC load as recommended in VMware Validated Design Documentation. Detailed vCenter configuration is beyond the scope of this document.
9.5 Verify configuration

This section covers methods to verify that the SmartFabric and ACI environment are configured properly.

9.5.1 Validation using the OME-M console

9.5.1.1 Show the MCM group topology

The OME-M console can be used to show the physical cabling of the SmartFabric.

1. Open the OME-M console and click Home.
2. In the chassis group pane, click View Topology.
3. Click the lead chassis image and then click Show Cabling.
4. Click the icons to view cable connections as shown in Figure 45.
The Group Topology page shows the MX9116n and MX7116n connections and whether any validation errors are present. On the MX9116n FSEs, ports 1/1/17-18 are used to connect to the MX7116n FEMs. Ports 1/1/37-40 are used for the VLTi.

9.5.1.2 Show the SmartFabric status

The overall health of the SmartFabric is viewed as follows:

1. Open the OME-M console.
2. From the navigation menu, click Devices > Fabric.
3. Click the fabric name, for example, SmartFabric1, to expand the details of the fabric.
Click the Servers link to view the server health status as shown in Figure 48. SmartFabric server status Select the Topology tab to view uplinks and fabric connections. Figure 49 shows the VLT port channel connection, VLT01, connected to the Cisco ACI vPC using ports 1/1/43-1/1/44 on each MX9116n. The VLTi connection between the two MX9116n FSEs is also shown.
The connection details are shown in the table at the bottom of the Topology page as shown in Figure 50.
9.5.1.3 Show port status

The OME-M console can be used to show MX9116n FSE port status, toggle administrative states, and configure breakouts, MTU settings, and auto-negotiation.

1. Open the OME-M console.
2. From the navigation menu, click Devices > I/O Modules.
3. Click an IOM name for the first MX9116n, for example, IOM-A1. The IOM Overview page for that device is displayed.
4. On the IOM Overview page, click Hardware > Port Information.

Figure 51 shows ports 1/1/1, 1/1/5, 1/71/1, and 1/71/3 are up.
9.5.2 Validation using the MX9116n CLI The CLI commands shown in this section are available to help validate the configuration. The commands and output shown below are from the MX9116n in the first chassis. The CLI output from the MX9116n in the second chassis, not shown, is similar. Note: The MX9116n CLI is accessible using SSH. The default username and password are both admin. 9.5.2.1 show switch-operating-mode Use the show switch-operating-mode command to display the current operating mode.
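For a switch deployed in SmartFabric mode, the output resembles the following sketch; the hostname is taken from the validated environment, and a switch in Full Switch mode reports that mode instead.

```
C140A1# show switch-operating-mode
Switch-Operating-Mode : Smart Fabric Mode
```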
9.5.2.4 show vlt domain-id The show vlt domain-id command validates the VLT configuration status. The role of one switch in the VLT pair is primary (not shown), and its peer switch is assigned the secondary role. The VLT domain ID of 255 is automatically configured in SmartFabric mode. The VLTi link Status and VLT Peer Status must both be up. SmartFabric automatically configures the VLTi as port channel 1000.
9.5.2.7 show lldp neighbors The show lldp neighbors command shows information about directly connected devices. Ports 1/1/1, 1/1/5, 1/71/1, and 1/71/3 are connected to the four compute sleds. Note: Ports 1/71/1 and 1/71/3 are the compute sleds connected to the MX7116n FEM in the other chassis. Two instances appear for each port connected to a compute sled. One instance is the compute sled iDRAC. The iDRAC uses connectivity to the mezzanine card to advertise LLDP information.
9.5.2.9 show policy-map Using the service policy from show qos system, the show policy-map command displays QoS policy details including class maps and QoS group settings. The QoS group values should match those configured for each VLAN. See Appendix A.6 for more information on QoS groups.
9.5.3 Cisco ACI validation

9.5.3.1 Verify vPC configuration

Verify the vPC connection from the Cisco ACI fabric to the Dell MX SmartFabric VLT, shown in Figure 33, is up and properly configured to allow designated VLANs and EPGs. This is done as follows:

1. In the APIC GUI, go to Fabric > Inventory > Pod name > Leaf name > Interfaces > vPC Interfaces and drill down to the applicable port channel/vPC policy group as shown in Figure 52.

Cisco ACI vPC port channel and interfaces
4. With the port channel/vPC interface policy group selected in the left pane, click VLANs at the top of the right pane as shown in Figure 53. Cisco ACI vPC port channel VLANs and EPGs 5. Verify the port channel includes all required VLANs, and EPGs are mapped to the correct VLANs. Repeat steps 1 through 5 for the remaining leaf switch.
9.5.3.2 Verify physical interface configuration

The physical host-connected interfaces in the validated environment are those connected directly to the PowerEdge R730xd servers (and the jump box, if used), as shown in Figure 33. Verify the physical interfaces from the Cisco ACI fabric to the servers are up and properly configured to allow designated VLANs and EPGs. This is done as follows:
3. With an interface selected in the left pane, click VLANs at the top of the right pane as shown in Figure 55.

Cisco ACI interface VLANs and EPGs

4. Verify the interface includes all required VLANs and EPGs. Repeat for remaining interfaces as needed. Repeat steps 1 through 4 for the remaining leaf switch.
9.5.3.3 Verify ACI is learning endpoints To verify ACI is learning endpoints, do the following: 1. In the APIC GUI, go to Tenants > Tenant name > Application Profiles > Application Profile name > Application EPGs > select an Application EPG. 2. Click Operational at the top of the right pane as shown in Figure 56. Cisco ACI endpoints in appEPG1 3. All learned endpoints for the selected EPG are displayed along with their VLAN, IP address, and interface.
9.5.4 Verify connectivity between VMs In ACI, by default, communication flows freely within EPGs, but not between EPGs. To enable inter-EPG communication, contracts are configured on the APIC. This example is configured for unrestricted inter-EPG communication as shown in steps 17 through 19 in the Scenario 3 – APIC config steps.pdf attachment. Connectivity is verified by pinging between the VMs shown in Figure 33.
A Additional information

A.1 Resetting PowerEdge MX7000 to factory defaults

This section covers resetting a PowerEdge MX7000 with IOMs in SmartFabric mode to factory defaults.

A.1.1 Remove the SmartFabric

To remove the SmartFabric using the OME-M console, perform the following steps:

1. Open the OME-M console.
2. From the navigation menu, click Devices > Fabric.
3. Select SmartFabric.
4. Click the Delete button.
5. In the delete fabric dialog box, click Yes.
A.2 Reset OS10EE switches to factory defaults To reset OS10EE switches back to the factory default configuration, enter the following commands: OS10# delete startup-configuration Proceed to delete startup-configuration [yes/no(default)]:yes OS10# reload System configuration has been modified. Save? [yes/no]:no Proceed to reboot the system? [confirm yes/no]:yes The switch reboots with default configuration settings. A.
Root Bridge hello time 2, max age 20, forward delay 15
Bridge ID  Priority 32768, Address 2004.0f00.cd1e
Configured hello time 2, max age 20, forward delay 15
Flush Interval 200 centi-sec, Flush Invocations 95
Flush Indication threshold 0 (MAC flush optimization is disabled)

A.5 QSFP28 double density connectors

Quad Small Form-Factor Pluggable 28 Double Density, or QSFP28-DD, connectors expand on the QSFP28 pluggable form factor.
A.6 VLAN management and automated QoS

In addition to enabling the assignment of VLANs to server profiles, SmartFabric automates QoS settings based on the Network Type specified. Figure 59 shows that when defining a VLAN, one of 11 pre-defined options is selected. Each of these options represents a queue.

QoS options available in SmartFabric mode

Table 9 lists the network types and related settings. The QoS group is the numerical value for the queues available in SmartFabric mode. Available queues include 2 through 5.
When a VLAN-capable server template deploys, SmartFabric creates a class map. For example, class map CM10, matching all traffic associated with VLAN 10. Then a policy map, for example, PM_VLAN, sets this class map to the appropriate queue, as in qos-group 2. A.7 Identity Pools Identity Pools, or virtual identities, abstract the network identity for Ethernet, FCoE, iSCSI, or FC access. Virtual identities allow the assignment of a static MAC address or WWPN to a slot.
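The class map and policy map described above can be inspected from the MX9116n CLI. The sketch below is illustrative only: it assumes VLAN 10 was assigned a network type that maps to qos-group 2, uses the CM10/PM_VLAN naming convention from this section, and abbreviates the command output.

```
C140A1# show class-map
 Class-map (qos): CM10

C140A1# show policy-map
 Service-policy (qos) input: PM_VLAN
   Class-map (qos): CM10
     qos-group 2
```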
B Validated components

B.1 Scenarios 1 and 2

The following tables include the hardware, software, and firmware used to configure and validate Scenario 1 and Scenario 2 in this document.

B.1.1 Dell EMC Networking switches

Dell EMC Networking switches and OS versions – Scenarios 1 and 2

Qty  Item                                                Version
2    Dell EMC Networking Z9100-ON leaf switches          10.4.0E(R3)
1    Dell EMC Networking S3048-ON OOB management switch  10.4.
B.1.3 Cisco Nexus switches

Nexus switches and OS versions – Scenarios 1 and 2

Qty  Item               Version
2    Cisco Nexus 3232C  7.0(3)I4(1)

B.2 Scenario 3

The following tables include the hardware, software, and firmware used to configure and validate Scenario 3 in this document.

B.2.1 Dell EMC Networking switches

Dell EMC Networking switches and OS versions – Scenario 3

Qty  Item                                                OS Version
1    Dell EMC Networking S3048-ON OOB management switch  10.4.1.

B.2.2
MX740c sled details – Scenario 3

Qty per sled  Item                                                   Version
2             Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz             -
12            16GB DDR4 DIMMs (192GB total)                          -
1             Boot Optimized Storage Solution (BOSS) S1 Controller
              w/ 1x120GB SATA SSD                                    2.6.13.3011
1             PERC H730P MX                                          25.5.5.0005
2             600GB SAS HDD                                          -
1             Intel(R) Ethernet 2x25GbE XXV710 mezzanine card, or    18.5.17 (Intel) or
              QLogic 2x25GbE QL41232HMKR mezzanine card              14.07.07 (QLogic)
-             BIOS                                                   1.0.2
-             iDRAC with Lifecycle Controller                        3.20.20.
C Technical resources

Dell EMC Networking Guides
Dell EMC PowerEdge MX IO Guide
Dell EMC PowerEdge MX Network Architecture Guide
Dell EMC PowerEdge MX SmartFabric Deployment Video
Dell EMC OpenManage Enterprise-Modular Edition User's Guide v1.00.01
OS10 Enterprise Edition User Guide for PowerEdge MX IO Modules Release 10.4.
D Support and feedback

Contacting Technical Support

Support Contact Information
Web: http://www.dell.com/support
Telephone: USA: 1-800-945-3355

Feedback for this document

We encourage readers to provide feedback on the quality and usefulness of this publication by sending an email to Dell_Networking_Solutions@Dell.