Dell EMC PowerEdge MX SmartFabric and Cisco ACI Integration Guide

Abstract
This document provides the steps for integrating Dell EMC PowerEdge MX Networking switches in SmartFabric mode with the Cisco Application Centric Infrastructure (ACI) environment. It also includes steps to configure the Cisco Application Policy Infrastructure Controller (APIC).
Revisions

Date          Description
October 2019  Initial Release

The information in this publication is provided “as is.” Dell Inc. makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any software described in this publication requires an applicable software license. © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.
1 Introduction Our vision at Dell EMC is to be the essential infrastructure company from the edge, to the core, and to the cloud. Dell EMC Networking ensures modernization for today’s applications and for the emerging cloud-native world. Dell EMC is committed to disrupting the fundamental economics of the market with an open strategy that gives you the freedom of choice for networking operating systems and top-tier merchant silicon.
This document provides examples for integrating Dell EMC PowerEdge MX platform running SmartFabric Services with Cisco Application Centric Infrastructure (ACI). The examples in this document assume that the MX7000 chassis are configured in a multi-chassis management group and the reader has a basic understanding of the PowerEdge MX platform.
1.1 Dell EMC SmartFabric OS10 The networking market is transitioning from a closed, proprietary stack to open hardware supporting various operating systems. Dell EMC SmartFabric OS10 is designed to allow multi-layered disaggregation of the network functionality.
2 Process flow and checklist
This guide is used with other documentation to configure the validated MX networking SmartFabric and Cisco ACI environment that is shown in Figure 4 on page 15. Table 1 lists the ordered steps and their locations as referenced throughout this guide. Each step is covered in detail either in this guide or in a link referenced in Table 1. The table can also be used as a checklist to ensure full coverage of the instructions in this guide.
☐ 19 Configure Access Entity profile with EPGs and VLANs | This document, section 4.2, step 4.2.16 | APIC
☐ 20 Create vCenter Domain for Cisco ACI and Virtual Machine Manager (VMM) domain integration | This document, section 4.2, step 4.2.17 | APIC
☐ 21 Create a contract filter | This document, section 4.2, step 4.2.18 | APIC
☐ 22 Create a Contract | This document, section 4.2, step 4.2.19 | APIC
☐ 23 Apply the Contract to the VRF | This document, section 4.2, step 4.2.
3 SmartFabric mode requirements
Before beginning the SmartFabric deployment, ensure that the requirements and guidelines in this section are followed. Configuration of SmartFabric on MX chassis with Cisco Application Centric Infrastructure (ACI) makes the following assumptions:
• All MX7000 chassis and management modules are cabled correctly (see Section 3.1.1) and are in a multi-chassis management group (see Section 3.1.2)
• The VLTi cables between switches have been connected (see Section 3.1.
APIC leaf and spine node IDs and names

Node ID  Node name
101      Leaf1
102      Leaf2
201      Spine1

The networks used are shown in Table 3 along with the corresponding bridge domain and application EPG names used in the APIC configuration in this guide.

Network information

VLAN ID  VLAN name  Gateway IP address/mask  Bridge domain name  Application EPG name
1611     ESXi_Mgmt  172.16.11.254/24         ESXiMgmtBD1         ESXiMgmtEPG1
1612     vMotion    172.16.12.254/24         vMotionBD1          vMotionEPG1
1613     vSAN       172.16.13.
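The network plan in Table 3 can also be captured as a small data structure, useful when scripting the APIC configuration steps later in this guide. This is an illustrative sketch: only the rows shown above are included, and the truncated vSAN gateway is left out rather than guessed.

```python
# VLAN/bridge-domain/EPG plan from Table 3 as a lookup table for scripting.
NETWORKS = {
    1611: {"name": "ESXi_Mgmt", "gateway": "172.16.11.254/24",
           "bd": "ESXiMgmtBD1", "epg": "ESXiMgmtEPG1"},
    1612: {"name": "vMotion", "gateway": "172.16.12.254/24",
           "bd": "vMotionBD1", "epg": "vMotionEPG1"},
}

# Example lookup: the bridge domain that backs the ESXi management VLAN.
esxi_mgmt_bd = NETWORKS[1611]["bd"]
```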
4 SmartFabric connections to Cisco ACI leaf switches This chapter covers deploying a PowerEdge MX SmartFabric connected to a Cisco ACI environment. By integrating PowerEdge MX into an ACI environment, compute resources in the MX environment can use ACI gateways and access ACI resources. The Cisco ACI environment that is validated includes a pair of Nexus C93180YC-EX switches as leaf switches as shown in Figure 3.
4.1 Validated environment In this scenario, two MX7000 chassis are joined to an existing Cisco ACI environment. The MX chassis environment consists of two MX9116n FSEs, two MX7116n Fabric Expander Modules (FEMs), and four MX compute sleds. The connections between the ACI environment and the MX chassis are made using a double-sided multichassis link aggregation group (MLAG). The MLAG is called a vPC on the Cisco ACI side and a VLT on the PowerEdge MX side.
There is no peer link used between the Cisco ACI leaf switches. While a typical production environment has multiple Application Policy Infrastructure Controllers (APICs), for this example, a single APIC (APIC-1) is used. All Dell EMC PowerEdge R730xd rack servers and MX compute sleds in this example are running VMware ESXi 6.7.0. To install ESXi on Dell EMC PowerEdge servers, follow the instructions on Installation of VMware ESXi on Dell EMC PowerEdge servers.
4.2 Cisco APIC configuration
The Cisco APIC configuration includes the ports connected to the R730xd rack servers and the vPC that connects to the MX9116n FSE VLT port channel. This includes configuring the ACI fabric interfaces and switches; creating the VLAN pool, policies, policy groups, and profiles; and configuring application-level elements such as ACI endpoint groups (EPGs) and bridge domains (BDs). Complete this configuration before creating the SmartFabric.
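The GUI steps in this section can also be performed against the APIC REST API. As a hedged sketch, the snippet below builds the JSON body that the documented aaaLogin endpoint expects for authentication; the APIC hostname, username, and password are placeholders, not values from this guide, and the snippet only constructs the request rather than sending it.

```python
# Build the Cisco APIC REST login payload (aaaLogin endpoint).
# APIC_HOST and the credentials below are placeholders for illustration.
import json

APIC_HOST = "apic.example.com"  # hypothetical APIC address
LOGIN_URL = f"https://{APIC_HOST}/api/aaaLogin.json"

def build_login_payload(username: str, password: str) -> str:
    """Return the JSON body the APIC expects for authentication."""
    return json.dumps(
        {"aaaUser": {"attributes": {"name": username, "pwd": password}}}
    )

body = build_login_payload("admin", "example-password")
```

A script would POST this body to LOGIN_URL and reuse the returned session token for the configuration requests that follow.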
Create VLAN Pool

5. From the Encap Blocks field, click the Add(+) icon.
6. In the VLAN Range fields, enter 1611 and 2000 as shown in Figure 6.
7. From the Allocation Mode field, click to select Static Allocation.
8. For the Role, select External or On the wire encapsulations.

VLAN Range

9. Click OK and then Submit.

4.2.2 Create a Physical Domain
A physical domain acts as a link between the VLAN pool and the Access Entity Profile (AEP).
1. Go to Fabric > Access Policies > Physical and External Domains > Physical Domains.
2. Right-click on Physical Domain and select Create Physical Domain.
3. In the Name field, enter physDomain1.
4. From the VLAN Pool drop-down, select the VLANPool1 option (created above in section 4.2.1).
5. Click Submit.

Create Physical Domain

4.2.3 Create an Attachable Access Entity Profile
To create an Attachable Access Entity Profile, perform the following steps:
1.
4.2.4 Create a Port Channel Policy
To create a Port Channel Policy:
1. Go to Fabric > Access Policies > Policies > Interface > Port Channel.
2. Right-click on Port Channel and select Create Port Channel Policy.
3. In the Name field, enter LACPPol1.
4. From the Mode drop-down, select LACP Active.

Note: When LACP is enabled on the leaf switch, it must also be enabled on the connected devices.

5. Keep the default settings shown in the Control field.
6. Click Submit.

Create Port Channel Policy

4.2.
Create VPC Interface Policy Group

4.2.6 Create a Leaf Access Port Policy Group
1. Go to Fabric > Access Policies > Interfaces > Leaf Interfaces > Policy Groups > Leaf Access Port.
2. Right-click on Leaf Access Port and select Create Leaf Access Port Policy Group.
3. In the Name field, enter LeafHostPortGrp1.
4. From the Attached Entity Profile drop-down, select AEP1 (created above in step 4.2.3).
5. Click Submit.
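Two of the access-policy objects created above, the VLAN pool from section 4.2.1 and the LACP port channel policy from section 4.2.4, can be sketched as APIC REST payloads. The class names come from the APIC object model; posting these to /api/mo/uni.json is presented as an illustrative alternative to the GUI steps, not as the procedure this guide validates.

```python
# APIC REST payload sketches for the VLAN pool and LACP policy above.
# Object names match this guide's examples (VLANPool1, LACPPol1).
def vlan_pool(name: str, start: int, end: int) -> dict:
    """fvnsVlanInstP with one static-allocation encap block (fvnsEncapBlk)."""
    return {"fvnsVlanInstP": {
        "attributes": {"name": name, "allocMode": "static"},
        "children": [{"fvnsEncapBlk": {"attributes": {
            "from": f"vlan-{start}", "to": f"vlan-{end}",
            "allocMode": "static", "role": "external"}}}]}}

def lacp_policy(name: str) -> dict:
    """lacpLagPol in 'active' mode (LACP Active in the GUI)."""
    return {"lacpLagPol": {"attributes": {"name": name, "mode": "active"}}}

pool = vlan_pool("VLANPool1", 1611, 2000)
lacp = lacp_policy("LACPPol1")
```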
Create Leaf Access Port Policy Group

4.2.7 Create a Leaf Interface Profile
Once the vPC Interface Policy Group and Leaf Access Port Policy Group are created to bundle the interfaces, the interfaces need to be added to the policy groups. To achieve this, a leaf interface profile is created, and access port selectors connect the interfaces to the policy groups.
1. Go to Fabric > Access Policies > Interfaces > Leaf Interfaces > Profiles.
2. Right-click on Profiles and select Create Leaf Interface Profile.
3. In the Name field, enter LeafIntProf1.
4. From the Interface Selectors field, click the Add(+) icon.

Create Leaf Interface Profile

5. Create Access Port Selectors:
   a. In the Name field, enter LeafHostSel1.
   b. From the Interface IDs, enter 1/1-3. These ports are connected directly to the R730xd servers.
   c. From the Interface Policy Group drop-down, select LeafHostPortGrp1 (created above in step 4.2.6).
   d. Click OK.
   e.
Access Port Selector for vPC interfaces

   g. Click Submit.

4.2.8 Create a VPC Domain Policy
To create a VPC Domain Policy, perform the following steps:
1. Go to Fabric > Access Policies > Policies > Switch > VPC Domain.
2. Right-click on VPC Domain and select Create VPC Domain Policy.
3. In the Name field, enter vPCDom1.
4. Click Submit.

Create vPC Domain Policy

4.2.9 Create a VPC Explicit Protection Group
1. 2. 3. 4. 5. 6. 7.
8. For Switch 2, select the second leaf switch, 102/Leaf2.
9. Click Submit.

Create vPC Explicit Protection Group

4.2.10 Create a Leaf Profile
1. Go to Fabric > Access Policies > Switches > Leaf Switches > Profiles.
2. Right-click on Profiles and select Create Leaf Profile.
3. In the Name field, enter LeafProf1.
4. Next to Leaf Selectors, click the Add(+) to create a Leaf Selector:
   a. In the Name field, enter LeafSel1.
   b. Blocks - select switches 101 and 102 and click Update.
Create Leaf Profile

   c. Click Next.
   d. From the Interface Selector Profiles, select LeafIntProf1 (created above in step 4.2.7), then click Finish.
   e. Leaf 101 and 102 display in the Leaf Profile shown in Figure 17.
Choose Interface selector profile

4.2.11 Create a Tenant
To create a Tenant:
1. Go to Tenants > Add Tenant.
2. In the Name field, enter Customer-TN1.
3. Click Submit.
Create a Tenant

4.2.12 Create a VRF
Virtual Routing and Forwarding (VRF) instances, also called private networks, are unique Layer 3 forwarding and application policy domains. Private networks contain bridge domains.
1. Go to Tenants > Customer-TN1 > Networking > VRFs.
2. Right-click on VRFs and select Create VRF.
3. In the Name field, enter VRF1.
4. Click to deselect the Create a Bridge Domain option and then click Finish.
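The tenant and VRF created above map to a compact REST payload in the APIC object model (fvTenant with an fvCtx child). This is a hedged sketch of what a script would POST to /api/mo/uni.json, using the names from this guide; it is not part of the validated GUI procedure.

```python
# APIC REST payload sketch: tenant Customer-TN1 containing VRF1.
def tenant_with_vrf(tenant: str, vrf: str) -> dict:
    """fvTenant with an fvCtx (VRF) child, posted to /api/mo/uni.json."""
    return {"fvTenant": {
        "attributes": {"name": tenant},
        "children": [{"fvCtx": {"attributes": {"name": vrf}}}]}}

tenant_payload = tenant_with_vrf("Customer-TN1", "VRF1")
```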
Create VRF

4.2.13 Create Bridge Domains
A bridge domain is a Layer 2 forwarding domain within the fabric. A bridge domain is linked to a private network and can have multiple subnets.

Note: Refer to Table 3 as needed to complete the following steps.

Bridge domains are created for each VLAN as follows:
1. Click Tenants > Customer-TN1 > Networking > Bridge Domains.
2. Right-click on Bridge Domains and then select Create Bridge Domain.
Create Bridge Domain

5. Next to the Subnets listing, click the Add(+) icon.
6. In the Gateway IP field, enter 172.16.14.254/24 as the address and mask for the bridge domain. Leave the remaining values at their default settings.
Create Subnet

7. Click OK, Next, and then click Finish.
8. Repeat the steps in this section as needed for each VLAN. Note that the additional bridge domains created in this example are appBD1, dbBD1, ESXiMgmtBD1, vMotionBD1, and vSANBD1.

4.2.14 Create an Application Profile
1. Go to Tenants > Customer-TN1 > Application Profiles.
2. Right-click on Application Profiles and select Create Application Profile.
3. In the Name field, enter ap1.
4. Click Submit.
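The bridge domain from section 4.2.13 and the application profile from section 4.2.14 can be sketched as APIC REST payloads under the Customer-TN1 tenant. Note an assumption here: the name webBD1 for the first bridge domain (the one with the 172.16.14.254/24 gateway) is inferred from this guide's EPG naming and is not stated explicitly above.

```python
# APIC REST payload sketches for a bridge domain and the application profile.
def bridge_domain(name: str, vrf: str, gateway: str) -> dict:
    """fvBD bound to its VRF (fvRsCtx) with one gateway subnet (fvSubnet)."""
    return {"fvBD": {
        "attributes": {"name": name},
        "children": [
            {"fvRsCtx": {"attributes": {"tnFvCtxName": vrf}}},
            {"fvSubnet": {"attributes": {"ip": gateway}}}]}}

# webBD1 is an inferred name (assumption); the gateway is from this guide.
web_bd = bridge_domain("webBD1", "VRF1", "172.16.14.254/24")
app_profile = {"fvAp": {"attributes": {"name": "ap1"}}}
```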
Create Application Profile

4.2.15 Create Application EPGs
Endpoint groups (EPGs) are logically grouped hosts or servers that share similar policies and perform similar functions within the fabric.

Note: Refer to Table 3 for the required network information.

1. Click Tenants > Customer-TN1 > Application Profiles > ap1 > Application EPGs.
2. Right-click on Application EPGs and then select Create Application EPG.
3. In the Name field, enter webEPG1 as the name of the first EPG.
Create Application EPG

6. Create a separate EPG for each of the remaining bridge domains using the EPG names provided in Table 3: appEPG1, dbEPG1, ESXiMgmtEPG1, vMotionEPG1, and vSANEPG1.

4.2.16 Configure the Access Entity Profile with EPGs and VLANs

Note: Refer to Table 3 for the necessary information.

1. Go to Fabric > Access policies > Policies > Global > Attachable Access Entity Profiles.
2. From the profiles listed, select AEP1 (created above in step 4.2.3).
Create Attachable Access Entity Profile

3. At the bottom of the page next to Application EPGs, click the Add(+) icon.
4. For the first EPG, webEPG1, select the following options:
   a. From the Tenant drop-down, select Customer-TN1.
   b. From the Application Profile menu, select ap1.
   c. From the EPG menu, select webEPG1.
   d. In the Encap field, enter vlan-1614.
   e. Leave the Primary Encap field blank.
   f. From the Mode menu, select Trunk.
   g. Click Update.
Attach AEP to EPGs and Bridge Domains

5. Repeat the steps in this section for all remaining EPGs using their associated VLAN IDs.

4.2.17 Create vCenter domain for Cisco ACI and Virtual Machine Manager (VMM) Domain Integration
By creating a vCenter domain, users can connect VMs by creating and configuring policies and EPGs in the Cisco APIC. These EPGs and policies are in turn pushed to vCenter as port groups.
Create vCenter Domain

7. From the vCenter Credentials listing, click the Add(+) icon.
   a. In the Name field, enter vCenter-Credentials.
   b. In the Username field, enter administrator@dell.local.
   c. In the fields provided, enter and confirm the Password, then click OK.
vCenter Credential

8. Next to the vCenter listing, click the Add(+) to add the vCenter Controller.
   a. In the Name field, enter vCenter.
   b. Enter the Host Name or IP Address as per the configuration.
   c. In the Datacenter field, enter MgmtDatacenter.
   d. Associate the vCenter-Credentials created above and click Submit.

Note: The Management EPG field is optional. A new Management EPG can also be created and associated by choosing Create EPG under Tenant mgmt from this menu.
Create vCenter Controller

9. Select the Port Channel Mode, vSwitch Policy, and NetFlow Exporter Policy as per the configuration. For this example, these options are not required.
10. Click Submit.
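The VMM domain created above can be sketched as an APIC REST payload (vmmDomP under the VMware VMM provider). This is a simplified sketch with stated assumptions: the vCenter IP address and password are placeholders, the domain name VDS-ACI is taken from the validation section later in this guide, and the controller-to-credential association (vmmRsAcc) and vSwitch policies are omitted for brevity.

```python
# Simplified APIC REST payload sketch for the VMware VMM domain.
def vmm_domain(name: str, vcenter_ip: str, datacenter: str) -> dict:
    """vmmDomP with vCenter credentials (vmmUsrAccP) and controller (vmmCtrlrP)."""
    return {"vmmDomP": {
        "attributes": {"name": name},
        "children": [
            # vCenter login credentials; the password here is a placeholder.
            {"vmmUsrAccP": {"attributes": {
                "name": "vCenter-Credentials",
                "usr": "administrator@dell.local", "pwd": "********"}}},
            # vCenter controller; rootContName is the vCenter datacenter name.
            {"vmmCtrlrP": {"attributes": {
                "name": "vCenter", "hostOrIp": vcenter_ip,
                "rootContName": datacenter}}}]}}

dom = vmm_domain("VDS-ACI", "192.0.2.10", "MgmtDatacenter")  # placeholder IP
```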
vCenter domain after adding vCenter

4.2.18 Create a Contract Filter
Contracts are necessary for communication between EPGs.
1. Go to Tenants > Customer-TN1 > Contracts > Filters.
2. Right-click on Filters and select Create Filter.
3. In the Name field, enter AllowAllFilter1.
4. In the Entries section, click the Add(+) icon:
   a. In the Name field, enter Allow.
   b. Select IP as the EtherType.
   c. Leave the remaining items at their defaults, then click Update and then Submit.
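The AllowAllFilter1 filter created above, together with the AllowAllContract1 contract that references it (section 4.2.19), can be expressed as one APIC REST payload under the tenant. The class names are from the APIC object model; treat this as an illustrative sketch alongside the GUI procedure, not a replacement for it.

```python
# APIC REST payload sketch: allow-all filter plus the contract using it.
def allow_all_contract() -> dict:
    """vzFilter matching IP traffic, and a vzBrCP whose subject references it."""
    return {"fvTenant": {
        "attributes": {"name": "Customer-TN1"},
        "children": [
            # Filter entry matching all IP traffic (EtherType ip)
            {"vzFilter": {
                "attributes": {"name": "AllowAllFilter1"},
                "children": [{"vzEntry": {"attributes": {
                    "name": "Allow", "etherT": "ip"}}}]}},
            # Contract whose subject attaches the filter
            {"vzBrCP": {
                "attributes": {"name": "AllowAllContract1"},
                "children": [{"vzSubj": {
                    "attributes": {"name": "AllowAllSub1"},
                    "children": [{"vzRsSubjFiltAtt": {"attributes": {
                        "tnVzFilterName": "AllowAllFilter1"}}}]}}]}}]}}

contract_payload = allow_all_contract()
```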
4.2.19 Create a Contract
A contract provides a way to control traffic flow within the ACI fabric between EPGs. To create a contract, perform the following steps:
1. Go to Tenants > Customer-TN1 > Contracts > Standard.
2. Right-click Standard and select Create Contract.
3. In the Name field, enter AllowAllContract1.

Create Contract

4. In the Subjects field, click the Add(+) icon.
5. In the Name field, enter AllowAllSub1.
6. In the Filters field, click the Add(+) icon.
Create Subject

8. Click Update > OK > Submit.

4.2.20 Apply the contract to the VRF
1. Go to Tenant > Customer-TN1 > Networking > VRFs > VRF1.
2. Expand the VRF1 section and select EPG collection for VRF.
3. Next to the Provided Contracts listing, click the Add(+) icon:
   a. In the Name field, select AllowAllContract1 (created above in step 4.2.19).
   b. Click Update.
4. Next to the Consumed Contracts listing, click the Add(+) icon:
   a. In the Name field, select AllowAllContract1 (created above in step 4.2.19).
   b.
Apply the Contract to VRF

In this deployment, EPGs are extended outside of the ACI fabric by mapping EPGs to external VLANs. This way, when a frame tagged with VLAN 1611, for example, enters the ACI fabric, ACI knows that it belongs to the ESXi Management EPG and treats it accordingly.

Bridge domains are associated with EPGs, which are mapped to external VLANs.
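The EPG-to-external-VLAN mapping described above can be captured in a small lookup table, for example in a validation script that checks how an ingress frame will be classified. Only mappings stated in this guide are listed; this helper is illustrative, not part of the ACI configuration.

```python
# EPG classification by external VLAN tag, per the mappings in this guide.
VLAN_TO_EPG = {
    1611: "ESXiMgmtEPG1",
    1612: "vMotionEPG1",
    1613: "vSANEPG1",
    1614: "webEPG1",
}

def epg_for_frame(vlan_id: int) -> str:
    """Return the EPG a tagged ingress frame belongs to."""
    try:
        return VLAN_TO_EPG[vlan_id]
    except KeyError:
        raise ValueError(f"VLAN {vlan_id} is not mapped to an EPG")
```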
4.3 Deploy the SmartFabric
This section provides the details used to deploy the SmartFabric for the example in this guide. Download the Dell EMC PowerEdge MX SmartFabric Configuration and Troubleshooting Guide, which is referenced in this section.

4.3.1 Define VLANs
The VLAN settings used during the SmartFabric deployment for this environment are shown in Table 6.
4.3.2 LLDP setting for SmartFabric Cisco ACI uses Link Layer Discovery Protocol (LLDP) to discover and build the network topology that includes the Distributed Virtual Switch (DVS) hosted in the hypervisor. To enable this functionality, click the checkbox next to Include Fabric Management Address in LLDP Messages on the Create Fabric screen, as shown in Figure 37, during deployment.
Figure 38 shows the new SmartFabric object.

SmartFabric after deployment before uplinks are created

After creation, the SmartFabric shows the Uplink Count as zero with a status icon displayed. The Health column also displays a status icon until uplinks are defined.

4.3.4 Create the Uplink

Note: To change the port speed or breakout configuration, see Section 4.
4.4 Deploy servers

4.4.1 Create Server Templates
Create a server template for each unique server and NIC combination used in the chassis group. For identical servers, only create one template.

Note: For the hardware used in this example, three templates were created:
• MX740c with QLogic QL41232HMKR NIC
• MX740c with Intel XXV710 NIC
• MX840c with QLogic QL41232HMKR NIC

Note: To create a server template, follow the steps in Section 5.
VLANs added to server template 4.4.3 Deploy the Server Templates To deploy the server templates, complete the steps in Section 5.6 - Deploy a server template of the Dell EMC PowerEdge MX SmartFabric Configuration and Troubleshooting Guide.
4.5 vCenter configuration overview
The existing ACI environment has two PowerEdge R730xd rack servers connected to the ACI leafs. The rack servers are in a vSphere cluster named Management. After the SmartFabric is deployed and the uplink is created, the rack servers can be added to vCenter. To create a data center, create a cluster, add a host, create a virtual machine, configure a cluster, and create a VDS, see Documentation related to configure vCenter.
A VDS named VDS-Mgmt, along with six distributed port groups, one for each VLAN, is used as shown in Figure 43. VDS and port groups used in the validated environment

Note: For each port group in the VDS in this example, both uplinks are active and the load balancing method used is Route based on physical NIC load, as recommended in the VMware Validated Design Documentation. Detailed vCenter configuration is beyond the scope of this document.
4.6 SmartFabric connected with MX5108n Ethernet switch and Cisco ACI Leaf switches
A single MX7000 chassis may also join an existing Cisco ACI environment by using the MX5108n Ethernet switch. The MX chassis in this example has two MX5108n Ethernet switches and two MX compute sleds. The connections between the ACI environment and the MX chassis are made using a double-sided multichassis link aggregation group (MLAG). The MLAG is called a vPC on the Cisco ACI side and a VLT on the PowerEdge MX side.
The SmartFabric creation and APIC configuration steps are the same as mentioned in Sections 4.2 through 4.5. Refer to these sections to deploy the ACI infrastructure on the MX7000 Chassis in SmartFabric mode using MX5108n switches.
5 Validate the configuration
This section covers methods to verify that the SmartFabric and ACI environment are configured properly. The screens shown in this chapter depict the MX9116n FSE configuration. Steps for validating the MX5108n are similar.

5.1 MX Validation using OME-M console
This section covers the methods used to verify that the SmartFabric and ACI environment are configured properly.

5.1.
The Group Topology page shows the MX9116n FSE and MX7116n FEM connections and displays any validation errors. On the MX9116n FSEs, ports 1/1/17-18 are used to connect to the MX7116n FEMs. Ports 1/1/37-40 are used for the VLTi. 5.1.2 Show the SmartFabric status The overall health of the SmartFabric is displayed as follows: 1. Open the OME-M console. 2. From the Navigation menu, click Devices and then click Fabric. 3. Click the fabric name, for example, SmartFabric1, to expand the details of the fabric.
SmartFabric server status Select the Topology tab to view uplinks and fabric connections. Figure 49 shows the VLT port channel connection. Uplink01 is connected to the Cisco ACI vPC using ports 1/1/41-1/1/42 on each MX9116n FSE. The VLTi connection between the two MX9116n FSEs is also shown. Uplink and VLTi (ISL) connections The connection details display in the table at the bottom of the Topology page as shown in Figure 50.
SmartFabric topology connection details 5.1.3 Show port status The OME-M console can be used to show MX9116n FSE port status, toggle administrative states, configure breakouts, MTU settings, and auto-negotiation. 1. Open the OME-M console. 2. From the Navigation menu, click Devices and then click I/O Modules. 3. Click an IOM name for the first MX9116n FSE, for example, IOM-A1. The IOM Overview page for that device displays. 4. On the IOM Overview page, click Hardware, and then click Port Information.
Figure 51 shows ports 1/1/1 and 1/1/3 are up. Ports 1/1/1 and 1/1/3 are connected to the compute sleds in the local chassis. The figure also shows the uplinks to the Cisco ACI leafs, using port channel 1, are up. It also shows the VLTi ports, using port channel 1000, are up. IOM port information 5.2 Validation using the MX9116n FSE CLI The CLI commands shown in this section are available to help validate the configuration. The commands and output shown below are from the MX9116n FSE in the first chassis.
-------------------------------------------------------------------------CBJWLN2 MX7116n FEM 1 CF54XM2 A1 1/1/1 71 5.2.3 show unit-provision The show unit-provision command is only available on the MX9116n FSE. It displays the unit ID, name, and the state of each MX7116n FEM attached to the MX9116n FSE.
5.2.6 show interface port-channel summary
The show interface port-channel summary command shows the LAG number (VLT port channel 1 in this example), the mode, status, and ports used in the port channel.

MX9116n-1# show interface port-channel summary
LAG  Mode       Status  Uptime    Ports
1    L2-HYBRID  up      00:29:20  Eth 1/1/41 (Up)
                                  Eth 1/1/42 (Up)

5.2.7 show lldp neighbors
The show lldp neighbors command shows information about directly connected devices.
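For automated checks, the port-channel summary output above can be parsed into a dictionary, for example to assert that the VLT port channel is up after deployment. This is a sketch that assumes the simple column layout shown in this guide; other OS10 releases may format the output differently.

```python
# Parse 'show interface port-channel summary' output into a dict per LAG.
def parse_po_summary(output: str) -> dict:
    lags = {}
    current = None
    for line in output.splitlines():
        fields = line.split()
        if not fields or fields[0] in ("LAG", "Ports"):
            continue  # skip blank and header lines
        if fields[0].isdigit():
            # New LAG row: number, mode, status, uptime, then first port
            current = int(fields[0])
            lags[current] = {"mode": fields[1], "status": fields[2],
                             "uptime": fields[3],
                             "ports": " ".join(fields[4:])}
        elif current is not None and fields[0] == "Eth":
            # Continuation line listing an additional member port
            lags[current]["ports"] += " " + " ".join(fields)
    return lags

sample = """LAG  Mode       Status  Uptime    Ports
1    L2-HYBRID  up      00:29:20  Eth 1/1/41 (Up)
                                  Eth 1/1/42 (Up)"""
lags = parse_po_summary(sample)
```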
MX9116n-1# show qos system Service-policy (input): PM_VLAN ETS Mode : off 5.2.9 show policy-map Using the service policy from show qos system, the show policy-map command displays QoS policy details including class maps and QoS group settings. The QoS group values should match those configured for each VLAN. See Section 2.7 in the Dell EMC PowerEdge MX SmartFabric Configuration and Troubleshooting Guide for more information on QoS groups.
5.3 SmartFabric Services – Troubleshooting commands
The following commands allow users to view various SmartFabric Services configuration information. These commands can also be used for troubleshooting on SmartFabric OS10. These commands are available in OS10.5.0.1.

5.3.1 show smartfabric cluster
The show smartfabric cluster command is used to see if the node is part of the cluster. It displays the cluster information of the node, such as the node role, service, virtual IP address, and the node domain.
5.3.4 show smartfabric uplinks
The show smartfabric uplinks command is used to verify the uplinks configured across the nodes in the fabric. It displays the name, description, ID, media type, native VLAN, configured interfaces, and network profile associated with the fabric.
5.4 Cisco ACI validation

5.4.1 Verify vPC configuration
Verify that the vPC connection from the Cisco ACI fabric to the Dell EMC MX SmartFabric VLT, shown in Figure 52, is up and properly configured to allow the designated VLANs and EPGs. This is done as follows:
1. In the APIC GUI, click Fabric > Inventory > Pod name > Leaf name > Interfaces > vPC Interfaces and drill down to the applicable port channel vPC policy group as shown in Figure 52.
2.
3. Verify that all of the leaf switch interfaces in the vPC, for example, eth1/51-52, are listed beneath the port channel and are also Up. 4. With the port channel/vPC interface policy group selected in the left pane, click VLANs at the top of the right pane as shown in Figure 53. Cisco ACI vPC port channel VLANs and EPGs 5. Verify that the port channel includes all required VLANs, and that the EPGs are mapped to the correct VLANs. 6. Repeat the steps in this section for the remaining leaf switch. 5.4.
Cisco ACI physical interfaces 2. Verify that the required interfaces, for example, eth1/1-3, show an up status. 3. With an interface selected in the left navigational panel, click the VLANs tab in the navigation window as shown in Figure 55.
4. Verify that the interface includes all required VLANs and EPGs. Repeat the steps for the remaining interfaces as needed. 5. Repeat the steps in this section for the remaining leaf switch. 5.4.3 Verify ACI learning endpoints To verify that the ACI is learning endpoints, perform the following steps: 1. In the APIC GUI, go to Tenants > Tenant name > Application Profiles > Application Profile name > Application EPGs > and select an Application EPG. 2.
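Learned endpoints can also be checked against the APIC REST API instead of the GUI. As a hedged sketch, the helper below builds a class query URL that asks an EPG for its fvCEp (client endpoint) children; the APIC hostname is a placeholder, and the snippet only constructs the URL rather than sending the request.

```python
# Build an APIC query URL listing endpoints (fvCEp) learned in an EPG.
APIC_HOST = "apic.example.com"  # hypothetical APIC address

def epg_endpoint_query(tenant: str, ap: str, epg: str) -> str:
    """Return a GET URL for the EPG's learned endpoints (fvCEp children)."""
    dn = f"uni/tn-{tenant}/ap-{ap}/epg-{epg}"
    return (f"https://{APIC_HOST}/api/node/mo/{dn}.json"
            f"?query-target=children&target-subtree-class=fvCEp")

url = epg_endpoint_query("Customer-TN1", "ap1", "webEPG1")
```

Issuing a GET on this URL with an authenticated session returns the MAC and IP addresses that the GUI shows under the EPG's operational endpoint view.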
vCenter-Server

2. Select Associated EPGs to show the EPGs associated with the vCenter Domain.

Associated EPGs to vCenter Domain

3. For more information about the vCenter server and its associated credentials, go to Virtual Networking > VMM Domains > VMware > VDS-ACI > Controllers > vCenter-Server. This shows the Datacenter, Management EPG, and Associated Credential details.
vCenter-Server Detail 4. Choose vCenter-Server, then DVS-VDS-ACI to see the details about Distributed Virtual Switch.
5.5 Verify connectivity between VMs
In ACI, by default, communication flows freely within EPGs, but not between EPGs. To enable inter-EPG communication, contracts are configured on the APIC. This example is configured for unrestricted inter-EPG communication as shown in steps 4.2.18 through 4.2.20 in Section 4.2. Connectivity is verified by pinging between the VMs as shown in Figure 61. Since inter-EPG communication is allowed using the configured contracts, all VMs can ping all other VMs in the topology.
A Hardware supported in this document This section covers the rack-mounted networking switches supported by the examples in this guide. For detailed information about the hardware components related to the MX platform, see the Dell EMC PowerEdge MX Networking Architecture Guide. A.1 Dell EMC PowerSwitch S3048-ON management switch The Dell EMC PowerSwitch S3048-ON is a 1-Rack Unit (RU) switch with forty-eight 1GbE BASE-T ports and four 10GbE SFP+ ports.
A.4 Cisco Nexus C93180YC-EX The Cisco Nexus C93180YC-EX switch is a 1-RU switch with forty-eight 1/10/25GbE ports and six 40/100GbE ports. A pair of Cisco Nexus C93180YC-EX switches is used as Cisco ACI leaf switches in the example in this guide. A.5 Cisco Nexus C9336-PQ The Cisco Nexus C9336-PQ switch is a 2-RU switch with thirty-six 40GbE QSFP+ ports. One Cisco Nexus C9336-PQ switch is used as a Cisco ACI spine switch in the example in this guide.
B Validated components
The following tables include the hardware, software, and firmware used to configure and validate the environment mentioned in this document.

B.1 Dell EMC PowerSwitch
Dell EMC PowerSwitch and OS version

Qty  Item                                                  OS Version
1    Dell EMC PowerSwitch S3048-ON OOB management switch   10.4.1.

B.2
B.3 Cisco ACI components
Cisco ACI components

Qty  Item                                    Version
1    Cisco APIC                              4.0(3d)
1    Cisco Nexus C9336-PQ spine switch       n9000-14.0(3d)
2    Cisco Nexus C93180YC-EX leaf switches   n9000-14.
C Technical resources

Dell EMC Networking Guides
Dell EMC PowerEdge MX IO Guide
Dell EMC PowerEdge MX Network Architecture Guide
Dell EMC PowerEdge MX SmartFabric Deployment Video
Dell EMC PowerEdge MX SmartFabric Deployment with Cisco ACI Video
MX Port-Group Configuration Errors Video
MX Port-Group Configuration Video
Dell EMC OpenManage Enterprise-Modular Edition User's Guide v1.00.01
OS10 Enterprise Edition User Guide for PowerEdge MX IO Modules Release 10.4.
D Support and feedback

Contacting Technical Support
Web: http://www.dell.com/support
Telephone: USA: 1-800-945-3355

Feedback for this document
Dell EMC encourages readers to provide feedback on the quality and usefulness of this publication by sending an email to Dell_Networking_Solutions@Dell.