VCF on VxRail Multirack Deployment using BGP EVPN
Adding a Virtual Infrastructure workload domain with NSX-T

Abstract
This document provides step-by-step instructions for deploying Dell EMC OS10 Enterprise Edition (EE) L2 VXLAN tunnels using BGP EVPN. It also provides the foundation for multirack VxRail host discovery and deployment.
Revisions
Date          Description
August 2019   Initial release

The information in this publication is provided “as is.” Dell Inc. makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any software described in this publication requires an applicable software license. © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.
Table of contents
1 Introduction
  1.1 VMware Cloud Foundation on VxRail
  1.2 VMware Validated Design for SDDC on VxRail
  5.8 Switch settings
6 Configure and verify the underlay network
  6.1 Configure leaf switch underlay networking
  B.2 Dell EMC Networking Guides
C Fabric Design Center
D Support and feedback
1 Introduction Our vision at Dell EMC is to be the essential infrastructure company from the edge to the core, and the cloud. Dell EMC Networking ensures modernization for today’s applications and the emerging cloud-native world. Dell EMC is committed to disrupting the fundamental economics of the market with a clear strategy that gives you the freedom of choice for networking operating systems and top-tier merchant silicon.
VMware Cloud Foundation on VxRail makes operating the data center fundamentally simpler by bringing the ease and automation of the public cloud in-house through a standardized and validated, network-flexible architecture with built-in lifecycle automation for the entire cloud infrastructure stack, including hardware. SDDC Manager orchestrates the deployment, configuration, and lifecycle management (LCM) of vCenter, NSX, and vRealize Suite above the ESXi and vSAN layers of VxRail.
See the VMware Cloud Foundation on VxRail Planning and Preparation Guide for more information.

1.2 VMware Validated Design for SDDC on VxRail

VMware Validated Designs (VVD) simplify the process of deploying and operating an SDDC. They are comprehensive, solution-oriented designs that provide a consistent and repeatable production-ready approach to the SDDC. They are prescriptive blueprints that include deployment and operational practices for the SDDC.
1.3 VMware NSX Data Center VMware NSX Data Center delivers virtualized networking and security entirely in software, completing a vital pillar of the Software Defined Data Center (SDDC), and enabling the virtual cloud network to connect and protect across data centers, clouds, and applications. With NSX Data Center, networking and security are brought closer to the application wherever it is running, from virtual machines (VMs) to containers to bare metal.
1.4 Prerequisites This deployment guide is a continuation of the deployment guide, VCF on VxRail multirack deploying using BGP EVPN. That guide provides step-by-step instructions on creating a VCF on VxRail multirack management domain. 1.5 Supported switches and operating systems The examples provided in this deployment guide use VxRail 4.7.211 nodes that are connected to Dell EMC PowerSwitch S5248F-ON switches running Dell EMC OS10 EE 10.4.3.5.
2 Hardware overview This section briefly describes the hardware that is used to validate the deployment examples in this document. Appendix A contains a complete listing of hardware and software that is validated for this guide.
2.4 Dell EMC PowerSwitch S3048-ON The Dell EMC PowerSwitch S3048-ON is a 1-Rack Unit (RU) switch with forty-eight 1GbE BASE-T ports and four 10GbE SFP+ ports. In this document, one S3048-ON supports out-of-band (OOB) management traffic for all examples.
3 Network transport VMware Validated Design supports both Layer 2 and Layer 3 network transport. In this section, the details of the Layer 3 leaf-spine topology are provided. Note: Most of the steps in this section may already be done if all of the configuration steps from the VCF on VxRail multirack deploying using BGP EVPN deployment guide were followed. To ensure completion, the necessary steps are included in this section. 3.
[Figure: BGP EVPN topology — spine switches connect to VLT leaf pairs that act as VTEPs for the VxRail nodes in each rack; anycast gateways for VNI A and VNI B are distributed across the leaf switches. Legend: physical L3 connection, physical L2 connection, virtual L2 connection.]

This deployment guide uses EVPN/VXLAN to achieve the following:
• Tunneling of Layer 2 overlay virtual networks through a physical Layer 3 leaf-spine underlay network using VXLAN
3.2.1 The VXLAN protocol

VXLAN allows a Layer 2 network to scale across the data center by overlaying an existing Layer 3 network and is described in Internet Engineering Task Force document RFC 7348. Each overlay is seen as a VXLAN segment. Each segment is identified through a 24-bit segment ID known as a VNI. This allows up to 16 million VNIs, far more than the traditional 4,094 VLAN IDs that are allowed on a physical switch.
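The scale difference follows directly from the field widths: 2^24 = 16,777,216 possible VNIs, compared to 2^12 − 2 = 4,094 usable VLAN IDs.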
4 Topology

4.1 Leaf-spine underlay

In a Layer 3 leaf-spine network, the traffic between leaf switches and spine switches is routed. Equal cost multipath routing (ECMP) is used to load balance traffic across the Layer 3 connections. BGP is used to exchange routes. The Layer 3/Layer 2 (L3/L2) boundary is at the leaf switches. Two leaf switches are configured as Virtual Link Trunking (VLT) peers at the top of each rack. VLT allows all connections to be active while also providing fault tolerance.
4.1.1 BGP ASNs and router IDs Figure 12 shows the autonomous system numbers (ASNs) and router IDs used for the leaf and spine switches in this guide. Spine switches share a common ASN, and each pair of leaf switches shares a common ASN. ASNs should follow a logical pattern for ease of administration and allow for growth as switches are added. Using private ASNs in the data center is the best practice. Private, 2-byte ASNs range from 64512 through 65534.
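For reference, the configuration examples later in this guide use AS 65100 for the spines and AS 65101 for the Rack 1 leaf pair. A logical pattern extends this by one ASN per leaf pair; because Figure 12 is not reproduced here, the Rack 2 value below is an assumption rather than a value taken from this guide.

Spine 1, Spine 2 ..... AS 65100
Leaf 1a, Leaf 1b ..... AS 65101   (Rack 1 VLT pair)
Leaf 2a, Leaf 2b ..... AS 65102   (Rack 2 VLT pair, assumed)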
Each link is a separate, point-to-point IP network. Table 1 details the links labeled in Figure 13. The IP addresses in the table are used in the switch configuration examples.

Point-to-point network IP addresses
Link  Source switch  Source IP address  Destination switch  Destination IP address  Network
A     Spine 1        192.168.1.0        Leaf 1a             192.168.1.1             192.168.1.0/31
B     Spine 2        192.168.2.0        Leaf 1a             192.168.2.1             192.168.2.0/31
C     Spine 1        192.168.1.2        Leaf 1b             192.168.1.3             192.168.1.2/31
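As a preview of how one of these /31 links is configured, the sketch below shows link A from the Leaf 1a side, using OS10 commands that also appear in the configuration chapters. The physical interface number (ethernet 1/1/53) and the description string are assumptions for illustration only.

sfo01-Leaf01A(config)# interface ethernet 1/1/53
sfo01-Leaf01A(conf-if-eth1/1/53)# description spine1-link-A
sfo01-Leaf01A(conf-if-eth1/1/53)# no switchport
sfo01-Leaf01A(conf-if-eth1/1/53)# ip address 192.168.1.1/31
sfo01-Leaf01A(conf-if-eth1/1/53)# mtu 9216
sfo01-Leaf01A(conf-if-eth1/1/53)# no shutdown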
4.2 Underlay network connections

Figure 14 shows the wiring configuration for the six switches that comprise the leaf-spine network. The solid colored lines are 100 GbE links, and the light blue dashed lines are two QSFP28-DD 200 GbE cable pairs that are used for the VLT interconnect (VLTi). The use of QSFP28-DD offers a 400 GbE VLTi to handle any potential traffic increases resulting from failed interconnects to the spine layer.
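The VLT domain itself is configured on the leaf switches in chapter 6. As a preview, the sketch below shows the general shape of an OS10 VLT domain configuration on Leaf 1a; the discovery interfaces, backup destination, and VLT MAC address are illustrative assumptions, not the exact values used in this guide.

sfo01-Leaf01A(config)# vlt-domain 1
sfo01-Leaf01A(conf-vlt-1)# backup destination 100.67.198.31
sfo01-Leaf01A(conf-vlt-1)# discovery-interface ethernet1/1/49-1/1/52
sfo01-Leaf01A(conf-vlt-1)# peer-routing
sfo01-Leaf01A(conf-vlt-1)# vlt-mac 00:00:01:02:03:01
sfo01-Leaf01A(conf-vlt-1)# exit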
4.3 BGP EVPN VXLAN overlay

[Figure: BGP EVPN VXLAN overlay — Spine01 and Spine02 peer over eBGP with ECMP to two VLT leaf pairs, Leaf01A/Leaf01B in Rack 1 and Leaf02A/Leaf02B in Rack 2 (VTEPs 10.222.222.1 and 10.222.222.2), all within VRF tenant1. Anycast gateway 172.16.11.253 serves VNI 1611 (VMs on 172.16.11.x/24) and anycast gateway 172.16.41.253 serves VNI 1641 (VMs on 172.16.41.x/24).]
4.4 VxRail node connections Workload domains include combinations of ESXi hosts and network equipment which can be set up with varying levels of hardware redundancy. Workload domains are connected to a network core that distributes data between them. Figure 16 shows a physical view of Rack 1. On each VxRail node, the NDC links carry traditional VxRail network traffic such as management, vMotion, vSAN, and VxRail management traffic.
5 Planning and preparation Before creating the IP underlay that drives the SDDC, it is essential to plan out the networks, IP subnets, and external services required. Also, planning of the prerequisites on all required switching hardware is recommended. 5.1 VLAN IDs and IP subnets VCF on VxRail requires that specific VLAN IDs and IP subnets for the traffic types in the SDDC are defined ahead of time. Table 2 shows the values that are used in this document.
5.3 DNS

In this document, the Active Directory (AD) servers provide DNS services. Other DNS records that are used in this document follow the VVD examples. The examples can be found in the VVD documentation section, Prerequisites for the NSX-T Deployment.

Hostnames and IP addresses for the external services
Component group  Hostname  DNS zone        IP address   Description
AD/DNS           dc01rpl   rainpole.local  172.16.11.4  Windows 2016 host containing AD and DNS server for rainpole.local
AD/DNS           dc01sfo   sfo01.
DHCP scope values
ID      DHCP server IP address  Start IP address  End IP address    Gateway          Subnet mask
Rack 1  10.10.14.5              172.25.101.1      172.25.101.199    172.25.101.253   /24
Rack 2  10.10.14.5              172.25.102.1      172.25.102.199    172.25.102.253   /24

5.4 Switch preparation

5.5 Check switch OS version

Dell EMC PowerSwitches must be running OS10EE version 10.4.3.5 or later for this deployment. Run the show version command to check the OS version.
Version          : 10.4.3.5
License Type     : PERPETUAL
License Duration : Unlimited
License Status   : Active
License location : /mnt/license/68X00Q2.lic
---------------------------------------------------------
Note: A perpetual license is already on the switch if OS10EE was factory installed.

5.7 Factory default configuration

The switch configuration commands in the chapters that follow begin with the leaf switches at their factory default settings.
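If a switch is not at its factory defaults, it can be returned to them before starting. A minimal sketch using standard OS10 commands follows; the confirmation prompts that OS10 displays are omitted.

OS10# delete startup-configuration
OS10# reload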
5.8 Switch settings

Table 6 shows the unique values for the four S5248F-ON switches. The table provides a summary of the configuration differences between each switch and each VLT switch pair.

Unique switch settings for leaf switches
Setting          S5248F-Leaf1A      S5248F-Leaf1B      S5248F-Leaf2A      S5248F-Leaf2B
Hostname         sfo01-Leaf01A      sfo01-Leaf01B      sfo01-Leaf02A      sfo01-Leaf02B
OOB IP address   100.67.198.32/24   100.67.198.31/24   100.67.198.30/24   100.67.198.
6 Configure and verify the underlay network

6.1 Configure leaf switch underlay networking

This chapter details the configuration of the S5248F-ON switch with the hostname sfo01-Leaf01A, shown as the left switch in Figure 17. Virtual networks 1641 and 3939 are shown in the diagram as an example. All the required virtual networks are created during the switch configuration. Configuration differences for leaf switches 1b, 2a, and 2b are noted in Section 5.8. These commands should be entered in the order shown.
1. Configure general switch settings, including the management interface and NTP source.

OS10# configure terminal
OS10(config)# interface mgmt 1/1/1
OS10(conf-if-ma-1/1/1)# no ip address dhcp
OS10(conf-if-ma-1/1/1)# ip address 100.67.198.32/24
OS10(conf-if-ma-1/1/1)# exit
OS10(config)# management route 100.67.0.0/16 managementethernet
OS10(config)# hostname sfo01-Leaf01A
sfo01-Leaf01A(config)# ntp server 100.67.10.20
sfo01-Leaf01A(config)# bfd enable
sfo01-Leaf01A(config)# ipv6 mld snooping enable
6. Assign the VLAN member interfaces to virtual networks.
sfo01-Leaf01A(conf-if-eth1/1/2)# switchport access vlan 1641
sfo01-Leaf01A(conf-if-eth1/1/2)# switchport trunk allowed vlan 1642-1643,3939
sfo01-Leaf01A(conf-if-eth1/1/2)# mtu 9216
sfo01-Leaf01A(conf-if-eth1/1/2)# spanning-tree port type edge
sfo01-Leaf01A(conf-if-eth1/1/2)# flowcontrol receive on
sfo01-Leaf01A(conf-if-eth1/1/2)# flowcontrol transmit off
sfo01-Leaf01A(conf-if-eth1/1/2)# exit
Note: If more than two ESGs are being used, update the maximum-paths ebgp value accordingly. 11. Configure eBGP for the IPv4 point-to-point peering using the following commands: sfo01-Leaf01A(config-router-bgp-65101)# neighbor 192.168.1.
sfo01-Leaf01A(config-router-neighbor)# update-source loopback1
sfo01-Leaf01A(config-router-neighbor)# no shutdown
sfo01-Leaf01A(config-router-neighbor)# address-family ipv4 unicast
sfo01-Leaf01A(config-router-neighbor-af)# no activate
sfo01-Leaf01A(config-router-neighbor-af)# exit
sfo01-Leaf01A(config-router-neighbor)# address-family l2vpn evpn
sfo01-Leaf01A(config-router-neighbor-af)# activate
sfo01-Leaf01A(config-router-neighbor-af)# exit
sfo01-Leaf01A(config-router-neighbor)# exit
sfo01-Leaf01A(conf-vlt-1)# exit

18. Configure the iBGP IPv4 peering between the VLT peers.

sfo01-Leaf01A(config)# router bgp 65101
sfo01-Leaf01A(config-router-bgp-65101)# neighbor 192.168.3.1
sfo01-Leaf01A(config-router-neighbor)# remote-as 65101
sfo01-Leaf01A(config-router-neighbor)# no shutdown
sfo01-Leaf01A(config-router-neighbor)# exit

19. Create a tenant VRF.

Note: An OS10 best practice is to isolate any virtual network traffic in a non-default VRF.
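The commands for this step are not reproduced in this excerpt. A minimal sketch follows, assuming the tenant VRF is named tenant1 as shown in the overlay diagram in section 4.3.

sfo01-Leaf01A(config)# ip vrf tenant1
sfo01-Leaf01A(conf-vrf)# exit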
6.2 Configure leaf switch NSX-T overlay networking

In this section, the specific networking required to support the NSX-T overlay networks is configured on sfo01-Leaf01A. Figure 18 shows three networks: VLANs 2500, 1647, and 1648. VLAN 2500 is used to support NSX-T TEPs, and VLANs 1647 and 1648 are used for north-south traffic into the NSX-T overlay.

Note: The physical connections from the VxRail nodes to the leaf switches use the PCIe card in slot 2.
2. Create VLAN 1647 and assign an IP address. This VLAN is used to carry north-south traffic from the Edge service cluster configured in Section 8.

sfo01-Leaf01A(config)# interface vlan1647
sfo01-Leaf01A(config-if-vl-1647)# description sfo01-w-uplink01
sfo01-Leaf01A(config-if-vl-1647)# no shutdown
sfo01-Leaf01A(config-if-vl-1647)# mtu 9216
sfo01-Leaf01A(config-if-vl-1647)# ip address 172.27.11.1/24
sfo01-Leaf01A(config-if-vl-1647)# exit

3. Create VLAN 1649.
sfo01-Leaf01A(config)# ip prefix-list spine-leaf seq 70 permit 172.25.101.0/24
sfo01-Leaf01A(config)# ip prefix-list spine-leaf seq 80 permit 172.25.102.0/24
sfo01-Leaf01A(config)# ip prefix-list spine-leaf seq 90 permit 172.16.49.0/24

7. Repeat the steps, using the appropriate values from Section 5.8, for the remaining leaf switches.

Note: If any of the workload subnets need access to the underlay network, additional IP prefix-list entries must be added.
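For example, to advertise the workload segments used in chapter 9 (10.10.10.0/24 and 10.10.20.0/24) into the underlay, entries such as the following could be appended on each leaf switch; the sequence numbers are illustrative only.

sfo01-Leaf01A(config)# ip prefix-list spine-leaf seq 100 permit 10.10.10.0/24
sfo01-Leaf01A(config)# ip prefix-list spine-leaf seq 110 permit 10.10.20.0/24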
OS10(conf-if-ma-1/1/1)# exit
OS10(config)# management route 100.67.0.0/16 managementethernet
OS10(config)# hostname sfo01-Spine01
sfo01-Spine01(config)# ntp server 100.67.10.20
sfo01-Spine01(config)# hardware forwarding-table mode scaled-l3-routes
sfo01-Spine01(config)# bfd enable

2. Configure a loopback interface for the router ID.
sfo01-Spine01(config-route-map)# match ip address prefix-list spine-leaf
sfo01-Spine01(config-route-map)# exit

5. Enter the following commands to configure eBGP.

sfo01-Spine01(config)# router bgp 65100
sfo01-Spine01(config-router-bgp-65100)# bfd all-neighbors interval 200 min_rx 200 multiplier 3 role active
sfo01-Spine01(config-router-bgp-65100)# router-id 10.0.1.
sfo01-Spine01(conf-if-lo-1)# ip address 10.2.1.1/32
sfo01-Spine01(conf-if-lo-1)# exit

8. Configure BGP EVPN peering.

sfo01-Spine01(config)# router bgp 65100
sfo01-Spine01(config-router-bgp-65100)# neighbor 10.2.2.
sfo01-Spine01(config-router-neighbor)# address-family ipv4 unicast
sfo01-Spine01(config-router-neighbor-af)# no activate
sfo01-Spine01(config-router-neighbor-af)# exit
sfo01-Spine01(config-router-neighbor)# address-family l2vpn evpn
sfo01-Spine01(config-router-neighbor-af)# activate
sfo01-Spine01(config-router-neighbor-af)# exit
sfo01-Spine01(config-router-neighbor)# exit
sfo01-Spine01(config-router-bgp-65100)# exit

9. Repeat the steps using the appropriate values from Section 5.
2. Run the show ip route bgp command to verify that all BGP routes are being received. The output also shows multiple gateway entries, confirming that there are multiple paths to the BGP-learned networks. Figure 21 shows two different routes to the remote loopback addresses 10.0.2.3/32 and 10.2.2.3/32.
6.5 Verify BGP EVPN and VXLAN between leaf switches For the L2 VXLAN virtual networks to communicate, each leaf must be able to establish a connection to the other leaf switches before host MAC information can be exchanged. Verify that peering is successful and BGP EVPN routing has been established. 1. Run the show ip bgp l2vpn evpn summary command to display information about the BGP EVPN and TCP connections to neighbors.
The output of show evpn evi Note: For more validation and troubleshooting commands, see the OS10 Enterprise Edition User Guide.
7 Create a VxRail Virtual Infrastructure workload domain This chapter provides guidance on creating a VxRail Virtual Infrastructure (VI) workload domain before adding a cluster. Deploy the vCenter server and make the domain ready for the cluster addition. Note: You can only perform one workload domain operation at a time. For example, when creating a workload domain, you cannot add a cluster to any other workload domain. 1.
7.1 Create a local user in the workload domain vCenter Server

Before adding the VxRail cluster, image the workload domain nodes. Once imaging is complete, perform the VxRail first run of the workload domain nodes using the external vCenter Server. Create a local user in this vCenter Server, because it is an external server that VMware Cloud Foundation deploys; the local user is required for the VxRail first run.
7.3 VxRail deployment values

Table 7 lists the values that are used during the VxRail Manager initialization and expansion operation.

Note: The values are listed in the order in which they are entered in the GUI.

VxRail network configuration values
Parameter                        Settings                 Value
Appliance                        NTP server               172.16.11.5
                                 Domain                   sfo01.rainpole.local
ESXi hostname and IP addresses   ESXi hostname prefix     sfo01w02vxrail
                                 Separator                none
                                 Iterator                 Num 0x
                                 Offset                   1
                                 Suffix                   none
                                 ESXi beginning address   172.16.41.101
                                 ESXi ending address      172.
3. Click the three dots, then click Add VxRail Cluster. The Add VxRail cluster to Workload Domain page displays.
4. On the VxRail Manager page, a single VxRail cluster is discovered. Select the VxRail-Cluster object and click Next.
5. The Discovered Host page displays a list of the discovered hosts for that cluster. Update the SSH password for the discovered hosts and then click Next. The Networking page displays the networking details for the cluster.
8 Configure NSX-T north-south connectivity

The necessary components to facilitate NSX-T overlay networking, including VLAN and overlay transport zones as well as Uplink, NIOC, and Transport Node profiles, are automatically created by VCF on VxRail. This chapter provides the steps that are required to establish north-south connectivity from the NSX-T overlay network to the leaf switches.
8.2 Create uplink profiles and the network I/O control profile

Table 9 shows the values that are used for the uplink profiles associated with the uplink transport zones.

Uplink profiles
8.5 Deploy the NSX-T edge appliances

To provide tenant workloads with routing services and connectivity to networks that are external to the workload domain, deploy two NSX-T Edge nodes. Table 12 shows the values that are used for the edge nodes.

NSX-T Edge node values
Setting             Value for sfo01wesg01               Value for sfo01wesg02
Hostname            sfo01wesg01.sfo01.rainpole.local    sfo01wesg02.sfo01.rainpole.local
Port Groups         sfo01-w-nvds01-management           sfo01-w-nvds01-management
Primary IP Address  172.16.41.21                        172.16.41.
3. Click Add.
4. In the Create VM/Host Rules dialog box, type a name for the rule.
5. From the Type drop-down menu, select the appropriate type.
6. Click Add and, in the Add Group Member window, select either VxRail nodes or Edge nodes to which the rule applies and click OK.
7. Click OK.
8. Repeat the steps in this section for the remaining rule.
8.8 Add the NSX-T edge nodes to the transport zones

After you deploy the NSX-T edge nodes and join them to the management plane, connect the nodes to the workload domain. Next, add the nodes to the transport zones for uplink and overlay traffic and configure the N-VDS on each edge node. Table 16 shows the values that are used for both edge nodes.
8.9 Create and configure the Tier-0 gateway The Tier-0 gateway in the NSX-T Edge cluster provides a gateway service between the logical and physical network. The NSX-T Edge cluster can back multiple Tier-0 gateways. In this example, each edge node hosts a Tier-0 gateway, and ECMP is used to create multiple paths to the two leaf switches in the rack. See Create and configure the Tier-0 gateway for step-by-step instructions. In this example, no changes were made to the values found in the VVD.
8.11 Verify BGP peering and route redistribution

The Tier-0 gateway must establish a connection to each of the upstream Layer 3 devices before BGP updates can be exchanged. Verify that the NSX-T Edge nodes are successfully peering and that BGP routing is established.

1. Open an SSH connection to sfo01wesg01.
2. Log in using the previously defined credentials.
3. Use the get logical-routers command to get information about the Tier-0 and Tier-1 service routers and distributed routers.
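A sketch of what this check might look like from the edge node CLI follows; the VRF number of the Tier-0 service router varies per deployment and is shown here as 1 only for illustration.

sfo01wesg01> get logical-routers
sfo01wesg01> vrf 1
sfo01wesg01(tier0_sr)> get bgp neighbor summary
sfo01wesg01(tier0_sr)> get route bgp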
9 Validate connectivity between virtual machines

This chapter covers a quick validation of the entire solution. Ping and traceflow are used between three virtual machines and a loopback interface. Two of the VMs are attached to one segment, Web, and the third VM is attached to the App segment. The loopback interface represents all external networks. Figure 29 shows that the three virtual machines, Web01, Web02, and App01, are running on two separate hosts.
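Assuming Linux guest VMs, the checks in the sections that follow amount to commands like these; the ping count is arbitrary.

# From Web01 (10.10.20.10): same-segment ping to Web02
ping -c 4 10.10.20.11
# From Web01: cross-segment ping to App01, routed within the NSX-T overlay
ping -c 4 10.10.10.10
# From Web01, and again from App01: ping the loopback on sfo01-spine02 in the physical underlay
ping -c 4 10.0.1.2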
9.1 Ping from Web01 to Web02 Figure 30 shows the results of the ping that is issued from Web01 (10.10.20.10/24) to Web02 (10.10.20.11/24) Ping results from Web01 to Web02 9.2 Ping from Web01 to App01 Figure 31 shows the results of the ping that is issued from Web01 (10.10.20.10/24) to App01 (10.10.10.10/24).
9.3 Ping from Web01 to 10.0.1.2

Figure 32 shows the results of the ping that is issued from Web01 (10.10.20.10/24) to the destination address 10.0.1.2, the loopback address on sfo01-spine02.

Ping results from Web01 to 10.0.1.2

Note: If connectivity is needed to the underlay network from the workload tenants, additional IP prefix-list entries must be added to the leaf switches. Refer to Section 6.2.

9.4 Ping from App01 to 10.0.1.2

Figure 33 shows the results of the ping that is issued from App01 (10.10.
9.5 Traceflow App01 to 10.0.1.2 Figure 34 shows the results of the traceflow tool from VM App01 (10.10.10.10) to the destination address 10.0.1.2, the loopback address on sfo01-spine02. Traceflow results from App01 to 10.0.1.2/32 Note: See NSX-T Administration Guide: Traceflow for more information.
A Validated components

A.1 Dell EMC PowerSwitch models

Switches and operating system versions
Qty  Item                                              Version
4    Dell EMC PowerSwitch S5248F-ON leaf switches      10.4.3.5
2    Dell EMC PowerSwitch Z9264F-ON spine switches     10.4.3.5
2    Dell EMC PowerSwitch S3048-ON OOB mgmt switches   10.4.3.5

A.2 VxRail E560 nodes

A cluster of four VxRail E560 nodes was used to validate the VI-WLD in this guide. The nodes were each configured using the information that is provided in Table 19.
A.3 Appliance software

This deployment guide was developed using VxRail appliance software 4.7.211. The software consists of the component versions that are provided in Table 20.

VxRail appliance software component versions
Item              Version
VxRail Manager    4.7.211.13893929
ESRS              3.28.0006
Log Insight       4.6.0.8080673
VMware vCenter    6.7 U2a 13643870
VMware ESXi       6.7 EP09 13644319
Platform Service  4.7.211
NSX-T             2.4.1.0
B Technical resources

B.1 VxRail, VCF, and VVD Guides

VMware Cloud Foundation on VxRail Planning and Preparation Guide
VMware Cloud Foundation on VxRail Architecture Guide
VMware Cloud Foundation on VxRail Administrator Guide
VMware Cloud Foundation on VxRail Technical FAQ
Dell EMC VxRail Network Guide
VMware Validated Design 5.0.1 NSX-T Workload Domains
NSX-T Data Center Administration Guide
VxRail Support Matrix

B.2 Dell EMC Networking Guides
C Fabric Design Center The Dell EMC Fabric Design Center (FDC) is a cloud-based application that automates the planning, design, and deployment of network fabrics that power Dell EMC compute, storage, and HCI solutions. The FDC is ideal for turnkey solutions and automation based on validated deployment guides. FDC allows for design customization and flexibility to go beyond validated deployment guides. For additional information, go to the Dell EMC Fabric Design Center.
D Support and feedback

Contacting Technical Support
Support Contact Information
Web: http://www.dell.com/support
Telephone: USA: 1-800-945-3355

Feedback for this document
Dell EMC encourages readers to provide feedback on the quality and usefulness of this publication by sending an email to Dell_Networking_Solutions@Dell.com.