Dell EMC Networking FCoE-to-Fibre Channel Deployment with S4148U-ON in F_port Mode

Connecting server CNAs to FC storage and a leaf-spine network using two S4148U-ON switches running OS10

Dell EMC Networking Infrastructure Solutions
June 2018
A Dell EMC Deployment Guide
Revisions

Date        Rev.  Description      Authors
June 2018   1.0   Initial release  Jim Slaughter, Andrew Waranowski

The information in this publication is provided “as is.” Dell Inc. makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any software described in this publication requires an applicable software license.
1 Introduction

Our vision at Dell EMC is to be the essential infrastructure company in the data center for today’s applications and for the cloud-native world we are entering. To attain that vision, the Dell EMC portfolio focuses on making every component of data center infrastructure (servers, storage, and networking) compelling by making the value of the integrated solution greater than the sum of the parts.
1.1 Typographical conventions

The CLI and GUI examples in this document use the following conventions:

Monospace Text             CLI examples
Underlined Monospace Text  CLI examples that wrap the page; this text is entered as a single command
Italic Monospace Text      Variables in CLI examples
Bold Monospace Text        Commands entered at the CLI prompt
Bold text                  GUI fields and information entered in the GUI

1.2 Attachments

This .pdf includes switch configuration file attachments.
2 Hardware overview

This section briefly describes the primary hardware used to validate this deployment. A complete listing of hardware validated for this guide is provided in Appendix A.

2.1 Dell EMC Networking S4148U-ON

The S4148U-ON enables converging LAN and SAN traffic in a single 1-RU, multilayer switch. It includes twenty-four 10GbE ports, two 40GbE ports, four 10/25/40/50/100GbE or FC8/16/32 ports, and twenty-four 10GbE or FC8/16 ports.
2.4 Dell EMC PowerEdge R640 server

The PowerEdge R640 is a 1-RU, two-socket server platform with support for up to 56 processor cores, 3TB of memory, and up to twelve SAS/SATA HDD/SSD drives or eight NVMe drives. Two R640 servers are used in the deployment in this guide.

[Figure: Dell EMC PowerEdge R640]

2.5 Dell EMC Unity 500F storage array

The Unity 500F storage platform delivers all-flash storage with up to 8PB raw capacity. It has concurrent support for NAS, iSCSI, and FC protocols.
3 Topology overview

Note: In this deployment guide, "LAN" is used to broadly refer to the data center's leaf-spine production TCP/IP network, and "SAN" is used to refer to FCoE and FC storage networks.

A pair of S4148U-ONs is installed in a rack to forward converged LAN and SAN traffic for all devices in the rack. Each S4148U-ON provides universal ports that are configured as either FC or Ethernet.
Note: Using a leaf-spine network in the data center is considered a best practice. For detailed leaf-spine network configuration instructions, including spine switch configuration, refer to Dell EMC Networking Layer 3 Leaf-Spine Deployment and Best Practices with OS10.

3.1 FC SAN topology detail

For the SAN portion of the topology, each S4148U-ON switch is placed in F_port mode. This enables direct connections to N_port devices such as FC storage without the need for an additional dedicated FC switch.
3.2 LAN topology detail

For the LAN portion of the topology, the S4148U-ON switches are leafs in the data center’s leaf-spine network. The topology includes the converged connections from the server CNAs and leaf-spine network connections as shown. The leaf pair forwards production TCP/IP traffic for all devices in the rack. Traffic destined for other racks is forwarded by the leafs to the spines.

[Figure 9: LAN topology with Z9100-Spine1 and Z9100-Spine2. Leaf and spine port numbers shown are abbreviated, e.g., eth1 and eth2.]
3.3 OOB management network

The out-of-band (OOB) management network is an isolated network for management traffic only. It is used by administrators to remotely configure and manage servers, switches, and storage devices. Production traffic initiated by the network end users does not traverse the management network. An S3048-ON switch is installed at the top of each rack for OOB management connections as shown.
4 Preparation

Note: Exact iDRAC steps in this section may vary depending on the hardware, software, and browser versions used. See the PowerEdge server documentation for steps to connect to the iDRAC.

4.1 Reset server CNAs to factory defaults

Note: Resetting to defaults is only necessary if installed CNAs have been modified from their factory default settings.

4.2 Configure CNA FCoE partitions

1. Connect to the server's iDRAC in a web browser and launch the virtual console.
   a. Set NIC Mode to Disabled.
   b. Set iSCSI Offload Mode to Disabled.
   c. Set FCoE Mode to Enabled, shown in Figure 11, and click Back.

[Figure 11: CNA partition 2 configuration]

10. If present, select Partition 3 Configuration. Set NIC, iSCSI, and FCoE modes to Disabled and click Back.
11. If present, select Partition 4 Configuration. Set NIC, iSCSI, and FCoE modes to Disabled and click Back.
12. Click Back > Finish.
13. When prompted, answer Yes to save changes and click OK in the Success window.
4.3 Determine CNA FCoE port WWPNs

The PowerEdge R640 server's FCoE adapter World Wide Port Names (WWPNs) are used for FC zone configuration. Adapter WWPNs are determined as follows:

1. Connect to the first server's iDRAC in a web browser and log in.
2. Select System > Network Devices.
3. Click on the CNA. In this example, it is Integrated NIC 1. Under Ports and Partitioned Ports, the FCoE partition for each port is displayed as shown in Figure 12.

[Figure 12: FCoE partitions in iDRAC]

4. Click the FCoE partition for port 1 to view its properties, shown in Figure 13.
[Figure 13: MAC address and FCoE WWPN for CNA port 1]

5. Record the MAC Address and WWPN, outlined in red above. A convenient method is to copy and paste these values into a text file.

Note: While the WWPN is used in the actual switch configuration for FC zoning, the MAC address is recorded to identify the corresponding vmnic number in VMware.

6. Repeat steps 4 and 5 for the FCoE partition on port 2.
7. Repeat steps 1-6 for the remaining servers.
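If ESXi is already installed, the FCoE adapter WWPNs can also be read from the host's command line as an alternative to the iDRAC. This is a minimal sketch, assuming the ESXi Shell or SSH is enabled on the host; vmhba numbering varies by system:

esxcli storage core adapter list

The UID column for each FCoE vmhba contains the adapter's WWNN and WWPN. The esxcli fcoe nic list command similarly shows the FCoE-capable vmnics and their MAC addresses.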
4.4 Determine Unity 500F storage array FC WWPNs

The WWPNs of FC adapters in storage arrays are also used for FC zone configuration. WWPNs on Unity storage arrays are determined as follows:

1. Connect to the Unisphere GUI in a web browser and log in. Click the Settings icon near the top right corner of the page.
2. In the left pane of the Settings window, select Access > Fibre Channel. The Fibre Channel Ports page is displayed as shown in Figure 14.
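The same information is available from the Unity command line. A hedged example using uemcli, assuming it is installed on a management station and that admin credentials are used (the management IP and password are placeholders):

uemcli -d <unisphere_ip> -u admin -p <password> /net/port/fc show

The output lists each FC port with its WWN; the port WWPN is typically the final portion of the reported WWN.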
4.5 VMware preparation

4.5.1 VMware ESXi download and installation

Install VMware ESXi 6.5 U1 or later on each PowerEdge server. Dell EMC recommends using the latest Dell EMC customized ESXi .iso image available on support.dell.com. The correct drivers for the PowerEdge server hardware are built into this image. This image can be used to install ESXi via CD/DVD, a USB flash drive, or by mounting the .iso image through the PowerEdge server’s iDRAC interface.
4.5.4 Add VMkernel adapters for FCoE

Note: Before starting this section, be sure the vmnic-to-physical-adapter mapping for each host’s FCoE ports is known. Vmnics and their MAC addresses are visible in the Web Client by selecting the host in the Navigator pane; in the center pane, go to Configure > Networking > Physical adapters. MAC addresses for this deployment were recorded earlier in Table 1. In this example, the two FCoE adapters on each host are vmnic4 and vmnic5. Your vmnic numbering may vary.
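The vmnic-to-MAC mapping can also be listed from the ESXi command line. A brief sketch, assuming shell access to the host:

esxcli network nic list

Match the MAC addresses in the output against those recorded in Table 1 to identify the FCoE vmnics.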
Note: The IP addresses shown next to the two FCoE adapters are automatically assigned private addresses and are not used for FCoE.

4.5.5 Increase MTU size for FCoE

FCoE frames may be up to 2180 bytes in size. By default, VMware vSwitches and VMkernel adapters have the Maximum Transmission Unit (MTU) size set to 1500 bytes. Use the following steps to increase the MTU size to 2500 bytes on the vSwitches created for FCoE:

1. In the vSphere Web Client, go to Home > Hosts and Clusters.
2. In the left pane, select the first ESXi host. In the center pane, select Configure > Virtual switches.
3. Select the first FCoE vSwitch, e.g., vSwitch1, and click the edit icon.
4. In the Edit settings dialog box, set the MTU to 2500.
5. Click OK to apply the setting and close the box.
6. Repeat for the host's second FCoE vSwitch, e.g., vSwitch2.
7. Repeat the steps above for the remaining ESXi hosts.

Use the following steps to increase the MTU size to 2500 bytes on the VMkernel adapters created for FCoE:

1. In the vSphere Web Client, go to Home > Hosts and Clusters.
2. In the left pane, select the first ESXi host. In the center pane, select Configure > VMkernel adapters.
3. Select the first FCoE VMkernel adapter, e.g., FCoE1, and click the edit icon.
4. In the Edit settings dialog box, select NIC settings and set the MTU to 2500.
5. Click OK to apply the setting and close the box.
6. Repeat the steps above for each remaining FCoE VMkernel adapter on all ESXi hosts.
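The same MTU changes can be scripted from the ESXi shell. A hedged sketch, assuming the FCoE vSwitches are vSwitch1 and vSwitch2 and the FCoE VMkernel adapters are vmk1 and vmk2 (verify the names on each host first):

esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=2500
esxcli network vswitch standard set --vswitch-name=vSwitch2 --mtu=2500
esxcli network ip interface set --interface-name=vmk1 --mtu=2500
esxcli network ip interface set --interface-name=vmk2 --mtu=2500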
5 S4148U-ON switch configuration

This section covers the steps to configure the S4148U-ON leaf switches running OS10.

5.1 Prepare switches

5.1.1 Factory default configuration

The configuration commands in the sections that follow begin with the S4148U-ON switches at their factory default settings.
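Before the configuration sections that follow, each switch is set to the port profile used in this guide, profile-3, which determines which universal ports operate as FC or Ethernet. This is a minimal sketch of the profile change; the exact procedure may vary by OS10 release, and the switch must be reloaded for the new profile to take effect:

OS10# configure terminal
OS10(config)# switch-port-profile 1/1 profile-3
OS10(config)# exit
OS10# write memory
OS10# reload
Proceed to reboot the system? [confirm yes/no]:y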
5.2 Configure switches

After both S4148U-ONs are set to port profile-3, the commands in the tables that follow are run to complete the configuration of both switches. The port numbers used correspond to the topology diagrams shown in Figure 8 and Figure 9.

Note: The commands in the tables below should be entered in the order shown. Complete switch running-configuration files are provided as attachments named S4148U-Leaf1.txt and S4148U-Leaf2.txt.
5.2.2 Fibre Channel configuration

Configure FC aliases for the CNA and storage WWPNs. Using aliases is optional, but it makes the configuration more user-friendly.
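A hedged example of alias creation, using the two CNA WWPNs that appear in the validation output later in this guide; the alias names are illustrative, and the exact member syntax may vary by OS10 release:

fc alias R640-1-p1
 member wwn 20:01:18:66:da:71:50:ad
fc alias R640-2-p1
 member wwn 20:01:18:66:da:77:d0:c3

Storage-port aliases are created the same way using the WWPNs recorded from Unisphere.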
Create the FCoE VLAN, the vfabric, and the FCoE map, and activate the zone set. In this deployment, the FCoE VLAN is set to 1002, the vfabric ID is set to 100 (valid range: 1-255), and the fcmap is set to 0xEFC64 (valid range: 0xEFC00-0xEFCFF).
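A minimal sketch of these commands, assuming the zone set is named zoneset1 (the actual zone and zone set names are in the attached configuration files):

interface vlan 1002
 exit
vfabric 100
 vlan 1002
 fcoe fcmap 0xEFC64
 zoneset activate zoneset1
 exit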
Apply the vfabric and DCB settings to the server-facing interfaces. The commands are identical on both switches:

interface ethernet 1/1/31
 description "To Server 1"
 switchport access vlan 1
 switchport mode trunk
 spanning-tree port type edge
 priority-flow-control mode on
 ets mode on
 vfabric 100
 no shutdown

interface ethernet 1/1/32
 description "To Server 2"
 switchport access vlan 1
 switchport mode trunk
 spanning-tree port type edge
 priority-flow-control mode on
 ets mode on
 vfabric 100
 no shutdown
The following commands are run to create the class maps. The commands are identical on both switches:

class-map type network-qos class_Dot1p_3
 match qos-group 3

class-map type queuing map_ETSQueue_0
 match queue 0

class-map type queuing map_ETSQueue_3
 match queue 3

QoS and policy maps are configured next, beginning with the trust dot1p-map command.
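A hedged reconstruction of these maps, consistent with the class maps above and with the PFC (priority 3) and ETS (50/50) settings verified in section 6; the map and policy names are assumptions:

trust dot1p-map trust-map1
 qos-group 0 dot1p 0-2,4-7
 qos-group 3 dot1p 3

qos-map traffic-class tc-queue-map1
 queue 0 qos-group 0
 queue 3 qos-group 3

policy-map type network-qos policy-pfc
 class class_Dot1p_3
  pause
  pfc-cos 3

policy-map type queuing policy-ets
 class map_ETSQueue_0
  bandwidth percent 50
 class map_ETSQueue_3
  bandwidth percent 50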
The QoS and policy maps defined above are applied using the system qos command as follows:
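A hedged sketch of that application, reusing the assumed map and policy names from the previous block:

system qos
 trust-map dot1p trust-map1
 qos-map traffic-class tc-queue-map1
 service-policy input type network-qos policy-pfc
 service-policy output type queuing policy-ets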
5.2.4 LAN configuration for IP traffic

Create a server-facing VLAN interface for IP traffic. Use the same VLAN ID, VLAN 50 in this example, on both leaf switches. Assign an IP address to the VLAN interface. The address must be unique, but on the same network, on both leaf switches. Configure VRRP to use VRRP version 3. Create a VRRP group and specify the group’s virtual IP address.

S4148U-Leaf1:
interface Vlan 50
 ip address 172.16.1.1/24
 no shutdown
 exit

S4148U-Leaf2:
interface Vlan 50
 ip address 172.16.1.2/24
 no shutdown
 exit
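A hedged sketch of the VRRP configuration, identical on both switches; the group ID is an assumption, and the virtual address matches the default gateway (172.16.1.254) used by the VMs later in this guide:

vrrp version 3
interface Vlan 50
 vrrp-group 50
  virtual-address 172.16.1.254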
Configure the point-to-point interfaces connected to the spine switches, and create a loopback interface to use as the router ID:

S4148U-Leaf1:
 no switchport
 ip address 192.168.2.1/31
 no shutdown

interface loopback 0
 description "Router ID"
 ip address 10.0.2.1/32
 no shutdown

S4148U-Leaf2:
 no switchport
 ip address 192.168.2.3/31
 no shutdown

interface loopback 0
 description "Router ID"
 ip address 10.0.2.2/32
 no shutdown

Configure a route map and an IP prefix list to redistribute all loopback addresses and leaf networks via BGP.
Enable external Border Gateway Protocol (eBGP) with the router bgp ASN command. The bestpath as-path multipath-relax command enables equal-cost multipath (ECMP) routing. The maximum-paths ebgp 2 command specifies the maximum number of parallel paths to a destination to add to the routing table. Graceful restart enables the data plane to continue forwarding traffic for a time if the BGP process fails or quits. Neighbor fall-over is enabled, and the BGP neighbors are configured.
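A hedged sketch of the BGP configuration on S4148U-Leaf1; the ASNs, the route-map name, and the second neighbor are assumptions (only neighbor 192.168.2.0 is implied by the point-to-point addressing shown above):

router bgp 64704
 bestpath as-path multipath-relax
 maximum-paths ebgp 2
 graceful-restart role receiver-only
 address-family ipv4 unicast
  redistribute connected route-map spine-leaf
 neighbor 192.168.2.0
  remote-as 64601
  fall-over
  no shutdown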
6 S4148U-ON validation

After connected devices are configured, many commands are available to validate the network configuration. This section provides a list of the most common commands and their output for this topology.

Note: The commands and output shown below are for S4148U-Leaf1. The output for S4148U-Leaf2 is similar. For additional commands and output related to the leaf-spine portion of the topology, such as BGP, refer to Dell EMC Networking Layer 3 Leaf-Spine Deployment and Best Practices with OS10.
6.1.3 show vlt domain_id

VLT configuration is verified by running the show vlt domain_id command on each of the S4148U-ON switches. The VLTi is shown as port channel 1000, and the link status is up. The role of one switch in the VLT pair is primary, and its peer switch (not shown) is assigned to the secondary role.
6.2.2 show fc ns switch

The show fc ns switch command shows all device ports logged into the fabric. In this deployment, four ports are logged in to each switch: two storage ports and two CNA ports.
Switch Name                : 10:00:e4:f0:04:6b:04:42
Domain Id                  : 100
Switch Port                : ethernet1/1/32
FC-Id                      : 64:80:00
Port Name                  : 20:01:18:66:da:77:d0:c3
Node Name                  : 20:00:18:66:da:77:d0:c3
Class of Service           : 8
Symbolic Port Name         : QLogic 57800 bnx2fc v1.713.30.v60.6 over vmnic4
Symbolic Node Name         : QLogic 57800 bnx2fc v1.713.30.v60.6 over vmnic4
Port Type                  : N_PORT
Registered with NameServer : Yes
Registered for SCN         : Yes

6.2.3 show fc zoneset active

The show fc zoneset active command shows the zones and zone members in the active zone set.
6.2.4 show fcoe sessions

The show fcoe sessions command shows the active FCoE sessions. The partial output below includes, for each session, the CNA MAC address, the FCoE session MAC address, the switch interface, the FCF MAC address, the VLAN, the FC-ID, and the port WWPN and WWNN:

18:66:da:71:50:ad  0e:fc:64:64:7c:00  Eth 1/1/31  e4:f0:04:6b:05:41  1002  64:7c:00  20:01:18:66:da:71:50:ad  20:00:18:66:da:71:50:ad
18:66:da:77:d0:c3  0e:fc:64:64:80:00  Eth 1/1/32  e4:f0:04:6b:05:41  1002  64:80:00  20:01:18:66:da:77:d0:c3  20:00:18:66:da:77:d0:c3

6.2.5 show vfabric

The show vfabric command output provides a variety of information, including the default zone mode, the active zone set, and the interfaces that are members of the vfabric.
6.3 QoS commands

6.3.1 show qos maps

The show qos maps command shows the configured traffic-class-to-queue and dot1p-to-traffic-class maps. The command output also displays default traffic-class maps, which are not used in this deployment and have been removed from the output below to save space.
6.3.3 show class-map

The show class-map command shows the configured class maps.

Note: The class-iscsi application class map is configured by default and is not used in this deployment.

S4148U-Leaf1# show class-map
 Class-map (application): class-iscsi
 Class-map (qos): class-trust
 Class-map (network-qos): class_Dot1p_3
   Match: qos-group 3
 Class-map (queuing): map_ETSQueue_0
   Match: queue 0
 Class-map (queuing): map_ETSQueue_3
   Match: queue 3
6.4.1 show lldp dcbx interface ethernet interface_number

S4148U-Leaf1# show lldp dcbx interface ethernet 1/1/31
E-ETS Configuration TLV enabled           e-ETS Configuration TLV disabled
R-ETS Recommendation TLV enabled          r-ETS Recommendation TLV disabled
P-PFC Configuration TLV enabled           p-PFC Configuration TLV disabled
F-Application priority for FCOE enabled   f-Application Priority for FCOE disabled
I-Application priority for iSCSI enabled  i-Application Priority for iSCSI disabled
--------------------------------------------------------------------------------
Interface ethernet1/1/31
6.4.2 show lldp dcbx interface ethernet interface_number pfc detail

The show lldp dcbx interface ethernet interface_number pfc detail command is used to verify that PFC is enabled on dot1p priority 3 traffic and that its status is operational. It shows the FCoE TLV is enabled and the FCoE priority map is set to 0x08, which maps to dot1p priority 3. (Hex 08 is binary 1000; counting bits from right to left and starting at 0, 1000 represents priority 3.)
Remote Parameters:
------------------
Remote is enabled

PG-grp  Priority#        Bandwidth  TSA
------------------------------------------
0       0,1,2,4,5,6,7    50%        ETS
1                        0%
2                        0%
3       3                50%        ETS
4                        0%
5                        0%
6                        0%
7                        0%
15                                  SP

Remote Willing Status is enabled

Local Parameters:
------------------
Local is enabled

PG-grp  Priority#        Bandwidth  TSA
------------------------------------------
0       0,1,2,4,5,6,7    50%        ETS
1                        0%
2                        0%
3       3                50%        ETS
4                        0%
5                        0%
6                        0%
7                        0%
15                                  SP
7 Configure Unity FC storage

This section covers configuration of a Dell EMC Unity 500F storage array. Refer to the storage system documentation for other FC storage devices.

7.1 Create a storage pool

1. Connect to the Unisphere GUI in a web browser and log in.
2. In the left pane under STORAGE, select Pools.
3. Click the add (+) icon.
4. In the Create Pool dialog box, provide a Name and click Next.
5. Select appropriate storage tiers and RAID configuration for the pool. Click Next.
7.2 Add ESXi hosts

1. In the Unisphere left pane under ACCESS, select VMware.
2. On the vCenters tab, click the add (+) icon to open the Add vCenter dialog box.
3. Enter the Network Name or Address of the vCenter server. Enter the vCenter User Name and Password and click Find.
4. A list of discovered ESXi hosts is displayed. Select the applicable hosts and click Next.
5. A VMware API for Storage Awareness (VASA) Provider is not used in this example. Click Next.
7.3 Create LUNs

5. On the Snapshot page, leave settings at their defaults and click Next.
6. On the Replication page, leave settings at their defaults and click Next.
7. On the Summary page, review the details and click Finish to create the LUN.
8. On the Results page, click Close when Overall status shows 100% Completed.

The newly created LUN is now visible on the LUNs tab as shown in Figure 24. In this example, a LUN named FC-80GB that is 80GB in size has been created.
8 Configure storage on ESXi hosts

In this section, the example LUN created on the storage array is used to create a datastore on an ESXi host. The datastore is then used to create a virtual disk on a virtual machine (VM) residing on the ESXi host. This process may be repeated as needed for additional LUNs, hosts, and VMs.

8.1 Rescan storage

1. In the vSphere Web Client, go to Home > Hosts and Clusters.
2. In the Navigator pane, select an ESXi host with LUN access configured on the FC storage array.
3. In the center pane, select Configure > Storage Adapters and rescan the storage adapters.
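A rescan can also be run from the ESXi shell. A brief sketch, assuming shell access to the host:

esxcli storage core adapter rescan --all
esxcli storage core path list

The path list output shows the state of each storage path and should report the paths to the Unity LUN as active.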
[Figure: Adapter Details - Paths tab]

The Paths tab includes similar information for the host’s second storage adapter.

8.2 Create a datastore

In this section, a datastore that uses the Unity LUN is created on the ESXi host. To create the datastore:

1. In the vSphere Web Client, go to Home > Hosts and Clusters.
2. In the Navigator pane, right-click on the ESXi host and select Storage > New Datastore.
3. In the New Datastore window, leave the Type set to VMFS and click Next.
4. Provide a name for the datastore, select the Unity LUN as the backing device, and complete the wizard.
[Figure: Datastore configured]

The datastore is also accessible by going to Home > Storage. It is listed under the Datacenter object in the Navigator pane.

8.3 Create a virtual disk

Note: Virtual machine guest operating system deployment steps are not included in this document. For instructions, see the VMware vSphere 6.5 Documentation. Guest operating systems can be any supported by ESXi 6.5. VMs should be deployed before proceeding with this section.
[Figure: New hard disk configuration options]

6. Next to New Hard disk, set the size in GB less than or equal to the Maximum size shown on the line below it. The size is set to 40 GB in this example.
7. Click OK to close the Edit Settings window and create the virtual disk.
8.4 Configure the virtual disk

Note: The following example is applicable for VMs running Windows Server 2008, 2012, or 2016. See the operating system documentation to configure virtual disks on other supported guest operating systems.

1. Power on the VM and log in to the Windows Server guest OS.
2. In Windows, go to Server Manager > Tools > Computer Management > Storage > Disk Management.

Note: If an Initialize Disk window appears, select OK to initialize now, or Cancel to initialize in step 5.
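As an alternative to Disk Management, the disk can be brought online and formatted with diskpart. A hedged sketch, assuming the new 40 GB virtual disk appears as Disk 1 and that drive letter E is free:

C:\> diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> online disk
DISKPART> attributes disk clear readonly
DISKPART> convert gpt
DISKPART> create partition primary
DISKPART> format fs=ntfs label=FCDisk quick
DISKPART> assign letter=E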
9 Configure ESXi hosts for LAN traffic

In this section, ESXi hosts and VMs are configured for TCP/IP access to the production network. (Refer to the LAN topology shown in Figure 9.)

9.1 vSphere distributed switches

A vSphere Distributed Switch (also referred to as a VDS or a distributed switch) is a virtual switch that provides network connectivity to hosts and virtual machines. Unlike vSphere standard switches, distributed switches act as a single switch across multiple hosts in a cluster.
[Figure: R640-VDS created with uplinks port group]

9.3 Add a distributed port group

In this section, a distributed port group is created on the distributed switch. To create the port group:

1. On the Web Client Home screen, select Networking.
2. Right-click on the distributed switch, e.g., R640-VDS. Select Distributed Port Group > New Distributed Port Group.
3. On the Select name and location page, provide a name for the distributed port group, e.g., Production. Click Next.
4. On the Configure settings page, leave the settings at their defaults and click Next, then Finish.
9.4 Configure load balancing

Note: It is a best practice to use Route based on Physical NIC load as the load balancing algorithm on distributed port groups. For more information, see the VMware Validated Design Documentation, release 4.2.

To configure load balancing on the distributed port group:

1. In the Navigator pane, right-click on the port group, e.g., Production, and select Edit Settings.
2. In the Edit Settings window, select Teaming and Failover in the left pane.
3. Set Load balancing to Route based on physical NIC load and click OK.
9.5 Add hosts to the VDS

To add hosts to R640-VDS:

1. On the Web Client Home screen, select Networking.
2. Right-click on R640-VDS and select Add and Manage Hosts.
3. In the Add and Manage Hosts dialog box:
   a. On the Select task page, make sure Add hosts is selected. Click Next.
   b. On the Select hosts page, click the add (+) icon. Select the checkbox next to each host to add. Click OK > Next.
   c. On the Select network adapters tasks page, be sure the Manage physical adapters box is checked and that all other boxes are unchecked. Click Next.
   d. On the Manage physical network adapters page, assign each host's physical adapter to an uplink and complete the wizard.
9.6 Add a virtual network adapter to VMs

In this section, virtual network adapters (vNICs) are added to VMs for LAN traffic using the previously created Production port group on the VDS.

1. In the vSphere Web Client, go to Home > Hosts and Clusters.
2. Under the Rack1 cluster, right-click on a VM and click Edit Settings.
3. Next to New Device, select Network. Click Add.
4. Next to New Network, select Show more networks to open the Select Network page.
[Figure 35: Virtual network adapter configured]

Note: In the top half of Figure 35, an additional adapter named Network adapter 1 on VM Network is present. VM Network is the OOB management network in this deployment. If present, the adapter connected to VM Network may be retained or deleted as needed.

7. Click OK to add the New Network adapter.

Repeat the steps above for the remaining VMs that will access the leaf-spine production network.
9.7 Verify connectivity to the production network

Log in to a guest OS by right-clicking on the VM and selecting Open Console. Use the procedure dictated by the guest OS to configure an IP address and default gateway on the newly added vNIC. VMs in this deployment have IP addresses on the 172.16.1.0/24 network. The default gateway is the VRRP IP address configured on the S4148U-ON leaf switches, 172.16.1.254. Test connectivity by pinging the default gateway and other configured VMs in the rack.
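For a Windows Server guest, a hedged example of the address configuration and connectivity test; the interface name, host address, and peer VM address are assumptions:

C:\> netsh interface ipv4 set address name="Ethernet0" static 172.16.1.10 255.255.255.0 172.16.1.254
C:\> ping 172.16.1.254
C:\> ping 172.16.1.11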
A Validated components

The following tables include the hardware, software, and firmware used to configure and validate the examples in this guide.

A.1 Switches

Qty  Item                                             OS Version
2    Dell EMC Networking S4148U-ON leaf switches      10.4.0E(R3P2)
2    Dell EMC Networking Z9100-ON spine switches      10.4.0E(R3P2)
1    Dell EMC Networking S3048-ON management switch   10.4.0E(R3P2)

A.2 PowerEdge R640 servers

[Table: R640 server components]
A.4 VMware software

The following table lists the VMware software components used to validate the examples in this guide.

Item                              Version
VMware ESXi                       6.5 U1 - Dell EMC customized image version A10, build 7967591
VMware vCenter Server Appliance   6.5 Update 1d - build 7312210 (includes vSphere Web Client)

A.5 VMware licenses

The vCenter Server is licensed by instance. Other VMware product licenses are allocated based on the number of CPU sockets in the participating hosts.
B Technical support and resources

Dell TechCenter is an online technical community where IT professionals have access to numerous resources for Dell EMC software, hardware, and services.
C Support and feedback

Contacting Technical Support

Web: http://www.dell.com/support
Telephone (USA): 1-800-945-3355

Feedback for this document

We encourage readers to provide feedback on the quality and usefulness of this publication by sending an email to Dell_Networking_Solutions@Dell.com.