Dell EMC Networking OS10EE FCoE Deployment with FSB

Connecting server FCoE CNAs to Fibre Channel storage using Dell EMC PowerSwitch OS10EE switches in FSB and F_Port modes

Abstract
This document provides the deployment steps for configuring Dell EMC PowerSwitch OS10EE-based switches in FSB and F_Port modes.
Revisions

Date         Description
April 2019   Initial release

The information in this publication is provided “as is.” Dell Inc. makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any software described in this publication requires an applicable software license. © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.
Table of contents
Revisions
1 Introduction
1.1 Typographical conventions
6 S4148U-ON FCF switch configuration
6.1 Prepare switches
6.1.1 Factory default configuration
B.3.4 Configure storage on ESXi hosts
B.3.5 Rescan storage
B.3.6 Create a datastore
1 Introduction Our vision at Dell EMC is to be the essential infrastructure company from the edge, to the core, and to the cloud. Dell EMC Networking ensures modernization for today’s applications and for the emerging cloud-native world. Dell EMC is committed to disrupting the fundamental economics of the market with an open strategy that gives you the freedom of choice for networking operating systems and top-tier merchant silicon.
1.1 Typographical conventions
The CLI and GUI examples in this document use the following conventions:

Monospace Text             CLI examples
Underlined Monospace Text  CLI examples that wrap the page
Italic Monospace Text      Variables in CLI examples
Bold Monospace Text        Commands entered at the CLI prompt, or to highlight information in CLI output
Bold text                  UI elements and information that is entered in the GUI

1.2 Attachments
This document in .pdf format includes one or more file attachments.
2 Hardware Overview This section briefly describes the hardware that is used to validate the deployment examples in this document. Appendix A contains a complete listing of hardware and software that is validated for this guide.
Dell EMC PowerSwitch S3048-ON 2.2 Storage arrays, Fibre Channel switches, and servers This section details the supplemental hardware that is used to validate a complete storage solution. Comparable hardware models can be substituted for the hardware that is listed in this section to operate on the network topology described in this document. 2.2.1 Dell EMC Unity 500F storage array The Unity 500F storage platform delivers all-flash storage with up to 8PB raw capacity.
3 Topology overview This section details the FCoE and FC portions of the network to explain the network design for storage traffic. In the leaf-spine portion of the topology, the leaf pair can be any OS10EE-based switch model. In this document, the S5248-ON is used. Figure 6 shows the leaf pair functioning in the FSB role to the dedicated FC storage network.
FC SAN topology 3.2 OOB management network The out-of-band (OOB) management network is an isolated network for management traffic only. It is used by administrators to remotely configure and manage servers, switches, and storage devices. Production traffic that is initiated by network end users does not traverse the management network. An S3048-ON switch is installed at the top of each rack for OOB management connections as shown.
Four 10GbE SFP+ ports are available on each S3048-ON switch for use as uplinks to the OOB management network core. Downstream connections to servers, switches, and storage devices are 1GbE BASE-T. The dedicated OOB management port of each leaf and spine switch is used for these connections. Each PowerEdge R740xd server has a connection to the S3048-ON via the server’s iDRAC port. The Unity 500F storage array has two dedicated management ports - one port for each Storage Processor (SP), SP A and SP B.
4 Deployment overview This section provides high-level guidance for deploying the total solution to include FC storage, networking, server resources, and virtualization. 4.1 Configuration strategy and sequence This document provides specific configuration examples for the S5248-ON leaf pair and S4148U-ON switch in F_Port mode. Note: The Dell EMC Unity 500F storage array and Dell EMC PowerEdge R740xd servers were used to validate the complete solution.
5 S5248-ON FSB leaf switch configuration This section details steps to configure the S5248-ON leaf switches running OS10EE. 5.1 Prepare switches 5.1.1 Factory default configuration The configuration commands in the sections that follow begin with S5248-ON switches at their factory default settings.
S5248-Leaf1                          S5248-Leaf2
spanning-tree mode rstp              spanning-tree mode rstp
spanning-tree rstp priority 0        spanning-tree rstp priority 4096
port-group 1/1/11 mode Eth 10g-4x    port-group 1/1/11 mode Eth 10g-4x

Note: The default port mode for the first twelve port groups on the S5248-ON switch is 25g-4x. Port mode commands for the link to the S4148U-ON switch are shown above, configured for 10g-4x. For server NICs operating at 10GbE, change the port mode corresponding to the appropriate downstream server interfaces.
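Per the note above, server-facing port groups may also need to be set to 10g-4x. A minimal sketch, assuming the server NICs attach through port-group 1/1/1 (the port-group number is hypothetical and depends on your cabling):

```
! Hypothetical example: 10GbE server NICs attached via port-group 1/1/1
port-group 1/1/1
 mode Eth 10g-4x
```

Verify the exact port-group-to-interface mapping with the show port-group command before changing modes.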
5.2.4 Configure QoS
Quality of Service (QoS) configuration is a 3-step process:
1. Create class maps to classify traffic.
2. Create QoS and policy maps for the classified traffic.
3. Apply the QoS and policy maps.
FCoE traffic is assigned dot1p priority value 3 by default. In the following tables, dot1p priority value 3 is mapped to QoS group 3. The remaining dot1p priority values, 0-2 and 4-7, are mapped to QoS group 0. QoS group 3 is mapped to queue 3, and QoS group 0 is mapped to queue 0.
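The three steps can be sketched as follows. The map and policy names match those applied later in this section; the queuing class name and the bandwidth percentage are illustrative assumptions, and exact syntax should be verified against the OS10EE release in use:

```
! Step 1 (sketch): trust map placing dot1p 3 in QoS group 3,
! all other priorities in group 0
trust dot1p-map map_Dot1pToGroups
 qos-group 0 dot1p 0-2,4-7
 qos-group 3 dot1p 3
! map QoS groups to egress queues
qos-map traffic-class map_GroupsToQueues
 queue 0 qos-group 0
 queue 3 qos-group 3
! Step 2 (sketch): queuing class and policy; the class name and
! percentage shown are assumptions for illustration
class-map type queuing class_Queue3
 match queue 3
policy-map type queuing policy_Output_BandwidthPercent
 class class_Queue3
  bandwidth percent 30
! Step 3: the maps and policy are applied with the commands shown
! in the following tables
```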
S5248-Leaf1 and S5248-Leaf2 (commands are identical on both switches):

service-policy output type queuing policy_Output_BandwidthPercent
system qos
trust-map dot1p map_Dot1pToGroups
qos-map traffic-class map_GroupsToQueues

The bulk of the FSB configuration is now complete. The following steps complete the configuration example by configuring common features found in a Layer 3 leaf-spine network.
5.2.6 Configure VRRP
VRRP is an active/standby first hop redundancy protocol. When used among VLT peers, it becomes active/active. Both VLT peers have the VRRP virtual MAC address in their forwarding table as a local destination address. This allows the backup VRRP router to forward intercepted frames whose destination MAC address matches the VRRP virtual MAC address.

S5248-Leaf1                  S5248-Leaf2
vrrp version 3               vrrp version 3
interface vlan 1612
vrrp-group 12
virtual-address 172.16.12.
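A minimal sketch of an OS10EE VRRP group on a VLAN interface follows. The virtual address shown is hypothetical, not the value used in this deployment; use an unused address in the VLAN's subnet, identical on both VLT peers:

```
! Sketch: VRRP group 12 on VLAN 1612; the virtual address below
! is a hypothetical example
vrrp version 3
interface vlan 1612
 vrrp-group 12
  virtual-address 172.16.12.254
```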
S5248-Leaf1                                      S5248-Leaf2
description "Server 3"                           description "Server 3"
switchport mode trunk                            switchport mode trunk
switchport trunk allowed vlan 1001,1612-1616     switchport trunk allowed vlan 1002,1612-1616
spanning-tree port type edge                     spanning-tree port type edge
ets mode on                                      ets mode on
mtu 9216                                         mtu 9216
no shutdown                                      no shutdown

interface ethernet 1/1/34
description "Server 4"
switchport mode trunk
switchport trunk allowed vlan 1001,1612-1616
spanning-tree port type edge
ets mode on
mtu 9216
no shutdown
Note: Configuration of the spine switches is not detailed in this document. For information about the leaf-spine architecture and detailed configuration, see Dell EMC Networking Layer 3 Leaf-Spine Deployment and Best Practices with OS10EE. 5.2.9 Configure BGP routing The example configuration uses BGP for the application traffic associated with the Layer 3 leaf-spine network. Other routing protocols can be used and do not affect the FC storage configuration.
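As an illustration only, an OS10EE eBGP neighbor configuration takes the following general shape. The ASNs, router ID, and neighbor address below are hypothetical, not the values used in this deployment:

```
! Sketch: eBGP peering from a leaf to a spine; all values are
! hypothetical examples
router bgp 65101
 router-id 10.0.2.1
 neighbor 192.168.1.0
  remote-as 65100
  no shutdown
```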
5.2.10 Configure uplink failure detection (UFD) UFD is recommended on all server interfaces. When a leaf switch loses all of its uplinks, UFD shuts down the server-facing interfaces in the group so that server traffic fails over to the remaining connection. Finally, exit configuration mode and save the configuration with the end and write memory commands.
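A sketch of a UFD group follows; the upstream and downstream interface numbers are hypothetical examples, not the ports used in this deployment:

```
! Sketch: UFD group; interface numbers are hypothetical
uplink-state-group 1
 enable
 ! uplinks toward the spines
 upstream ethernet1/1/53
 upstream ethernet1/1/54
 ! server-facing ports brought down if all upstream links fail
 downstream ethernet1/1/31
 downstream ethernet1/1/32
end
write memory
```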
6 S4148U-ON FCF switch configuration This section details steps to configure the S4148U-ON switches running OS10EE in F_Port mode. 6.1 Prepare switches 6.1.1 Factory default configuration The configuration commands in the sections that follow begin with S4148U-ON switches at their factory default settings.
Note: The commands in the following tables should be entered in the order shown. Switch running-configuration files are provided as attachments named S4148U-F-Port-1.txt and S4148U-F-Port-2.txt.

6.2.1 Configure global switch settings
Configure the hostname, OOB management IP address, and OOB management default gateway.

S4148U-F-Port-1              S4148U-F-Port-2
configure terminal           configure terminal
hostname S4148U-F-Port-1     hostname S4148U-F-Port-2
interface mgmt 1/1/1
no ip address dhcp
ip address 100.67.166.
fc zone r740xd-2p1zone
 member alias-name SpA-0
 member alias-name SpB-0
 member alias-name r740xd-2p1
fc zone r740xd-2p2zone
 member alias-name SpA-1
 member alias-name SpB-1
 member alias-name r740xd-2p2
fc zone r740xd-3p1zone
 member alias-name SpA-0
 member alias-name SpB-0
 member alias-name r740xd-3p1
fc zone r740xd-3p2zone
 member alias-name SpA-1
 member alias-name SpB-1
 member alias-name r740xd-3p2
fc zone r740xd-4p1zone
 member alias-name SpA-0
 member alias-name SpB-0
 member alias-name r740xd-4p1
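The zone members above reference aliases, and the zones themselves are grouped into a zone set that is activated on the vfabric. A hedged sketch follows; the alias WWPN placeholder, zone set name, and vfabric ID are assumptions for illustration:

```
! Sketch: alias definition; replace the placeholder with the
! WWPN recorded from iDRAC or Unisphere
fc alias SpA-0
 member wwn <SpA-0-WWPN>
! Sketch: group zones into a zone set and activate it; the zone
! set name and vfabric ID are hypothetical
fc zoneset set1
 member r740xd-2p1zone
 member r740xd-2p2zone
 member r740xd-3p1zone
 member r740xd-3p2zone
 member r740xd-4p1zone
vfabric 100
 zoneset activate set1
```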
6.2.5 Configure QoS
Quality of Service (QoS) configuration is a 3-step process:
1. Create class maps to classify traffic.
2. Create QoS and policy maps for the classified traffic.
3. Apply the QoS and policy maps.
FCoE traffic is assigned dot1p priority value 3 by default. In the following tables, dot1p priority value 3 is mapped to QoS group 3. The remaining dot1p priority values, 0-2 and 4-7, are mapped to QoS group 0. QoS group 3 is mapped to queue 3, and QoS group 0 is mapped to queue 0.
S4148U-F-Port-1 and S4148U-F-Port-2 (commands are identical on both switches):

service-policy output type queuing policy_Output_BandwidthPercent
system qos
trust-map dot1p map_Dot1pToGroups
qos-map traffic-class map_GroupsToQueues
end
write memory
7 S5248-ON FSB validation After configuring connected devices, many commands are available to validate the network configuration. This section provides a list of the most common commands and their output for this topology. Note: The following commands and outputs shown are for S5248-Leaf1. The output for S5248-Leaf2 is similar.
7.3 show fcoe sessions The show fcoe sessions command shows all currently active FCoE sessions on the switch. In this example, four FCoE sessions are active on each switch.
8 S4148U-ON FCF (F_Port) validation After configuring connected devices, many commands are available to validate the network configuration. This section provides a list of the most common commands and their output for this topology. Note: The following commands and outputs are for the S4148U-F-Port-1. The output for the S4148U-F-Port-2 is similar.
8.4 show fc ns switch The show fc ns switch command shows all device ports that are logged into the fabric. In this deployment, four ports are logged in to each switch: two storage ports and two CNA ports.
Switch Port           ethernet1/1/41
FC-Id                 64:05:01
Port Name             20:01:f4:e9:d4:62:4b:ba
Node Name             20:00:f4:e9:d4:62:4b:ba
Class of Service      8

Switch Name           10:00:e4:f0:04:6b:01:42
Domain Id             100
Switch Port           ethernet1/1/41
FC-Id                 64:05:02
Port Name             20:01:f4:e9:d4:61:c6:6a
Node Name             20:00:f4:e9:d4:61:c6:6a
Class of Service      8
8.5 show fc zoneset The show fc zoneset active command shows the zones and zone members in the configured zone sets. Members that are logged into the fabric are shown with an asterisk (*).
8.6 show vfabric The show vfabric command output provides various information including the default zone mode, the active zone set, and interfaces that are members of the vfabric.
A Validated components

Leaf switches
Qty  Item               Version
2    Dell EMC S5248-ON  10.4.2.1

Management switches
Qty  Item               Version
2    Dell EMC S3048-ON  10.4.2.1

Spine switches
Qty  Item               Version
2    Dell EMC Z9264-ON  10.4.2.1

Fibre Channel switches
Qty  Item                Version
2    Dell EMC S4148U-ON  10.4.2.1

Storage
Qty  Item                 Version
2    Dell EMC Unity 500F  4.3.0.1522077968

Servers
Qty  Item                       Version
4    Dell EMC PowerEdge R740xd  BIOS 1.6.12, iDRAC 3.21.26.
B PowerEdge server, Unity storage, and VMware setup B.1 PowerEdge server configuration This section details the configuration of the CNAs used to validate the network topology. Note: Exact iDRAC steps in this section may vary depending on hardware, software and browser versions used. See the PowerEdge server documentation for steps to connect to the iDRAC. B.1.
CNA ports listed in iDRAC

4. Under Ports and Partitioned Ports, click the icon next to the first port to expand the details as shown:

WWPN for FCoE CNA port 1

5. Record the World Wide Port Name outlined in red in Figure 10. A convenient method is to copy and paste it into a text file. The WWPN is used in the S4148U-ON switch FC zone configuration.
6. Repeat steps 4 and 5 for CNA port 2.
7. Repeat steps 1 through 6 for the remaining servers.
The FC WWPNs used in this deployment example are shown in Table 1. The Switch column has been added for reference per the cable connections in the SAN topology diagram (Figure 7). Server FCoE CNA port WWPNs B.
Two WWNs are listed for each port. The World Wide Node Name (WWNN), outlined in black, identifies this Unity storage array (the node). It is not used in zone configuration. The WWPNs, outlined in blue, identify the individual ports and are used for FC zoning. 3. Record the WWPNs as shown in Table 8. The Switch column has been added based on the physical cable connections that are shown in Figure 7. Storage array FC adapter WWPNs B.2.
4. A list of discovered ESXi hosts is displayed. Select the applicable hosts and click Next.
5. A VMware API for Storage Awareness (VASA) Provider is not used in this example. Click Next.
6. On the Summary page, review the ESXi Hosts to be added. Click Finish.
7. When the Overall status shows 100% Completed, click Close.
8. The vCenter server is displayed as shown in Figure 14.

vCenter server added to Unisphere

9. The list of added ESXi hosts is displayed on the ESXi Hosts tab, as shown in Figure 15.
LUN created

Create extra LUNs and grant access (map) to hosts as needed.

Note: To modify host access at any time, check the box next to the LUN to select it, click the edit icon, and select the Host Access tab.

B.3 VMware preparation
B.3.1 VMware ESXi download and installation
Install VMware ESXi 6.7 U1 or later on each PowerEdge server. Dell EMC recommends using the latest Dell EMC customized ESXi .iso image available on support.dell.com.
Datacenter and cluster created with ESXi hosts

B.3.4 Configure storage on ESXi hosts
The example LUN created on the storage array is used to create a datastore on an ESXi host. The datastore is used to create a virtual disk on a virtual machine (VM) residing on the ESXi host. This process may be repeated as needed for extra LUNs, hosts, and VMs.

B.3.5 Rescan storage
1. In the vSphere Web Client, go to Home > Hosts and Clusters.
2.
LUN visible to ESXi host

6. Repeat for the host's second adapter, vmhba5 in this example. The LUN information on the Adapter Details > Devices tab is identical to the first adapter's.
7. Select the first storage adapter, for example, vmhba4, and then select the Adapter Details > Paths tab as shown in Figure 19. The target, the LUN number (LUN 0 in this example), and the path status are shown. The target field includes the two active storage WWPNs connected to vmhba4.
Name and device selection page 5. Provide a Datastore name, for example, Unity 80GB LUN, select the LUN in the list, and click Next. 6. Select the VMFS version. For this guide, it is left at its default setting, VMFS 5. Click Next. 7. Leave the Partition configuration at its default settings and click Next > Finish to create the datastore. The datastore is now accessible by selecting the host in the Navigator pane. Select the Configure tab > Storage > Datastores as shown in Figure 21.
4. Click the icon next to New Hard Disk to view the configuration options.
5. Next to Location, click Browse, select the previously configured datastore (Unity 80GB LUN in this example), and click OK. The screen looks similar to Figure 22.

New hard disk configuration options

6. Next to New Hard disk, set the size in GB less than or equal to the Maximum size shown on the line below. The size is set to 40 GB in this example.
7. Click OK to close the Edit Settings window and create the virtual disk.
B.3.8 Configure the virtual disk in Windows Server
The following example is applicable for VMs running Windows Server 2008, 2012, or 2016. See the operating system documentation to configure virtual disks on other supported guest operating systems.
1. Power on the VM and log in to the Windows Server guest operating system.
2. Within Windows Server, click Server Manager > Tools > Computer Management > Storage > Disk Management.
C Technical resources Dell EMC Networking Guides Dell EMC Networking Layer 3 Leaf-Spine Deployment and Best Practices with OS10EE OS10 Enterprise Edition User Guide Release 10.4.2.
D Support and feedback Contacting Technical Support Support Contact Information Web: http://www.dell.com/support Telephone: USA: 1-800-945-3355 Feedback for this document We encourage readers to provide feedback on the quality and usefulness of this publication by sending an email to Dell_Networking_Solutions@Dell.com.