VxFlex Network Deployment Guide using Dell EMC Networking 25GbE switches and OS10EE A VxFlex Ready Node deployment guide using Dell EMC Networking S5200-ON switches Abstract The Dell EMC Networking S5248F-ON is the latest in the S-series of 25GbE switches that provide the bandwidth and low latency support for a scalable storage architecture. This document details the deployment of the Dell EMC VxFlex Ready Node solution using these Dell EMC Networking 25GbE switches.
Revisions

Date         Description
March 2019   Initial release

The information in this publication is provided “as is.” Dell Inc. makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any software described in this publication requires an applicable software license. © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.
Executive summary

Dell EMC VxFlex is an industry-leading software-defined storage (SDS) solution that enables customers to extend their existing virtual infrastructure into a high-performing virtual SAN. VxFlex creates a virtual server SAN using industry-standard servers with direct attached storage (DAS). You can deploy VxFlex using as few as three hosts and up to 1,024 hosts. Each host can use storage media such as flash-based SSDs, NVMe SSDs, traditional spinning disks, or a mix.
1 Introduction

VxFlex is a software-only solution that uses existing servers’ local disks and LAN to create a virtual SAN that has all the benefits of external storage, but at a fraction of the cost and complexity. VxFlex uses the existing local storage devices and turns them into shared block storage. For many workloads, VxFlex storage is comparable to, or better than, external shared block storage.
1.2 Attachments

This document in .pdf format includes one or more file attachments. To access attachments in Adobe Acrobat Reader, click the paper clip (attachments) icon in the left pane, then double-click the attachment.

1.3 Dell EMC VxFlex

A VxFlex virtual SAN consists of the following software components:

• Meta Data Manager (MDM) - Configures and monitors the VxFlex system. The MDM can be configured in redundant cluster mode, with three members on three servers, or five members on five servers.
• Storage Data Server (SDS) - Manages the local storage of a single server and shares it with the rest of the cluster.
• Storage Data Client (SDC) - A lightweight device driver that exposes VxFlex shared volumes as block devices to the host on which it runs.
2 Hardware overview

This section briefly describes the hardware that is used to validate the deployment example in this guide. Appendix C contains a complete listing of hardware and components. Steps in this document were validated using the specified Dell EMC Networking switches and OS10EE. They may also be used with other Dell EMC Networking switch models running the same networking operating system or later, provided the switch has the required port numbers, speeds, and types.
3 VxFlex networking overview The primary purpose of this guide is to provide a step-by-step example for configuring the network for VxFlex using OS10EE. Chapter 4 provides instructions for configuring Dell EMC Networking S5248F-ON 25GbE switches, running OS10EE. Two S5248F-ON switches are used as ToR/leaf switches to connect Dell EMC VxFlex Ready Nodes (based on R740xd servers) for VxFlex installation and upstream connectivity.
Figure: Production network — a spine layer above two S5248F-ON leaf switches joined by a VLTi, four VxFlex nodes (each running an ESXi host, MDM, SDS, and SDC), and a management environment containing vCenter, NTP, DNS (172.16.11.4, 172.16.11.5), and a management workstation.

3.1.2 Management network

A single S3048-ON switch provides iDRAC connectivity to the VxFlex nodes. Figure 6 shows that the S3048-ON is connected to the leaf switches through the OOB port on each leaf switch.
Figure: Management network for a single rack — the S3048-ON management switch provides 1GbE links to the leaf switch OOB ports and to the iDRAC port on each of the four VxFlex nodes.

3.2 Network connectivity

Figure 7 shows one VxFlex node (VxFlex-1) connected to two leaf switches using two Mellanox ConnectX-4 LX PCIe cards that are installed in PCIe slots 1 and 2. The leaf switches are Virtual Link Trunking (VLT) peers.
Figure: VxFlex-1 wiring to the 25GbE production network — VxFlex-1 Ready Node connected to S5248F-ON Leaf1A and Leaf1B, which are joined by a VLTi.

3.1 Connections to OOB management switch

The OOB management network is an isolated network for remote management of servers, switches, and other devices. It is also used to carry heartbeat messages sent between leaf switches configured as VLT peers.
Figure: OOB management network connections — the S3048-ON provides 1GbE OOB management connectivity to the S5248F-ON leaf switches and the VxFlex-1 Ready Node.

Figure 8 shows the first node (VxFlex-1) connected to the S3048-ON management switch using the onboard iDRAC port.
3.2 IP addressing Dell EMC VxFlex MDMs, SDSs, and SDCs can have multiple IP addresses and can reside on more than one network. Multiple IPs provide options for load balancing and redundancy. VxFlex natively provides redundancy and load balancing across physical network links when an MDM or SDS is configured to send traffic across multiple links. In this configuration, each interface available to the MDM or SDS is assigned an IP address, each in a different subnet.
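The per-subnet address assignment described above can be sketched programmatically. The following Python sketch is not part of the VxFlex tooling; it simply allocates one static data IP per node from each data subnet. The subnet values and the .11 starting host mirror the addressing examples used later in this guide but are otherwise arbitrary assumptions.

```python
import ipaddress

def allocate_data_ips(n_nodes, subnet, first_host=11):
    """Allocate one static data IP per node from the given subnet.

    first_host is the last octet of the first node's address; the
    default of 11 mirrors the .11-.15 range used in this guide.
    """
    net = ipaddress.ip_network(subnet)
    hosts = list(net.hosts())
    # hosts[0] is .1 in a /24, so index first_host - 1 yields .first_host
    return [str(hosts[first_host - 1 + i]) for i in range(n_nodes)]

# Example: four nodes, one pool per data subnet
data1 = allocate_data_ips(4, "172.16.34.0/24")
data2 = allocate_data_ips(4, "172.16.35.0/24")
print(data1[0], data2[3])  # → 172.16.34.11 172.16.35.14
```

Each interface available to an MDM or SDS then receives one address from a different pool, giving the per-link redundancy described above.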
VxFlex data IP network calculations

Item             Description                                     Comments
N                Number of nodes
Data network 1   The pools of IP addresses used for static       For clarity, the first
                 allocation for the following groups:            subnet is referred to
                 1. Node_DATA1_IP = VxFlex internal              as Data1
                    (interconnect) IP addresses
                 2. SVM_DATA1_IP = SVM management IP
                    addresses
                 3.
IP Pool                         VLAN ID
ESXI_VMOTION_IP                 1632
HOST_AND_SVM_MGMT_IP            1633
Node_DATA1_IP & SVM_DATA1_IP    1634
Node_DATA2_IP & SVM_DATA2_IP    1635

3.3 Switch preparation

Switches used in this guide run OS10EE version 10.4.2.2 or later. Run the show version command to check the operating system version, and then update the operating system as required for each switch.

Note: Dell EMC recommends upgrading to the latest release available on Dell Digital Locker (account required).
Note: If OS10EE was factory installed, a perpetual license is already on the switch.

Reset your switches to their factory defaults to remove any current configuration. The following commands set switches running OS10EE to their factory default settings:

OS10# delete startup-configuration
Proceed to delete startup-configuration [confirm yes/no(default)]:y
OS10# reload
System configuration has been modified.
4 Dell EMC Networking S5248F-ON 25GbE switch configuration

This chapter provides the steps for configuring the network to connect VxFlex v2.6.1.1 on Dell EMC VxFlex Ready Nodes (based on R740xd servers) using Dell EMC Networking S5248F-ON 25GbE switches running OS10EE. Switch configuration details with explanations are provided for one leaf switch. The remaining leaf switch uses a similar configuration.

Note: The configuration files for both leaf switches are provided as attachments in the softcopy (.pdf) version of this guide.
S5248-Leaf1A(config)# spanning-tree rstp priority 0

Configure the VLT interconnect between S5248F-Leaf1A and S5248F-Leaf1B. In this configuration, use interfaces Eth 1/1/55-56 for the VLT interconnect. The backup destination is the management IP address of the VLT peer switch, S5248F-Leaf1B. Finally, VLT peer-routing is enabled, providing forwarding redundancy in the event of a switch failure.
S5248-Leaf1A(config)# interface range ethernet 1/1/1-1/1/4
S5248-Leaf1A(conf-range-eth1/1/1-1/1/4)# no ip address
S5248-Leaf1A(conf-range-eth1/1/1-1/1/4)# description "LINK TO VXFLEX NODES"
S5248-Leaf1A(conf-range-eth1/1/1-1/1/4)# mtu 9216
S5248-Leaf1A(conf-range-eth1/1/1-1/1/4)# switchport mode trunk
S5248-Leaf1A(conf-range-eth1/1/1-1/1/4)# switchport access vlan 1
S5248-Leaf1A(conf-range-eth1/1/1-1/1/4)# switchport trunk allowed vlan 1632,1633,1634
S5248-Leaf1A(conf-range-eth1/1/1-1/1/4)# spanning-tree po
VLT MAC address            : de:11:de:11:de:11
IP address                 : fda5:74c8:b79e:1::1
Delay-Restore timer        : 90 seconds
Peer-Routing               : Enabled
Peer-Routing-Timeout timer : 0 seconds
VLTi Link Status
    port-channel1000       : up

VLT Peer Unit ID    System MAC Address    Status    IP Address             Version
--------------------------------------------------------------------------------
2                   3c:2c:30:10:35:00     up        fda5:74c8:b79e:1::2    2.0

4.3.2 show vlt domain id mismatch

The mismatch option lists VLANs configured on a single switch in the VLT domain.
4.3.4 show vlt mac-inconsistency

The show vlt mac-inconsistency command shows the inconsistencies in dynamic MAC addresses learned between VLT peers.

S5248-Leaf1A# show vlt mac-inconsistency
Inconsistency check for VLAN based MAC
--------------------------------------
Fetching MACs from unit 2
Fetching MACs from unit 1
Identifying inconsistencies ..
No inconsistencies found

4.3.5 show spanning-tree brief

The show spanning-tree brief command validates that STP is enabled on the leaf switches.
5 VMware virtual network design

This section provides tables that outline the virtual network design used in this deployment. Specific steps to create the distributed switches, create the VMkernels, and set NIC teaming policies are not covered in this document. See the vSphere Networking Guide for vSphere 6.5, ESXi 6.5, and vCenter Server 6.5 for details on configuring ESXi and the virtual network environment. 5.
Port group settings

VDS              Port group name                    Teaming policy                           Teaming and Failover           VLAN ID
atx01-w01-vds01  atx01-w01-vds01-vmotion            Route based on physical NIC load         Active: Uplink 1 and Uplink 2  1632
atx01-w01-vds01  atx01-w01-vds01-VxFlex-management  Route based on physical NIC load         Active: Uplink 1 and Uplink 2  1633
atx01-w01-vds01  atx01-w01-vds01-VxFlex-data01      Route based on originating virtual port  Active: Uplink 1               1634
atx01-w01-vds01  atx01-w01-vds01-VxFlex-data02      Route based on originating virtual port  Active: Uplink 2               1635
5.4 VMware vSphere VMkernel configuration The following table contains the configuration details for the VxFlex VDS with four VMkernel adapters assigned (see chapter 6).
Physical connectivity and configuration of the VMware vSphere distributed switches are now complete. Figure 10 represents a successfully deployed VxFlex HCI platform, which is created in the next chapter. The screen shot is taken from Home > Hosts and Clusters. Continue to chapter 6 for instructions to deploy the VxFlex HCI platform.
6 Deploy Dell EMC VxFlex

Deploying VxFlex in this environment consists of the following steps:

• Register the VxFlex plug-in
• Upload the VxFlex Open Virtual Appliance (OVA) template
• Deploy VxFlex

This section does not contain step-by-step instructions for deploying VxFlex. For a detailed step-by-step guide, see the ScaleIO IP Fabric Best Practice and Deployment Guide. 6.
VxFlex deployment has four steps:

1. SDC deployment and configuration
2. VxFlex advanced configuration settings
3. Deploy the VxFlex environment
4. Install the VxFlex GUI (optional)

Before an ESXi host can consume the virtual SAN, the SDC kernel driver must be installed on each ESXi host, regardless of the role that host is playing. The process that is outlined below installs the SDC driver on the target host. To start the installation wizard, perform the following steps:

1.
3. Using the following table, select the VxFlex wizard parameter settings for steps 5 through 7.

VxFlex wizard deployment settings

Parameter                    Setting
Protection domain name       PD01
RAM read cache size per SDS  128 MB
Storage Pools                SSD01
Enable zero padding          True
SDS host selection           atx01w01esx05, atx01w01esx06, atx01w01esx07, atx01w01esx08
Selected devices             All empty devices categorized into the appropriate storage pool.
VxFlex networking addressing

ESXi name      Management IP    Default gateway  Data 1 IP        Data 2 IP
atx01w01esx08  172.16.33.11/24  172.16.33.253    172.16.34.11/24  172.16.35.11/24
               172.16.33.12/24  172.16.33.253    172.16.34.12/24  172.16.35.12/24
               172.16.33.13/24  172.16.33.253    172.16.34.13/24  172.16.35.13/24
               172.16.33.14/24  172.16.33.253    172.16.34.14/24  172.16.35.14/24
               172.16.33.15/24  172.16.33.253    172.16.34.15/24  172.16.35.15/24
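An addressing plan like the one above can be sanity-checked before deployment: each host's management, Data 1, and Data 2 addresses should land in three distinct subnets, and the default gateway should sit on the management subnet. The following is a hypothetical helper, not part of the deployment wizard; the sample rows reuse addresses from this guide.

```python
import ipaddress

# (management IP, default gateway, data1 IP, data2 IP) per host
rows = [
    ("172.16.33.11/24", "172.16.33.253", "172.16.34.11/24", "172.16.35.11/24"),
    ("172.16.33.12/24", "172.16.33.253", "172.16.34.12/24", "172.16.35.12/24"),
]

def validate(row):
    """Check one host's row: three distinct subnets, gateway on mgmt subnet."""
    mgmt, gw, d1, d2 = row
    mgmt_net = ipaddress.ip_interface(mgmt).network
    nets = {ipaddress.ip_interface(a).network for a in (mgmt, d1, d2)}
    return len(nets) == 3 and ipaddress.ip_address(gw) in mgmt_net

print(all(validate(r) for r in rows))  # → True
```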
The REST API can be used to add virtual IP addresses to the cluster. In all cases, a virtual IP NIC placeholder must be mapped to each virtual IP address; ensure that NICs are available for this purpose. Existing systems may be extended by adding MDMs to a cluster. The new MDMs should be mapped to the existing virtual IP addresses.
7 Best practices

The post-installation information that is provided in this section consists of the following:

• Increase the Maximum Transmission Unit (MTU) for VMware vSphere and Dell EMC VxFlex
• Configure Quality of Service using Differentiated Services (DiffServ)

For more information on performance tuning, including ESXi hosts and VxFlex VMs, see the VxFlex v2.x Performance Fine-Tuning Technical Notes Guide. 7.
To enable jumbo frames for the SVM, perform the following steps:

1. Run the ifconfig command to get the NIC information. The following is an example from an SVM deployed in this solution, VxFlex-172.16.33.12:

VxFlex-172-16-33-12:~ # ifconfig
eth0      Link encap:Ethernet  HWaddr 00:50:56:B7:81:28
          inet addr:172.16.33.12  Bcast:172.16.33.255  Mask:255.255.255.0
3. Save the file (:wq [ENTER]) and then enter the following command to restart the network services for the virtual machine:

VxFlex-172-16-33-12:~ # service network restart
Shutting down network interfaces:
    eth0 device: VMware VMXNET3 Ethernet Controller
    eth1 device: VMware VMXNET3 Ethernet Controller
    eth2 device: VMware VMXNET3 Ethernet Controller
Shutting down service network . . . . . . . . .
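This guide configures the switch ports with an MTU of 9216 while the vSphere and SVM interfaces use a jumbo MTU of 9000. The headroom covers Layer 2 framing overhead; the arithmetic can be sketched as follows (a simple illustration, assuming a single 802.1Q tag):

```python
ETH_HEADER = 14   # destination MAC + source MAC + EtherType
VLAN_TAG = 4      # 802.1Q tag
FCS = 4           # frame check sequence

def frame_size(payload_mtu, tagged=True):
    """On-wire Ethernet frame size for a given IP-layer MTU."""
    return payload_mtu + ETH_HEADER + (VLAN_TAG if tagged else 0) + FCS

host_mtu = 9000     # jumbo MTU set on the ESXi/SVM interfaces
switch_mtu = 9216   # mtu configured on the S5248F-ON ports

print(frame_size(host_mtu))                 # → 9022
print(frame_size(host_mtu) <= switch_mtu)   # → True
```

Because 9022 is well under 9216, the switch MTU never needs to change when the host-side jumbo MTU is tuned within this range.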
In the switch configuration section, a policy map is created that instructs both switches to trust the DSCP value mapping. The following configuration sets the S5248F-ON switch to trust DSCP value mapping; the configuration for the second leaf switch is identical.

S5248-Leaf1A(config)# interface range ethernet1/1/1-1/1/4
S5248-Leaf1A(conf-range-eth1/1/1-1/1/4)# trust-map dscp default

DSCP values are inserted on a port-group basis.
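When verifying DSCP markings in packet captures, remember that the DSCP value occupies the upper six bits of the IP ToS/Traffic Class byte (the lower two bits carry ECN), so a DSCP value maps to the ToS byte by a left shift of two. A small illustration:

```python
def dscp_to_tos(dscp):
    """DSCP occupies the upper 6 bits of the IP ToS byte; ECN the lower 2."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP is a 6-bit value")
    return dscp << 2

print(dscp_to_tos(46))  # Expedited Forwarding (EF) → 184
print(dscp_to_tos(26))  # Assured Forwarding AF31 → 104
```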
A Troubleshoot SDS connectivity

SDS connectivity problems affect VxFlex performance. VxFlex has a built-in tool to verify that all SDS nodes in a given protection domain have connectivity. From the VxFlex Command Line Interface (SCLI), run the VxFlex internal network test to verify the network speed between all the SDS nodes in the Protection Domain. The following command tests all SDS nodes with a payload of 10 GB, using eight parallel threads:

VxFlex-172-16-33-12:~ # scli --mdm_ip 172.16.33.
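As a hypothetical supplement to the built-in SCLI test, basic TCP reachability between SDS data addresses can be checked from any host with Python before digging into throughput. This helper is not part of VxFlex; the SDS data port (commonly documented as 7072 for ScaleIO/VxFlex releases of this vintage) should be verified for your release.

```python
import socket

def tcp_reachable(host, port, timeout=2.0):
    """Return True when a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage (hypothetical addresses from this guide's Data1 subnet):
# for sds in ("172.16.34.11", "172.16.34.12", "172.16.34.13", "172.16.34.14"):
#     print(sds, tcp_reachable(sds, 7072))
```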
B Routing VxFlex Virtual Machine traffic

This section outlines a possible solution for routing between SVMs in separate subnets. Each SVM contains three virtual NICs:

• Eth0 for VxFlex management
• Eth1 for VxFlex Data01
• Eth2 for VxFlex Data02

The SVM uses a single TCP/IP stack, so traffic to unknown networks is limited to a single default gateway. If VxFlex Data1 or Data2 traffic needs to reach an SVM in another subnet, for instance in another rack in the data center, this traffic fails.
100     eth1
200     eth2

4. Enable PBR for each interface. The following example is for eth1; repeat it for eth2.

VxFlex-172-16-33-12:~ # ip route flush table eth1
VxFlex-172-16-33-12:~ # ip route add 172.16.34.0/24 dev eth1 proto kernel scope link table eth1
VxFlex-172-16-33-12:~ # ip route add default via 172.16.34.253 dev eth1 table eth1
VxFlex-172-16-33-12:~ # ip rule add from 172.16.34.0/24 lookup eth1

5.
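Because the per-interface commands must be repeated for each data interface, generating the command strings from a small table reduces typos. The helper below is a hypothetical sketch that renders the same `ip route` and `ip rule` commands used above; it assumes a routing table named after each interface already exists in /etc/iproute2/rt_tables.

```python
def pbr_commands(iface, subnet, gateway):
    """Render the per-interface policy-based routing commands.

    Assumes /etc/iproute2/rt_tables already maps a table name to iface.
    """
    return [
        f"ip route flush table {iface}",
        f"ip route add {subnet} dev {iface} proto kernel scope link table {iface}",
        f"ip route add default via {gateway} dev {iface} table {iface}",
        f"ip rule add from {subnet} lookup {iface}",
    ]

# Example for eth2 (addresses follow this guide's Data2 subnet):
for cmd in pbr_commands("eth2", "172.16.35.0/24", "172.16.35.253"):
    print(cmd)
```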
C Validated hardware and components

The tables in this section list the hardware and components that are used to configure and validate the examples in this guide. Table 20 shows the Dell EMC Networking switches and associated versions used. Dell EMC recommends updating your switches to the latest operating system available on Dell Digital Locker (account required).

Dell EMC Networking switches

Qty  Item                          OS/Firmware version
1    S3048-ON - Management switch  OS10EE 10.4.2.2
D Validated software

Table 22 lists the software components used to validate the example configurations in this guide. It is important to update to the latest versions located at VxFlex Ready Nodes driver and firmware version requirements.

Software versions

Item                              Version
Dell EMC VxFlex                   2.6.1.1_113
VMware vSphere PowerCLI           11.0
VxFlex vSphere Plug-in Installer  2.6.11000_113
SVM OVA                           2.6.11000_113.ova
VMware ESXi                       6.
E Supported switches

Steps in this document were validated using specific Dell EMC Networking switches and OS10EE. They may also be used with other Dell EMC Networking switch models that run the same networking operating system or later. Depending on a switch's available port numbers, speeds, and types, minor adjustments to the example commands provided in this guide may be required to achieve the same results. The table below shows all switches that can be configured using the directions provided in this guide.
F Product manuals and technical guides

Dell EMC
Dell EMC Knowledge Library - An online technical community where IT professionals have access to numerous resources for Dell EMC software, hardware, and services.
G Fabric Design Center The Dell EMC Fabric Design Center (FDC) is a cloud-based application that automates the planning, design and deployment of network fabrics that power Dell EMC compute, storage and hyper-converged infrastructure solutions, including VxFlex. The FDC is ideal for turnkey solutions and automation based on validated deployment guides like this one. FDC allows design customization and flexibility to go beyond validated deployment guides.
H Support and feedback

Contacting Technical Support

Support Contact Information
Web: http://support.dell.com/
Telephone: USA: 1-800-945-3355

Feedback for this document
We encourage readers to provide feedback on the quality and usefulness of this publication by sending an email to Dell_Networking_Solutions@Dell.