User’s Guide
Converged Network Adapters
41xxx Series
Third party information brought to you courtesy of Dell.
AH0054602-00 M
October 16, 2019
Marvell
User’s Guide
Ethernet iSCSI Adapters and Ethernet FCoE Adapters
For more information, visit our website at: http://www.marvell.com
Notice
THIS DOCUMENT AND THE INFORMATION FURNISHED IN THIS DOCUMENT ARE PROVIDED “AS IS” WITHOUT ANY WARRANTY.
Table of Contents

Preface
Supported Products
Intended Audience
What Is in This Guide
Documentation Conventions
Downloading Updates and Documentation

Installing the Linux Drivers with RDMA
Linux Driver Optional Parameters
Linux Driver Operation Defaults
Linux Driver Messages
Statistics

Configuring FCoE Boot
Configuring iSCSI Boot
Configuring Partitions
Partitioning for VMware ESXi 6.5 and ESXi 6.7
6 Boot from SAN Configuration
iSCSI Boot from SAN

Injecting (Slipstreaming) Adapter Drivers into Windows Image Files
Configuring FCoE Boot from SAN on Linux
Prerequisites for Linux FCoE Boot from SAN
Configuring Linux FCoE Boot from SAN
Configuring FCoE Boot from SAN on VMware

RoCE Mode and Statistics
Configuring a Paravirtual RDMA Device (PVRDMA)
Configuring DCQCN
DCQCN Terminology
DCQCN Overview

Configuring iSER on ESXi 6.7
Before You Begin
Configuring iSER for ESXi 6.7
10 iSCSI Configuration
iSCSI Boot
iSCSI Offload in Windows Server

14 VXLAN Configuration
Configuring VXLAN in Linux
Configuring VXLAN in VMware
Configuring VXLAN in Windows Server 2016
Enabling VXLAN Offload on the Adapter
Deploying a Software Defined Network

16 Windows Server 2019
RSSv2 for Hyper-V
RSSv2 Description
Known Event Log Errors
Windows Server 2019 Behaviors
VMMQ Is Enabled by Default
List of Figures

3-1 Dell Update Package Window
3-2 QLogic InstallShield Wizard: Welcome Window
3-3 QLogic InstallShield Wizard: License Agreement Window
3-4 InstallShield Wizard: Setup Type Window
6-6 System Setup: Selecting General Parameters
6-7 System Setup: iSCSI General Parameters
7-17 Assigning a vmknic for PVRDMA
7-18 Setting the Firewall Rule
15-8 Windows PowerShell Command: Get-NetAdapter
15-9 Advanced Properties: Enable QoS
15-10 Advanced Properties: Setting VLAN ID
15-11 Advanced Properties: Enabling QoS
List of Tables

2-1 Host Hardware Requirements
2-2 Minimum Host Operating System Requirements
3-1 41xxx Series Adapters Linux Drivers
Preface This preface lists the supported products, specifies the intended audience, explains the typographic conventions used in this guide, and describes legal notices. Supported Products NOTE QConvergeConsole® (QCC) GUI is the only GUI management tool across all Marvell® FastLinQ® adapters. QLogic Control Suite™ (QCS) GUI is no longer supported for the FastLinQ 45000 Series Adapters and adapters based on 57xx/57xxx controllers, and has been replaced by the QCC GUI management tool.
QL41162HMRJ-DE 10Gb Converged Network Adapter
QL41164HMCU-DE 10Gb Converged Network Adapter
QL41164HMRJ-DE 10Gb Converged Network Adapter
QL41164HFRJ-DE 10Gb Converged Network Adapter, full-height bracket
QL41164HFRJ-DE 10Gb Converged Network Adapter, low-profile bracket
QL41164HFCU-DE 10Gb Converged Network Adapter, full-height bracket
QL41232HFCU-DE 10/25Gb NIC Adapter, full-height bracket
QL41232HLCU-DE 10/25Gb NIC Adapter, low-profile bracket
What Is in This Guide
Chapter 5 Adapter Preboot Configuration describes the preboot adapter configuration tasks using the Human Interface Infrastructure (HII) application.
Chapter 6 Boot from SAN Configuration covers boot from SAN configuration for both iSCSI and FCoE.
Chapter 7 RoCE Configuration describes how to configure the adapter, the Ethernet switch, and the host to use RDMA over converged Ethernet (RoCE).
Appendix E Revision History describes the changes made in this revision of the guide.
At the end of this guide is a glossary of terms.
Documentation Conventions
This guide uses the following documentation conventions:
NOTE provides additional information.
CAUTION without an alert symbol indicates the presence of a hazard that could cause damage to equipment or loss of data.
Preface Documentation Conventions Text in italics indicates terms, emphasis, variables, or document titles. For example: What are shortcut keys? To enter the date type mm/dd/yyyy (where mm is the month, dd is the day, and yyyy is the year). Topic titles between quotation marks identify related topics either within this manual or in the online help, which is also referred to as the help system throughout this document.
( ) (parentheses) and { } (braces) are used to avoid logical ambiguity. For example:
a|b c is ambiguous
{(a|b) c} means a or b, followed by c
{a|(b c)} means either a, or b c
Downloading Updates and Documentation
The Marvell Web site provides periodic updates to product firmware, software, and documentation.
To download Marvell firmware, software, and documentation:
1. Go to www.marvell.com.
2.
Preface Legal Notices Legal Notices Legal notices covered in this section include laser safety (FDA notice), agency certification, and product safety compliance. Laser Safety—FDA Notice This product complies with DHHS Rules 21CFR Chapter I, Subchapter J. This product has been designed and manufactured according to IEC60825-1 on the safety label of laser product.
Preface Legal Notices Immunity Standards EN61000-4-2: ESD EN61000-4-3: RF Electro Magnetic Field EN61000-4-4: Fast Transient/Burst EN61000-4-5: Fast Surge Common/ Differential EN61000-4-6: RF Conducted Susceptibility EN61000-4-8: Power Frequency Magnetic Field EN61000-4-11: Voltage Dips and Interrupt VCCI: 2015-04; Class A AS/NZS; CISPR 32: 2015 Class A CNS 13438: 2006 Class A KCC: Class A Korea RRA Class A Certified Product Name/Model: Converged Network Adapters and Intelligent Ethernet Adapters Certific
Preface Legal Notices VCCI: Class A This is a Class A product based on the standard of the Voluntary Control Council for Interference (VCCI). If this equipment is used in a domestic environment, radio interference may occur, in which case the user may be required to take corrective actions. Product Safety Compliance UL, cUL product safety: UL 60950-1 (2nd Edition) A1 + A2 2014-10-14 CSA C22.2 No.60950-1-07 (2nd Edition) A1 +A2 2014-10 Use only with listed ITE or equivalent. Complies with 21 CFR 1040.
1 Product Overview This chapter provides the following information for the 41xxx Series Adapters: Functional Description Features “Adapter Specifications” on page 3 Functional Description The Marvell FastLinQ 41000 Series Adapters include 10 and 25Gb Converged Network Adapters and Intelligent Ethernet Adapters that are designed to perform accelerated data networking for server systems. The 41000 Series Adapter includes a 10/25Gb Ethernet MAC with full-duplex capability.
Generic segment offload (GSO)
Large receive offload (LRO)
Receive segment coalescing (RSC)
Microsoft® dynamic virtual machine queue (VMQ), and Linux Multiqueue
Adaptive interrupts:
Transmit/receive side scaling (TSS/RSS)
Stateless offloads for Network Virtualization using Generic Routing Encapsulation (NVGRE) and virtual extensible LAN (VXLAN) L2/L3 GRE tunneled traffic1
Manageability:
System management bus (SMB) controller
Advanced Configuration and Power Interface (ACPI)
Serial flash NVRAM memory
PCI Power Management Interface (v1.1)
64-bit base address register (BAR) support
EM64T processor support
iSCSI and FCoE boot support2
Adapter Specifications
The 41xxx Series Adapter specifications include the adapter’s physical characteristics and standards-compliance references.
2 Hardware Installation This chapter provides the following hardware installation information: System Requirements “Safety Precautions” on page 5 “Preinstallation Checklist” on page 6 “Installing the Adapter” on page 6 System Requirements Before you install a Marvell 41xxx Series Adapter, verify that your system meets the hardware and operating system requirements shown in Table 2-1 and Table 2-2. For a complete list of supported operating systems, visit the Marvell Web site. Table 2-1.
Table 2-2. Minimum Host Operating System Requirements
Operating System    Requirement
Windows Server      2012 R2, 2019
Linux               RHEL® 7.6, 7.7, 8.0, 8.1; SLES® 12 SP4, SLES 15, SLES 15 SP1; CentOS 7.6
VMware              vSphere® ESXi 6.5 U3 and vSphere ESXi 6.7 U3
XenServer           Citrix Hypervisor 8.0, 7.0, 7.1
NOTE Table 2-2 denotes minimum host OS requirements. For a complete list of supported operating systems, visit the Marvell Web site.
2–Hardware Installation Preinstallation Checklist Preinstallation Checklist Before installing the adapter, complete the following: 1. Verify that the system meets the hardware and software requirements listed under “System Requirements” on page 4. 2. Verify that the system is using the latest BIOS. NOTE If you acquired the adapter software from the Marvell Web site, verify the path to the adapter driver files. 3. If the system is active, shut it down. 4.
2–Hardware Installation Installing the Adapter 5. Applying even pressure at both corners of the card, push the adapter card into the slot until it is firmly seated. When the adapter is properly seated, the adapter port connectors are aligned with the slot opening, and the adapter faceplate is flush against the system chassis. CAUTION Do not use excessive force when seating the card, because this may damage the system or the adapter.
3 Driver Installation This chapter provides the following information about driver installation: Installing Linux Driver Software “Installing Windows Driver Software” on page 18 “Installing VMware Driver Software” on page 31 Installing Linux Driver Software This section describes how to install Linux drivers with or without remote direct memory access (RDMA). It also describes the Linux driver optional parameters, default values, messages, statistics, and public key for Secure Boot.
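Before installing an out-of-box driver package, it can be useful to check whether an inbox qed/qede driver is already loaded and note its version. A minimal check from the shell (module and interface names vary by distribution and adapter):
# lsmod | grep qed
# modinfo qede | grep ^version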
3–Driver Installation Installing Linux Driver Software Table 3-1 describes the 41xxx Series Adapter Linux drivers. Table 3-1. 41xxx Series Adapters Linux Drivers Linux Driver Description qed The qed core driver module directly controls the firmware, handles interrupts, and provides the low-level API for the protocol specific driver set. The qed interfaces with the qede, qedr, qedi, and qedf drivers. The Linux core module manages all PCI device resources (registers, host interface queues, and so on).
3–Driver Installation Installing Linux Driver Software The following source code TAR BZip2 (BZ2) compressed file installs Linux drivers on RHEL and SLES hosts: fastlinq-.tar.bz2 NOTE For network installations through NFS, FTP, or HTTP (using a network boot disk), you may require a driver disk that contains the qede driver. Compile the Linux boot drivers by modifying the makefile and the make environment. Installing the Linux Drivers Without RDMA To install the Linux drivers without RDMA: 1.
3–Driver Installation Installing Linux Driver Software rmmod qed depmod -a For RHEL: cd /lib/modules//extra/qlgc-fastlinq rm -rf qed.ko qede.ko qedr.ko For SLES: cd /lib/modules//updates/qlgc-fastlinq rm -rf qed.ko qede.ko qedr.ko To remove Linux drivers in a non-RDMA environment: 1. To get the path to the currently installed drivers, issue the following command: modinfo 2. Unload and remove the Linux drivers.
3–Driver Installation Installing Linux Driver Software To remove Linux drivers in an RDMA environment: 1. To get the path to the installed drivers, issue the following command: modinfo 2. Unload and remove the Linux drivers. modprobe -r qedr modprobe -r qede modprobe -r qed depmod -a 3. Remove the driver module files: If the drivers were installed using an RPM package, issue the following command: rpm -e qlgc-fastlinq-kmp-default-.
3–Driver Installation Installing Linux Driver Software For RHEL: cd /root/rpmbuild rpmbuild -bb SPECS/fastlinq-.spec For SLES: cd /usr/src/packages rpmbuild -bb SPECS/fastlinq-.spec 3. Install the newly compiled RPM: rpm -ivh RPMS//qlgc-fastlinq-..rpm NOTE The --force option may be needed on some Linux distributions if conflicts are reported. The drivers will be installed in the following paths.
3–Driver Installation Installing Linux Driver Software Installing Linux Drivers Using the TAR File To install Linux drivers using the TAR file: 1. Create a directory and extract the TAR files to the directory: tar xjvf fastlinq-.tar.bz2 2. Change to the recently created directory, and then install the drivers: cd fastlinq- make clean; make install The qed and qede drivers will be installed in the following paths.
5. Install libqedr libraries to work with RDMA user space applications. The libqedr RPM is available only for inbox OFED. You must select which RDMA protocol (RoCE, RoCEv2, or iWARP) is used in UEFI until concurrent RoCE+iWARP capability is supported in the firmware; none is enabled by default. Issue the following command:
rpm -ivh qlgc-libqedr-..rpm
6.
3–Driver Installation Installing Linux Driver Software Linux Driver Operation Defaults Table 3-3 lists the qed and qede Linux driver operation defaults. Table 3-3.
3–Driver Installation Installing Linux Driver Software To import and enroll the QLogic public key: 1. Download the public key from the following Web page: http://ldriver.qlogic.com/Module-public-key/ 2. To install the public key, issue the following command: # mokutil --root-pw --import cert.der Where the --root-pw option enables direct use of the root user. 3. Reboot the system. 4. Review the list of certificates that are prepared to be enrolled: # mokutil --list-new 5. Reboot the system again.
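After the key is enrolled (enrollment is typically confirmed through the MOK manager prompt during the next reboot), you can verify that it is active. A minimal check, assuming the certificate subject contains the string QLogic:
# mokutil --list-enrolled | grep -i qlogic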
3–Driver Installation Installing Windows Driver Software Installing Windows Driver Software For information on iWARP, see Chapter 8 iWARP Configuration.
3–Driver Installation Installing Windows Driver Software 2. In the Dell Update Package window (Figure 3-1), click Install. Figure 3-1. Dell Update Package Window 3. In the QLogic Super Installer—InstallShield® Wizard’s Welcome window (Figure 3-2), click Next. Figure 3-2.
4. Complete the following in the wizard’s License Agreement window (Figure 3-3):
a. Read the End User Software License Agreement.
b. To continue, select I accept the terms in the license agreement.
c. Click Next.
Figure 3-3. QLogic InstallShield Wizard: License Agreement Window
5. Complete the wizard’s Setup Type window (Figure 3-4) as follows:
a. Select one of the following setup types: Click Complete to install all program features.
Figure 3-4. InstallShield Wizard: Setup Type Window
6. If you selected Custom in Step 5, complete the Custom Setup window (Figure 3-5) as follows:
a. Select the features to install. By default, all features are selected.
3–Driver Installation Installing Windows Driver Software Figure 3-5. InstallShield Wizard: Custom Setup Window 7. In the InstallShield Wizard’s Ready To Install window (Figure 3-6), click Install. The InstallShield Wizard installs the QLogic Adapter drivers and Management Software Installer. Figure 3-6.
3–Driver Installation Installing Windows Driver Software 8. When the installation is complete, the InstallShield Wizard Completed window appears (Figure 3-7). Click Finish to dismiss the installer. Figure 3-7. InstallShield Wizard: Completed Window 9. In the Dell Update Package window (Figure 3-8), “Update installer operation was successful” indicates completion. (Optional) To open the log file, click View Installation Log.
3–Driver Installation Installing Windows Driver Software Figure 3-8. Dell Update Package Window DUP Installation Options To customize the DUP installation behavior, use the following command line options. To extract only the driver components to a directory: /drivers= NOTE This command requires the /s option. To install or update only the driver components: /driveronly NOTE This command requires the /s option.
NOTE This command requires the /s option.
DUP Installation Examples
The following examples show how to use the installation options.
To update the system silently:
.exe /s
To extract the update contents to the C:\mydir\ directory:
.exe /s /e=C:\mydir
To extract the driver components to the C:\mydir\ directory:
.exe /s /drivers=C:\mydir
To install only the driver components:
.exe /s /driveronly
3–Driver Installation Installing Windows Driver Software Figure 3-9.
3–Driver Installation Installing Windows Driver Software Setting Power Management Options You can set power management options to allow the operating system to turn off the controller to save power or to allow the controller to wake up the computer. If the device is busy (servicing a call, for example), the operating system will not shut down the device. The operating system attempts to shut down every possible device only when the computer attempts to go into hibernation.
3–Driver Installation Installing Windows Driver Software A communication protocol enables communication between the RPC agent and the client software. Depending on the mix of operating systems (Linux, Windows, or both) on the clients and managed hosts in your network, you can choose an appropriate utility.
3–Driver Installation Installing Windows Driver Software Link Control Mode There are two modes for controlling link configuration: Preboot Controlled is the default mode. In this mode, the driver uses the link configuration from the device, which is configurable from preboot components. This mode ignores the link parameters on the Advanced tab. Driver Controlled mode should be set when you want to configure the link settings from Advanced tab of the Device Manager (as shown in Figure 3-11).
3–Driver Installation Installing Windows Driver Software Link Speed and Duplex The Speed & Duplex property (on the Advanced tab of the Device Manager) can be configured to any selection in the Value menu (see Figure 3-12). Figure 3-12. Setting the Link Speed and Duplex Property This configuration is effective only when the link control property is set to Driver controlled (see Figure 3-11). FEC Mode FEC mode configuration at the OS level involves three driver advanced properties. To set FEC mode: 1.
3–Driver Installation Installing VMware Driver Software FEC mode configuration is active only when Speed & Duplex is set to a fixed speed. Setting this property to Auto Negotiation disables FEC configuration. 3. Set FEC Mode. On the Advanced tab of the Device Manager: a. In the Property menu, select FEC Mode. b. In the Value menu, select a valid value (see Figure 3-13). Figure 3-13. Setting the FEC Mode Property This property is in effect only when Step 1 and Step 2 have been completed.
3–Driver Installation Installing VMware Driver Software VMware Driver Parameter Defaults Removing the VMware Driver FCoE Support iSCSI Support VMware Drivers and Driver Packages Table 3-4 lists the VMware ESXi drivers for the protocols. Table 3-4. VMware Drivers VMware Driver Description qedentv Native networking driver qedrntv Native RDMA-Offload (RoCE and RoCEv2) drivera qedf Native FCoE-Offload driver qedil Legacy iSCSI-Offload driver qedi Native iSCSI-Offload driver (ESXi 6.
3–Driver Installation Installing VMware Driver Software Procedures in the following VMware KB article: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US& cmd=displayKC&externalId=2137853 You should install the NIC driver first, followed by the storage drivers. Installing VMware Drivers You can use the driver ZIP file to install a new driver or update an existing driver. Be sure to install the entire driver set from the same driver ZIP file.
3–Driver Installation Installing VMware Driver Software 5. Select one of the following installation options: Option 1: Install the driver bundle (which will install all of the driver VIBs at one time) by issuing the following command: # esxcli software vib install -d /tmp/qedentv-2.0.3.zip Option 2: Install the .vib directly on an ESX server using either the CLI or the VMware Update Manager (VUM). To do this, unzip the driver ZIP file, and then extract the .vib file. To install the .
3–Driver Installation Installing VMware Driver Software Table 3-5. VMware NIC Driver Optional Parameters (Continued) Parameter num_queues Description Specifies the number of TX/RX queue pairs. num_queues can be 1–11 or one of the following: –1 allows the driver to determine the optimal number of queue pairs (default). 0 uses the default queue. You can specify multiple values delimited by commas for multiport or multifunction configurations.
3–Driver Installation Installing VMware Driver Software Table 3-5. VMware NIC Driver Optional Parameters (Continued) Parameter Description vxlan_filter_en Enables (1) or disables (0) the VXLAN filtering based on the outer MAC, the inner MAC, and the VXLAN network (VNI), directly matching traffic to a specific queue. The default is vxlan_filter_en=1. You can specify multiple values delimited by commas for multiport or multifunction configurations.
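The optional parameters in Table 3-5 are applied as qedentv module options. A minimal sketch that sets num_queues and then reads the value back (the change takes effect after the module is reloaded or the host is rebooted):
# esxcli system module parameters set -m qedentv -p "num_queues=4"
# esxcli system module parameters list -m qedentv | grep num_queues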
3–Driver Installation Installing VMware Driver Software Removing the VMware Driver To remove the .vib file (qedentv), issue the following command: # esxcli software vib remove --vibname qedentv To remove the driver, issue the following command: # vmkload_mod -u qedentv FCoE Support The Marvell VMware FCoE qedf driver included in the VMware software package supports Marvell FastLinQ FCoE converged network interface controllers (C-NICs).
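After the qedf VIB is installed and the host is rebooted, you can confirm that the FCoE-capable ports were discovered and activated. A minimal check from the ESXi shell:
# esxcli fcoe nic list
# esxcli fcoe adapter list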
3–Driver Installation Installing VMware Driver Software NOTE The iSCSI interface supported by the QL41xxx Adapters is a dependent hardware interface that relies on networking services, iSCSI configuration, and management interfaces provided by VMware. The iSCSI interface includes two components: a network adapter and an iSCSI engine on the same interface. The iSCSI engine appears on the list of storage adapters as an iSCSI adapter (vmhba).
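A minimal check from the ESXi shell that the iSCSI engine is registered as a vmhba storage adapter:
# esxcli iscsi adapter list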
4 Upgrading the Firmware This chapter provides information about upgrading the firmware using the Dell Update Package (DUP). The firmware DUP is a Flash update utility only; it is not used for adapter configuration. You can run the firmware DUP by double-clicking the executable file. Alternatively, you can run the firmware DUP from the command line with several supported command line options.
4–Upgrading the Firmware Running the DUP by Double-Clicking 3. Follow the on-screen instructions. In the Warning dialog box, click Yes to continue the installation. The installer indicates that it is loading the new firmware, as shown in Figure 4-2. Figure 4-2. Dell Update Package: Loading New Firmware When complete, the installer indicates the result of the installation, as shown in Figure 4-3. Figure 4-3.
4–Upgrading the Firmware Running the DUP from a Command Line 4. Click Yes to reboot the system. 5. Click Finish to complete the installation, as shown in Figure 4-4. Figure 4-4. Dell Update Package: Finish Installation Running the DUP from a Command Line Running the firmware DUP from the command line, with no options specified, results in the same behavior as double-clicking the DUP icon. Note that the actual file name of the DUP will vary.
Figure 4-5 shows the options that you can use to customize the Dell Update Package installation.
Figure 4-5. DUP Command Line Options
Running the DUP Using the .bin File
The following procedure is supported only on Linux OS.
To update the DUP using the .bin file:
1. Copy the Network_Firmware_NJCX1_LN_X.Y.Z.BIN file to the system or server.
2. Change the file type into an executable file as follows:
chmod 777 Network_Firmware_NJCX1_LN_X.Y.Z.BIN
Example output from the SUT during the DUP update:
./Network_Firmware_NJCX1_LN_08.07.26.BIN
Collecting inventory...
Running validation...
BCM57810 10 Gigabit Ethernet rev 10 (p2p1)
The version of this Update Package is the same as the currently installed version.
Software application name: BCM57810 10 Gigabit Ethernet rev 10 (p2p1)
Package version: 08.07.26
Installed version: 08.07.26
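After the update completes and the system reboots, the running firmware version can be confirmed from the operating system. A minimal check on Linux (the interface name p2p1 is only an example):
# ethtool -i p2p1 | grep firmware-version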
5 Adapter Preboot Configuration
During the host boot process, you have the opportunity to pause and perform adapter management tasks using the Human Interface Infrastructure (HII) application.
5–Adapter Preboot Configuration Getting Started Getting Started To start the HII application: 1. Open the System Setup window for your platform. For information about launching the System Setup, consult the user guide for your system. 2. In the System Setup window (Figure 5-1), select Device Settings, and then press ENTER. Figure 5-1. System Setup 3. In the Device Settings window (Figure 5-2), select the 41xxx Series Adapter port that you want to configure, and then press ENTER. Figure 5-2.
5–Adapter Preboot Configuration Getting Started The Main Configuration Page (Figure 5-3) presents the adapter management options where you can set the partitioning mode. Figure 5-3. Main Configuration Page 4. Under Device Level Configuration, set the Partitioning Mode to NPAR to add the NIC Partitioning Configuration option to the Main Configuration Page, as shown in Figure 5-4. NOTE NPAR is not available on ports with a maximum speed of 1G. Figure 5-4.
5–Adapter Preboot Configuration Getting Started Device Level Configuration (see “Configuring Device-level Parameters” on page 49) NIC Configuration (see “Configuring NIC Parameters” on page 50) iSCSI Configuration (if iSCSI remote boot is allowed by enabling iSCSI offload in NPAR mode on the port’s third partition) (see “Configuring iSCSI Boot” on page 56) FCoE Configuration (if FCoE boot from SAN is allowed by enabling FCoE offload in NPAR mode on the port’s second partition) (see “Configur
5–Adapter Preboot Configuration Displaying Firmware Image Properties Table 5-1.
5–Adapter Preboot Configuration Configuring Device-level Parameters Configuring Device-level Parameters NOTE The iSCSI physical functions (PFs) are listed when the iSCSI Offload feature is enabled in NPAR mode only. The FCoE PFs are listed when the FCoE Offload feature is enabled in NPAR mode only. Not all adapter models support iSCSI Offload and FCoE Offload. Only one offload can be enabled per port, and only in NPAR mode.
4. NParEP Mode configures the maximum quantity of partitions per adapter. This parameter is visible when you select either NPAR or NPar + SR-IOV as the Virtualization Mode in Step 2.
Enabled allows you to configure up to 16 partitions per adapter.
Disabled allows you to configure up to 8 partitions per adapter.
5. Click Back.
6. When prompted, click Yes to save the changes. Changes take effect after a system reset.
5–Adapter Preboot Configuration Configuring NIC Parameters 2. Select one of the following Link Speed options for the selected port. Not all speed selections are available on all adapters. Auto Negotiated enables Auto Negotiation mode on the port. FEC mode selection is not available for this speed mode. 1 Gbps enables 1GbE fixed speed mode on the port. This mode is intended only for 1GbE interfaces and should not be configured for adapter interfaces that operate at other speeds.
5–Adapter Preboot Configuration Configuring NIC Parameters 6. 7. 8. For Boot Mode, select one of the following values: PXE enables PXE boot. FCoE enables FCoE boot from SAN over the hardware offload pathway. The FCoE mode is available only if FCoE Offload is enabled on the second partition in NPAR mode (see “Configuring Partitions” on page 61). iSCSI enables iSCSI remote boot over the hardware offload pathway.
5–Adapter Preboot Configuration Configuring Data Center Bridging To configure the port to use RDMA: NOTE Follow these steps to enable RDMA on all partitions of an NPAR mode port. 1. Set NIC + RDMA Mode to Enabled. 2. Click Back. 3. When prompted, click Yes to save the changes. Changes take effect after a system reset. To configure the port's boot mode: 1. For a UEFI PXE remote installation, select PXE as the Boot Mode. 2. Click Back. 3. When prompted, click Yes to save the changes.
5–Adapter Preboot Configuration Configuring Data Center Bridging 3. CEE enables the legacy Converged Enhanced Ethernet (CEE) protocol DCBX mode on this port. IEEE enables the IEEE DCBX protocol on this port. Dynamic enables dynamic application of either the CEE or IEEE protocol to match the attached link partner. On the Data Center Bridging (DCB) Settings page, enter the RoCE v1 Priority as a value from 0–7.
5–Adapter Preboot Configuration Configuring FCoE Boot Configuring FCoE Boot NOTE The FCoE Boot Configuration Menu is only visible if FCoE Offload Mode is enabled on the second partition in NPAR mode (see Figure 5-18 on page 64). It is not visible in non-NPAR mode. To enable FCoE-Offload mode, see the Application Note, Enabling Storage Offloads on Dell and Marvell FastLinQ 41000 Series Adapters at https://www.marvell.com/documents/5aa5otcbkr0im3ynera3/.
5–Adapter Preboot Configuration Configuring iSCSI Boot Figure 5-10. FCoE Target Configuration 4. Click Back. 5. When prompted, click Yes to save the changes. Changes take effect after a system reset. Configuring iSCSI Boot NOTE The iSCSI Boot Configuration Menu is only visible if iSCSI Offload Mode is enabled on the third partition in NPAR mode (see Figure 5-19 on page 65). It is not visible in non-NPAR mode.
5–Adapter Preboot Configuration Configuring iSCSI Boot To configure the iSCSI boot configuration parameters: 1. On the Main Configuration Page, select iSCSI Boot Configuration Menu, and then select one of the following options: iSCSI General Configuration iSCSI Initiator Configuration iSCSI First Target Configuration iSCSI Second Target Configuration 2. Press ENTER. 3.
5–Adapter Preboot Configuration Configuring iSCSI Boot iSCSI Second Target Parameters (Figure 5-14 on page 60) Connect IPv4 Address TCP Port Boot LUN iSCSI Name CHAP ID CHAP Secret 4. Click Back. 5. When prompted, click Yes to save the changes. Changes take effect after a system reset. Figure 5-11.
5–Adapter Preboot Configuration Configuring iSCSI Boot Figure 5-12. iSCSI Initiator Configuration Parameters Figure 5-13.
5–Adapter Preboot Configuration Configuring iSCSI Boot Figure 5-14.
5–Adapter Preboot Configuration Configuring Partitions Configuring Partitions You can configure bandwidth ranges for each partition on the adapter. For information specific to partition configuration on VMware ESXi 6.5, see Partitioning for VMware ESXi 6.5 and ESXi 6.7. To configure the maximum and minimum bandwidth allocations: 1. On the Main Configuration Page, select NIC Partitioning Configuration, and then press ENTER. 2.
5–Adapter Preboot Configuration Configuring Partitions 3. On the Global Bandwidth Allocation page (Figure 5-16), click each partition minimum and maximum TX bandwidth field for which you want to allocate bandwidth. There are eight partitions per port in dual-port mode. Figure 5-16. Global Bandwidth Allocation Page Partition n Minimum TX Bandwidth is the minimum transmit bandwidth of the selected partition expressed as a percentage of the maximum physical port link speed. Values can be 0–100.
5–Adapter Preboot Configuration Configuring Partitions 4. When prompted, click Yes to save the changes. Changes take effect after a system reset. To configure partitions: 1. To examine a specific partition configuration, on the NIC Partitioning Configuration page (Figure 5-15 on page 61), select Partition n Configuration. If NParEP is not enabled, only four partitions exist per port. 2.
5–Adapter Preboot Configuration Configuring Partitions iSCSI Mode enables or disables the iSCSI-Offload personality on the third partition. If you enable this mode on the third partition, you should disable NIC Mode. Because only one offload is available per port, if iSCSI-Offload is enabled on the port’s third partition, FCoE-Offload cannot be enabled on the second partition of that same NPAR mode port. Not all adapters support iSCSI Mode.
5–Adapter Preboot Configuration Configuring Partitions PCI Device ID PCI Address Figure 5-19. Partition 3 Configuration: iSCSI Offload 5. To configure the remaining Ethernet partitions, including the previous (if not offload-enabled), open the page for a partition 2 or greater partition (see Figure 5-20). NIC Mode (Enabled or Disabled). When disabled, the partition is hidden such that it does not appear to the OS if fewer than the maximum quantity of partitions (or PCI PFs) are detected.
5–Adapter Preboot Configuration Configuring Partitions Storage partitions are enabled (by converting one of the NIC partitions as storage) while drivers are already installed on the system. Partition 2 is changed to FCoE. The configuration is saved and the system is rebooted again.
6 Boot from SAN Configuration SAN boot enables deployment of diskless servers in an environment where the boot disk is located on storage connected to the SAN. The server (initiator) communicates with the storage device (target) through the SAN using the Marvell Converged Network Adapter (CNA) Host Bus Adapter (HBA). To enable FCoE-Offload mode, see the Application Note, Enabling Storage Offloads on Dell and Marvell FastLinQ 41000 Series Adapters at https://www.marvell.com/documents/5aa5otcbkr0im3ynera3/.
6–Boot from SAN Configuration iSCSI Boot from SAN iSCSI Out-of-Box and Inbox Support Table 6-1 lists the operating systems’ inbox and out-of-box support for iSCSI boot from SAN (BFS). Table 6-1.
For both Windows and Linux operating systems, iSCSI boot can be configured to boot with two distinct methods:
iSCSI SW (also known as the non-offload path with the Microsoft/Open-iSCSI initiator). Follow the Dell BIOS guide for iSCSI software installation.
iSCSI HW (offload path with the Marvell FastLinQ offload iSCSI driver). This option can be set using Boot Mode.
6–Boot from SAN Configuration iSCSI Boot from SAN Figure 6-1.
6–Boot from SAN Configuration iSCSI Boot from SAN Enabling NPAR and the iSCSI HBA To enable NPAR and the iSCSI HBA: 1. In the System Setup, Device Settings, select the QLogic device (Figure 6-2). Refer to the OEM user guide on accessing the PCI device configuration menu. Figure 6-2. System Setup: Device Settings 2. Enable NPAR. Configuring the Storage Target Configuring the storage target varies by target vendors.
6–Boot from SAN Configuration iSCSI Boot from SAN 2. Create a virtual disk. Selecting the iSCSI UEFI Boot Protocol Before selecting the preferred boot mode, ensure that the Device Level Configuration menu setting is Enable NPAR and that the NIC Partitioning Configuration menu setting is Enable iSCSI HBA. The Boot Mode option is listed under NIC Configuration (Figure 6-3) for the adapter, and the setting is port specific.
6–Boot from SAN Configuration iSCSI Boot from SAN 1. On the NIC Configuration page (Figure 6-4), for the Boot Protocol option, select UEFI iSCSI HBA (requires NPAR mode). Figure 6-4. System Setup: NIC Configuration, Boot Protocol NOTE Use the Virtual LAN Mode and Virtual LAN ID options on this page only for PXE boot. If a vLAN is needed for UEFI iSCSI HBA boot mode, see Step 3 of Static iSCSI Boot Configuration.
6–Boot from SAN Configuration iSCSI Boot from SAN To configure the iSCSI boot parameters using static configuration: 1. In the Device HII Main Configuration Page, select iSCSI Configuration (Figure 6-5), and then press ENTER. Figure 6-5. System Setup: iSCSI Configuration 2. On the iSCSI Configuration page, select iSCSI General Parameters (Figure 6-6), and then press ENTER. Figure 6-6.
6–Boot from SAN Configuration iSCSI Boot from SAN 3.
6–Boot from SAN Configuration iSCSI Boot from SAN Table 6-2. iSCSI General Parameters Option Description TCP/IP Parameters via DHCP This option is specific to IPv4. Controls whether the iSCSI boot host software acquires the IP address information using DHCP (Enabled) or using a static IP configuration (Disabled). iSCSI Parameters via DHCP Controls whether the iSCSI boot host software acquires its iSCSI target parameters using DHCP (Enabled) or through a static configuration (Disabled).
6–Boot from SAN Configuration iSCSI Boot from SAN 5. Select iSCSI Initiator Parameters (Figure 6-8), and then press ENTER. Figure 6-8. System Setup: Selecting iSCSI Initiator Parameters 6. On the iSCSI Initiator Parameters page (Figure 6-9), select the following parameters, and then type a value for each: IPv4* Address Subnet Mask IPv4* Default Gateway IPv4* Primary DNS IPv4* Secondary DNS iSCSI Name. Corresponds to the iSCSI initiator name to be used by the client system.
6–Boot from SAN Configuration iSCSI Boot from SAN NOTE For the preceding items with asterisks (*), note the following: The label will change to IPv6 or IPv4 (default) based on the IP version set on the iSCSI General Parameters page (Figure 6-7 on page 75). Carefully enter the IP address. There is no error-checking performed against the IP address to check for duplicates, incorrect segment, or network assignment. Figure 6-9. System Setup: iSCSI Initiator Parameters 7.
6–Boot from SAN Configuration iSCSI Boot from SAN 8. Select iSCSI First Target Parameters (Figure 6-10), and then press ENTER. Figure 6-10. System Setup: Selecting iSCSI First Target Parameters 9. On the iSCSI First Target Parameters page, set the Connect option to Enabled for the iSCSI target. 10.
6–Boot from SAN Configuration iSCSI Boot from SAN Figure 6-11. System Setup: iSCSI First Target Parameters 11. Return to the iSCSI Boot Configuration page, and then press ESC.
12. If you want to configure a second iSCSI target device, select iSCSI Second Target Parameters (Figure 6-12), and enter the parameter values as you did in Step 10. This second target is used if the first target cannot be connected to. Otherwise, proceed to Step 13.
Figure 6-12. System Setup: iSCSI Second Target Parameters
13. Press ESC once, and a second time to exit.
14.
6–Boot from SAN Configuration iSCSI Boot from SAN Figure 6-13. System Setup: Saving iSCSI Changes 15. After all changes have been made, reboot the system to apply the changes to the adapter’s running configuration. Dynamic iSCSI Boot Configuration In a dynamic configuration, ensure that the system’s IP address and target (or initiator) information are provided by a DHCP server (see IPv4 and IPv6 configurations in “Configuring the DHCP Server to Support iSCSI Boot” on page 85).
6–Boot from SAN Configuration iSCSI Boot from SAN NOTE When using a DHCP server, the DNS server entries are overwritten by the values provided by the DHCP server. This override occurs even if the locally provided values are valid and the DHCP server provides no DNS server information. When the DHCP server provides no DNS server information, both the primary and secondary DNS server values are set to 0.0.0.0.
6–Boot from SAN Configuration iSCSI Boot from SAN Figure 6-14. System Setup: iSCSI General Parameters Enabling CHAP Authentication Ensure that the CHAP authentication is enabled on the target. To enable CHAP authentication: 1. Go to the iSCSI General Parameters page. 2. Set CHAP Authentication to Enabled. 3. In the Initiator Parameters window, type values for the following: CHAP ID (up to 255 characters) CHAP Secret (if authentication is required; must be 12 to 16 characters in length) 4.
6–Boot from SAN Configuration iSCSI Boot from SAN 7. Press ESC to return to the iSCSI Boot Configuration Menu. 8. Press ESC, and then select confirm Save Configuration. Configuring the DHCP Server to Support iSCSI Boot The DHCP server is an optional component, and is only necessary if you will be doing a dynamic iSCSI boot configuration setup (see “Dynamic iSCSI Boot Configuration” on page 82).
6–Boot from SAN Configuration iSCSI Boot from SAN Table 6-3. DHCP Option 17 Parameter Definitions (Continued) Parameter Definition Logical unit number to use on the iSCSI target. The value of the LUN must be represented in hexadecimal format. A LUN with an ID of 64 must be configured as 40 within the Option 17 parameter on the DHCP server. Target name in either IQN or EUI format. For details on both IQN and EUI formats, refer to RFC 3720. An example IQN name is iqn.1995-05.com.
Configuring the DHCP Server
Configure the DHCP server to support either Option 16, 17, or 43.
NOTE The formats of DHCPv6 Option 16 and Option 17 are fully defined in RFC 3315. If you use Option 43, you must also configure Option 60. The value of Option 60 must match the DHCP Vendor ID value, QLGC ISAN, as shown in the iSCSI General Parameters of the iSCSI Boot Configuration page.
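For the IPv4 case, an ISC dhcpd server can supply the Option 17 root path described in Table 6-3 with the root-path statement. A minimal sketch in dhcpd.conf (the subnet, addresses, and IQN shown are hypothetical):
subnet 192.168.100.0 netmask 255.255.255.0 {
  range 192.168.100.50 192.168.100.99;
  # Option 17 root path: "iscsi:"<servername>":"<protocol>":"<port>":"<LUN>":"<targetname>
  option root-path "iscsi:192.168.100.10::3260:0:iqn.2002-03.com.example:target1";
}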
6–Boot from SAN Configuration iSCSI Boot from SAN Table 6-5 lists the DHCP Option 17 sub-options. Table 6-5.
6–Boot from SAN Configuration iSCSI Boot from SAN 3. Select VLAN ID to enter and set the vLAN value, as shown in Figure 6-15. Figure 6-15. System Setup: iSCSI General Parameters, VLAN ID Configuring iSCSI Boot from SAN on Windows Adapters support iSCSI boot to enable network boot of operating systems to diskless systems. iSCSI boot allows a Windows operating system to boot from an iSCSI target machine located remotely over a standard IP network.
6–Boot from SAN Configuration iSCSI Boot from SAN Before You Begin Before you begin configuring iSCSI boot from SAN on a Windows machine, note the following: iSCSI boot is only supported for NPAR with NParEP Mode. Before configuring iSCSI boot: 1. Access the Device Level Configuration page. 2. Set the Virtualization Mode to Npar (NPAR). 3. Set the NParEP Mode to Enabled. The server boot mode must be UEFI. iSCSI boot on 41xxx Series Adapters is not supported in legacy BIOS.
Virtual LAN ID: (Optional) You can isolate iSCSI traffic on the network in a Layer 2 vLAN to segregate it from general traffic. To segregate traffic, make the iSCSI interface on the adapter a member of the Layer 2 vLAN by setting this value.
Configuring the iSCSI Initiator
To set the iSCSI initiator parameters on Windows:
1. From the Main Configuration page, select iSCSI Configuration, and then select iSCSI Initiator Parameters.
2.
6–Boot from SAN Configuration iSCSI Boot from SAN Configuring the iSCSI Targets You can set up the iSCSI first target, second target, or both at once. To set the iSCSI target parameters on Windows: 1. From the Main Configuration page, select iSCSI Configuration, and then select iSCSI First Target Parameters. 2. On the iSCSI First Target Parameters page, set the Connect option to Enabled for the iSCSI target. 3.
6–Boot from SAN Configuration iSCSI Boot from SAN The output from the preceding command shown in Figure 6-16 indicates that the iSCSI LUN was detected successfully at the preboot level. Figure 6-16. Detecting the iSCSI LUN Using UEFI Shell (Version 2) 2. On the newly detected iSCSI LUN, select an installation source such as using a WDS server, mounting the .ISO with an integrated Dell Remote Access Controller (iDRAC), or using a CD/DVD. 3.
6–Boot from SAN Configuration iSCSI Boot from SAN 4. Inject the latest Marvell drivers by mounting drivers in the virtual media: a. Click Load driver, and then click Browse (see Figure 6-18). Figure 6-18. Windows Setup: Selecting Driver to Install b. Navigate to the driver location and choose the qevbd driver. c. Choose the adapter on which to install the driver, and then click Next to continue. 5. Repeat Step 4 to load the qeios driver (Marvell L4 iSCSI driver). 6.
Configuring iSCSI Boot from SAN for RHEL 7.5 and Later
To install RHEL 7.5 and later:
1. Boot from the RHEL 7.x installation media with the iSCSI target already connected in UEFI.
Install Red Hat Enterprise Linux 7.x
Test this media & install Red Hat Enterprise Linux 7.x
Troubleshooting -->
Use the UP and DOWN keys to change the selection.
Press 'e' to edit the selected item or 'c' for a command prompt.
2. To install an out-of-box driver, press the E key.
13. Edit the /etc/default/grub file as follows:
a. Locate the string in double quotes as shown in the following example. The command line is a specific reference to help find the string.
GRUB_CMDLINE_LINUX="iscsi_firmware"
b. The command line may contain other parameters that can remain. Change only the iscsi_firmware string as follows:
GRUB_CMDLINE_LINUX="rd.iscsi.firmware selinux=0"
14. Create a new grub.cfg file.
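On a UEFI installation of RHEL, for example, the new configuration file can be generated with grub2-mkconfig; a minimal sketch (the output path shown is the RHEL default and may differ on your system):
# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg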
6–Boot from SAN Configuration iSCSI Boot from SAN Configuring iSCSI Boot from SAN for SLES 12 SP3 and Later To install SLES 12 SP3 and later: 1. Boot from the SLES 12 SP3 installation media with the iSCSI target pre-configured and connected in UEFI. 2. Update the latest driver package by adding the dud=1 parameter in the installer command parameter. The driver update disk is required because the necessary iSCSI drivers are not inbox.
iSCSI offload for other distributions of Linux includes the following information:
Booting from SAN Using a Software Initiator
Migrating from Software iSCSI Installation to Offload iSCSI
Linux Multipath Considerations
Booting from SAN Using a Software Initiator
To boot from SAN using a software initiator with Dell OEM Solutions:
NOTE The preceding step is required because DUDs contain qedi, which binds to the iSCSI PF.
6–Boot from SAN Configuration iSCSI Boot from SAN Migrating to Offload iSCSI for RHEL 6.9/6.10 To migrate from a software iSCSI installation to an offload iSCSI for RHEL 6.9 or 6.10: 1. Boot into the iSCSI non-offload/L2 boot from SAN operating system. Issue the following commands to install the Open-iSCSI and iscsiuio RPMs: # rpm -ivh --force qlgc-open-iscsi-2.0_873.111-1.x86_64.rpm # rpm -ivh --force iscsiuio-2.11.5.2-1.rhel6u9.x86_64.
6–Boot from SAN Configuration iSCSI Boot from SAN initrd /initramfs-2.6.32-696.el6.x86_64.img kernel /vmlinuz-2.6.32-696.el6.x86_64 ro root=/dev/mapper/vg_prebooteit-lv_root rd_NO_LUKS iscsi_firmware LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_NO_DM rd_LVM_LV=vg_prebooteit/lv_swap KEYBOARDTYPE=pc KEYTABLE=us rd_LVM_LV=vg_prebooteit/lv_root selinux=0 initrd /initramfs-2.6.32-696.el6.x86_64.img 4. Build the initramfs file by issuing the following command: # dracut -f 5.
6–Boot from SAN Configuration iSCSI Boot from SAN 2. Edit the /etc/elilo.conf file, make the following changes, and then save the file: 3. Remove the ip=ibft parameter (if present) Add iscsi_firmware Edit the /etc/sysconfig/kernel file as follows: a. Locate the line that begins with INITRD_MODULES. This line will look similar to the following, but may contain different parameters: INITRD_MODULES="ata_piix ata_generic" or INITRD_MODULES="ahci" b.
6–Boot from SAN Configuration iSCSI Boot from SAN 9. c. Open the NIC Partitioning Configuration page and set the iSCSI Offload Mode to Enabled. (iSCSI HBA support is on partition 3 for a two--port adapter and on partition 2 for a four-port adapter.) d. Open the NIC Configuration menu and set the Boot Protocol to UEFI iSCSI. e. Open the iSCSI Configuration page and configure iSCSI settings. Save the configuration and reboot the server. The OS can now boot through the offload interface.
6. Edit the /etc/default/grub file and modify the GRUB_CMDLINE_LINUX value:
a. Remove rd.iscsi.ibft (if present).
b. Remove any ip= boot options (if present).
c. Add rd.iscsi.firmware. For older distros, add iscsi_firmware.
7. Create a backup of the original grub.cfg file. The file is in the following locations:
Legacy boot: /boot/grub2/grub.cfg
UEFI boot: /boot/efi/EFI/sles/grub.cfg for SLES
8. Create a new grub.cfg file.
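On a UEFI SLES installation, for example, the new file can be generated at the location noted in the previous step; a minimal sketch (the directory name varies between SLES releases, for example sles or suse):
# grub2-mkconfig -o /boot/efi/EFI/sles/grub.cfg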
6–Boot from SAN Configuration iSCSI Boot from SAN Linux Multipath Considerations iSCSI boot from SAN installations on Linux operating systems are currently supported only in a single-path configuration. To enable multipath configurations, perform the installation in a single path, using either an L2 or L4 path. After the server boots into the installed operating system, perform the required configurations for enabling multipath I/O (MPIO).
6–Boot from SAN Configuration iSCSI Boot from SAN 8. Reboot and change the adapter boot settings to use L4 or iSCSI (HW) for both ports and to boot through L4. 9. After booting into the OS, set up the multipath daemon multipathd.conf: # mpathconf --enable --with_multipathd y # mpathconf –enable 10. Start the multipathd service: # service multipathd start 11. Rebuild initramfs with multipath support. # dracut --force --add multipath --include /etc/multipath 12.
3. Set MPIO services to remain persistent on reboot as follows:
# chkconfig multipathd on
# chkconfig boot.multipath on
# chkconfig boot.udev on
4. Start multipath services as follows:
# /etc/init.d/boot.multipath start
# /etc/init.d/multipathd start
5. Run multipath -v2 -d to display the multipath configuration with a dry run.
6. Locate the multipath.conf file under /etc/multipath.conf.
NOTE If the file is not present, copy multipath.
6–Boot from SAN Configuration iSCSI Boot from SAN 4. Boot to the OS with L2. 5. Update the Open-iSCSI tools by issuing the following commands: # rpm -ivh qlgc-open-iscsi-2.0_873.111.sles12sp1-3.x86_64.rpm --force # rpm -ivh iscsiuio-2.11.5.5-6.sles12sp1.x86_64.rpm --force 6. Edit the /etc/default/grub file by changing the rd.iscsi.ibft parameter to rd.iscsi.firmware, and then save the file. 7. Issue the following command: # grub2-mkconfig -o /boot/efi/EFI/suse/grub.cfg 8.
6–Boot from SAN Configuration iSCSI Boot from SAN Configuring iSCSI Boot from SAN on VMware Because VMware does not natively support iSCSI boot from SAN offload, you must configure BFS through the software in preboot, and then transition to offload upon OS driver loads. For more information, see “Enabling NPAR and the iSCSI HBA” on page 71.
6–Boot from SAN Configuration iSCSI Boot from SAN 7. Go to the Main Configuration Page and select NIC Partitioning Configuration. 8. On the NIC Partitioning Configuration page, select Partition 1 Configuration. 9. Complete the Partition 1 Configuration page as follows: a. For Link Speed, select either Auto Neg, 10Gbps, or 1Gbps. b. Ensure that the link is up. c. For Boot Protocol, select None. d. For Virtual LAN Mode, select Disabled. 10.
6–Boot from SAN Configuration iSCSI Boot from SAN Configuring the System BIOS for iSCSI Boot (L2) To configure the System BIOS on VMware: 1. On the System BIOS Settings page, select Boot Settings. 2. Complete the Boot Settings page as shown in Figure 6-21. Figure 6-21. Integrated NIC: System BIOS, Boot Settings for VMware 3. On the System BIOS Settings page, select Network Settings. 4. On the Network Settings page under UEFI iSCSI Settings: 5. 6. a. For iSCSI Device1, select Enabled. b.
6–Boot from SAN Configuration iSCSI Boot from SAN Figure 6-22. Integrated NIC: System BIOS, Connection 1 Settings for VMware 7. Complete the target details, and for Authentication Type, select either CHAP (to set CHAP details) or None (the default). Figure 6-23 shows an example. Figure 6-23.
6–Boot from SAN Configuration iSCSI Boot from SAN 8. Save all configuration changes, and then reboot the server. 9. During system boot up, press the F11 key to start the Boot Manager. 10. In the Boot Manager under Boot Menu, Select UEFI Boot Option, select the Embedded SATA Port AHCI Controller. Mapping the CD or DVD for OS Installation To map the CD or DVD: 1. Create a customized ISO image using the ESXi-Customizer and inject the latest bundle or VIB. 2.
6–Boot from SAN Configuration FCoE Boot from SAN 7. After the ESXi OS installation completes successfully, the system boots to the OS, as shown in Figure 6-25. Figure 6-25. VMware iSCSI Boot from SAN Successful FCoE Boot from SAN Marvell 41xxx Series Adapters support FCoE boot to enable network boot of operating systems to diskless systems. FCoE boot allows a Windows, Linux, or VMware operating system to boot from a Fibre Channel or FCoE target machine located remotely over an FCoE supporting network.
6–Boot from SAN Configuration FCoE Boot from SAN FCoE Out-of-Box and Inbox Support Table 6-6 lists the operating systems’ inbox and out-of-box support for FCoE boot from SAN (BFS).
Table 6-6. FCoE Out-of-Box and Inbox Boot from SAN Support
Operating System | Out-of-Box Hardware Offload FCoE BFS Support | Inbox Hardware Offload FCoE BFS Support
Windows 2012 | Yes | No
Windows 2012 R2 | Yes | No
Windows 2016 | Yes | No
Windows 2019 | Yes | Yes
RHEL 7.5 | Yes | Yes
RHEL 7.6 | Yes | Yes
RHEL 8.
6–Boot from SAN Configuration FCoE Boot from SAN Configuring Adapter UEFI Boot Mode To configure the boot mode to FCoE: 1. Restart the system. 2. Press the OEM hot key to enter System Setup (Figure 6-26). This is also known as UEFI HII. Figure 6-26. System Setup: Selecting Device Settings NOTE SAN boot is supported in the UEFI environment only. Make sure the system boot option is UEFI, and not legacy.
6–Boot from SAN Configuration FCoE Boot from SAN 3. On the Device Settings page, select the Marvell FastLinQ adapter (Figure 6-27). Figure 6-27.
6–Boot from SAN Configuration FCoE Boot from SAN 4. On the Main Configuration Page, select NIC Configuration (Figure 6-28), and then press ENTER. Figure 6-28. System Setup: NIC Configuration 5. On the NIC Configuration page, select Boot Mode, press ENTER, and then select FCoE as a preferred boot mode. NOTE FCoE is not listed as a boot option if the FCoE Mode feature is disabled at the port level. If the Boot Mode preferred is FCoE, make sure the FCoE Mode feature is enabled as shown in Figure 6-29.
6–Boot from SAN Configuration FCoE Boot from SAN Figure 6-29. System Setup: FCoE Mode Enabled To configure the FCoE boot parameters: 1. On the Device UEFI HII Main Configuration Page, select FCoE Configuration, and then press ENTER. 2. On the FCoE Configuration Page, select FCoE General Parameters, and then press ENTER. 3.
6–Boot from SAN Configuration FCoE Boot from SAN Figure 6-30. System Setup: FCoE General Parameters 4. Return to the FCoE Configuration page. 5. Press ESC, and then select FCoE Target Parameters. 6. Press ENTER. 7. In the FCoE General Parameters Menu, enable Connect to the preferred FCoE target. 8.
6–Boot from SAN Configuration FCoE Boot from SAN Figure 6-31.
6–Boot from SAN Configuration FCoE Boot from SAN 3. Load the latest Marvell FCoE boot images into the adapter NVRAM. 4. Configure the FCoE target to allow a connection from the remote device. Ensure that the target has sufficient disk space to hold the new OS installation. 5. Configure the UEFI HII to set the FCoE boot type on the required adapter port, correct initiator, and target parameters for FCoE boot. 6. Save the settings and reboot the system.
6–Boot from SAN Configuration FCoE Boot from SAN Injecting (Slipstreaming) Adapter Drivers into Windows Image Files To inject adapter drivers into the Windows image files: 1. Obtain the latest driver package for the applicable Windows Server version (2012, 2012 R2, 2016, or 2019). 2. Extract the driver package to a working directory: a. Open a command line session and navigate to the folder that contains the driver package. b.
6–Boot from SAN Configuration FCoE Boot from SAN Do not use the installer parameter withfcoe=1 because the software FCoE will conflict with the hardware offload if network interfaces from qede are exposed.
6–Boot from SAN Configuration FCoE Boot from SAN 4. In the ESXi-Customizer dialog box, click Browse to complete the following. a. Select the original VMware ESXi ISO file. b. Select either the Marvell FCoE driver VIB file or the Marvell offline qedentv bundle ZIP file. c. For the working directory, select the folder in which the customized ISO needs to be created. d. Click Run. Figure 6-32 shows an example. Figure 6-32. ESXi-Customizer Dialog Box 5.
6–Boot from SAN Configuration FCoE Boot from SAN 4. Save the settings and reboot the system. The initiator should connect to an FCoE target and then boot the system from the DVD-ROM device. 5. Boot from the DVD and begin installation. 6. Follow the on-screen instructions. On the window that shows the list of disks available for the installation, the FCoE target disk should be visible because the injected Converged Network Adapter bundle is inside the customized ESXi ISO. Figure 6-33 shows an example.
6–Boot from SAN Configuration FCoE Boot from SAN Figure 6-34 provides two examples. Figure 6-34.
7 RoCE Configuration This chapter describes RDMA over converged Ethernet (RoCE v1 and v2) configuration on the 41xxx Series Adapter, the Ethernet switch, and the Windows, Linux, or VMware host, including: Supported Operating Systems and OFED “Planning for RoCE” on page 128 “Preparing the Adapter” on page 129 “Preparing the Ethernet Switch” on page 129 “Configuring RoCE on the Adapter for Windows Server” on page 133 “Configuring RoCE on the Adapter for Linux” on page 150 “Configur
7–RoCE Configuration Planning for RoCE
Table 7-1. OS Support for RoCE v1, RoCE v2, iWARP, iSER, and OFED (Continued)
Operating System | Inbox | OFED-4.17-1 GA
RHEL 7.7 | RoCE v1, RoCE v2, iWARP, iSER | No
RHEL 8.0 | RoCE v1, RoCE v2, iWARP, iSER | No
RHEL 8.1 | RoCE v1, RoCE v2, iWARP, iSER | No
SLES 12 SP4 | RoCE v1, RoCE v2, iWARP, iSER | RoCE v1, RoCE v2, iWARP
SLES 15 SP0 | RoCE v1, RoCE v2, iWARP, iSER | RoCE v1, RoCE v2, iWARP
SLES 15 SP1 | RoCE v1, RoCE v2, iWARP, iSER | No
CentOS 7.
7–RoCE Configuration Preparing the Adapter Preparing the Adapter Follow these steps to enable DCBX and specify the RoCE priority using the HII management application. For information about the HII application, see Chapter 5 Adapter Preboot Configuration. To prepare the adapter: 1. On the Main Configuration Page, select Data Center Bridging (DCB) Settings, and then click Finish. 2. In the Data Center Bridging (DCB) Settings window, click the DCBX Protocol option.
7–RoCE Configuration Preparing the Ethernet Switch 2. Configure the quality of service (QoS) class map and set the RoCE priority (cos) to match the adapter (5) as follows: switch(config)# class-map type qos class-roce switch(config)# match cos 5 3. Configure queuing class maps as follows: switch(config)# class-map type queuing class-roce switch(config)# match qos-group 3 4.
7–RoCE Configuration Preparing the Ethernet Switch Configuring the Dell Z9100 Ethernet Switch for RoCE Configuring the Dell Z9100 Ethernet Switch for RoCE comprises configuring a DCB map for RoCE, configuring priority-based flow control (PFC) and enhanced transmission selection (ETS), verifying the DCB map, applying the DCB map to the port, verifying PFC and ETS on the port, specifying the DCB protocol, and assigning a VLAN ID to the switch port.
7–RoCE Configuration Preparing the Ethernet Switch 5. Apply the DCB map to the port. Dell(conf)# interface twentyFiveGigE 1/8/1 Dell(conf-if-tf-1/8/1)# dcb-map roce 6. Verify the ETS and PFC configuration on the port. The following examples show summarized interface information for ETS and detailed interface information for PFC.
7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server ISCSI TLV Tx Status is enabled Local FCOE PriorityMap is 0x0 Local ISCSI PriorityMap is 0x20 Remote ISCSI PriorityMap is 0x200 66 Input TLV pkts, 99 Output TLV pkts, 0 Error pkts, 0 Pause Tx pkts, 0 Pause Rx pkts 66 Input Appln Priority TLV pkts, 99 Output Appln Priority TLV pkts, 0 Error Appln Priority TLV Pkts 7. Configure the DCBX protocol (CEE in this example).
7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server Table 7-2. Advanced Properties for RoCE Property Value or Description NetworkDirect Functionality Enabled Network Direct Mtu Size The network direct MTU size must be less than the jumbo packet size. Quality of Service For RoCE v1/v2, always select Enabled to allow Windows DCB-QoS service to control and monitor DCB.
7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server 2. Using Windows PowerShell, verify that RDMA is enabled on the adapter. The Get-NetAdapterRdma command lists the adapters that support RDMA—both ports are enabled. NOTE If you are configuring RoCE over Hyper-V, do not assign a vLAN ID to the physical interface. PS C:\Users\Administrator> Get-NetAdapterRdma 3. Name ----- InterfaceDescription -------------------- Enabled ------- SLOT 4 3 Port 1 QLogic FastLinQ QL41262...
7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server Viewing RDMA Counters The following procedure also applies to iWARP. To view RDMA counters for RoCE: 1. Launch Performance Monitor. 2. Open the Add Counters dialog box. Figure 7-2 shows an example. Figure 7-2.
7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server NOTE If Marvell RDMA counters are not listed in the Performance Monitor Add Counters dialog box, manually add them by issuing the following command from the driver location: Lodctr /M:qend.man 3. Select one of the following counter types: 4. Cavium FastLinQ Congestion Control: Increment when there is congestion in the network and ECN is enabled on the switch.
7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server Figure 7-3 shows three examples of the counter monitoring output. Figure 7-3. Performance Monitor: 41xxx Series Adapters’ Counters Table 7-3 provides details about error counters. Table 7-3. Marvell FastLinQ RDMA Error Counters RDMA Error Counter Description Applies to RoCE? Applies to iWARP? Troubleshooting CQ overflow A completion queue on which an RDMA work request is posted.
7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server Table 7-3. Marvell FastLinQ RDMA Error Counters (Continued) RDMA Error Counter Description Applies to RoCE? Applies to iWARP? Troubleshooting Requestor CQEs flushed with error Posted work requests may be flushed by sending completions with a flush status to the CQ (without completing the actual execution of the work request) if the QP moves to an error state for any reason and pending work requests exist.
7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server Table 7-3. Marvell FastLinQ RDMA Error Counters (Continued) Applies to RoCE? Applies to iWARP? Remote side could not complete the operation requested due to a local issue. Yes Yes A software issue at the remote side (for example, one that caused a QP error or a malformed WQE on the RQ) prevented operation completion. Requestor retry exceeded Transport retries have exceeded the maximum limit.
7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server Table 7-3. Marvell FastLinQ RDMA Error Counters (Continued) RDMA Error Counter Description Applies to RoCE? Applies to iWARP? Troubleshooting Responder Local QP Operation error An internal QP consistency error was detected while processing this work request. Yes Yes Indicates a software issue. Responder remote invalid request The responder detected an invalid inbound message on the channel.
7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server c. Open the Virtual Switch Manager from the right pane. d. Select New Virtual Network switch with type External. Figure 7-4 shows an example. Figure 7-4.
7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server e. Click the External network button, and then select the appropriate adapter. Click Enable single-root I/O virtualization (SR-IOV). Figure 7-5 shows an example. Figure 7-5.
7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server f. Create a VM and open the VM settings. Figure 7-6 shows an example. Figure 7-6. VM Settings g. Select Add Hardware, and then select Network Adapter to assign the virtual network adapters (VMNICs) to the VM. h. Select the newly created virtual switch.
7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server i. Enable VLAN to the network adapter. Figure 7-7 shows an example. Figure 7-7.
7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server j. Expand the network adapter settings. Under Single-root I/O virtualization, select Enable SR-IOV to enable SR-IOV capabilities for the VMNIC. Figure 7-8 shows an example. Figure 7-8. Enabling SR-IOV for the Network Adapter 4. Issue the following PowerShell command on the host to enable RDMA capabilities for the VMNIC (SR-IOV VF).
7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server 5. Upgrade the Marvell drivers in the VM by booting the VM and installing the latest drivers using the Windows Super Installer on the Marvell CD. Figure 7-9 shows an example. Figure 7-9.
7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server 6. Enable RDMA on the Microsoft network device associated with the VF inside the VM. Figure 7-10 shows an example. Figure 7-10. Enabling RDMA on the VMNIC 7. Start the VM RDMA traffic: a. Connect a server message block (SMB) drive, run RoCE traffic, and verify the results. b. Open the Performance monitor in the VM, and then add the RDMA Activity counter.
7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server c. Verify that RDMA traffic is running. Figure 7-11 provides an example. Figure 7-11. RDMA Traffic Limitations VF RDMA has the following limitations: VF RDMA is supported only for 41xxx-based devices. At the time of publication, only RoCEv2 is supported for VF RDMA. The same network direct technology must be configured in physical functions on both the host and SR-IOV VFs in the VM.
7–RoCE Configuration Configuring RoCE on the Adapter for Linux On some older server platforms, VF devices may not be enumerated for one of the NIC PCI functions (PF). This limitation is because of the increased PCI base address register (BAR) requirements to support VF RDMA, meaning that the OS/BIOS cannot assign the required BAR for each VF. To support the maximum number of QPs in a VM, approximately 8GB of RAM must be available, assuming that only one VF is assigned to the VM.
7–RoCE Configuration Configuring RoCE on the Adapter for Linux NOTE During installation, if you already selected the previously mentioned packages, you need not reinstall them. The inbox OFED and support packages may vary depending on the operating system version. 3. Install the new Linux drivers as described in “Installing the Linux Drivers with RDMA” on page 14. RoCE Configuration for SLES To configure RoCE on the adapter for a SLES host, OFED must be installed and configured on the SLES host.
7–RoCE Configuration Configuring RoCE on the Adapter for Linux Verifying the RoCE Configuration on Linux After installing OFED, installing the Linux driver, and loading the RoCE drivers, verify that the RoCE devices were detected on all Linux operating systems. To verify RoCE configuration on Linux: 1. Stop firewall tables using service/systemctl commands. 2. For RHEL only: If the RDMA service is installed (yum install rdma), verify that the RDMA service has started. NOTE For RHEL 7.
7–RoCE Configuration Configuring RoCE on the Adapter for Linux 5. Configure the IP address and enable the port using a configuration method such as ifconfig. For example: # ifconfig ethX 192.168.10.10/24 up 6. Issue the ibv_devinfo command. For each PCI function, you should see a separate hca_id, as shown in the following example: root@captain:~# ibv_devinfo hca_id: qedr0 transport: InfiniBand (0) fw_ver: 8.3.9.
7–RoCE Configuration Configuring RoCE on the Adapter for Linux The following are examples of successful ping pong tests on the server and the client. Server Ping: root@captain:~# ibv_rc_pingpong -d qedr0 -g 0 local address: LID 0x0000, QPN 0xff0000, PSN 0xb3e07e, GID fe80::20e:1eff:fe50:c7c0 remote address: LID 0x0000, QPN 0xff0000, PSN 0x934d28, GID fe80::20e:1eff:fe50:c570 8192000 bytes in 0.05 seconds = 1436.97 Mbit/sec 1000 iters in 0.05 seconds = 45.
7–RoCE Configuration Configuring RoCE on the Adapter for Linux NOTE The default GID value is zero (0) for back-to-back or pause settings. For server and switch configurations, you must identify the proper GID value. If you are using a switch, refer to the corresponding switch configuration documents for the correct settings. RoCE v2 Configuration for Linux To verify RoCE v2 functionality, you must use RoCE v2 supported kernels. To configure RoCE v2 for Linux: 1.
7–RoCE Configuration Configuring RoCE on the Adapter for Linux GID[ 5]: 3ffe:ffff:0000:0f21:0000:0000:0000:0004 GID[ 6]: 0000:0000:0000:0000:0000:ffff:c0a8:6403 GID[ 7]: 0000:0000:0000:0000:0000:ffff:c0a8:6403 Verifying the RoCE v1 or RoCE v2 GID Index and Address from sys and class Parameters Use one of the following options to verify the RoCE v1 or RoCE v2 GID Index and address from the sys and class parameters: Option 1: # cat /sys/class/infiniband/qedr0/ports/1/gid_attrs/types/0 IB/RoCE v1
7–RoCE Configuration Configuring RoCE on the Adapter for Linux NOTE You must specify the GID index values for RoCE v1- or RoCE v2-based server or switch configuration (Pause/PFC). Use the GID index for the link local IPv6 address, IPv4 address, or IPv6 address. To use vLAN tagged frames for RoCE traffic, you must specify GID index values that are derived from the vLAN IPv4 or IPv6 address.
7–RoCE Configuration Configuring RoCE on the Adapter for Linux To verify RoCE v2 through different subnets: 1. Set the route configuration for the server and client using the DCBX-PFC configuration. System Settings: Server VLAN IP : 192.168.100.3 and Gateway :192.168.100.1 Client VLAN IP : 192.168.101.3 and Gateway :192.168.101.1 Server Configuration: #/sbin/ip link add link p4p1 name p4p1.100 type vlan id 100 #ifconfig p4p1.100 192.168.100.3/24 up #ip route add 192.168.101.0/24 via 192.168.100.
7–RoCE Configuration Configuring RoCE on the Adapter for Linux Server Switch Settings: Figure 7-12. Switch Settings, Server Client Switch Settings: Figure 7-13.
7–RoCE Configuration Configuring RoCE on the Adapter for Linux Configuring RoCE v1 or RoCE v2 Settings for RDMA_CM Applications To configure RoCE, use the following scripts from the FastLinQ source package: # ./show_rdma_cm_roce_ver.sh qedr0 is configured to IB/RoCE v1 qedr1 is configured to IB/RoCE v1 # ./config_rdma_cm_roce_ver.sh v2 configured rdma_cm for qedr0 to RoCE v2 configured rdma_cm for qedr1 to RoCE v2 Server Settings: Figure 7-14.
7–RoCE Configuration Configuring RoCE on the Adapter for Linux Table 7-4 lists the supported Linux OS combinations. Table 7-4. Supported Linux OSs for VF RDMA Guest OS RHEL 7.6 RHEL 7.7 RHEL 8.0 SLES12 SP4 SLES15 SP0 SLES15 SP1 Yes Yes Yes Yes Yes Yes RHEL 7.7 Yes Yes Yes Yes Yes Yes RHEL 8.
7–RoCE Configuration Configuring RoCE on the Adapter for Linux vf 1 MAC 00:00:00:00:00:00, spoof checking off, link-state auto 2.
7–RoCE Configuration Configuring RoCE on the Adapter for Linux device node GUID ------ ---------------- qedr0 1602ecfffececfa0 qedr1 1602ecfffececfa1 qedr_vf0 b81aadfffe088900 qedr_vf1 944061fffe49cd68 Number of VFs Supported for RDMA For the 41xxx Series Adapters, the number of VFs for L2 and RDMA are shared based on resources availability. Dual Port Adapters Each PF supports a maximum of 40 VFs for RDMA. If the number of VFs exceeds 56, it will be subtracted by the total number of VFs (96).
7–RoCE Configuration Configuring RoCE on the Adapter for VMware ESX Limitations VF RDMA has the following limitations:
No iWARP support
No NPAR support
Cross OS is not supported; for example, a Linux hypervisor cannot use a Windows guest OS (VM)
Perftest latency test on VF interfaces can be run only with the inline size zero -I 0 option. Neither the default nor more than one inline size works.
7–RoCE Configuration Configuring RoCE on the Adapter for VMware ESX Configuring RDMA Interfaces To configure the RDMA interfaces: 1. Install both Marvell NIC and RoCE drivers. 2. Using the module parameter, enable the RoCE function from the NIC driver by issuing the following command: esxcfg-module -s 'enable_roce=1' qedentv To apply the change, reload the NIC driver or reboot the system. 3. To view a list of the NIC interfaces, issue the esxcfg-nics -l command.
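If you want to confirm that the option was stored before reloading the driver, the standard ESXi command for querying module options can be used. This is a general ESXi CLI sketch rather than a step from this procedure, and the exact output format may vary by ESXi release:
esxcfg-module -g qedentv
qedentv enabled = 1 options = 'enable_roce=1'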
7–RoCE Configuration Configuring RoCE on the Adapter for VMware ESX 7. To create a new port group on this vSwitch, issue the following command: # esxcli network vswitch standard portgroup add -p roce_pg -v roce_vs For example: # esxcli network vswitch standard portgroup add -p roce_pg -v roce_vs 8.
7–RoCE Configuration Configuring RoCE on the Adapter for VMware ESX Packets received: 0 Packets sent: 0 Bytes received: 0 Bytes sent: 0 Error packets received: 0 Error packets sent: 0 Error length packets received: 0 Unicast packets received: 0 Multicast packets received: 0 Unicast bytes received: 0 Multicast bytes received: 0 Unicast packets sent: 0 Multicast packets sent: 0 Unicast bytes sent: 0 Multicast bytes sent: 0 Queue pairs allocated: 0 Queue pairs in RESET state: 0 Queue pairs in INIT state: 0 Qu
7–RoCE Configuration Configuring RoCE on the Adapter for VMware ESX To configure PVRDMA using a vCenter interface: 1. Create and configure a new distributed virtual switch as follows: a. In the VMware vSphere® Web Client, right-click the RoCE node in the left pane of the Navigator window. b. On the Actions menu, point to Distributed Switch, and then click New Distributed Switch. c. Select version 6.5.0. d.
7–RoCE Configuration Configuring RoCE on the Adapter for VMware ESX 3. Assign a vmknic for PVRDMA to use on ESX hosts: a. Right-click a host, and then click Settings. b. On the Settings page, expand the System node, and then click Advanced System Settings. c. The Advanced System Settings page shows the key-pair value and its summary. Click Edit. d. On the Edit Advanced System Settings page, filter on PVRDMA to narrow all the settings to just Net.PVRDMAVmknic. e. Set the Net.
7–RoCE Configuration Configuring RoCE on the Adapter for VMware ESX Figure 7-18 shows an example. Figure 7-18. Setting the Firewall Rule 5. Set up the VM for PVRDMA as follows: a. Install the following supported guest OS: RHEL 7.5, 7.6, and 8.0 b. Install OFED4.17-1. c. Compile and install the PVRDMA guest driver and library. d. Add a new PVRDMA network adapter to the VM as follows: e. Edit the VM settings. Add a new network adapter. Select the newly added DVS port group as Network.
7–RoCE Configuration Configuring DCQCN Configuring DCQCN Data Center Quantized Congestion Notification (DCQCN) is a feature that determines how an RoCE receiver notifies a transmitter that a switch between them has provided an explicit congestion notification (notification point), and how a transmitter reacts to such notification (reaction point).
7–RoCE Configuration Configuring DCQCN DSCP-PFC is a feature that allows a receiver to interpret the priority of an incoming packet for PFC purposes according to the DSCP field in the IPv4 header, rather than according to the vLAN priority. You may use an indirection table to map a specified DSCP value to a vLAN priority value. DSCP-PFC can work across L2 networks because it is an L3 (IPv4) feature.
7–RoCE Configuration Configuring DCQCN DCB-related Parameters Use DCB to map priorities to traffic classes (priority groups). DCB also controls which priority groups are subject to PFC (lossless traffic), and the related bandwidth allocation (ETS). Global Settings on RDMA Traffic Global settings on RDMA traffic include configuration of vLAN priority, ECN, and DSCP.
7–RoCE Configuration Configuring DCQCN Enabling DCQCN To enable DCQCN for RoCE traffic, probe the qed driver with the dcqcn_enable module parameter. DCQCN requires enabled ECN indications (see “Setting ECN on RDMA Traffic” on page 173). Configuring CNP Congestion notification packets (CNPs) can have a separate configuration of vLAN priority and DSCP. Control these packets using the dcqcn_cnp_dscp and dcqcn_cnp_vlan_priority module parameters.
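For example, a minimal sketch that loads qed with DCQCN enabled and with illustrative CNP settings might look like the following; the DSCP and vLAN priority values shown are placeholders chosen for this sketch, not recommendations from this guide, and must match the lossless priority configured on the switch:
# modprobe -v qed dcqcn_enable=1 dcqcn_cnp_dscp=48 dcqcn_cnp_vlan_priority=6
# modprobe -v qede
Because these are module parameters, they take effect only when the qed driver is (re)loaded.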
7–RoCE Configuration Configuring DCQCN Table 7-5. DCQCN Algorithm Parameters (Continued) Parameter Description and Values dcqcn_k_us Alpha update interval dcqcn_timeout_us DCQCN timeout MAC Statistics To view MAC statistics, including per-priority PFC statistics, issue the phy_mac_stats command. For example, to view statistics on port 1 issue the following command: ./debugfs.
7–RoCE Configuration Configuring DCQCN # observe PFCs being generated on multiple priorities debugfs.sh -n ens6f0 -d phy_mac_stat -P 0 | grep "Class Based Flow Control" Limitations DCQCN has the following limitations: DCQCN mode currently supports only up to 64 QPs. Marvell adapters can determine vLAN priority for PFC purposes from vLAN priority or from DSCP bits in the ToS field. However, in the presence of both, vLAN takes precedence.
8 iWARP Configuration Internet wide area RDMA protocol (iWARP) is a computer networking protocol that implements RDMA for efficient data transfer over IP networks. iWARP is designed for multiple environments, including LANs, storage networks, data center networks, and WANs.
8–iWARP Configuration Configuring iWARP on Windows 6. In the Warning - Saving Changes message box, click Yes to save the configuration. 7. In the Success - Saving Changes message box, click OK. 8. Repeat Step 2 through Step 7 to configure the NIC and iWARP for the other ports. 9. To complete adapter preparation of both ports: a. On the Device Settings page, click Finish. b. On the main menu, click Finish. c. Exit to reboot the system.
8–iWARP Configuration Configuring iWARP on Windows 2. Using Windows PowerShell, verify that RDMA is enabled. The Get-NetAdapterRdma command output (Figure 8-1) shows the adapters that support RDMA. Figure 8-1. Windows PowerShell Command: Get-NetAdapterRdma 3. Using Windows PowerShell, verify that NetworkDirect is enabled. The Get-NetOffloadGlobalSetting command output (Figure 8-2) shows NetworkDirect as Enabled. Figure 8-2.
8–iWARP Configuration Configuring iWARP on Windows Figure 8-3 shows an example. Figure 8-3.
8–iWARP Configuration Configuring iWARP on Windows If iWARP traffic is running, counters appear as shown in the Figure 8-4 example. Figure 8-4. Perfmon: Verifying iWARP Traffic NOTE . For more information on how to view Marvell RDMA counters in Windows, see “Viewing RDMA Counters” on page 136. 4. To verify the SMB connection: a. At a command prompt, issue the net use command as follows: C:\Users\Administrator> net use New connections will be remembered.
8–iWARP Configuration Configuring iWARP on Linux Kernel 56 Connection 192.168.11.20:15903 192.168.11.10:445 0 Kernel 60 Listener [fe80::e11d:9ab5:a47d:4f0a%56]:445 NA 0 Kernel 60 Listener 192.168.11.20:445 0 Kernel 60 Listener [fe80::71ea:bdd2:ae41:b95f%60]:445 NA 0 Kernel 60 Listener 192.168.11.20:16159 192.168.11.10:445 0 NA Configuring iWARP on Linux Marvell 41xxx Series Adapters support iWARP on the Linux Open Fabric Enterprise Distributions (OFEDs) listed in Table 7-1 on page 127.
8–iWARP Configuration Configuring iWARP on Linux The RDMA protocol (p) values are as follows: 0—Accept the default (RoCE) 1—No RDMA 2—RoCE 3—iWARP For example, to change the interface on the port given by 04:00.0 from RoCE to iWARP, issue the following command: # modprobe -v qed rdma_protocol_map=04:00.0-3 3.
8–iWARP Configuration Configuring iWARP on Linux transport: iWARP (1) fw_ver: 8.14.7.
8–iWARP Configuration Configuring iWARP on Linux Running Perftest for iWARP All perftest tools are supported over the iWARP transport type. You must run the tools using the RDMA connection manager (with the -R option). Example: 1. On one server, issue the following command (using the second port in this example): # ib_send_bw -d qedr1 -F -R 2. On one client, issue the following command (using the second port in this example): [root@localhost ~]# ib_send_bw -d qedr1 -F -R 192.168.11.
8–iWARP Configuration Configuring iWARP on Linux NOTE For latency applications (send/write), if the perftest version is the latest (for example, perftest-3.0-0.21.g21dc344.x86_64.rpm), use the supported inline size value: 0-128. Configuring NFS-RDMA NFS-RDMA for iWARP includes both server and client configuration steps. To configure the NFS server: 1. Create an nfs-server directory and grant permission by issuing the following commands: # mkdir /tmp/nfs-server # chmod 777 /tmp/nfs-server 2.
8–iWARP Configuration Configuring iWARP on Linux To configure the NFS client: NOTE This procedure for NFS client configuration also applies to RoCE. 1. Create an nfs-client directory and grant permission by issuing the following commands: # mkdir /tmp/nfs-client # chmod 777 /tmp/nfs-client 2. Load the xprtrdma module as follows: # modprobe xprtrdma 3. Mount the NFS file system as appropriate for your version: For NFS Version 3: # mount -o rdma,port=20049 192.168.2.
9 iSER Configuration This chapter provides procedures for configuring iSCSI Extensions for RDMA (iSER) for Linux (RHEL and SLES) and VMware ESXi 6.7, including: Before You Begin “Configuring iSER for RHEL” on page 189 “Configuring iSER for SLES 12 and Later” on page 192 “Using iSER with iWARP on RHEL and SLES” on page 193 “Optimizing Linux Performance” on page 195 “Configuring iSER on ESXi 6.
9–iSER Configuration Configuring iSER for RHEL Configuring iSER for RHEL To configure iSER for RHEL: 1. Install inbox OFED as described in “RoCE Configuration for RHEL” on page 150. NOTE Out-of-box OFEDs are not supported for iSER because the ib_isert module is not available in the out-of-box OFED 3.18-2 GA/3.18-3 GA versions. The inbox ib_isert module does not work with any out-of-box OFED versions. 2. Unload any existing FastLinQ drivers as described in “Removing the Linux Drivers” on page 10. 3.
9–iSER Configuration Configuring iSER for RHEL Figure 9-1 shows an example of a successful RDMA ping. Figure 9-1. RDMA Ping Successful 8. You can use a Linux TCM-LIO target to test iSER. The setup is the same for any iSCSI target, except that you issue the command enable_iser Boolean=true on the applicable portals. The portal instances are identified as iser in Figure 9-2. Figure 9-2. iSER Portal Instances 9. Install Linux iSCSI Initiator Utilities using the yum install iscsi-initiator-utils commands.
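As a reference for the TCM-LIO target mentioned in Step 8, a minimal targetcli sketch is shown below. The ramdisk backstore is an assumption made to keep the sketch self-contained, the IQN and portal address match the initiator example that follows, and ACL or authentication setup is omitted; exact targetcli syntax can vary slightly between versions:
# targetcli
/> backstores/ramdisk create name=rd0 size=1GB
/> iscsi/ create iqn.2015-06.test.target1
/> iscsi/iqn.2015-06.test.target1/tpg1/luns create /backstores/ramdisk/rd0
/> iscsi/iqn.2015-06.test.target1/tpg1/portals create 192.168.100.99 3260
/> iscsi/iqn.2015-06.test.target1/tpg1/portals/192.168.100.99:3260 enable_iser Boolean=true
/> saveconfig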
9–iSER Configuration Configuring iSER for RHEL b. To change the transport mode to iSER, issue the iscsiadm command. For example: iscsiadm -m node -T iqn.2015-06.test.target1 -o update -n iface.transport_name -v iser c. To connect to or log in to the iSER target, issue the iscsiadm command. For example: iscsiadm -m node -l -p 192.168.100.99:3260 -T iqn.2015-06.test.target1 d. Confirm that the Iface Transport is iser in the target connection, as shown in Figure 9-3.
9–iSER Configuration Configuring iSER for SLES 12 and Later Figure 9-4. Checking for New iSCSI Device Configuring iSER for SLES 12 and Later Because the targetcli is not inbox on SLES 12 and later, you must complete the following procedure. To configure iSER for SLES 12 and later: 1. Install targetcli. For SLES 12: Locate, copy and install the following RPMs from the ISO image (x86_64 and noarch location). lio-utils-4.1-14.6.x86_64.rpm python-configobj-4.7.2-18.10.noarch.rpm python-PrettyTable-0.7.2-8.5.
9–iSER Configuration Using iSER with iWARP on RHEL and SLES 3. Before configuring iSER targets, configure NIC interfaces and run L2 and RoCE traffic, as described in Step 7 on page 153. 4. For SLES 15 and SLES 15 SP1, insert the SLES Package DVD and install the targetcli utility. This command also installs all the dependency packages. # zypper install python3-targetcli-fb 5. Start the targetcli utility, and configure your targets on the iSER target system.
9–iSER Configuration Using iSER with iWARP on RHEL and SLES Figure 9-5 shows the target configuration for LIO. Figure 9-5. LIO Target Configuration To configure an initiator for iWARP: 1. To discover the iSER LIO target using port 3261, issue the iscsiadm command as follows: # iscsiadm -m discovery -t st -p 192.168.21.4:3261 -I iser 192.168.21.4:3261,1 iqn.2017-04.com.org.iserport1.target1 2. Change the transport mode to iser as follows: # iscsiadm -m node -o update -T iqn.2017-04.com.org.iserport1.
9–iSER Configuration Optimizing Linux Performance Optimizing Linux Performance Consider the following Linux performance configuration enhancements described in this section.
9–iSER Configuration Configuring iSER on ESXi 6.7 Configuring IRQ Affinity Settings The following example sets CPU core 0, 1, 2, and 3 to interrupt request (IRQ) XX, YY, ZZ, and XYZ respectively. Perform these steps for each IRQ assigned to a port (default is eight queues per port).
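A sketch of what those steps typically look like, keeping the placeholder IRQ numbers from the example above (the values written are standard /proc/irq CPU bitmasks: 1, 2, 4, and 8 select cores 0 through 3):
# systemctl disable irqbalance
# systemctl stop irqbalance
# echo 1 > /proc/irq/XX/smp_affinity
# echo 2 > /proc/irq/YY/smp_affinity
# echo 4 > /proc/irq/ZZ/smp_affinity
# echo 8 > /proc/irq/XYZ/smp_affinity
Use cat /proc/interrupts to identify the IRQ numbers assigned to the adapter queues.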
9–iSER Configuration Configuring iSER on ESXi 6.7 vmk0 Management Network 64 IPv6 e0:db:55:0c:5f:94 1500 fe80::e2db:55ff:fe0c:5f94 65535 true STATIC, PREFERRED defaultTcpipStack The iSER target is configured to communicate with the iSER initiator. Configuring iSER for ESXi 6.7 To configure iSER for ESXi 6.7: 1.
9–iSER Configuration Configuring iSER on ESXi 6.7 esxcli iscsi networkportal add -A vmhba67 -n vmk1 esxcli iscsi networkportal list esxcli iscsi adapter get -A vmhba65 vmhba65 Name: iqn.1998-01.com.vmware:localhost.punelab.qlogic.com qlogic.org qlogic.com mv.qlogic.
9–iSER Configuration Configuring iSER on ESXi 6.7 Console Device: /vmfs/devices/cdrom/mpx.vmhba0:C0:T4:L0 Devfs Path: /vmfs/devices/cdrom/mpx.vmhba0:C0:T4:L0 Vendor: TSSTcorp SCSI Level: 5 Model: DVD-ROM SN-108BB Revis: D150 Is Pseudo: false Status: on Is RDM Capable: false Is Removable: true Is Local: true Is SSD: false Other Names: vml.0005000000766d686261303a343a30 VAAI Status: unsupported naa.
10 iSCSI Configuration This chapter provides the following iSCSI configuration information: iSCSI Boot “iSCSI Offload in Windows Server” on page 200 “iSCSI Offload in Linux Environments” on page 209 NOTE Some iSCSI features may not be fully enabled in the current release. For details, refer to Appendix D Feature Constraints. To enable iSCSI-Offload mode, see the Application Note, Enabling Storage Offloads on Dell and Marvell FastLinQ 41000 Series Adapters at https://www.marvell.
10–iSCSI Configuration iSCSI Offload in Windows Server With the proper iSCSI offload licensing, you can configure your iSCSI-capable 41xxx Series Adapter to offload iSCSI processing from the host processor.
10–iSCSI Configuration iSCSI Offload in Windows Server Figure 10-1. iSCSI Initiator Properties, Configuration Page c. In the iSCSI Initiator Name dialog box, type the new initiator IQN name, and then click OK. (Figure 10-2) Figure 10-2. iSCSI Initiator Node Name Change 3. On the iSCSI Initiator Properties, click the Discovery tab.
10–iSCSI Configuration iSCSI Offload in Windows Server 4. On the Discovery page (Figure 10-3) under Target portals, click Discover Portal. Figure 10-3. iSCSI Initiator—Discover Target Portal 5. In the Discover Target Portal dialog box (Figure 10-4): a. In the IP address or DNS name box, type the IP address of the target. b. Click Advanced.
10–iSCSI Configuration iSCSI Offload in Windows Server Figure 10-4. Target Portal IP Address 6. In the Advanced Settings dialog box (Figure 10-5), complete the following under Connect using: a. For Local adapter, select the QLogic Adapter. b. For Initiator IP, select the adapter IP address. c. Click OK.
10–iSCSI Configuration iSCSI Offload in Windows Server Figure 10-5. Selecting the Initiator IP Address 7. On the iSCSI Initiator Properties, Discovery page, click OK.
10–iSCSI Configuration iSCSI Offload in Windows Server 8. Click the Targets tab, and then on the Targets page (Figure 10-6), click Connect. Figure 10-6.
10–iSCSI Configuration iSCSI Offload in Windows Server 9. On the Connect To Target dialog box (Figure 10-7), click Advanced. Figure 10-7. Connect To Target Dialog Box 10. In the Local Adapter dialog box, select the QLogic Adapter, and then click OK. 11. Click OK again to close Microsoft Initiator. 12. To format the iSCSI partition, use Disk Manager. NOTE Some limitations of the teaming functionality include: Teaming does not support iSCSI adapters.
10–iSCSI Configuration iSCSI Offload in Windows Server Question: What tools should I use to create the connection to the target? Answer: Use Microsoft iSCSI Software Initiator (version 2.08 or later). Question: How do I know that the connection is offloaded? Answer: Use Microsoft iSCSI Software Initiator. From a command line, type iscsicli sessionlist. From Initiator Name, an iSCSI offloaded connection will display an entry beginning with B06BDRV.
10–iSCSI Configuration iSCSI Offload in Linux Environments 9. To proceed with Windows Server 2012 R2/2016 installation, click Next, and then follow the on-screen instructions. The server will undergo a reboot multiple times as part of the installation process. 10. After the server boots to the OS, you should run the driver installer to complete the Marvell drivers and application installation.
10–iSCSI Configuration iSCSI Offload in Linux Environments Differences from bnx2i Some key differences exist between qedi—the driver for the Marvell FastLinQ 41xxx Series Adapter (iSCSI)—and the previous Marvell iSCSI offload driver—bnx2i for the Marvell 8400 Series Adapters. Some of these differences include: qedi directly binds to a PCI function exposed by the CNA. qedi does not sit on top of the net_device. qedi is not dependent on a network driver such as bnx2x and cnic.
10–iSCSI Configuration iSCSI Offload in Linux Environments iscsi_boot_sysfs 2. 16000 1 qedi To verify that the iSCSI interfaces were detected properly, issue the following command. In this example, two iSCSI CNA devices are detected with SCSI host numbers 4 and 5. # dmesg | grep qedi [0000:00:00.0]:[qedi_init:3696]: QLogic iSCSI Offload Driver v8.15.6.0. .... [0000:42:00.4]:[__qedi_probe:3563]:59: QLogic FastLinQ iSCSI Module qedi 8.15.6.0, FW 8.15.3.0 .... [0000:42:00.
10–iSCSI Configuration iSCSI Offload in Linux Environments 192.168.25.100:3260,1 iqn.2003-04.com.sanblaze:virtualun.virtualun.target-0500000c 192.168.25.100:3260,1 iqn.2003-04.com.sanblaze:virtualun.virtualun.target-05000001 192.168.25.100:3260,1 iqn.2003-04.com.sanblaze:virtualun.virtualun.target-05000002 6. Log into the iSCSI target using the IQN obtained in Step 5.
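For example, a login to one of the targets discovered above might look like the following; the portal and IQN are taken from the discovery output in this example, so substitute your own values:
# iscsiadm -m node -p 192.168.25.100:3260 -T iqn.2003-04.com.sanblaze:virtualun.virtualun.target-05000001 -l
After a successful login, the new SCSI disks from the target appear in lsblk or lsscsi output.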
11 FCoE Configuration This chapter provides the following Fibre Channel over Ethernet (FCoE) configuration information: “Configuring Linux FCoE Offload” on page 213 NOTE FCoE offload is supported on all 41xxx Series Adapters. Some FCoE features may not be fully enabled in the current release. For details, refer to Appendix D Feature Constraints. To enable FCoE-Offload mode, see the Application Note, Enabling Storage Offloads on Dell and Marvell FastLinQ 41000 Series Adapters at https://www.marvell.
11–FCoE Configuration Configuring Linux FCoE Offload These modules must be loaded before qedf can be functional, otherwise errors such as “unresolved symbol” can result. If the qedf module is installed in the distribution update path, the requisite modules are automatically loaded by modprobe. Marvell FastLinQ 41xxx Series Adapters support FCoE offload. This section provides the following information about FCoE offload in Linux: Differences Between qedf and bnx2fc Configuring qedf.
11–FCoE Configuration Configuring Linux FCoE Offload NOTE For more information on FastLinQ driver installation, see Chapter 3 Driver Installation. To load the qedf.ko kernel module, issue the following commands: # modprobe qed # modprobe libfcoe # modprobe qedf Verifying FCoE Devices in Linux Follow these steps to verify that the FCoE devices were detected correctly after installing and loading the qedf kernel module. To verify FCoE devices in Linux: 1.
11–FCoE Configuration Configuring Linux FCoE Offload [ 243.991851] [0000:21:00.3]: [qedf_link_update:489]:5: LINK UP (40 GB/s). 3. Check for discovered FCoE devices using the lsscsi or lsblk -S commands. An example of each command follows. # lsscsi [0:2:0:0] disk DELL PERC H700 2.10 /dev/sda [2:0:0:0] cd/dvd TEAC DVD-ROM DV-28SW R.
12 SR-IOV Configuration Single root input/output virtualization (SR-IOV) is a specification by the PCI SIG that enables a single PCI Express (PCIe) device to appear as multiple, separate physical PCIe devices. SR-IOV permits isolation of PCIe resources for performance, interoperability, and manageability. NOTE Some SR-IOV features may not be fully enabled in the current release.
12–SR-IOV Configuration Configuring SR-IOV on Windows Figure 12-1. System Setup for SR-IOV: Integrated Devices 4. On the Main Configuration Page for the selected adapter, click Device Level Configuration. 5. On the Main Configuration Page - Device Level Configuration (Figure 12-2): a. Set the Virtualization Mode to SR-IOV, or NPAR+SR-IOV if you are using NPAR mode. b. Click Back. Figure 12-2. System Setup for SR-IOV: Device Level Configuration 6. On the Main Configuration Page, click Finish. 7.
12–SR-IOV Configuration Configuring SR-IOV on Windows 9. To enable SR-IOV on the miniport adapter: a. Access Device Manager. b. Open the miniport adapter properties, and then click the Advanced tab. c. On the Advanced properties page (Figure 12-3) under Property, select SR-IOV, and then set the value to Enabled. d. Click OK. Figure 12-3. Adapter Properties, Advanced: Enabling SR-IOV 10. To create a Virtual Machine Switch (vSwitch) with SR-IOV (Figure 12-4 on page 220): a.
12–SR-IOV Configuration Configuring SR-IOV on Windows NOTE Be sure to enable SR-IOV when you create the vSwitch. This option is unavailable after the vSwitch is created. Figure 12-4. Virtual Switch Manager: Enabling SR-IOV f. The Apply Networking Changes message box advises you that Pending changes may disrupt network connectivity. To save your changes and continue, click Yes.
12–SR-IOV Configuration Configuring SR-IOV on Windows 11. To get the virtual machine switch capability, issue the following Windows PowerShell command: PS C:\Users\Administrator> Get-VMSwitch -Name SR-IOV_vSwitch | fl Output of the Get-VMSwitch command includes the following SR-IOV capabilities: 12. IovVirtualFunctionCount : 80 IovVirtualFunctionsInUse : 1 To create a virtual machine (VM) and export the virtual function (VF) in the VM: a. Create a virtual machine. b.
12–SR-IOV Configuration Configuring SR-IOV on Windows Figure 12-5. Settings for VM: Enabling SR-IOV 13. Install the Marvell drivers for the adapters detected in the VM. Use the latest drivers available from your vendor for your host OS (do not use inbox drivers). NOTE Be sure to use the same driver package on both the VM and the host system. For example, use the same qeVBD and qeND driver version on the Windows VM and in the Windows Hyper-V host.
12–SR-IOV Configuration Configuring SR-IOV on Windows After installing the drivers, the adapter is listed in the VM. Figure 12-6 shows an example. Figure 12-6. Device Manager: VM with QLogic Adapter 14. To view the SR-IOV VF details, issue the following Windows PowerShell command: PS C:\Users\Administrator> Get-NetadapterSriovVf Figure 12-7 shows example output. Figure 12-7.
12–SR-IOV Configuration Configuring SR-IOV on Linux Configuring SR-IOV on Linux To configure SR-IOV on Linux: 1. Access the server BIOS System Setup, and then click System BIOS Settings. 2. On the System BIOS Settings page, click Integrated Devices. 3. On the System Integrated Devices page (see Figure 12-1 on page 218): a. Set the SR-IOV Global Enable option to Enabled. b. Click Back. 4. On the System BIOS Settings page, click Processor Settings. 5.
12–SR-IOV Configuration Configuring SR-IOV on Linux 7. On the Device Settings page, select Port 1 for the Marvell adapter. 8. On the Device Level Configuration page (Figure 12-9): a. Set the Virtualization Mode to SR-IOV. b. Click Back. Figure 12-9. System Setup for SR-IOV: Integrated Devices 9. On the Main Configuration Page, click Finish, save your settings, and then reboot the system. 10. To enable and verify virtualization: a. Open the grub.
12–SR-IOV Configuration Configuring SR-IOV on Linux Figure 12-10. Editing the grub.conf File for SR-IOV b. Save the grub.conf file and then reboot the system. c. To verify that the changes are in effect, issue the following command: dmesg | grep -i iommu A successful input–output memory management unit (IOMMU) command output should show, for example: Intel-IOMMU: enabled d. To view VF details (number of VFs and total VFs), issue the following command: find /sys/|grep -i sriov 11.
12–SR-IOV Configuration Configuring SR-IOV on Linux b. Review the command output (Figure 12-11) to confirm that actual VFs were created on bus 4, device 2 (from the 0000:00:02.0 parameter), functions 0 through 7. Note that the actual device ID is different on the PFs (8070 in this example) versus the VFs (8090 in this example). Figure 12-11. Command Output for sriov_numvfs 12.
12–SR-IOV Configuration Configuring SR-IOV on Linux b. Ensure that the VF interface is up and running with the assigned MAC address. 14. Power off the VM and attach the VF. (Some OSs support hot-plugging of VFs to the VM.) a. In the Virtual Machine dialog box (Figure 12-13), click Add Hardware. Figure 12-13. RHEL68 Virtual Machine b. In the left pane of the Add New Virtual Hardware dialog box (Figure 12-14), click PCI Host Device. c. In the right pane, select a host device. d. Click Finish.
12–SR-IOV Configuration Configuring SR-IOV on Linux Figure 12-14. Add New Virtual Hardware 15. Power on the VM, and then issue the following command to confirm that the VF is visible: # lspci -vv | grep -i Ether 16. Install the drivers for the adapters detected in the VM. Use the latest drivers available from your vendor for your host OS (do not use inbox drivers). The same driver version must be installed on the host and the VM. 17. As needed, add more VFs in the VM.
12–SR-IOV Configuration Configuring SR-IOV on VMware To enable IOMMU for SR-IOV on SLES 12.x: 1. In the /etc/default/grub file, locate GRUB_CMDLINE_LINUX_DEFAULT, and then append the intel_iommu=on boot parameter. 2. To update the grub configuration file, issue the following command: grub2-mkconfig -o /boot/grub2/grub.cfg To enable IOMMU for SR-IOV on SLES 15.x and later: 1. In the /etc/default/grub file, locate GRUB_CMDLINE_LINUX_DEFAULT, and then append the intel_iommu=on boot parameter. 2.
12–SR-IOV Configuration Configuring SR-IOV on VMware 9. To enable the needed quantity of VFs per port (in this example, 16 on each port of a dual-port adapter), issue the following command: esxcfg-module -s "max_vfs=16,16" qedentv NOTE Each Ethernet function of the 41xxx Series Adapter must have its own entry. 10. Reboot the host. 11.
12–SR-IOV Configuration Configuring SR-IOV on VMware 0000:05:0e.3 Network controller: QLogic Corp. QLogic FastLinQ QL41xxx Series 10/25 GbE Controller (SR-IOV VF) [PF_0.5.1_VF_3] . . . 0000:05:0f.6 Network controller: QLogic Corp. QLogic FastLinQ QL41xxx Series 10/25 GbE Controller (SR-IOV VF) [PF_0.5.1_VF_14] 0000:05:0f.7 Network controller: QLogic Corp. QLogic FastLinQ QL41xxx Series 10/25 GbE Controller (SR-IOV VF) [PF_0.5.1_VF_15] 13. 14. Attach VFs to the VM as follows: a.
12–SR-IOV Configuration Configuring SR-IOV on VMware Figure 12-15. VMware Host Edit Settings 15. To validate the VFs per port, issue the esxcli command as follows: [root@localhost:~] esxcli network sriovnic vf list -n vmnic6 VF ID Active PCI Address Owner World ID ----- ------ ----------- -------------- 0 true 005:02.0 60591 1 true 005:02.
12–SR-IOV Configuration Configuring SR-IOV on VMware 2 false 005:02.2 - 3 false 005:02.3 - 4 false 005:02.4 - 5 false 005:02.5 - 6 false 005:02.6 - 7 false 005:02.7 - 8 false 005:03.0 - 9 false 005:03.1 - 10 false 005:03.2 - 11 false 005:03.3 - 12 false 005:03.4 - 13 false 005:03.5 - 14 false 005:03.6 - 15 false 005:03.7 - 16. Install the Marvell drivers for the adapters detected in the VM.
13 NVMe-oF Configuration with RDMA Non-Volatile Memory Express over Fabrics (NVMe-oF) enables the use of alternate transports to PCIe to extend the distance over which an NVMe host device and an NVMe storage drive or subsystem can connect. NVMe-oF defines a common architecture that supports a range of storage networking fabrics for the NVMe block storage protocol over a storage networking fabric.
13–NVMe-oF Configuration with RDMA Figure 13-1 illustrates an example network. Figure 13-1.
13–NVMe-oF Configuration with RDMA Installing Device Drivers on Both Servers Installing Device Drivers on Both Servers After installing your operating system (SLES 12 SP3), install device drivers on both servers. To upgrade the kernel to the latest Linux upstream kernel, go to: https://www.kernel.org/pub/linux/kernel/v4.x/ 1. Install and load the latest FastLinQ drivers (qed, qede, libqedr/qedr) following all installation instructions in the README. 2.
13–NVMe-oF Configuration with RDMA Configuring the Target Server 3. Enable and start the RDMA service as follows: # systemctl enable rdma.service # systemctl start rdma.service Disregard the RDMA Service Failed error. All OFED modules required by qedr are already loaded. Configuring the Target Server Configure the target server after the reboot process. After the server is operating, you cannot change the configuration without rebooting.
13–NVMe-oF Configuration with RDMA Configuring the Target Server Table 13-1. Target Parameters (Continued) Command Description # echo -n /dev/nvme0n1 >namespaces/ 1/device_path Sets the NVMe device path. The NVMe device path can differ between systems. Check the device path using the lsblk command. This system has two NVMe devices: nvme0n1 and nvme1n1. # echo 1 > namespaces/1/enable Enables the namespace. # mkdir /sys/kernel/config/nvmet/ ports/1 Creates NVMe port 1.
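Pulled together, a minimal target-side configuration sketch based on the commands in Table 13-1 might look like the following. The subsystem NQN (nvme-qlogic-tgt1), IP address (1.1.1.1), and port (1023) match the initiator example later in this chapter; the attribute names are standard nvmet configfs entries, and the allow-any-host setting is an assumption made to keep the sketch short:
# modprobe nvmet
# modprobe nvmet-rdma
# cd /sys/kernel/config/nvmet
# mkdir subsystems/nvme-qlogic-tgt1
# echo 1 > subsystems/nvme-qlogic-tgt1/attr_allow_any_host
# mkdir subsystems/nvme-qlogic-tgt1/namespaces/1
# echo -n /dev/nvme0n1 > subsystems/nvme-qlogic-tgt1/namespaces/1/device_path
# echo 1 > subsystems/nvme-qlogic-tgt1/namespaces/1/enable
# mkdir ports/1
# echo 1.1.1.1 > ports/1/addr_traddr
# echo rdma > ports/1/addr_trtype
# echo ipv4 > ports/1/addr_adrfam
# echo 1023 > ports/1/addr_trsvcid
# ln -s /sys/kernel/config/nvmet/subsystems/nvme-qlogic-tgt1 ports/1/subsystems/nvme-qlogic-tgt1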
13–NVMe-oF Configuration with RDMA Configuring the Initiator Server Configuring the Initiator Server You must configure the initiator server after the reboot process. After the server is operating, you cannot change the configuration without rebooting. If you are using a startup script to configure the initiator server, consider pausing the script (using the wait command or something similar) as needed to ensure that each command finishes before executing the next command.
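Before the connect step that follows, the initiator typically loads the NVMe over Fabrics RDMA module and discovers the target. A minimal sketch using the same target address and port as the connect example is shown below; it is illustrative only and does not replace the numbered steps of this procedure:
# modprobe nvme-rdma
# nvme discover -t rdma -a 1.1.1.1 -s 1023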
13–NVMe-oF Configuration with RDMA Configuring the Initiator Server 5. Connect to the discovered NVMe-oF target (nvme-qlogic-tgt1) using the NQN. Issue the following command after each server reboot. For example: # nvme connect -t rdma -n nvme-qlogic-tgt1 -a 1.1.1.1 -s 1023 6. Confirm the NVMe-oF target connection with the NVMe-oF device as follows: # dmesg | grep nvme # lsblk # nvme list Figure 13-3 shows an example. Figure 13-3.
13–NVMe-oF Configuration with RDMA Testing the NVMe-oF Devices Compare the latency of the local NVMe device on the target server with that of the NVMe-oF device on the initiator server to show the latency that NVMe-oF adds to the system. To test the NVMe-oF device: 1.
13–NVMe-oF Configuration with RDMA Optimizing Performance In this example, the target NVMe device latency is 8µsec. The total latency that results from the use of NVMe-oF is the difference between the initiator device NVMe-oF latency (30µsec) and the target device NVMe-oF latency (8µsec), or 22µsec. 4. Run FIO to measure bandwidth of the local NVMe device on the target server.
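A representative FIO bandwidth run might look like the following; the device name, block size, queue depth, job count, and runtime are illustrative values, not settings prescribed by this guide:
# fio --name=bw-test --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --rw=read --bs=128k --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting
Repeat the same command against the NVMe-oF device on the initiator server and compare the reported bandwidth.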
13–NVMe-oF Configuration with RDMA Optimizing Performance 3. Set the IRQ affinity for all 41xxx Series Adapters. The multi_rss-affin.sh file is a script file that is listed in “IRQ Affinity (multi_rss-affin.sh)” on page 244. # systemctl stop irqbalance # ./multi_rss-affin.sh eth1 NOTE A different version of this script, qedr_affin.sh, is in the 41xxx Linux Source Code Package in the \add-ons\performance\roce directory. For an explanation of the IRQ affinity settings, refer to the multiple_irqs.
13–NVMe-oF Configuration with RDMA Optimizing Performance
CPUID=$((CPUID*OFFSET))
for ((A=1; A<=${NUM_FP}; A=${A}+1)) ; do
  INT=`grep -m $A $eth /proc/interrupts | tail -1 | cut -d ":" -f 1`
  SMP=`echo $CPUID 16 o p | dc`
  echo ${INT} smp affinity set to ${SMP}
  echo $((${SMP})) > /proc/irq/$((${INT}))/smp_affinity
  CPUID=$((CPUID*FACTOR))
  if [ ${CPUID} -gt ${MAXCPUID} ]; then
    CPUID=1
    CPUID=$((CPUID*OFFSET))
  fi
done
done
CPU Frequency (cpufreq.sh) The following script sets the CPU frequency. #Usage ".
13–NVMe-oF Configuration with RDMA Optimizing Performance NOTE The following commands apply only to the initiator server.
14 VXLAN Configuration This chapter provides instructions for: Configuring VXLAN in Linux “Configuring VXLAN in VMware” on page 249 “Configuring VXLAN in Windows Server 2016” on page 250 Configuring VXLAN in Linux To configure VXLAN in Linux: 1. Download, extract, and configure the openvswitch (OVS) tar ball. a. Download the appropriate openvswitch release from the following location: http://www.openvswitch.org/download/ b.
14–VXLAN Configuration Configuring VXLAN in Linux 2. Create the bridge. a. To configure Host 1, issue the following commands: ovs-vsctl add-br br0 ovs-vsctl add-br br1 ovs-vsctl add-port br0 eth0 ifconfig eth0 0 && ifconfig br0 192.168.1.10 netmask 255.255.255.0 route add default gw 192.168.1.1 br0 ifconfig br1 10.1.2.10 netmask 255.255.255.0 ovs-vsctl add-port br1 vx1 -- set interface vx1 type=vxlan options:remote_ip=192.168.1.11 (peer IP address) b.
14–VXLAN Configuration Configuring VXLAN in VMware 4. Configure the bridge as a passthrough to the VMs, and then check connectivity from the VM to the Peer. a. Create a VM through virt-manager. b.
14–VXLAN Configuration Configuring VXLAN in Windows Server 2016 https://pubs.vmware.com/nsx-63/topic/com.vmware.nsx.troubleshooting.doc/GUI D-EA1DB524-DD2E-4157-956E-F36BDD20CDB2.html https://communities.vmware.com/api/core/v3/attachments/124957/data Configuring VXLAN in Windows Server 2016 VXLAN configuration in Windows Server 2016 includes: Enabling VXLAN Offload on the Adapter Deploying a Software Defined Network Enabling VXLAN Offload on the Adapter To enable VXLAN offload on the adapter: 1.
14–VXLAN Configuration Configuring VXLAN in Windows Server 2016 Deploying a Software Defined Network To take advantage of VXLAN encapsulation task offload on virtual machines, you must deploy a Software Defined Networking (SDN) stack that utilizes a Microsoft Network Controller. For more details, refer to the following Microsoft TechNet link on Software Defined Networking: https://technet.microsoft.
15 Windows Server 2016 This chapter provides the following information for Windows Server 2016: Configuring RoCE Interfaces with Hyper-V “RoCE over Switch Embedded Teaming” on page 258 “Configuring QoS for RoCE” on page 259 “Configuring VMMQ” on page 268 “Configuring Storage Spaces Direct” on page 272 Configuring RoCE Interfaces with Hyper-V In Windows Server 2016, Hyper-V with Network Direct Kernel Provider Interface (NDKPI) Mode-2, host virtual network adapters (host virtual NICs) sup
15–Windows Server 2016 Configuring RoCE Interfaces with Hyper-V Creating a Hyper-V Virtual Switch with an RDMA NIC Follow the procedures in this section to create a Hyper-V virtual switch and then enable RDMA in the host VNIC. To create a Hyper-V virtual switch with an RDMA virtual NIC: 1. On all physical interfaces, set the value of the NetworkDirect Functionality parameter to Enabled. 2. Launch Hyper-V Manager. 3. Click Virtual Switch Manager (see Figure 15-1). Figure 15-1.
15–Windows Server 2016 Configuring RoCE Interfaces with Hyper-V 3. On the Advanced page (Figure 15-2): a. Under Property, select Network Direct (RDMA). b. Under Value, select Enabled. c. Click OK. Figure 15-2. Hyper-V Virtual Ethernet Adapter Properties 4. To enable RDMA through PowerShell, issue the following Windows PowerShell command: PS C:\Users\Administrator> Enable-NetAdapterRdma "vEthernet (New Virtual Switch)" PS C:\Users\Administrator> Adding a vLAN ID to Host Virtual NIC To add a vLAN ID
15–Windows Server 2016 Configuring RoCE Interfaces with Hyper-V Figure 15-3 shows the command output. Figure 15-3. Windows PowerShell Command: Get-VMNetworkAdapter 2. To set the vLAN ID to the host virtual NIC, issue the following Windows PowerShell command: PS C:\Users\Administrator> Set-VMNetworkAdaptervlan -VMNetworkAdapterName "New Virtual Switch" -VlanId 5 -Access -ManagementOS NOTE Note the following about adding a vLAN ID to a host virtual NIC: A vLAN ID must be assigned to a host virtual NIC.
15–Windows Server 2016 Configuring RoCE Interfaces with Hyper-V Adding Host Virtual NICs (Virtual Ports) To add host virtual NICs: 1. To add a host virtual NIC, issue the following command: Add-VMNetworkAdapter -SwitchName "New Virtual Switch" -Name SMB - ManagementOS 2. Enable RDMA on host virtual NICs as shown in “To enable RDMA in a host virtual NIC:” on page 253. 3.
15–Windows Server 2016 Configuring RoCE Interfaces with Hyper-V Figure 15-5. Add Counters Dialog Box If the RoCE traffic is running, counters appear as shown in Figure 15-6. Figure 15-6.
15–Windows Server 2016 RoCE over Switch Embedded Teaming RoCE over Switch Embedded Teaming Switch Embedded Teaming (SET) is Microsoft’s alternative NIC teaming solution available to use in environments that include Hyper-V and the Software Defined Networking (SDN) stack in Windows Server 2016 Technical Preview. SET integrates limited NIC Teaming functionality into the Hyper-V Virtual Switch.
15–Windows Server 2016 Configuring QoS for RoCE Figure 15-8 shows command output. Figure 15-8. Windows PowerShell Command: Get-NetAdapter 2. To enable RDMA on SET, issue the following Windows PowerShell command: PS C:\Users\Administrator> Enable-NetAdapterRdma "vEthernet (SET)" Assigning a vLAN ID on SET To assign a vLAN ID on SET, issue the following Windows PowerShell command: PS C:\Users\Administrator> Set-VMNetworkAdapterVlan -VMNetworkAdapterName "SET" -VlanId 5 -Access -ManagementOS
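To confirm the result, the RDMA state and the vLAN assignment of the SET host virtual NIC can be read back with standard cmdlets; the adapter name matches the example above:
PS C:\Users\Administrator> Get-NetAdapterRdma "vEthernet (SET)"
PS C:\Users\Administrator> Get-VMNetworkAdapterVlan -ManagementOS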
15–Windows Server 2016 Configuring QoS for RoCE Configuring QoS by Disabling DCBX on the Adapter Before configuring QoS by disabling DCBX on the adapter, complete the configuration on all of the systems in use. The priority-based flow control (PFC), enhanced transmission selection (ETS), and traffic class configuration must be the same on the switch and the server. To configure QoS by disabling DCBX: 1. Disable DCBX on the adapter. 2. Using HII, set the RoCE Priority to 0. 3.
15–Windows Server 2016 Configuring QoS for RoCE Figure 15-9. Advanced Properties: Enable QoS 6. Assign the vLAN ID to the interface as follows: a. Open the miniport properties, and then click the Advanced tab. b. On the adapter properties’ Advanced page (Figure 15-10) under Property, select VLAN ID, and then set the value. c. Click OK. NOTE The preceding step is required for priority flow control (PFC).
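Because DCBX is disabled in this flow, the host's QoS settings are applied manually rather than learned from the switch. As an optional precaution, a sketch of clearing the DCBX willing bit on the host with the standard DCB cmdlet (not part of the numbered procedure above) is:
PS C:\Users\Administrator> Set-NetQosDcbxSetting -Willing $false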
15–Windows Server 2016 Configuring QoS for RoCE Figure 15-10. Advanced Properties: Setting VLAN ID 7. To enable PFC for RoCE on a specific priority, issue the following command: PS C:\Users\Administrators> Enable-NetQosFlowControl -Priority 5 NOTE If configuring RoCE over Hyper-V, do not assign a vLAN ID to the physical interface. 8. To disable priority flow control on any other priority, and then display the flow control state, issue the following commands: PS C:\Users\Administrator> Disable-NetQosFlowControl 0,1,2,3,4,6,7 PS C:\Users\Administrator> Get-NetQosFlowControl
15–Windows Server 2016 Configuring QoS for RoCE
Priority Enabled  PolicySet
5        True     Global
6        False    Global
7        False    Global
9. To configure QoS and assign the relevant priority to each type of traffic, issue the following commands (where Priority 5 is tagged for RoCE and Priority 0 is tagged for TCP):
PS C:\Users\Administrators> New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 5 -PolicyStore ActiveStore
PS C:\Users\Administrators> New-NetQosPolicy "TCP" -IPProtocolMatchCondition TCP -PriorityValue8021Action 0 -PolicyStore ActiveStore
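Between steps 9 and 11, ETS traffic classes are created. A minimal sketch that would produce the traffic classes shown in the following output (names, priorities, and bandwidth percentages taken from that output) is:
PS C:\Users\Administrators> New-NetQosTrafficClass -Name "RDMA class" -Priority 5 -BandwidthPercentage 50 -Algorithm ETS
PS C:\Users\Administrators> New-NetQosTrafficClass -Name "TCP class" -Priority 0 -BandwidthPercentage 30 -Algorithm ETS
PS C:\Users\Administrators> Get-NetQosTrafficClass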
15–Windows Server 2016 Configuring QoS for RoCE
Name       Algorithm Bandwidth(%) Priority PolicySet IfIndex IfAlias
----       --------- ------------ -------- --------- ------- -------
[Default]  ETS       20           1-4,6-7  Global
RDMA class ETS       50           5        Global
TCP class  ETS       30           0        Global
11. To see the network adapter QoS from the preceding configuration, issue the following Windows PowerShell command:
PS C:\Users\Administrator> Get-NetAdapterQos
Name         : SLOT 4 Port 1
Enabled      : True
Capabilities : Hardware
15–Windows Server 2016 Configuring QoS for RoCE NOTE If the switch does not have a way of designating the RoCE traffic, you may need to set the RoCE Priority to the number used by the switch. Arista® switches can do so, but some other switches cannot. 3. To install the DCB role in the host, issue the following Windows PowerShell command: PS C:\Users\Administrators> Install-WindowsFeature Data-Center-Bridging NOTE For this configuration, set the DCBX Protocol to CEE. 4.
15–Windows Server 2016 Configuring QoS for RoCE Figure 15-11. Advanced Properties: Enabling QoS 6. Assign the vLAN ID to the interface (required for PFC) as follows: a. Open the miniport properties, and then click the Advanced tab. b. On the adapter properties’ Advanced page (Figure 15-12) under Property, select VLAN ID, and then set the value. c. Click OK.
15–Windows Server 2016 Configuring QoS for RoCE
Figure 15-12. Advanced Properties: Setting VLAN ID
7. To verify the QoS configuration that the adapter received from the switch, issue the following Windows PowerShell command:
PS C:\Users\Administrators> Get-NetAdapterQos
Name                       : Ethernet 5
Enabled                    : True
Capabilities               :                       Hardware      Current
                             --------              -------
                             MacSecBypass        : NotSupported  NotSupported
                             DcbxSupport         : CEE
                             NumTCs(Max/ETS/PFC) : 4/4/4
OperationalTrafficClasses  : TC TSA Bandwidth Priorities
                             -- --- --------- ----------
OperationalFlowControl     :
OperationalClassifications : Protocol  Port/Type Priority
                             --------  --------- --------
                             NetDirect 445       5
RemoteTrafficClasses       : TC TSA Bandwidth Priorities
                             -- --- --------- ----------
                             0  ETS 5%        0-4,6-7
                             1  ETS 95%       5
RemoteFlowControl          : Priority 5 Enabled
RemoteClassifications      : Protocol  Port/Type Priority
                             --------  --------- --------
                             NetDirect 445       5
NOTE
The preceding example is taken when the adapter port is connected to an Arista 7060X switch.
15–Windows Server 2016 Configuring VMMQ Enabling VMMQ on the Adapter To enable VMMQ on the adapter: 1. Open the miniport properties, and then click the Advanced tab. 2. On the adapter properties’ Advanced page (Figure 15-13) under Property, select Virtual Switch RSS, and then set the value to Enabled. 3. Click OK. Figure 15-13. Advanced Properties: Enabling Virtual Switch RSS Creating a Virtual Machine Switch with or Without SR-IOV To create a virtual machine switch with or without SR-IOV: 1.
15–Windows Server 2016 Configuring VMMQ Figure 15-14. Virtual Switch Manager 5. Click OK. Enabling VMMQ on the Virtual Machine Switch To enable VMMQ on the virtual machine switch: Issue the following Windows PowerShell command: PS C:\Users\Administrators> Set-VMSwitch -name q1 -defaultqueuevmmqenabled $true -defaultqueuevmmqqueuepairs 4
15–Windows Server 2016 Configuring VMMQ Getting the Virtual Machine Switch Capability To get the virtual machine switch capability: Issue the following Windows PowerShell command: PS C:\Users\Administrator> Get-VMSwitch -Name ql | fl Figure 15-15 shows example output. Figure 15-15. Windows PowerShell Command: Get-VMSwitch Creating a VM and Enabling VMMQ on VMNetworkAdapters in the VM To create a virtual machine (VM) and enable VMMQ on VMNetworkAdapters in the VM: 1. Create a VM. 2.
15–Windows Server 2016 Configuring Storage Spaces Direct 4. To enable VMMQ on the VM, issue the following Windows PowerShell command: PS C:\Users\Administrators> Set-VMNetworkAdapter -VMName vm1 -VMNetworkAdapterName "network adapter" -VmmqEnabled $true -VmmqQueuePairs 4 Enabling and Disabling VMMQ on a Management NIC To enable or disable VMMQ on a management NIC: To enable VMMQ on a management NIC, issue the following command: PS C:\Users\Administrator> Set-VMNetworkAdapter -ManagementOS -VmmqEnabled $true
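To disable VMMQ on the management NIC, the same cmdlet can presumably be used with the flag cleared:
PS C:\Users\Administrator> Set-VMNetworkAdapter -ManagementOS -VmmqEnabled $false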
15–Windows Server 2016 Configuring Storage Spaces Direct Configuring the Hardware Figure 15-16 shows an example of hardware configuration. Figure 15-16. Example Hardware Configuration NOTE The disks used in this example are 4 × 400G NVMe™ disks and 12 × 200G SSDs. Deploying a Hyper-Converged System This section includes instructions to install and configure the components of a Hyper-Converged system using Windows Server 2016.
15–Windows Server 2016 Configuring Storage Spaces Direct Deploying the Operating System To deploy the operating system: 1. Install the operating system. 2. Install the Windows Server roles (Hyper-V). 3. Install the following features: Failover Cluster and Data Center Bridging (DCB). 4. Connect the nodes to a domain and add domain accounts. (A PowerShell sketch of steps 2 through 4 follows below.) Configuring the Network To deploy Storage Spaces Direct, the Hyper-V switch must be deployed with RDMA-enabled host virtual NICs.
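A minimal sketch of steps 2 through 4 in the preceding procedure, using the standard Windows Server role and feature names (the domain name is a placeholder):
PS C:\Users\Administrator> Install-WindowsFeature -Name Hyper-V, Failover-Clustering, Data-Center-Bridging -IncludeManagementTools -Restart
PS C:\Users\Administrator> Add-Computer -DomainName "<domain name>" -Restart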
15–Windows Server 2016 Configuring Storage Spaces Direct dcbx version cee no shutdown 2. Enable Network Quality of Service. NOTE Network Quality of Service is used to ensure that the Software Defined Storage system has enough bandwidth to communicate between the nodes to ensure resiliency and performance. To configure QoS on the adapter, see “Configuring QoS for RoCE” on page 259. 3. Create a Hyper-V virtual switch with Switch Embedded Teaming (SET) and RDMA virtual NIC as follows: a.
15–Windows Server 2016 Configuring Storage Spaces Direct NOTE These commands can be on the same or different vLANs. e. To verify that the vLAN ID is set, issue the following command: Get-VMNetworkAdapterVlan -ManagementOS f. To disable and enable each host virtual NIC adapter so that the vLAN is active, issue the following commands: Disable-NetAdapter "vEthernet (SMB_1)" Enable-NetAdapter "vEthernet (SMB_1)" Disable-NetAdapter "vEthernet (SMB_2)" Enable-NetAdapter "vEthernet (SMB_2)" g.
15–Windows Server 2016 Configuring Storage Spaces Direct To validate a set of servers for use as a Storage Spaces Direct cluster, issue the following Windows PowerShell command: Test-Cluster -Node <MachineName1, MachineName2, MachineName3, MachineName4> -Include "Storage Spaces Direct", Inventory, Network, "System Configuration" Step 2. Creating a Cluster Create a cluster with the four nodes that were validated for cluster creation in Step 1, Running a Cluster Validation Tool.
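A minimal sketch of creating the cluster with the standard failover-clustering cmdlet, using placeholder names:
New-Cluster -Name <ClusterName> -Node <MachineName1, MachineName2, MachineName3, MachineName4> -NoStorage
The -NoStorage switch keeps the cluster from claiming disks at creation time so that they can be added later by Storage Spaces Direct.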
15–Windows Server 2016 Configuring Storage Spaces Direct Get-StoragePool |? IsPrimordial -eq $false | Set-StoragePool -IsReadOnly:$false -ErrorAction SilentlyContinue Get-StoragePool |? IsPrimordial -eq $false | Get-VirtualDisk | Remove-VirtualDisk -Confirm:$false -ErrorAction SilentlyContinue Get-StoragePool |? IsPrimordial -eq $false | Remove-StoragePool -Confirm:$false -ErrorAction SilentlyContinue Get-PhysicalDisk | Reset-PhysicalDisk -ErrorAction SilentlyContinue Get-Disk |? Number -ne $null |? IsBoot
15–Windows Server 2016 Configuring Storage Spaces Direct The following Windows PowerShell command creates a virtual disk with both mirror and parity resiliency on the storage pool: New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName <VirtualDiskName> -FileSystem CSVFS_ReFS -StorageTierFriendlyNames Capacity,Performance -StorageTierSizes <CapacityTierSize>,<PerformanceTierSize> -CimSession <ClusterName> Step 7.
16 Windows Server 2019 This chapter provides the following information for Windows Server 2019: RSSv2 for Hyper-V “Windows Server 2019 Behaviors” on page 281 “New Adapter Properties” on page 282 RSSv2 for Hyper-V In Windows Server 2019, Microsoft added support for Receive Side Scaling version 2 (RSSv2) with Hyper-V (RSSv2 per vPort). RSSv2 Description Compared to RSSv1, RSSv2 decreases the time between the CPU load measurement and the indirection table update.
16–Windows Server 2019 Windows Server 2019 Behaviors Known Event Log Errors Under typical operation, the dynamic algorithm of RSSv2 may initiate an indirection table update that is incompatible with the driver and return an appropriate status code. In such cases, an event log error occurs, even though no functional operation issue exists. Figure 16-1 shows an example. Figure 16-1.
16–Windows Server 2019 New Adapter Properties New Adapter Properties New user-configurable properties available in Windows Server 2019 are described in the following sections: Max Queue Pairs (L2) Per VPort Network Direct Technology Virtualization Resources VMQ and VMMQ Default Accelerations Single VPort Pool Max Queue Pairs (L2) Per VPort As explained in VMMQ Is Enabled by Default, Windows 2019 (and Windows 2016) introduced a new user-configurable parameter, Max Queue Pairs (L2) per VPort.
16–Windows Server 2019 New Adapter Properties Virtualization Resources Table 16-1 lists the maximum quantities of virtualization resources in Windows 2019 for Dell 41xxx Series Adapters. Table 16-1.
16–Windows Server 2019 New Adapter Properties VMQ and VMMQ Default Accelerations Table 16-2 lists the VMQ and VMMQ default and other values for accelerations in Windows Server 2019 for Dell 41xxx Series Adapters. Table 16-2.
16–Windows Server 2019 New Adapter Properties
PF Non-Default VPort:
For the host:
Set-VMNetworkAdapter -ManagementOS -VmmqEnabled:1 -VmmqQueuePairs:<Number of QPs>
For the VM:
Set-VMNetworkAdapter -VMName <VMName> -VmmqEnabled:1 -VmmqQueuePairs:<Number of QPs>
VF Non-Default VPort:
Set-VMNetworkAdapter -VMName <VMName> -IovWeight:100 -IovQueuePairsRequested:<Number of QPs>
NOTE
The default quantity of QPs assigned for a VF (IovQueuePairsRequested) is still 1.
17 Troubleshooting This chapter provides the following troubleshooting information: Troubleshooting Checklist “Verifying that Current Drivers Are Loaded” on page 287 “Testing Network Connectivity” on page 288 “Microsoft Virtualization with Hyper-V” on page 289 “Linux-specific Issues” on page 289 “Miscellaneous Issues” on page 289 “Collecting Debug Data” on page 290 Troubleshooting Checklist CAUTION Before you open the server cabinet to add or remove the adapter, review the “Safe
17–Troubleshooting Verifying that Current Drivers Are Loaded Replace the failed adapter with one that is known to work properly. If the second adapter works in the slot where the first one failed, the original adapter is probably defective. Install the adapter in another functioning system, and then run the tests again. If the adapter passes the tests in the new system, the original system may be defective. Remove all other adapters from the system, and then run the tests again.
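In Windows, one quick way to confirm which driver is active for an adapter port is the standard NetAdapter cmdlet; this is a sketch, and the port name shown follows the earlier examples and may differ on your system:
PS C:\Users\Administrator> Get-NetAdapter -Name "SLOT 4 Port 1" | Format-List Name, InterfaceDescription, DriverName, DriverVersionString, DriverDate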
17–Troubleshooting Testing Network Connectivity If you loaded a new driver, but have not yet rebooted, the modinfo command will not show the updated driver information. Instead, issue the following dmesg command to view the logs. In this example, the last entry identifies the driver that will be active upon reboot. # dmesg | grep -i "Cavium" | grep -i "qede" [ 10.097526] QLogic FastLinQ 4xxxx Ethernet Driver qede x.x.x.x [ 23.093526] QLogic FastLinQ 4xxxx Ethernet Driver qede x.x.x.x [ 34.
17–Troubleshooting Microsoft Virtualization with Hyper-V Testing Network Connectivity for Linux To verify that the Ethernet interface is up and running: 1. To check the status of the Ethernet interface, issue the ifconfig command. 2. To check the statistics on the Ethernet interface, issue the netstat -i command. To verify that the connection has been established: 1. Ping an IP host on the network. From the command line, issue the following command: ping <IP address> 2. Press ENTER.
17–Troubleshooting Collecting Debug Data Problem: In an ESXi environment, with the iSCSI driver (qedil) installed, sometimes, the VI-client cannot access the host. This is due to the termination of the hostd daemon, which affects connectivity with the VI-client. Solution: Contact VMware technical support. Collecting Debug Data Use the commands in Table 17-1 to collect debug data. Table 17-1.
A Adapter LEDS Table A-1 lists the LED indicators for the state of the adapter port link and activity. Table A-1.
B Cables and Optical Modules This appendix provides the following information for the supported cables and optical modules: Supported Specifications “Tested Cables and Optical Modules” on page 293 “Tested Switches” on page 297 Supported Specifications The 41xxx Series Adapters support a variety of cables and optical modules that comply with SFF8024.
B–Cables and Optical Modules Tested Cables and Optical Modules Tested Cables and Optical Modules Marvell does not guarantee that every cable or optical module that satisfies the compliance requirements will operate with the 41xxx Series Adapters. Marvell has tested the components listed in Table B-1 and presents this list for your convenience. Table B-1.
B–Cables and Optical Modules Tested Cables and Optical Modules Table B-1.
B–Cables and Optical Modules Tested Cables and Optical Modules Table B-1.
B–Cables and Optical Modules Tested Cables and Optical Modules Table B-1. Tested Cables and Optical Modules (Continued) Speed/Form Factor 10G AOCc Manufacturer Dell Dell 25G AOC InnoLight® a Cable length is indicated in meters. b DAC is direct attach cable. c AOC is active optical cable.
B–Cables and Optical Modules Tested Switches Tested Switches Table B-2 lists the switches that have been tested for interoperability with the 41xxx Series Adapters. This list is based on switches that are available at the time of product release, and is subject to change over time as new switches enter the market or are discontinued. Table B-2.
C Dell Z9100 Switch Configuration The 41xxx Series Adapters support connections with the Dell Z9100 Ethernet Switch. However, until the auto-negotiation process is standardized, the switch must be explicitly configured to connect to the adapter at 25Gbps. To configure a Dell Z9100 switch port to connect to the 41xxx Series Adapter at 25Gbps: 1. Establish a serial port connection between your management workstation and the switch. 2.
C–Dell Z9100 Switch Configuration 25G Quad port mode with 25G speed Dell(conf)#stack-unit 1 port 5 portmode quad speed 25G For information about changing the adapter link speed, see “Testing Network Connectivity” on page 288. 5. Verify that the port is operating at 25Gbps: Dell# Dell#show running-config | grep "port 5" stack-unit 1 port 5 portmode quad speed 25G 6. To disable auto-negotiation on switch port 5, follow these steps: a.
D Feature Constraints This appendix provides information about feature constraints implemented in the current release. These feature coexistence constraints may be removed in a future release. At that time, you should be able to use the feature combinations without any additional configuration steps beyond what would be usually required to enable the features.
D–Feature Constraints NIC and SAN Boot to Base Is Supported Only on Select PFs Ethernet (such as software iSCSI remote boot) and PXE boot are currently supported only on the first Ethernet PF of a physical port. In NPAR Mode configuration, the first Ethernet PF (that is, not the other Ethernet PFs) supports Ethernet (such as software iSCSI remote boot) and PXE boot. Not all devices support FCoE-Offload and iSCSI-Offload.
E Revision History Document Revision History Revision A, April 28, 2017 Revision B, August 24, 2017 Revision C, October 1, 2017 Revision D, January 24, 2018 Revision E, March 15, 2018 Revision F, April 19, 2018 Revision G, May 22, 2018 Revision H, August 23, 2018 Revision J, January 23, 2019 Revision K, July 2, 2019 Revision L, July 3, 2019 Revision M, October 16, 2019 Changes Sections Affected Added the following adapters to the list of Marvell products: QL41164HFRJ-DE, QL41164HFRJ-DE, QL41164HFCU-DE,
E–Revision History In the bullets following the second paragraph, added text to further describe Dell iSCSI HW and SW installation. “iSCSI Preboot Configuration” on page 68 Moved section to be closer to other relevant sections. “Configuring the Storage Target” on page 71 In the first paragraph, corrected the first sentence to “The Boot Mode option is listed under NIC “Selecting the iSCSI UEFI Boot Protocol” on page 72 Configuration…” Added instructions for setting UEFI iSCSI HBA.
E–Revision History In Step 1, updated the second and third bullets to the currently supported OSs for SLES 12 and RHEL, respectively. “RoCE v2 Configuration for Linux” on page 155 Changed Step 4 part b to “Set the RDMA Proto- “Preparing the Adapter for iWARP” on page 177 col Support to RoCE/iWARP or iWARP.” Removed reference to appendix C; added configuration information. “Configuring the Dell Z9100 Ethernet Switch for RoCE” on page 131 Updated list of OSs that support inbox OFED.
Glossary ACPI The Advanced Configuration and Power Interface (ACPI) specification provides an open standard for unified operating system-centric device configuration and power management. The ACPI defines platform-independent interfaces for hardware discovery, configuration, power management, and monitoring.
User’s Guide—Converged Network Adapters 41xxx Series DCBX Data center bridging exchange. A protocol used by DCB devices to exchange configuration information with directly connected peers. The protocol may also be used for misconfiguration detection and for configuration of the peer. CHAP Challenge-handshake authentication protocol (CHAP) is used for remote logon, usually between a client and server or a Web browser and Web server.
User’s Guide—Converged Network Adapters 41xxx Series EEE Energy-efficient Ethernet. A set of enhancements to the twisted-pair and backplane Ethernet family of computer networking standards that allows for less power consumption during periods of low data activity. The intention was to reduce power consumption by 50 percent or more, while retaining full compatibility with existing equipment. The Institute of Electrical and Electronics Engineers (IEEE), through the IEEE 802.
User’s Guide—Converged Network Adapters 41xxx Series IP FTP File transfer protocol. A standard network protocol used to transfer files from one host to another host over a TCP-based network, such as the Internet. FTP is required for out-of-band firmware uploads that will complete faster than in-band firmware uploads. Internet protocol. A method by which data is sent from one computer to another over the Internet. IP specifies the format of packets, also called datagrams, and the addressing scheme.
User’s Guide—Converged Network Adapters 41xxx Series maximum transmission unit See MTU. Layer 2 Refers to the data link layer of the multilayered communication model, Open Systems Interconnection (OSI). The function of the data link layer is to move data across the physical links in a network, where a switch redirects data messages at the Layer 2 level using the destination MAC address to determine the message destination. message signaled interrupts See MSI, MSI-X.
User’s Guide—Converged Network Adapters 41xxx Series NPAR NIC partitioning. The division of a single NIC port into multiple physical functions or partitions, each with a user-configurable bandwidth and personality (interface type). Personalities include NIC, FCoE, and iSCSI. quality of service See QoS. PF Physical function. RDMA Remote direct memory access. The ability for one node to write directly to the memory of another (with address and size semantics) over a network.
User’s Guide—Converged Network Adapters 41xxx Series A target is a device that responds to a request by an initiator (the host system). Peripherals are targets, but for some commands (for example, a SCSI COPY command), the peripheral may act as an initiator. SCSI Small computer system interface. A high-speed interface used to connect devices, such as hard drives, CD drives, printers, and scanners, to a computer. SCSI can connect many devices using a single controller.
User’s Guide—Converged Network Adapters 41xxx Series UDP User datagram protocol. A connectionless transport protocol without any guarantee of packet sequence or delivery. It functions directly on top of IP. virtual port See vPort. vLAN Virtual logical area network (LAN). A group of hosts with a common set of requirements that communicate as if they were attached to the same wire, regardless of their physical location.
Contact Information Marvell Technology Group http://www.marvell.com Marvell.