Marvell® FastLinQ® Ethernet iSCSI Adapters and Ethernet FCoE Adapters 57840/57810/57800 Adapters and other 57xx and 57xxx Adapters User's Guide Third-party information brought to you courtesy of Dell. Doc No. BC0054508-00 Rev.
Marvell® FastLinQ® Ethernet iSCSI Adapters and Ethernet FCoE Adapters User’s Guide THIS DOCUMENT AND THE INFORMATION FURNISHED IN THIS DOCUMENT ARE PROVIDED “AS IS” WITHOUT ANY WARRANTY.
Table of Contents

Preface: Intended Audience; What Is in This Guide; Related Materials; Documentation Conventions; Laser Safety Information
1 Functionality and Features
2 Configuring Teaming in Windows Server
3 Virtual LANs in Windows
4 Installing the Hardware
5 Manageability
6 Boot Agent Driver Software
7 Linux Driver Software
8 VMware Driver Software
9 Windows Driver Software
10 Citrix XenServer Driver Software
11 iSCSI Protocol
Later chapters cover Marvell Teaming Services, FCoE configuration and N_Port ID Virtualization (NPIV), Data Center Bridging, adapter specifications, and regulatory information. The front matter also includes a List of Figures and a List of Tables.
Preface This section provides information about this guide’s intended audience, content, document conventions, and laser safety information. Intended Audience This guide is intended for personnel responsible for installing and maintaining computer networking equipment. What Is in This Guide This guide describes the features, installation, and configuration of the Marvell FastLinQ 57840/57810/57800 and other 57xx and 57xxx Converged Network Adapters and Intelligent Ethernet Adapters.
Preface Documentation Conventions Text in blue font indicates a hyperlink (jump) to a figure, table, or section in this guide, and links to Web sites are shown in underlined blue. For example: Table 9-2 lists problems related to the user interface and remote agent. See "Installation Checklist" on page 6. For more information, visit www.marvell.com. Text in bold font indicates user interface elements such as menu items, buttons, check boxes, or column headings.
Preface Laser Safety Information Laser Safety Information This product may use Class 1 laser optical transceivers to communicate over the fiber optic conductors. The U.S. Department of Health and Human Services (DHHS) does not consider Class 1 lasers to be hazardous. The International Electrotechnical Commission (IEC) 825 Laser Safety Standard requires labeling in English, German, Finnish, and French stating that the product uses Class 1 lasers.
1 Functionality and Features This chapter covers the following for the adapters: Functional Description "Features" on page 2 "Supported Operating Environments" on page 6 "Network Link and Activity Indication" on page 7 Functional Description The Marvell 57xx and 57xxx adapters are a class of gigabit Ethernet (GbE) and 10GbE converged network interface controllers (C-NICs) that can simultaneously perform accelerated data networking and storage networking on a standard Ethernet network.
1–Functionality and Features Features Using the Marvell teaming software, you can split your network into virtual LANs (VLANs), as well as group multiple network adapters together into teams to provide network load balancing and fault tolerance functionality. For detailed information about teaming, see Chapter 2 Configuring Teaming in Windows Server and Chapter 12 Marvell Teaming Services. For a description of VLANs, see Chapter 3 Virtual LANs in Windows.
1–Functionality and Features Features Adaptive interrupts (see “Adaptive Interrupt Frequency” on page 6) Receive side scaling (RSS) Manageability: QLogic Control Suite (QCS) CLI diagnostic and configuration software (see “QLogic Control Suite CLI” on page 6) QConvergeConsole (QCC) GUI diagnostics and configuration software for Linux® and Windows® QCC PowerKit diagnostics and configuration software extensions to Microsoft® PowerShell® for Linux, VMware®, and Windows QCC vSphere®
1–Functionality and Features Features High-speed on-chip reduced instruction set computer (RISC) processor (see “ASIC with Embedded RISC Processor” on page 6) Integrated 96KB frame buffer memory Quality of service (QoS) Serial gigabit media independent interface (SGMII), gigabit media independent interface (GMII), and media independent interface (MII) management interface 256 unique MAC unicast addresses Support for multicast addresses through a 128-bit hashing hardware function
1–Functionality and Features Features Because iSCSI uses TCP as its sole transport protocol, it benefits from hardware acceleration of the TCP processing. However, iSCSI as a Layer 5 protocol has additional mechanisms beyond the TCP layer. iSCSI processing can also be offloaded, thereby reducing CPU utilization even further. The Marvell 57xx and 57xxx adapters target best-system performance, maintain system flexibility to changes, and support current and future OS convergence and integration.
1–Functionality and Features Supported Operating Environments Adaptive Interrupt Frequency The adapter driver intelligently adjusts host interrupt frequency based on traffic conditions to increase overall application throughput. When traffic is light, the adapter driver interrupts the host for each received packet, minimizing latency. When traffic is heavy, the adapter issues one host interrupt for multiple, back-to-back incoming packets, preserving host CPU cycles.
1–Functionality and Features Network Link and Activity Indication Network Link and Activity Indication For copper-wire Ethernet connections, the state of the network link and activity is indicated by the LEDs on the RJ45 connector, as described in Table 1-1. Table 1-1.
2 Configuring Teaming in Windows Server This chapter provides an overview of teaming configuration in a Microsoft Windows Server® system, including load balancing and fault tolerance. NOTE This chapter describes teaming for adapters in Windows Server systems. For more information on a similar technology on Linux operating systems (called "channel bonding"), refer to your operating system documentation.
2–Configuring Teaming in Windows Server Load Balancing and Fault Tolerance Types of Teams The available types of teams for the Windows family of operating systems are: Smart Load Balancing and Failover Link Aggregation (802.3ad) Generic Trunking (FEC/GEC)/802.3ad-Draft Static SLB (Auto-Fallback Disable) Smart Load Balancing and Failover Smart Load Balancing and Failover is the Broadcom® implementation of switch-independent NIC teaming load balancing based on IP flow.
2–Configuring Teaming in Windows Server Load Balancing and Fault Tolerance Link Aggregation (802.3ad) The Link Aggregation mode supports link aggregation and conforms to the IEEE 802.3ad (LACP) specification. Configuration software allows you to dynamically configure the adapters that you want to participate in a specific team. If the link partner is not correctly configured for 802.3ad link configuration, errors are detected and noted.
2–Configuring Teaming in Windows Server Load Balancing and Fault Tolerance SLB (Auto-Fallback Disable) The SLB (Auto-Fallback Disable) type of team is identical to the Smart Load Balancing and Failover type of team, with the following exception: When the standby member is active, if a primary member comes back on line, the team continues using the standby member, rather than switching back to the primary member.
2–Configuring Teaming in Windows Server Load Balancing and Fault Tolerance The Smart Load Balancing type of team works with all Ethernet switches without having to configure the switch ports to any special trunking mode. Only IP traffic is load-balanced in both inbound and outbound directions. IPX traffic is load-balanced in the outbound direction only. Other protocol packets are sent and received through one primary interface only. Failover for non-IP traffic is supported only for Dell network adapters.
3 Virtual LANs in Windows This chapter provides information about VLANs in Windows for teaming. VLAN Overview “Adding VLANs to Teams” on page 16 VLAN Overview Virtual LANs (VLANs) allow you to split your physical LAN into logical parts, to create logical segmentation of work groups, and to enforce security policies for each logical segment.
3–Virtual LANs in Windows VLAN Overview Although VLANs are commonly used to create individual broadcast domains and separate IP subnets, it is sometimes useful for a server to have a simultaneous presence on more than one VLAN. Marvell adapters support multiple VLANs on a per-port or per-team basis, allowing very flexible network configurations. Figure 3-1. Example of Servers Supporting Multiple VLANs with Tagging Figure 3-1 shows an example network that uses VLANs.
3–Virtual LANs in Windows VLAN Overview Table 3-1. Example VLAN Network Topology (Continued) Component Description Main Server A high-use server that needs to be accessed from all VLANs and IP subnets. The Main Server has a Marvell adapter installed. All three IP subnets are accessed through the single physical adapter interface. The server is attached to one of the switch ports, which is configured for VLANs #1, #2, and #3. Both the adapter and the connected switch port have tagging turned on.
3–Virtual LANs in Windows Adding VLANs to Teams Adding VLANs to Teams Each Marvell adapter team supports up to 64 VLANs (63 tagged and 1 untagged). Note that only Marvell adapters and Alteon® AceNIC adapters can be part of a team with VLANs. With multiple VLANs on an adapter, a server with a single adapter can have a logical presence on multiple IP subnets. With multiple VLANs in a team, a server can have a logical presence on multiple IP subnets and benefit from load balancing and failover.
4 Installing the Hardware This chapter applies to Marvell 57xx and 57xxx add-in network interface cards. Hardware installation covers the following: System Requirements “Safety Precautions” on page 19 “Preinstallation Checklist” on page 19 “Installation of the Add-In NIC” on page 20 NOTE Service Personnel: This product is intended only for installation in a Restricted Access Location (RAL).
4–Installing the Hardware System Requirements Operating System Requirements NOTE Because the Dell Update Packages Version xx.xx.xxx User’s Guide is not updated in the same cycle as this Ethernet adapter user’s guide, consider the operating systems listed in this section as the most current. This section describes the requirements for each supported OS. General The following host interface is required: PCI Express v1.
4–Installing the Hardware Safety Precautions Safety Precautions ! WARNING The adapter is being installed in a system that operates with voltages that can be lethal. Before you open the case of your system, observe the following precautions to protect yourself and to prevent damage to the system components: Remove any metallic objects or jewelry from your hands and wrists. Make sure to use only insulated or nonconducting tools.
4–Installing the Hardware Installation of the Add-In NIC Installation of the Add-In NIC The following instructions apply to installing the Marvell 57xx and 57xxx adapters (add-in NIC) in most systems. Refer to the manuals that were supplied with your system for details about performing these tasks on your specific system. Installing the Add-In NIC 1. Review Safety Precautions and Preinstallation Checklist.
4–Installing the Hardware Installation of the Add-In NIC Copper Wire To connect a copper wire: 1. Select an appropriate cable. Table 4-1 lists the copper cable requirements for connecting to 100 and 1000BASE-T and 10GBASE-T ports. Table 4-1.
4–Installing the Hardware Installation of the Add-In NIC Fiber Optic To connect a fiber optic cable: 1. Select an appropriate cable. Table 4-2 lists the fiber optic cable requirements for connecting to 1000 and 2500BASE-X ports. See also the tables in “Supported SFP+ Modules Per NIC” on page 253. Table 4-2.
5 Manageability Information about manageability includes: CIM “Host Bus Adapter API” on page 24 CIM The common information model (CIM) is an industry standard defined by the Distributed Management Task Force (DMTF). Microsoft implements CIM on Windows Server platforms. Marvell supports CIM on Windows Server and Linux platforms. The Marvell implementation of CIM provides various classes to provide information to users through CIM client applications.
5–Manageability Host Bus Adapter API SELECT * FROM __InstanceCreationEvent where TargetInstance ISA "QLGC_ActsAsSpare" SELECT * FROM __InstanceDeletionEvent where TargetInstance ISA "QLGC_ActsAsSpare" For detailed information about these events, see the CIM documentation: http://www.dmtf.org/sites/default/files/standards/documents/DSP0004V2.3_final.pdf Marvell also implements the SMI-S, which defines CIM management profiles for storage systems.
6 Boot Agent Driver Software This chapter covers how to set up MBA in both client and server environments: Overview "Setting Up MBA in a Client Environment" on page 26 "Setting Up MBA in a Linux Server Environment" on page 32 Overview Marvell 57xx and 57xxx adapters support Preboot Execution Environment (PXE), remote program load (RPL), iSCSI, and bootstrap protocol (BOOTP).
6–Boot Agent Driver Software Setting Up MBA in a Client Environment Setting Up MBA in a Client Environment Setting up MBA in a client environment involves the following steps: 1. Configuring the MBA Driver. 2. Setting Up the BIOS for the boot order. Configuring the MBA Driver This section pertains to configuring the MBA driver (located in the adapter firmware) on add-in NIC models of the Marvell network adapter.
6–Boot Agent Driver Software Setting Up MBA in a Client Environment Using Comprehensive Configuration Management To use CCM to configure the MBA driver: 1. Restart the system. 2. Press the CTRL+ S keys within four seconds after you are prompted to do so. A list of adapters appears. a. Select the adapter to configure, and then press the ENTER key. The Main Menu appears. b. Select MBA Configuration to view the MBA Configuration Menu, as shown in Figure 6-1. Figure 6-1.
6–Boot Agent Driver Software Setting Up MBA in a Client Environment 3. To access the Boot Protocol item, press the UP ARROW and DOWN ARROW keys. If other boot protocols besides Preboot Execution Environment (PXE) are available, press RIGHT ARROW or LEFT ARROW to select the boot protocol of choice: FCoE or iSCSI. NOTE For iSCSI and FCoE boot-capable LOMs, set the boot protocol through the BIOS. See your system documentation for more information.
6–Boot Agent Driver Software Setting Up MBA in a Client Environment 3. Select the device on which you want to change MBA settings (see Figure 6-3). Figure 6-3. Device Settings Doc No. BC0054508-00 Rev.
6–Boot Agent Driver Software Setting Up MBA in a Client Environment 4. On the Main Configuration Page, select NIC Configuration (see Figure 6-4). Figure 6-4. Main Configuration Page Doc No. BC0054508-00 Rev.
6–Boot Agent Driver Software Setting Up MBA in a Client Environment 5. In the NIC Configuration page (see Figure 6-5), use the Legacy Boot Protocol drop-down menu to select the boot protocol of choice, if boot protocols other than Preboot Execution Environment (PXE) are available. If available, other boot protocols include iSCSI and FCoE. The 57800’s fixed speed, 1GbE ports support only PXE and iSCSI remote boot. Figure 6-5.
6–Boot Agent Driver Software Setting Up MBA in a Linux Server Environment Setting Up the BIOS To boot from the network with the MBA, make the MBA enabled adapter the first bootable device under the BIOS. This procedure depends on the system BIOS implementation. Refer to the user manual for the system for instructions. Setting Up MBA in a Linux Server Environment The Red Hat Enterprise Linux distribution has PXE Server support.
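To complement the client-side MBA setup, the PXE server needs a DHCP scope that points booting clients at a TFTP server and a boot file. The following is a minimal sketch of an ISC dhcpd configuration for a Red Hat-based PXE server; the subnet, addresses, and boot file name are illustrative assumptions, not values required by the adapter.

# /etc/dhcp/dhcpd.conf - minimal PXE scope (illustrative values)
subnet 192.168.10.0 netmask 255.255.255.0 {
  range 192.168.10.100 192.168.10.150;   # addresses handed to PXE clients
  option routers 192.168.10.1;
  next-server 192.168.10.5;              # TFTP server holding the boot files
  filename "pxelinux.0";                 # boot loader served over TFTP
}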
7 Linux Driver Software Information about the Linux driver software includes: Introduction “Limitations” on page 34 “Packaging” on page 35 “Installing Linux Driver Software” on page 36 “Unloading or Removing the Linux Driver” on page 42 “Patching PCI Files (Optional)” on page 43 “Network Installations” on page 44 “Setting Values for Optional Properties” on page 44 “Driver Defaults” on page 51 “Driver Messages” on page 52 “Teaming with Channel Bonding” on page 57
7–Linux Driver Software Limitations Table 7-1. Marvell 57xx and 57xxx Linux Drivers (Continued) Linux Driver Description bnx2x Linux driver for the 57xxx 1Gb/10Gb network adapters. This driver directly controls the hardware and is responsible for sending and receiving Ethernet packets on behalf of the Linux host networking stack. The driver also receives and processes device interrupts, both on behalf of itself (for Layer 2 networking) and on behalf of the bnx2fc (FCoE) and C-NIC drivers.
7–Linux Driver Software Packaging bnx2x Driver Limitations The current version of the driver has been tested on 2.6.x kernels, starting from the 2.6.9 kernel. The bnx2x driver may not compile on kernels older than 2.6.9. Testing is concentrated on i386 and x86_64 architectures. Only limited testing has been done on some other architectures. Minor changes to some source files and the makefile may be needed on some kernels. bnx2i Driver Limitations The current version of the driver has been tested on 2.6.
7–Linux Driver Software Installing Linux Driver Software The following is a list of included files: netxtreme2-version.src.rpm: RPM package with 57xx and 57xxx bnx2, bnx2x, cnic, bnx2fc, bnx2i, libfc, and libfcoe driver source. netxtreme2-version.tar.gz: TAR zipped package with 57xx and 57xxx bnx2, bnx2x, cnic, bnx2fc, bnx2i, libfc, and libfcoe driver source. iscsiuio-version.tar.gz: iSCSI user space management tool binary.
7–Linux Driver Software Installing Linux Driver Software NOTE For RHEL 8, install the kernel-rpm-macros and kernel-abi-whitelists package before building the binary RPM. For RHEL: cd ~/rpmbuild rpmbuild -bb SPECS/netxtreme2.spec For SLES: cd /usr/src/packages rpmbuild -bb SPECS/netxtreme2.spec 3. Install the newly compiled RPM: rpm -ivh RPMS//netxtreme2-..rpm The --force option may be needed on some Linux distributions if conflicts are reported. 4.
7–Linux Driver Software Installing Linux Driver Software 7. For FCoE offload, after rebooting, create configuration files for all FCoE ethX interfaces: cd /etc/fcoe cp cfg-ethx cfg- NOTE Note that your distribution might have a different naming scheme for Ethernet devices (that is, pXpX or emX instead of ethX). 8. For FCoE offload or iSCSI-offload-TLV, modify /etc/fcoe/cfg- by setting DCB_REQUIRED=yes to DCB_REQUIRED=no. 9. Turn on all ethX interfaces.
7–Linux Driver Software Installing Linux Driver Software }; }; 13. For FCoE offload and iSCSI-offload-TLV, restart lldpad service to apply new settings. service lldpad restart 14. For FCoE offload, restart FCoE service to apply new settings. service fcoe restart Installing the KMP Package NOTE The examples in this procedure refer to the bnx2x driver, but also apply to the bnx2fc and bnx2i drivers. To install the KMP package: 1. Install the KMP package: rpm -ivh rmmod bnx2x 2.
7–Linux Driver Software Installing Linux Driver Software 3. Test the driver by loading it (first unload the existing driver, if necessary): rmmod bnx2x (or bnx2fc or bnx2i) insmod bnx2x/src/bnx2x.ko (or bnx2fc/src/bnx2fc.ko, or bnx2i/src/bnx2i.ko) 4. For iSCSI offload and FCoE offload, load the C-NIC driver (if applicable): insmod cnic.ko 5. Install the driver and man page: make install NOTE See the RPM instructions in the preceding for the location of the installed driver. 6.
7–Linux Driver Software Installing Linux Driver Software Verify that your network adapter supports iSCSI by checking the message log. If the message bnx2i: dev eth0 does not support iSCSI appears in the message log after loading the bnx2i driver, iSCSI is not supported. This message may not appear until the interface is opened, as with: ifconfig eth0 up 4. To use iSCSI, refer to “Load and Run Necessary iSCSI Software Components” on page 42 to load the necessary software components.
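A quick way to perform this check is to search the kernel log after bringing the interface up; the commands below are standard Linux utilities, and the interface name eth0 is only an example.

# Open the interface so the bnx2i driver probes it
ifconfig eth0 up
# Look for bnx2i messages; "does not support iSCSI" means offload is unavailable on this port
dmesg | grep -i bnx2i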
7–Linux Driver Software Load and Run Necessary iSCSI Software Components Load and Run Necessary iSCSI Software Components The Marvell iSCSI Offload software suite consists of three kernel modules and a user daemon. Required software components can be loaded either manually or through system services. 1. Unload the existing driver, if necessary. To do so manually, issue the following command: rmmod bnx2i 2. Load the iSCSI driver. To do so manually, issue one of the following commands: insmod bnx2i.
7–Linux Driver Software Patching PCI Files (Optional) If the driver was installed using RPM, issue the following command to remove it: rpm -e netxtreme2 Removing the Driver from a TAR Installation NOTE The examples used in this procedure refer to the bnx2x driver, but also apply to the bnx2fc and bnx2i drivers. If the driver was installed using make install from the TAR file, manually delete the bnx2x.ko driver file from the operating system.
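To locate the installed module file before deleting it, generic commands such as the following can be used (this is a sketch, not a Marvell-specific step; the same pattern applies to bnx2fc.ko and bnx2i.ko):

# Find the bnx2x module installed for the running kernel
find /lib/modules/$(uname -r) -name "bnx2x.ko*"
# After deleting the file, refresh the module dependency map
depmod -a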
7–Linux Driver Software Network Installations Next, back up the old files and rename the new files for use. cp /usr/share/hwdata/pci.ids /usr/share/hwdata/old.pci.ids cp /usr/share/hwdata/pci.ids.new /usr/share/hwdata/pci.ids cp /usr/share/hwdata/pcitable /usr/share/hwdata/old.pcitable cp /usr/share/hwdata/pcitable.
7–Linux Driver Software Setting Values for Optional Properties bnx2x Driver Parameters Parameters for the bnx2x driver are described in the following sections. int_mode Use the optional parameter int_mode to force using an interrupt mode other than MSI-X. By default, the driver tries to enable MSI-X if it is supported by the kernel. If MSI-X is not attainable, the driver tries to enable MSI if it is supported by the kernel. If MSI is not attainable, the driver uses the legacy INTx mode.
7–Linux Driver Software Setting Values for Optional Properties or modprobe bnx2x dropless_fc=1 autogreen The autogreen parameter forces the specific AutoGrEEEN behavior. AutoGrEEEn is a proprietary, pre-IEEE standard Energy Efficient Ethernet (EEE) mode supported by some 1000BASE-T and 10GBASE-T RJ45 interfaced switches. By default, the driver uses the NVRAM configuration settings per port.
7–Linux Driver Software Setting Values for Optional Properties tx_switching The tx_switching parameter sets the L2 Ethernet send direction to test each transmitted packet. If the packet is intended for the transmitting NIC port, it is hair-pin looped back by the adapter. This parameter is relevant only in multifunction (NPAR) mode, especially in virtualized environments.
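The bnx2x parameters described in this section can also be made persistent across reboots through a modprobe configuration file. The following is a minimal sketch assuming a modprobe.d-based distribution; the parameter values shown are illustrative only.

# /etc/modprobe.d/bnx2x.conf - example persistent settings (illustrative values)
options bnx2x int_mode=1 dropless_fc=1
# Reload the driver (or reboot) so the new options take effect:
#   rmmod bnx2x
#   modprobe bnx2x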
7–Linux Driver Software Setting Values for Optional Properties bnx2i Driver Parameters Optional parameters en_tcp_dack, error_mask1, and error_mask2 can be supplied as command line arguments to the insmod or modprobe command for bnx2i. error_mask1 and error_mask2 Use the error_mask (Configure firmware iSCSI error mask #) parameters to configure a specific iSCSI protocol violation to be treated either as a warning or a fatal error.
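These masks are passed on the bnx2i module command line in the same way as the other optional parameters. The following is a hedged sketch; the hexadecimal mask values are placeholders, not recommended settings.

insmod bnx2i.ko error_mask1=0x10 error_mask2=0x20
or
modprobe bnx2i error_mask1=0x10 error_mask2=0x20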
7–Linux Driver Software Setting Values for Optional Properties sq_size Use the sq_size parameter to choose the send queue size for offloaded connections; the SQ size determines the maximum number of SCSI commands that can be queued. SQ size also has a bearing on the quantity of connections that can be offloaded; as QP size increases, the quantity of connections supported decreases. With the default values, the BCM5708 adapters can offload 28 connections.
7–Linux Driver Software Setting Values for Optional Properties ooo_enable The ooo_enable (enable TCP out-of-order) parameter enables and disables the TCP out-of-order RX handling feature on offloaded iSCSI connections. Default: TCP out-of-order feature is ENABLED. For example: insmod bnx2i.ko ooo_enable=1 or modprobe bnx2i ooo_enable=1 bnx2fc Driver Parameter You can supply the optional parameter debug_logging as a command line argument to the insmod or modprobe command for bnx2fc.
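A typical invocation follows the same pattern as the other module parameters; the bit mask shown below is an illustrative value for verbose logging, not a required setting.

insmod bnx2fc.ko debug_logging=0xff
or
modprobe bnx2fc debug_logging=0xff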
7–Linux Driver Software Driver Defaults cnic_dump_kwqe_enable The cnic_dump_kwe_en parameter enables and disables single work-queue element message (kwqe) logging. By default, this parameter is set to 1 (disabled).
7–Linux Driver Software Driver Messages bnx2x Driver Defaults Speed: Autonegotiation with all speeds advertised Flow control: Autonegotiation with RX and TX advertised MTU: 1500 (range is 46–9600) RX Ring Size: 4078 (range is 0–4078) TX Ring Size: 4078 (range is (MAX_SKB_FRAGS+4)–4078). MAX_SKB_FRAGS varies on different kernels and different architectures. On a 2.6 kernel for x86, MAX_SKB_FRAGS is 18.
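The MTU, ring-size, and flow-control defaults listed above can be inspected and adjusted at run time with ethtool; the interface name and sizes below are examples.

# Show current RX/TX ring sizes and their limits
ethtool -g eth0
# Resize the rings within the ranges listed above (example values)
ethtool -G eth0 rx 4078 tx 4078
# Show the current pause (flow control) settings
ethtool -a eth0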
7–Linux Driver Software Driver Messages NIC Detected eth#: QLogic 57xx and 57xxx xGb (B1) PCI-E x8 found at mem f6000000, IRQ 16, node addr 0010180476ae cnic: Added CNIC device: eth0 Link Up and Speed Indication bnx2x: eth# NIC Link is Up, 10000 Mbps full duplex Link Down Indication bnx2x: eth# NIC Link is Down MSI-X Enabled Successfully bnx2x: eth0: using MSI-X bnx2i Driver Messages The bnx2i driver messages include the following. BNX2I Driver Sign-on QLogic 57xx and 57xxx iSCSI Driver bnx2i v2.1.
7–Linux Driver Software Driver Messages Exceeds Maximum Allowed iSCSI Connection Offload Limit bnx2i: alloc_ep: unable to allocate iscsi cid bnx2i: unable to allocate iSCSI context resources Network Route to Target Node and Transport Name Binding Are Two Different Devices bnx2i: conn bind, ep=0x...
7–Linux Driver Software Driver Messages bnx2i: iscsi_error - F-bit not set bnx2i: iscsi_error - invalid TTT bnx2i: iscsi_error - invalid DataSN bnx2i: iscsi_error - burst len violation bnx2i: iscsi_error - buf offset violation bnx2i: iscsi_error - invalid LUN field bnx2i: iscsi_error - invalid R2TSN field bnx2i: iscsi_error - invalid cmd len1 bnx2i: iscsi_error - invalid cmd len2 bnx2i: iscsi_error - pend r2t exceeds MaxOutstandingR2T value bnx2i: iscsi_error - TTT is rsvd bnx2i: iscsi_error - MBL violatio
7–Linux Driver Software Driver Messages [20]: 2a 0 0 2 ffffffc8 14 0 0 [28]: 40 0 0 0 0 0 0 0 Open-iSCSI Daemon Handing Over Session to Driver bnx2i: conn update - MBL 0x800 FBL 0x800MRDSL_I 0x800 MRDSL_T 0x2000 bnx2fc Driver Messages The bnx2fc driver messages include the following. BNX2FC Driver Signon QLogic FCoE Driver bnx2fc v0.8.7 (Mar 25, 2011) Driver Completes Handshake with FCoE Offload Enabled C-NIC Device bnx2fc [04:00.
7–Linux Driver Software Teaming with Channel Bonding Session Upload Failures bnx2fc: ERROR!! destroy timed out bnx2fc: Disable request timed out.
7–Linux Driver Software Linux iSCSI Offload Linux iSCSI Offload iSCSI offload information for Linux includes the following: Open iSCSI User Applications User Application iscsiuio Bind iSCSI Target to Marvell iSCSI Transport Name VLAN Configuration for iSCSI Offload (Linux) Making Connections to iSCSI Targets Maximum Offload iSCSI Connections Linux iSCSI Offload FAQ Open iSCSI User Applications Install and run the inbox Open-iSCSI initiator programs from the DVD.
7–Linux Driver Software Linux iSCSI Offload Bind iSCSI Target to Marvell iSCSI Transport Name By default, the Open-iSCSI daemon connects to discovered targets using software initiator (transport name = 'tcp'). Users who want to offload iSCSI connection onto C-NIC device should explicitly change transport binding of the iSCSI iface. Perform the binding change using the iscsiadm CLI utility as follows, iscsiadm -m iface -I -n iface.
7–Linux Driver Software Linux iSCSI Offload Iface.port = 0 #END Record NOTE Although not strictly required, Marvell recommends configuring the same VLAN ID on the iface.iface_num field for iface file identification purposes. Making Connections to iSCSI Targets Refer to Open-iSCSI documentation for a comprehensive list of iscsiadm commands. The following is a sample list of commands to discovery targets and to create iSCSI connections to a target.
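The sample command list referenced above generally follows the pattern below; the portal address, iface name, and target IQN are illustrative values.

# Discover targets on the portal, binding them to the offload iface
iscsiadm -m discovery -t sendtargets -p 192.168.20.10:3260 -I bnx2i.00:10:18:04:76:ae
# Log in to a discovered target
iscsiadm -m node -T iqn.2002-03.com.example:target1 -p 192.168.20.10:3260 -I bnx2i.00:10:18:04:76:ae --login
# Log out when finished
iscsiadm -m node -T iqn.2002-03.com.example:target1 -p 192.168.20.10:3260 --logout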
7–Linux Driver Software Linux iSCSI Offload Linux iSCSI Offload FAQ Not all Marvell 57xx and 57xxx adapters support iSCSI offload. The iSCSI session will not recover after a hot remove and hot plug. For Microsoft Multipath I/O (MPIO) to work properly, you must enable iSCSI noopout on each iSCSI session. For procedures on setting up noop_out_interval and noop_out_timeout values, refer to Open-iSCSI documentation.
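The noopout values mentioned above are normally set in the Open-iSCSI daemon configuration. The following is a hedged sketch; the 5-second values are examples, not Marvell recommendations.

# /etc/iscsi/iscsid.conf - example keep-alive (noopout) settings
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5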
8 VMware Driver Software This chapter covers the following for the VMware driver software: Introduction “Packaging” on page 63 “Download, Install, and Update Drivers” on page 64 “FCoE Support” on page 83 “iSCSI Support” on page 86 NOTE Information in this chapter applies primarily to the currently supported VMware versions: ESXi 6.7 and ESXi 7.0. ESXi 6.7 uses native drivers for all protocols.
8–VMware Driver Software Packaging Table 8-1. Marvell 57xx and 57xxx VMware Drivers (Continued) VMware Driver Description bnx2x VMware legacy driver for the 57xxx 1/10Gb network adapters. This driver directly controls the hardware and is responsible for sending and receiving Ethernet packets on behalf of the VMware host networking stack.
8–VMware Driver Software Download, Install, and Update Drivers The VMware driver is released in the packaging formats shown in Table 8-2. Table 8-2. VMware Driver Packaging Format Drivers Compressed ZIP QLG-qcnic-6.7-offline_bundle-.zip (native ESXi 6.7) Compressed ZIP QLG-qcnic-7.0-offline_bundle-.zip (native ESXi 7.0) Download, Install, and Update Drivers To download, install, or update the VMware ESXi drivers for 57xx and 57xxx 10GbE network adapters, see http://www.vmware.
8–VMware Driver Software Driver Parameters Marvell recommends setting the disable_msi parameter to 1 to always disable MSI/MSI-X on all QLogic adapters in the system. Issue one of the following commands: insmod bnx2.ko disable_msi=1 modprobe bnx2 disable_msi=1 This parameter can also be set in the modprobe.conf file. See the man page for more information. bnx2x Driver Parameters You can supply several optional parameters as a command line argument to the vmkload_mod command.
8–VMware Driver Software Driver Parameters dropless_fc The dropless_fc parameter is set to 1 (by default) to enable a complementary flow control mechanism on 57xxx adapters. The normal flow control mechanism is to send pause frames when the on-chip buffer (BRB) is reaching a specific level of occupancy, which is a performance-targeted flow control mechanism.
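On ESXi, module parameters such as dropless_fc are commonly set persistently with esxcfg-module or esxcli rather than on the vmkload_mod command line. A brief sketch of both methods follows; the value shown simply restates the default.

# Set the parameter with esxcfg-module
esxcfg-module -s "dropless_fc=1" bnx2x
# Or with esxcli, then verify
esxcli system module parameters set -m bnx2x -p "dropless_fc=1"
esxcli system module parameters list -m bnx2x | grep dropless_fc
# Reboot the host (or reload the module) for the setting to take effect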
8–VMware Driver Software Driver Parameters pri_map On earlier versions of Linux that do not support tc-mqprio, use the optional parameter pri_map to map the VLAN PRI value or the IP DSCP value to a different or the same class of service (CoS) in the hardware. This 32-bit parameter is evaluated by the driver as eight values of 4 bits each. Each nibble sets the required hardware queue number for that priority.
8–VMware Driver Software Driver Parameters use_random_vf_mac When this parameter is enabled (set to 1), all created VFs will have a random forced MAC. By default, this parameter is disabled (set to 0). debug The debug parameter sets the default message level (msglevel) on all adapters in the system at one time. To set the message level for a specific adapter, issue the ethtool -s command. RSS Use the optional RSS parameter to specify the quantity of receive side scaling queues.
8–VMware Driver Software Driver Parameters enable_live_grcdump Use the enable_live_grcdump parameter to indicate which firmware dump is collected for troubleshooting. Valid values are: Value Description 0x0 Disable live global register controller (GRC) dump 0x1 Enable parity/live GRC dump (default) 0x2 Enable transmit timeout GRC dump 0x4 Enable statistics timeout GRC dump The default setting is appropriate for most situations. Do not change the default value unless requested by the support team.
8–VMware Driver Software Driver Parameters bnx2i Driver Parameters Optional parameters en_tcp_dack, error_mask1, and error_mask2 can be supplied as command line arguments to the insmod or modprobe command for bnx2i. error_mask1 and error_mask2 Use the error_mask (Configure firmware iSCSI error mask #) parameters to configure a specific iSCSI protocol violation to be treated either as a warning or a fatal error. All fatal iSCSI protocol violations will result in session recovery (ERL 0).
8–VMware Driver Software Driver Parameters sq_size Use the sq_size parameter to choose the send queue size for offloaded connections; the SQ size determines the maximum number of SCSI commands that can be queued. SQ size also has a bearing on the quantity of connections that can be offloaded; as QP size increases, the quantity of connections supported decreases. With the default values, the BCM5708 adapters can offload 28 connections.
8–VMware Driver Software Driver Parameters ooo_enable The ooo_enable (enable TCP out-of-order) parameter enables and disables the TCP out-of-order RX handling feature on offloaded iSCSI connections. Default: TCP out-of-order feature is ENABLED. For example: insmod bnx2i.ko ooo_enable=1 or modprobe bnx2i ooo_enable=1 bnx2fc Driver Parameter You can supply the optional parameter debug_logging as a command line argument to the insmod or modprobe command for bnx2fc.
8–VMware Driver Software Driver Parameters cnic_dump_kwqe_en The cnic_dump_kwe_en parameter enables and disables single work-queue element message (kwqe) logging. By default, this parameter is set to 1 (disabled).
8–VMware Driver Software Driver Parameters 0x00100000 /* debug vlan */ 0x00200000 /* state machine */ 0x00400000 /* nvm access */ 0x00800000 /* SRIOV */ 0x01000000 /* mgmt interface */ 0x02000000 /* CNIC */ 0x04000000 /* DCB */ 0xFFFFFFFF /* all enabled */ enable_fwdump The enable_fwdump parameter enables and disables the firmware dump file. Set to 1 to enable the firmware dump file. Set to 0 (default) to disable the firmware dump file.
8–VMware Driver Software Driver Parameters offload_flags This parameter specifies the offload flags: Value Flag 1 CSO 2 TSO 4 VXLAN offload 8 Geneve offload 15 Default. All tunneled offloads (CSO, TSO, VXLAN, Geneve) are enabled. rx_filters The rx_filters parameter defines the number of receive filters per NetQueue. Set to 1 to use the default number of receive filters based on availability. Set to 0 to disable use of multiple receive filters.
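The qfle3 native-driver parameters in this section are set the same way. The following hedged example enables all tunneled offloads and the default receive-filter behavior; the values are illustrative, not recommendations.

# Set qfle3 parameters (illustrative values) and confirm them
esxcli system module parameters set -m qfle3 -p "offload_flags=15 rx_filters=1"
esxcli system module parameters list -m qfle3
# Reboot the host for the new values to be applied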
8–VMware Driver Software Driver Parameters DRSS The DRSS parameter sets the number of RSS queues associated with the default queue. The minimum number of RSS queues is 2; the maximum number is 4. To disable this parameter, set it to 0 (default). This parameter is used for VXLAN gateways, where multiple unknown MAC addresses may be received by the default queue. rss_engine_nr The rss_engine_nr parameter sets the number of RSS engines. Valid values are 0 (Disabled) or 1–4 (fixed number of RSS engines).
8–VMware Driver Software Driver Parameters psod_on_error The psod_on_error parameter indicates if the host panics when the interface detects an error. The default setting is 0 (the host does not panic). Set this parameter to 1 for the host to panic when the interface detects an error.
8–VMware Driver Software Driver Parameters en_hba_poll The en_hba_poll parameter sets the adapter poll timer. The default value is 0. en_tcp_dack The en_tcp_dack parameter enables TCP delayed ACK. Enabling TCP delayed ACK helps improve network performance by combining several ACKs in a single response. The default value is 1 (enabled). Certain iSCSI targets do not handle ACK piggybacking. If this parameter is enabled on these types of targets, the host cannot login to the target.
8–VMware Driver Software Driver Parameters qfle3i_debug_level The qfle3i_debug_level parameter is a bit mask that enables and disables debug logs. The default is 0 (disabled).
8–VMware Driver Software Driver Parameters Note that Marvell validation is limited to a power of 2; for example, 32, 64, and 128. tcp_buf_size The tcp_buf_size parameter sets the TCP send and receive buffer size. The default is 64 × 1,024. time_stamps The time_stamps parameter enables and disables TCP time stamps. Set to 0 to disable time stamps. Set to 1 (default) to enable time stamps.
8–VMware Driver Software Driver Parameters qfle3f_r_a_tov When the qfle3f_enable_r_a_tov parameter is set to 1, the qfle3f_r_a_tov parameter sets the value of a user-defined R_A_TOV. The default value is 10. qfle3f_autodiscovery The qfle3f_autodiscovery parameter controls auto-FCoE discovery during system boot. Set to 0 (default) to disable auto-FCoE discovery. Set to 1 to enable auto-FCoE discovery. qfle3f_create vmkMgmt_Entry The qfle3f_createvmkMgmt_Entry parameter creates the vmkMgmt interface.
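The current qfle3f settings, including the parameters above, can be reviewed and changed with the standard ESXi module parameter commands; the value in the second command is only an example.

# List all qfle3f parameters and their current values
esxcli system module parameters list -m qfle3f
# Example: enable FCoE auto-discovery at boot
esxcli system module parameters set -m qfle3f -p "qfle3f_autodiscovery=1"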
8–VMware Driver Software Driver Parameters Table 8-3. bnx2 Driver Defaults (Continued) Parameter Default Coalesce Rx frames IRQ 2 (range 0–255) Coalesce Tx μsecs 80 (range 0–1023) Coalesce Tx μsecs IRQ 18 (range 0–1023) Coalesce Tx frames 20 (range 0–255) Coalesce Tx frames IRQ 2 (range 0–255) Coalesce stats μsecs 999936 (approximately 1 second) (range 0–16776960 in 256 increments) MSI/MSI-X Enabled (if supported by 2.6/3.x kernel and interrupt test passes) TSO Enabled on 2.6/3.
8–VMware Driver Software FCoE Support Table 8-4. qfle3 Driver Defaults (Continued) Parameter Default Number of Tx BD Buffers 4,096 (16,384 maximum) Number of RSS Queues for Default Queue 0 (Disabled) (2 minimum; 4 maximum) Number of RSS Engines 4 (range 0–4) VXLAN Filters Disabled Pause on Exhausted Host Ring Disabled Number of VFs per PCI Function 0 (Disabled) (range 1–64) Unloading and Removing Drivers The following sections describe how to remove the Ethernet drivers.
8–VMware Driver Software FCoE Support The bnx2fc Marvell VMware FCoE driver is a kernel mode driver used to provide a translation layer between the VMware SCSI stack and the Marvell FCoE firmware and hardware. In addition, the driver interfaces with the networking layer to transmit and receive encapsulated FCoE frames on behalf of the Open-FCoE libfc and libfcoe for FIP and device discovery. Supported Distributions The FCoE and DCB feature set is supported on VMware ESXi 6.0 and later.
8–VMware Driver Software FCoE Support VN2VN Mode Enabled: false The output of this command should show a valid FCoE forwarder (FCF) MAC, VNPort MAC, Priority, and VLAN ID for the fabric that is connected to the C-NIC. You can also issue the following command to verify that the interface is working properly: # esxcfg-scsidevs -a Output example: vmhba34 bnx2fc link-up fcoe.1000:2000 vmhba35 bnx2fc link-up fcoe.
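In addition to esxcfg-scsidevs, the esxcli FCoE namespace can confirm that the adapter and its FCF are visible; these are standard ESXi commands, and the output varies with the fabric configuration.

# List NICs that are FCoE capable or activated
esxcli fcoe nic list
# List activated FCoE adapters (vmhba instances)
esxcli fcoe adapter list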
8–VMware Driver Software iSCSI Support Installation Check To verify the correct installation of the driver and to ensure that the host port is seen by the switch, follow these steps. To verify the correct installation of the driver: 1. Verify that the host port shows up in the switch fabric login (FLOGI) database by issuing the one of the following commands: show flogi database (for a Cisco FCF) fcoe -loginshow (for a Brocade FCF) 2.
8–VMware Driver Software iSCSI Support 5. (Optional) On the VM Network Properties, General page, assign a VLAN number in the VLAN ID box. Figure 8-1 and Figure 8-2 show examples. Figure 8-1. VM Network Properties: Example 1 Doc No. BC0054508-00 Rev.
8–VMware Driver Software iSCSI Support Figure 8-2. VM Network Properties: Example 2 6. Configure the VLAN on VMkernel. Doc No. BC0054508-00 Rev.
9 Windows Driver Software Windows driver software information includes the following: Supported Drivers “Installing the Driver Software” on page 90 “Modifying the Driver Software” on page 94 “Repairing or Reinstalling the Driver Software” on page 94 “Removing the Device Drivers” on page 95 “Viewing or Changing the Properties of the Adapter” on page 95 “Setting Power Management Options” on page 95 “Configuring the Communication Protocol to Use with QCC GUI, QCC PowerKit, and Q
9–Windows Driver Software Installing the Driver Software Installing the Driver Software NOTE These instructions are based on the assumption that your Marvell 57xx and 57xxx adapters were not factory installed. If your controller was installed at the factory, the driver software has been installed for you.
9–Windows Driver Software Installing the Driver Software NX RPC Remote Agent installs the RPC remote agent software. iSCSI Crash Dump Driver installs the driver needed for the iSCSI Crash Dump utility. FCoE Crash Dump Driver installs the driver needed for the FCoE Crash Dump utility. FastLinQ HBA Device Mgmt Agent installs the agent for device management. To install the Marvell 57xx and 57xxx drivers and management applications: 1. When the Found New Hardware Wizard appears, click Cancel.
9–Windows Driver Software Installing the Driver Software Click No to install WMI. 5. On the InstallShield Welcome window, click Next to continue. 6. After you review the license agreement, click I accept the terms in the license agreement, and then click Next to continue. 7. Select the features you want to install. 8. Click Install. 9. Click Finish to close the wizard. 10. The installer determines if a system restart is necessary. Follow the on-screen instructions.
9–Windows Driver Software Installing the Driver Software setup /s /v/qn To perform a silent reinstall of the same installer: Issue the following command: setup /s /v"/qn REINSTALL=ALL" NOTE The REINSTALL switch should only be used if the same installer is already installed on the system. If upgrading an earlier version of the installer, use setup /s /v/qn as listed in the preceding. To perform a silent install by feature: Use the ADDSOURCE to include any of the following features.
9–Windows Driver Software Modifying the Driver Software Modifying the Driver Software To modify the driver software: 1. In the Control Panel, double-click Add or Remove Programs. 2. Click QLogic Drivers and Management Applications, and then click Change. 3. Click Next to continue. 4. Click Modify, Add, or Remove to change program features. NOTE This option does not install drivers for new adapters.
9–Windows Driver Software Removing the Device Drivers Removing the Device Drivers When removing the device drivers, any management application that is installed is also removed. To remove the device drivers: 1. In the Control Panel, double-click Add or Remove Programs. 2. Click QLogic Drivers and Management Applications, and then click Remove. Follow the on-screen prompts. 3. Reboot your system to completely remove the drivers.
9–Windows Driver Software Setting Power Management Options To have the controller stay on at all times: On the adapter properties’ Power Management page, clear the Allow the computer to turn off the device to save power check box, as shown in Figure 9-2. NOTE Power management options are not available on blade servers. Figure 9-2. Device Power Management Options NOTE The Power Management page is available only for servers that support power management.
9–Windows Driver Software Configuring the Communication Protocol to Use with QCC GUI, QCC PowerKit, and QCS CLI Configuring the Communication Protocol to Use with QCC GUI, QCC PowerKit, and QCS CLI There are two main components of the QCC GUI, QCC PowerKit, and QCS CLI management applications: the RPC agent and the client software. An RPC agent is installed on a server, or managed host, that contains one or more Converged Network Adapters.
10 Citrix XenServer Driver Software This chapter describes how to install the Citrix driver on a XenServer operating system using the driver update disk (DUD). NOTE The procedures in this section apply only to Citrix XenServer 8.0 and later distributions. These procedures use both the DUD and the OS installation disk. To install the Citrix hypervisor driver: 1. Insert the XenServer installation CD and begin the installation in shell mode (see Figure 10-1). Figure 10-1. Starting in Shell Mode 2.
10–Citrix XenServer Driver Software 4. Insert the DUD CD/ISO. The GUI Welcome screen appears (see Figure 10-3). Figure 10-3. Loading the Device Driver Press F9 to load the driver. The Load Repository window appears (see Figure 10-4.) Figure 10-4. Locating the Device Driver 5. Click Use. The Drivers Loaded window appears (see Figure 10-5). Figure 10-5. Driver Installed Successfully Doc No. BC0054508-00 Rev.
10–Citrix XenServer Driver Software 6. Press ALT+F2 to return to shell mode, and then load the out-of-box (OOB) driver (see Figure 10-6). Figure 10-6. Loading the OOB Driver 7. Press ALT+F1 to return to the GUI installer, and then continue the installation. Do not remove the driver CD/ISO. 8. When prompted, skip the supplemental package installation. 9. When prompted, reboot the system after removing the OS installer CD and the DUD. The hypervisor should boot with the new driver installed. Doc No.
11 iSCSI Protocol This chapter provides the following information about the iSCSI protocol: iSCSI Boot "iSCSI Crash Dump" on page 128 "iSCSI Offload in Windows Server" on page 128 iSCSI Boot Marvell 57xx and 57xxx gigabit Ethernet (GbE) adapters support iSCSI boot, which enables network boot of operating systems on diskless systems. iSCSI boot allows a Windows, Linux, or VMware operating system to boot from an iSCSI target machine located remotely over a standard IP network.
11–iSCSI Protocol iSCSI Boot Supported Operating Systems for iSCSI Boot The Marvell 57xx and 57xxx gigabit Ethernet adapters support iSCSI boot on the following operating systems: Windows Server 2012 and later 32-bit and 64-bit (supports offload and non-offload paths) Linux RHEL 6 and later and SLES 11.
11–iSCSI Protocol iSCSI Boot Initiator IQN CHAP ID and secret Configuring iSCSI Boot Parameters To configure the iSCSI boot parameters: 1. In the NIC Configuration page, in the Legacy Boot Protocol drop-down menu, select iSCSI (see Figure 11-1). Figure 11-1. Legacy Boot Protocol Selection As shown in Figure 11-1, UEFI is not supported for the iSCSI protocol for the 57xx and 57xxx adapters. Doc No. BC0054508-00 Rev.
11–iSCSI Protocol iSCSI Boot 2. Configure the iSCSI boot software for either static or dynamic configuration in the CCM, UEFI (see Figure 11-2), QCC GUI, or QCS CLI. Figure 11-2. UEFI, iSCSI Configuration Doc No. BC0054508-00 Rev.
11–iSCSI Protocol iSCSI Boot The configuration options available on the General Parameters window (see Figure 11-3) are listed in Table 11-1. Figure 11-3. UEFI, iSCSI Configuration, iSCSI General Parameters Table 11-1 lists parameters for both IPv4 and IPv6. Parameters specific to either IPv4 or IPv6 are noted. NOTE Availability of IPv6 iSCSI boot is platform and device dependent. Table 11-1. Configuration Options Option TCP/IP parameters through DHCP Description This option is specific to IPv4.
11–iSCSI Protocol iSCSI Boot Table 11-1. Configuration Options (Continued) Option Description IP Autoconfiguration This option is specific to IPv6. Controls whether the iSCSI boot host software will configure a stateless link-local address and/or stateful address if DHCPv6 is present and used (Enabled). Router Solicit packets are sent out up to three times with 4 second intervals in between each retry. Or use a static IP configuration (Disabled).
11–iSCSI Protocol iSCSI Boot Table 11-1. Configuration Options (Continued) Option Description LUN Busy Retry Count Controls the quantity of connection retries the iSCSI Boot initiator will attempt if the iSCSI target LUN is busy. IP Version This option is specific to IPv6. Toggles between the IPv4 or IPv6 protocol. All IP settings will be lost when switching from one protocol version to another.
11–iSCSI Protocol iSCSI Boot LUN Busy Retry Count: 0 IP Version: IPv6 (for IPv6, non-offload) HBA Boot Mode: Disabled NOTE For initial OS installation to a blank iSCSI target LUN from a CD/DVD-ROM or mounted bootable OS installation image, set Boot from Target to One Time Disabled. This setting causes the system not to boot from the configured iSCSI target after establishing a successful login and connection. This setting will revert to Enabled after the next system reboot.
11–iSCSI Protocol iSCSI Boot 4. On the iSCSI Initiator Parameters window (Figure 11-4), type values for the following: IP Address (unspecified IPv4 and IPv6 addresses should be 0.0.0.0 and ::, respectively) NOTE Carefully enter the IP address. There is no error-checking performed against the IP address to check for duplicates or incorrect segment or network assignment.
11–iSCSI Protocol iSCSI Boot 7. On the iSCSI First Target Parameters window (Figure 11-5): a. Enable Connect to connect to the iSCSI target. b. Type values for the following using the values used when configuring the iSCSI target: IP Address TCP Port Boot LUN iSCSI Name CHAP ID CHAP Secret 8. Press ESC to return to the Main menu. 9. (Optional) Configure a secondary iSCSI target by repeating these steps in the iSCSI Second Target Parameter window. 10.
11–iSCSI Protocol iSCSI Boot If DHCP Option 17 is used, the target information is provided by the DHCP server, and the initiator iSCSI name is retrieved from the value programmed on the Initiator Parameters window. If no value was selected, the controller defaults to the following name: iqn.1995-05.com.qlogic.<11.22.33.44.55.66>.iscsiboot Where the string 11.22.33.44.55.66 corresponds to the controller’s MAC address.
11–iSCSI Protocol iSCSI Boot Enabling CHAP Authentication Ensure that CHAP authentication is enabled on the target and initiator. To enable CHAP authentication: 1. On the iSCSI General Parameters window, set CHAP Authentication to Enabled. 2. On the iSCSI Initiator Parameters window, type values for the following: CHAP ID (up to 128 bytes) CHAP Secret (if authentication is required, and must be a minimum of 12 characters; the maximum length is 16 characters) 3.
11–iSCSI Protocol iSCSI Boot DHCP Option 17, Root Path Option 17 is used to pass the iSCSI target information to the iSCSI client. The format of the root path as defined in IETF RFC 4173 is: "iscsi:"<servername>":"<protocol>":"<port>":"<LUN>":"<targetname>" Table 11-2 lists the parameters and definitions. Table 11-2.
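For example, on an ISC DHCP server, the Option 17 root path might be supplied as shown in the following sketch. The MAC address, fixed address, target IP address, port, LUN, and IQN are placeholders only; substitute the values used for your initiator and target.
# dhcpd.conf sketch (ISC DHCP) - all addresses and names below are examples
host iscsi-boot-initiator {
    hardware ethernet 00:10:18:aa:bb:cc;        # MAC address of the booting adapter
    fixed-address 192.168.10.50;
    # "iscsi:"<servername>":"<protocol>":"<port>":"<LUN>":"<targetname>"
    option root-path "iscsi:192.168.10.20:6:3260:0:iqn.2005-03.com.example:storage.lun1";
}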
11–iSCSI Protocol iSCSI Boot Table 11-3 lists the suboption. Table 11-3. DHCP Option 43 Suboption Definition Suboption 201 Definition First iSCSI target information in the standard root path format "iscsi:"<servername>":"<protocol>":"<port>":"<LUN>":"<targetname>" Using DHCP option 43 requires more configuration than DHCP option 17, but it provides a richer environment and more configuration options.
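As a sketch of how suboption 201 might be served from an ISC DHCP server, the option space below declares suboption 201 and attaches it to a single host entry. The option-space name, MAC address, and target values are examples only, not values required by the adapter.
# dhcpd.conf sketch (ISC DHCP) for Option 43, suboption 201 - example values only
option space QLGC;
option QLGC.first-iscsi-target code 201 = text;
host iscsi-boot-initiator {
    hardware ethernet 00:10:18:aa:bb:cc;        # MAC address of the booting adapter
    vendor-option-space QLGC;
    option QLGC.first-iscsi-target
        "iscsi:192.168.10.20:6:3260:0:iqn.2005-03.com.example:storage.lun1";
}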
11–iSCSI Protocol iSCSI Boot The content of Option 16 should be <2-byte length> . DHCPv6 Option 17, Vendor-Specific Information DHCPv6 Option 17 (vendor-specific information) provides more configuration options to the iSCSI client. In this configuration, three additional suboptions are provided that assign the initiator IQN to the iSCSI boot client along with two iSCSI target IQNs that can be used for booting. Table 11-4 lists the suboption. Table 11-4.
11–iSCSI Protocol iSCSI Boot Windows Server 2016/2019/Azure Stack HCI iSCSI Boot Setup Windows Server 2016/2019/Azure Stack HCI supports booting and installing through either the offload or non-offload path. Marvell requires the use of a “slipstream” DVD with the latest Marvell drivers injected (see “Injecting (Slipstreaming) Marvell Drivers into Windows Image Files” on page 122). Also refer to the Microsoft knowledge base topic KB974072 at support.microsoft.com.
11–iSCSI Protocol iSCSI Boot 12. Select Next to proceed with Windows Server installation. A few minutes after the Windows Server DVD installation process starts, a system reboot occurs. After the reboot, the Windows Server installation routine should resume and complete the installation. 13. Following another system restart, verify that the remote system is able to boot to the desktop. 14.
11–iSCSI Protocol iSCSI Boot 12. When the system reboots, enable “boot from target” in iSCSI Boot Parameters and continue with installation until it is done. At this stage, the initial installation phase is complete. To create a new customized initrd for any new components update: 1. Update the iSCSI initiator if needed. You must first remove the existing initiator using rpm -e. 2. Make sure all runlevels of network service are on: chkconfig network on 3.
11–iSCSI Protocol iSCSI Boot 17. Continue booting into the iSCSI boot image and select one of the images you created (non-offload or offload). Your choice must correspond with your choice in the iSCSI Boot parameters section. If HBA Boot Mode was enabled in the iSCSI Boot Parameters section, you must boot the offload image. NOTE Marvell supports Host Bus Adapter (offload) starting in SLES 11 SP1 and later. 18.
11–iSCSI Protocol iSCSI Boot
# Description: Starts the iSCSI initiator daemon if the
#              root-filesystem is on an iSCSI device
#
### END INIT INFO

ISCSIADM=/sbin/iscsiadm
ISCSIUIO=/sbin/iscsiuio
CONFIG_FILE=/etc/iscsid.conf
DAEMON=/sbin/iscsid
ARGS="-c $CONFIG_FILE"

# Source LSB init functions
. /etc/rc.status

#
# This service is run right after booting. So all targets activated
# during mkinitrd run should not be removed when the open-iscsi
# service is stopped.
11–iSCSI Protocol iSCSI Boot
# Reset status of this service
rc_reset

# We only need to start this for root on iSCSI
if ! grep -q iscsi_tcp /proc/modules ; then
    if ! grep -q bnx2i /proc/modules ; then
        rc_failed 6
        rc_exit
    fi
fi

case "$1" in
    start)
        echo -n "Starting iSCSI initiator for the root device: "
        iscsi_load_iscsiuio
        startproc $DAEMON $ARGS
        rc_status -v
        iscsi_mark_root_nodes
        ;;
    stop|restart|reload)
        rc_failed 0
        ;;
    status)
        echo -n "Checking for iSCSI initiator service: "
        if checkproc $DAEMON ; then
            rc_status -v
        else
            rc_failed 3
            rc_status -v
        fi
        ;;
esac
rc_exit
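Assuming the script above was saved as /etc/init.d/iscsi-boot (the file name is an assumption; use whatever name you gave the script), it would typically be made executable and added to the SLES boot sequence with commands such as:
chmod +x /etc/init.d/iscsi-boot
insserv /etc/init.d/iscsi-boot      # or: chkconfig --add iscsi-boot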
11–iSCSI Protocol iSCSI Boot Removing Inbox Drivers from Windows OS Image 1. Create a temporary folder, such as D:\temp. 2. Create the following two subfolders in the temporary folder: Win2008R2Copy Win2008R2Mod 3. Copy all the contents from the DVD installation media into the Win2008R2Copy folder. 4. Open the Windows Automated Installation Kit (AIK) command prompt in elevated mode from All program, and then issue the following command: attrib -r D:\Temp\Win2008R2Copy\sources\boot.wim 5.
11–iSCSI Protocol iSCSI Boot To inject Marvell drivers into the Windows image files, you must obtain the driver installation packages for the applicable Windows Server version. Place these driver packages in a working directory. For example, copy all driver packages and files applicable to your Windows Server version to the example folder location used in Step 3: C:\Temp\drivers Finally, inject these drivers into the Windows Image (WIM) files and install the applicable Windows Server version from the updated images.
11–iSCSI Protocol iSCSI Boot 8. Issue the following commands to add the following drivers to the currently mounted image: dism /image:.\mnt /add-driver /driver:C:\Temp\drivers /Recurse /ForceUnsigned 9. Issue the following command to unmount the boot.wim image: dism /unmount-wim /mountdir:.\mnt /commit 10. Issue the following command to determine the index of the SKU that you want in the install.wim image: dism /get-wiminfo /wimfile:.\src\sources\install.wim 11.
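The install.wim image is serviced with the same mount, add-driver, and unmount sequence used for boot.wim in the preceding steps. The following sketch assumes the working directories shown above (.\mnt, .\src, and C:\Temp\drivers) and uses 2 as an example SKU index; substitute the index reported by the /get-wiminfo command:
dism /mount-wim /wimfile:.\src\sources\install.wim /index:2 /mountdir:.\mnt
dism /image:.\mnt /add-driver /driver:C:\Temp\drivers /Recurse /ForceUnsigned
dism /unmount-wim /mountdir:.\mnt /commit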
11–iSCSI Protocol iSCSI Boot Booting After the system has been prepared for an iSCSI boot and the operating system is present on the iSCSI target, the last step is to perform the actual boot. The system will boot to Windows or Linux over the network and operate as if it were a local disk drive. 1. Reboot the server. 2. Press the CTRL+S keys. 3. To boot through an offload path, set the HBA Boot Mode to Enabled. To boot through a non-offload path, set the HBA Boot Mode to Disabled.
11–iSCSI Protocol iSCSI Boot To create an iSCSI boot image with “DD”: 1. Install Linux OS on your local hard drive and ensure that the Open-iSCSI initiator is up to date. 2. Ensure that all run levels of network service are on. 3. Ensure that the 2, 3, and 5 runlevels of iSCSI service are on. 4. Update iscsiuio. You can get the iscsiuio package from the QLogic CD. This step is not needed for SUSE 10. 5. Install the linux-nx2 package on your Linux system.
11–iSCSI Protocol iSCSI Boot 20. (Optional) Configure iSCSI boot parameters, including CHAP parameters (see “Configuring the iSCSI Target” on page 102). 21. Continue booting into the iSCSI boot image and choose one of the images you created (non-offload or offload). Your choice should correspond with your choice in the iSCSI Boot Parameters section. If HBA Boot Mode was enabled in the iSCSI Boot Parameters section, you must boot the offload image.
11–iSCSI Protocol iSCSI Crash Dump Problem: Unable to update the inbox driver if a non-inbox hardware ID is present. Solution: Create a custom slipstream DVD image with the supported drivers present on the install media. Problem: iSCSI-Offload boot from SAN fails to boot after installation. Solution: Follow the instructions in “Linux” on page 289. Problem: Installing Windows onto an iSCSI target through iSCSI boot fails when connecting to a 1Gbps switch port.
11–iSCSI Protocol iSCSI Offload in Windows Server Installing Marvell Drivers and Management Applications Install the Windows drivers and management applications. Installing the Microsoft iSCSI Initiator For Windows Server 2012 and later, the iSCSI initiator is included inbox. To download the iSCSI initiator from Microsoft (if not already installed), locate the direct link for your system here: http://www.microsoft.com/en-us/download/details.
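If the inbox initiator service is installed but not running, you can set it to start automatically from an elevated PowerShell prompt; for example:
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI
Get-Service -Name MSiSCSI        # should report a status of Running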
11–iSCSI Protocol iSCSI Offload in Windows Server On this page, you can change the iSCSI-Offload MTU size, the iSCSI-Offload VLAN ID, the IPv4/IPv6 DHCP setting, the IPv4/IPv6 Static Address/Subnet Mask/Default Gateway settings, and the IPv6 Process Router Advertisements setting (see Figure 11-6). Figure 11-6. Configuring iSCSI Using QCC 4. DHCP is the default for IP address assignment, but you can change it to a static IP address assignment if that is the preferred method.
11–iSCSI Protocol iSCSI Offload in Windows Server Configuring Microsoft Initiator to Use the Marvell iSCSI Offload After you have configured the IP address for the iSCSI adapter, you must use Microsoft Initiator to configure and add a connection to the iSCSI target using a Marvell iSCSI adapter. See Microsoft’s user guide for more details on the Microsoft Initiator. 1. Open Microsoft Initiator. 2. Configure the initiator IQN name according to your setup.
11–iSCSI Protocol iSCSI Offload in Windows Server 3. In the Initiator Node Name Change dialog box (see Figure 11-8), type the initiator IQN name, and then click OK. Figure 11-8. Changing the Initiator Node Name 4. On the iSCSI Initiator Properties (Figure 11-9), click the Discovery tab, and then under Target Portals, click Add. Figure 11-9. iSCSI Initiator Properties: Discovery Page Doc No. BC0054508-00 Rev.
11–iSCSI Protocol iSCSI Offload in Windows Server 5. On the Add Target Portal dialog box (Figure 11-10), type the IP address of the target, and then click Advanced. Figure 11-10. Add Target Portal Dialog Box 6. On the Advanced Settings dialog box, complete the General page as follows: a. For the Local adapter, select the Marvell 57xx and 57xxx C-NIC iSCSI adapter. b. For the Source IP, select the IP address for the adapter. c.
11–iSCSI Protocol iSCSI Offload in Windows Server Figure 11-11 shows an example. Figure 11-11. Advanced Settings: General Page Doc No. BC0054508-00 Rev.
11–iSCSI Protocol iSCSI Offload in Windows Server 7. On the iSCSI Initiator Properties, click the Discovery tab, and then on the Discovery page, click OK to add the target portal. Figure 11-12 shows an example. Figure 11-12. iSCSI Initiator Properties: Discovery Page 8. On the iSCSI Initiator Properties, click the Targets tab. Doc No. BC0054508-00 Rev.
11–iSCSI Protocol iSCSI Offload in Windows Server 9. On the Targets page, select the target, and then click Log On to log into your iSCSI target using the Marvell iSCSI adapter. Figure 11-13 shows an example. Figure 11-13. iSCSI Initiator Properties: Targets Page 10. On the Log On To Target dialog box (Figure 11-14), click Advanced. Figure 11-14. Log On to Target Doc No. BC0054508-00 Rev.
11–iSCSI Protocol iSCSI Offload in Windows Server 11. On the Advanced Settings dialog box, General page, select the Marvell 57xx and 57xxx C-NIC iSCSI adapters as the Local adapter, and then click OK. Figure 11-15 shows an example. Figure 11-15. Advanced Settings: General Page, Local Adapter 12. Click OK to close the Microsoft Initiator. Doc No. BC0054508-00 Rev.
11–iSCSI Protocol iSCSI Offload in Windows Server 13. To format your iSCSI partition, use Disk Manager. NOTE Teaming does not support iSCSI adapters. Teaming does not support NDIS adapters that are in the boot path. Teaming supports NDIS adapters that are not in the iSCSI boot path, but only for the SLB or switch-independent team type. iSCSI Offload FAQs Question: How do I assign an IP address for iSCSI offload? Answer: Use the Configurations page in the applicable management utility.
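On Windows Server 2012 and later, the portal discovery and target login shown in the preceding Microsoft Initiator procedure can also be scripted with the inbox iSCSI PowerShell cmdlets. The portal address and target IQN below are placeholders, and binding the session to the Marvell offload adapter may still require the Initiator GUI or QCS CLI, so treat this only as a sketch:
New-IscsiTargetPortal -TargetPortalAddress 192.168.20.10
Get-IscsiTarget
Connect-IscsiTarget -NodeAddress "iqn.2005-03.com.example:storage.lun1" -IsPersistent $true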
11–iSCSI Protocol iSCSI Offload in Windows Server Table 11-5. Offload iSCSI (OIS) Driver Event Log Messages (Continued) Message Number Severity 4 Error MaxBurstLength is not serially greater than FirstBurstLength. Dump data contains FirstBurstLength followed by MaxBurstLength. 5 Error Failed to setup initiator portal. Error status is specified in the dump data. 6 Error The initiator could not allocate resources for an iSCSI connection. 7 Error The initiator could not send an iSCSI PDU.
11–iSCSI Protocol iSCSI Offload in Windows Server Table 11-5. Offload iSCSI (OIS) Driver Event Log Messages (Continued) Message Number Severity Message 22 Error Header digest error was detected for the specified PDU. Dump data contains the header and digest. 23 Error Target sent an invalid iSCSI PDU. Dump data contains the entire iSCSI header. 24 Error Target sent an iSCSI PDU with an invalid opcode. Dump data contains the entire iSCSI header. 25 Error Data digest error was detected.
11–iSCSI Protocol iSCSI Offload in Windows Server Table 11-5. Offload iSCSI (OIS) Driver Event Log Messages (Continued) Message Number Severity Message 40 Error Target requires logon authentication through CHAP, but Initiator is not configured to perform CHAP. 41 Error Target did not send AuthMethod key during security negotiation phase. 42 Error Target sent an invalid status sequence number for a connection.
11–iSCSI Protocol iSCSI Offload in Windows Server Table 11-5. Offload iSCSI (OIS) Driver Event Log Messages (Continued) Message Number Severity Message 59 Error Target dropped the connection before the initiator could transition to Full Feature Phase. 60 Error Target sent data in SCSI Response PDU instead of Data_IN PDU. Only Sense Data can be sent in SCSI Response. 61 Error Target set DataPduInOrder to NO when initiator requested YES. Login will be failed.
12 Marvell Teaming Services This chapter describes teaming for adapters in Windows Server systems (excluding Windows Server 2016 and later). For more information on similar technologies on other operating systems (for example, Linux Channel Bonding), refer to your operating system documentation. Microsoft recommends using its in-OS NIC teaming service instead of any adapter vendor-proprietary NIC teaming driver on Windows Server 2012 and later.
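For reference, a minimal in-OS team on Windows Server 2012 and later can be created with the inbox LBFO PowerShell cmdlets; the team and adapter names below are examples only:
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent
Get-NetLbfoTeam -Name "Team1"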
12–Marvell Teaming Services Executive Summary This section describes the technology and implementation considerations when working with the network teaming services offered by the Marvell software shipped with Dell’s servers and storage products. The goal of Marvell teaming services is to provide fault tolerance and link aggregation across a team of two or more adapters.
12–Marvell Teaming Services Executive Summary Table 12-1. Glossary (Continued) Term Definition PXE pre-execution environment QCC QConvergeConsole QCS QLogic Control Suite RAID redundant array of inexpensive disks TCP transmission control protocol UDP user datagram protocol WINS Windows Internet Name Service Teaming Concepts The concept of grouping multiple physical devices to provide fault tolerance and load balancing is not new. It has been around for years.
12–Marvell Teaming Services Executive Summary The following information provides a high-level overview of the concepts of network addressing used in an Ethernet network. Every Ethernet network interface in a host platform, such as a computer system, requires a globally unique Layer 2 address and at least one globally unique Layer 3 address. Layer 2 is the data link layer, and Layer 3 is the network layer as defined in the OSI model.
12–Marvell Teaming Services Executive Summary For switch-independent teaming modes, all physical adapters that make up a virtual adapter must use the unique MAC address assigned to them when transmitting data. That is, the frames that are sent by each of the physical adapters in the team must use a unique MAC address to be IEEE compliant. It is important to note that ARP cache entries are not learned from received frames, but only from ARP requests and ARP replies.
12–Marvell Teaming Services Executive Summary Smart Load Balancing and Failover The Smart Load Balancing and Failover type of team provides both load balancing and failover when configured for load balancing, and only failover when configured for fault tolerance. This type of team works with any Ethernet switch and requires no trunking configuration on the switch. The team advertises multiple MAC addresses and one or more IP addresses (when using secondary IP addresses).
12–Marvell Teaming Services Executive Summary SLB receive load balancing attempts to load balance incoming traffic for client machines across physical ports in the team. It uses a modified gratuitous ARP to advertise a different MAC address for the team IP address in the sender physical and protocol address. The G-ARP is unicast with the MAC and IP Address of a client machine in the target physical and protocol address, respectively.
12–Marvell Teaming Services Executive Summary Generic Trunking Generic Trunking is a switch-assisted teaming mode and requires configuring ports at both ends of the link: server interfaces and switch ports. This port configuration is often referred to as Cisco Fast EtherChannel or Gigabit EtherChannel. In addition, generic trunking supports similar implementations by other switch OEMs such as Extreme Networks Load Sharing and Bay Networks or IEEE 802.3ad Link Aggregation static mode.
12–Marvell Teaming Services Executive Summary The Link Aggregation control function determines which links may be aggregated and then binds the ports to an Aggregator function in the system and monitors conditions to determine if a change in the aggregation group is required. Link aggregation combines the individual capacity of multiple links to form a high performance virtual link. The failure or replacement of a link in an LACP trunk will not cause loss of connectivity.
12–Marvell Teaming Services Executive Summary Table 12-3 describes the four software components and their associated files for supported operating systems. Table 12-3. Marvell Teaming Software Component Software Component Network Adapter or Operating System System Architecture Windows File Name 57xx 32-bit bxvbdx.sys 57xx 64-bit bxvbda.sys 5771x, 578xx 32-bit evbdx.sys 5771x, 578xx 64-bit evbda.
12–Marvell Teaming Services Executive Summary The use of a repeater requires that each station participating within the collision domain operate in half-duplex mode. Although half-duplex mode is supported for Gigabit Ethernet (GbE) adapters in the IEEE 802.3 specification, half-duplex mode is not supported by the majority of GbE adapter manufacturers. Therefore, half-duplex mode is not considered here.
12–Marvell Teaming Services Executive Summary Supported Features by Team Type Table 12-4 provides a feature comparison across the team types supported by Dell. Use this table to determine the best type of team for your application. The teaming software supports up to eight ports in a single team and up to 16 teams in a single system. These teams can be any combination of the supported teaming types, but each team must be on a separate network or subnet. Table 12-4.
12–Marvell Teaming Services Executive Summary Table 12-4. Comparison of Team Types (Continued) Type of Team Fault Tolerance Load Balancing Switch-Dependent Static Trunking Switch-Independent Dynamic Link Aggregation (IEEE 802.
12–Marvell Teaming Services Executive Summary Second choice: Generic Trunking Third choice: SLB, when using unmanaged switches or switches that do not support the first two choices. If switch fault tolerance is a requirement, either SLB or in-OS switch independent NIC teaming is the only choice. Doc No. BC0054508-00 Rev.
12–Marvell Teaming Services Executive Summary Figure 12-1 shows a flow chart for determining the team type. Figure 12-1. Process for Selecting a Team Type Doc No. BC0054508-00 Rev.
12–Marvell Teaming Services Teaming Mechanisms Teaming Mechanisms This section provides the following information about teaming mechanisms: Architecture Types of Teams Attributes of the Features Associated with Each Type of Team Speeds Supported for Each Type of Team Doc No. BC0054508-00 Rev.
12–Marvell Teaming Services Teaming Mechanisms Architecture The NDIS intermediate driver (see Figure 12-2) operates below protocol stacks such as TCP/IP and IPX and appears as a virtual adapter. This virtual adapter inherits the MAC Address of the first port initialized in the team. A Layer 3 address must also be configured for the virtual adapter.
12–Marvell Teaming Services Teaming Mechanisms Outbound Traffic Flow The Marvell intermediate driver manages the outbound traffic flow for all teaming modes. For outbound traffic, every packet is first classified into a flow, and then distributed to the selected physical adapter for transmission. The flow classification involves an efficient hash computation over known protocol fields. The resulting hash value is used to index into an Outbound Flow Hash Table.
12–Marvell Teaming Services Teaming Mechanisms When an inbound IP Datagram arrives, the appropriate Inbound Flow Head Entry is located by hashing the source IP address of the IP Datagram. Two statistics counters stored in the selected entry are also updated. These counters are used in the same fashion as the outbound counters by the load-balancing engine periodically to reassign the flows to the physical adapter.
12–Marvell Teaming Services Teaming Mechanisms The actual assignment between adapters may change over time, but any protocol that is not TCP/UDP based goes over the same physical adapter because only the IP address is used in the hash. Performance Modern network interface cards provide many hardware features that reduce CPU utilization by offloading specific CPU intensive operations (see “Teaming and Other Advanced Networking Properties” on page 168).
12–Marvell Teaming Services Teaming Mechanisms Network Communications Key attributes of SLB include: Failover mechanism—Link loss detection. Load Balancing Algorithm—Inbound and outbound traffic are balanced through a Marvell proprietary mechanism based on Layer 4 flows. Outbound Load Balancing using MAC Address—No Outbound Load Balancing using IP Address—Yes Multivendor Teaming—Supported (must include at least one Marvell Ethernet adapter as a team member).
12–Marvell Teaming Services Teaming Mechanisms Network Communications The following are the key attributes of Generic Static Trunking: Failover mechanism—Link loss detection Load Balancing Algorithm—Outbound traffic is balanced through Marvell proprietary mechanism-based Layer 4 flows. Inbound traffic is balanced according to a switch specific mechanism.
12–Marvell Teaming Services Teaming Mechanisms Network Communications The following are the key attributes of dynamic trunking: Failover mechanism—Link loss detection Load Balancing Algorithm—Outbound traffic is balanced through a Marvell proprietary mechanism based on Layer 4 flows. Inbound traffic is balanced according to a switch specific mechanism.
12–Marvell Teaming Services Teaming Mechanisms LiveLink functionality is supported in both 32-bit and 64-bit Windows operating systems. For similar functionality in Linux operating systems, see the Channel Bonding information in your Red Hat documentation. Attributes of the Features Associated with Each Type of Team The attributes of the features associated with each type of team are summarized in Table 12-5. Table 12-5.
12–Marvell Teaming Services Teaming Mechanisms Table 12-5. Teaming Attributes (Continued) Feature Attribute Hot add Yes Hot remove Yes Link speed support Different speeds b Frame protocol All Incoming packet management Switch Outgoing packet management QLASP Failover event Loss of link only Failover time < 500ms Fallback time 1.
12–Marvell Teaming Services Teaming and Other Advanced Networking Properties a Make sure that Port Fast or Edge Port is enabled. b Some switches require matching link speeds to correctly negotiate between trunk connections. Speeds Supported for Each Type of Team The various link speeds that are supported for each type of team are listed in Table 12-6. Mixed speed refers to the capability of teaming adapters that are running at different link speeds. Table 12-6.
12–Marvell Teaming Services Teaming and Other Advanced Networking Properties Before creating a team, adding or removing team members, or changing advanced settings of a team member, make sure each team member has been configured similarly. Settings to check include VLANs and QoS Packet Tagging, Jumbo Frames, and the various offloads. Advanced adapter properties and teaming support are listed in Table 12-7. Table 12-7.
12–Marvell Teaming Services Teaming and Other Advanced Networking Properties Checksum Offload Checksum Offload is a property of the Marvell network adapters that allows the TCP/IP/UDP checksums for send and receive traffic to be calculated by the adapter hardware rather than by the host CPU. In high-traffic situations, this can allow a system to handle more connections more efficiently than if the host CPU were forced to calculate the checksums.
12–Marvell Teaming Services Teaming and Other Advanced Networking Properties IEEE 802.1Q VLANs In 1998, the IEEE approved the 802.3ac standard, which defines frame format extensions to support Virtual Bridged Local Area Network tagging on Ethernet networks as specified in the IEEE 802.1Q specification. The VLAN protocol permits insertion of a tag into an Ethernet frame to identify the VLAN to which a frame belongs.
12–Marvell Teaming Services General Network Considerations Preboot Execution Environment The preboot execution environment (PXE) allows a system to boot from an operating system image over the network. By definition, PXE is invoked before an operating system is loaded, so there is no opportunity for the driver to load and enable a team.
12–Marvell Teaming Services General Network Considerations Teaming Across Switches SLB teaming can be configured across switches. The switches, however, must be connected together. Generic Trunking and Link Aggregation do not work across switches because each of these implementations requires that all physical adapters in a team share the same Ethernet MAC address. It is important to note that SLB can only detect the loss of link between the ports in the team and their immediate link partner.
12–Marvell Teaming Services General Network Considerations Furthermore, a failover event would cause additional loss of connectivity. Consider a cable disconnect on the Top Switch port 4. In this case, Gray would send the ICMP Request to Red 49:C9, but because the Bottom Switch has no entry for 49:C9 in its CAM Table, the frame is flooded to all its ports but cannot find a way to get to 49:C9. Figure 12-3. Teaming Across Switches Without an Inter-Switch Link Doc No. BC0054508-00 Rev.
12–Marvell Teaming Services General Network Considerations The addition of a link between the switches allows traffic from and to Blue and Gray to reach each other without any problems. Note the additional entries in the CAM table for both switches. The link interconnect is critical for the proper operation of the team. As a result, Marvell highly advises that you have a link aggregation trunk to interconnect the two switches to ensure high availability for the connection. Figure 12-4.
12–Marvell Teaming Services General Network Considerations Figure 12-5 represents a failover event in which the cable is unplugged on the Top Switch port 4. This event is a successful failover with all stations pinging each other without loss of connectivity. Figure 12-5. Failover Event Doc No. BC0054508-00 Rev.
12–Marvell Teaming Services General Network Considerations Spanning Tree Algorithm In Ethernet networks, only one active path may exist between any two bridges or switches. Multiple active paths between switches can cause loops in the network. When loops occur, some switches recognize stations on both sides of the switch. This situation causes the forwarding algorithm to malfunction allowing duplicate frames to be forwarded.
12–Marvell Teaming Services General Network Considerations Topology Change Notice (TCN) A bridge or switch creates a forwarding table of MAC addresses and port numbers by learning the source MAC address that received on a specific port. The table is used to forward frames to a specific port rather than flooding the frame to all ports. The typical maximum aging time of entries in the table is 5 minutes. Only when a host has been silent for 5 minutes would its entry be removed from the table.
12–Marvell Teaming Services General Network Considerations Layer 3 Routing and Switching The switch that the teamed ports are connected to must not be a Layer 3 switch or router. The ports in the team must be in the same network. Teaming with Hubs (for Troubleshooting Purposes Only) SLB teaming can be used with 10 and 100 hubs, but Marvell recommends using it only for troubleshooting purposes, such as connecting a network analyzer in the event that switch port mirroring is not an option.
12–Marvell Teaming Services General Network Considerations SLB Team Connected to a Single Hub SLB teams configured as shown in Figure 12-6 maintain their fault tolerance properties. Either server connection could potentially fail, and network functionality is maintained. Clients could be connected directly to the hub, and fault tolerance would still be maintained; server performance, however, would be degraded. Figure 12-6. Team Connected to a Single Hub Generic and Dynamic Trunking (FEC/GEC/IEEE 802.
12–Marvell Teaming Services Application Considerations Application Considerations Application considerations covered: Teaming and Clustering Teaming and Network Backup Teaming and Clustering Teaming and clustering information includes: Microsoft Cluster Software High-Performance Computing Cluster Oracle Microsoft Cluster Software Dell Server cluster solutions integrate Microsoft Cluster Services (MSCS) with PowerVault™ SCSI or Dell and EMC Fibre Channel-based storage, Dell servers, stor
12–Marvell Teaming Services Application Considerations Figure 12-7 shows a two-node Fibre-Channel cluster with three network interfaces per cluster node: one private and two public. On each node, the two public adapters are teamed, and the private adapter is not. Teaming is supported across the same switch or across two switches. Figure 12-8 on page 184 shows the same two-node Fibre-Channel cluster in this configuration. Figure 12-7.
12–Marvell Teaming Services Application Considerations High-Performance Computing Cluster Gigabit Ethernet is typically used for the following purposes in high-performance computing cluster (HPCC) applications: Inter-process communications (IPC): For applications that do not require low-latency, high-bandwidth interconnects (such as Myrinet™ or InfiniBand®), Gigabit Ethernet can be used for communication between the compute nodes.
12–Marvell Teaming Services Application Considerations Oracle In the Marvell Oracle® solution stacks, Marvell supports adapter teaming in both the private network (interconnect between Real Application Cluster [RAC] nodes) and public network with clients or the application layer above the database layer, as shown in Figure 12-8. Figure 12-8. Clustering with Teaming Across Two Switches Doc No. BC0054508-00 Rev.
12–Marvell Teaming Services Application Considerations Teaming and Network Backup When you perform network backups in a nonteamed environment, overall throughput on a backup server adapter can be easily impacted due to excessive traffic and adapter overloading. Depending on the quantity of backup servers, data streams, and tape drive speed, backup traffic can easily consume a high percentage of the network link bandwidth, thus impacting production data and tape backup performance.
12–Marvell Teaming Services Application Considerations Because there are four client servers, the backup server can simultaneously stream four backup jobs (one per client) to a multidrive autoloader. Because of the single link between the switch and the backup server; however, a four-stream backup can easily saturate the adapter and link.
12–Marvell Teaming Services Application Considerations The designated path is determined by two factors: Client-Server ARP cache points to the backup server MAC address. This address is determined by the Marvell intermediate driver inbound load balancing algorithm. The physical adapter interface on Client-Server Red transmits the data.
12–Marvell Teaming Services Application Considerations Fault Tolerance If a network link fails during tape backup operations, all traffic between the backup server and client stops and backup jobs fail. If, however, the network topology was configured for both Marvell SLB and switch fault tolerance, this configuration would allow tape backup operations to continue without interruption during the link failure. All failover processes within the network are transparent to tape backup software applications.
12–Marvell Teaming Services Application Considerations To understand how backup data streams are directed during network failover process, consider the topology in Figure 12-10. Client-Server Red is transmitting data to the backup server through Path 1, but a link failure occurs between the backup server and the switch.
12–Marvell Teaming Services Troubleshooting Teaming Problems Troubleshooting Teaming Problems When running a protocol analyzer over a virtual adapter teamed interface, the MAC address shown in the transmitted frames may not be correct. The analyzer does not show the frames as constructed by QLASP and shows the MAC address of the team and not the MAC address of the interface transmitting the frame.
12–Marvell Teaming Services Troubleshooting Teaming Problems A team that requires maximum throughput should use LACP or GEC\FEC. In these cases, the intermediate driver is only responsible for the outbound load balancing while the switch performs the inbound load balancing. Aggregated teams (802.3ad\LACP and GEC\FEC) must be connected to only a single switch that supports IEEE 802.3ad, LACP, or GEC/FEC.
12–Marvell Teaming Services Frequently Asked Questions 5. Check that the adapters and the switch are configured identically for link speed and duplex. 6. If possible, break the team and check for connectivity to each adapter independently to confirm that the problem is directly associated with teaming. 7. Check that all switch ports connected to the team are on the same VLAN. 8. Check that the switch ports are configured properly for Generic Trunking (FEC/GEC)/802.
12–Marvell Teaming Services Frequently Asked Questions Question: Can I connect the teamed adapters to a hub? Answer: Teamed ports can be connected to a hub for troubleshooting purposes only. However, this practice is not recommended for normal operation because the performance would be degraded due to hub limitations. Connect the teamed ports to a switch instead. Question: Can I connect the teamed adapters to ports in a router? Answer: No.
12–Marvell Teaming Services Frequently Asked Questions Question: How do I upgrade the intermediate driver (QLASP)? Answer: The intermediate driver cannot be upgraded through the Local Area Connection Properties. It must be upgraded using the Setup installer. Question: How can I determine the performance statistics on a virtual adapter (team)? Answer: In QLogic Control Suite, click the Statistics tab for the virtual adapter.
12–Marvell Teaming Services Event Log Messages Question: Why does my team lose connectivity for the first 30 to 50 seconds after the primary adapter is restored (fall-back after a failover)? Answer: During a fall-back event, link is restored causing Spanning Tree Protocol to configure the port for blocking until it determines that it can move to the forwarding state. You must enable Port Fast or Edge Port on the switch ports connected to the team to prevent the loss of communications caused by STP.
12–Marvell Teaming Services Event Log Messages Base Driver (Physical Adapter or Miniport) The base driver is identified by source L2ND. Table 12-8 lists the event log messages supported by the base driver, explains the cause for the message, and provides the recommended action. NOTE In Table 12-8, message numbers 1 through 17 apply to both NDIS 5.x and NDIS 6.x drivers, message numbers 18 through 23 apply only to the NDIS 6.x driver. Table 12-8.
12–Marvell Teaming Services Event Log Messages Table 12-8. Base Driver Event Log Messages (Continued) Message Number Severity Message Cause 6 Informational Network controller configured for 10Mb half-duplex link. The adapter has been manually configured for the selected line speed and duplex settings. No action is required. 7 Informational Network controller configured for 10Mb full-duplex link. The adapter has been manually configured for the selected line speed and duplex settings.
12–Marvell Teaming Services Event Log Messages Table 12-8. Base Driver Event Log Messages (Continued) Message Number Severity Message Cause Corrective Action 15 Error Unable to map I/O space. The device driver cannot allocate memory-mapped I/O to access driver registers. Remove other adapters from the system, reduce the amount of physical memory installed, and replace the adapter. 16 Informational Driver initialized successfully. The driver has successfully loaded. No action is required.
12–Marvell Teaming Services Event Log Messages Table 12-8. Base Driver Event Log Messages (Continued) Message Number 23 Severity Error Message Cause Corrective Action Network controller failed to exchange the interface with the bus driver. The driver and the bus driver are not compatible. Update to the latest driver set, ensuring the major and minor versions for both NDIS and the bus driver are the same.
12–Marvell Teaming Services Event Log Messages Table 12-9. Intermediate Driver Event Log Messages (Continued) System Event Message Number Severity Message Cause Corrective Action 7 Error Could not allocate memory for internal data structures. The driver cannot allocate memory from the operating system. Close running applications to free memory. 8 Warning Could not bind to adapter. The driver could not open one of the team physical adapters.
12–Marvell Teaming Services Event Log Messages Table 12-9. Intermediate Driver Event Log Messages (Continued) System Event Message Number Severity Message Cause 14 Informational Network adapter does not support Advanced Failover. The physical adapter does not support the Marvell NIC Extension (NICE). Replace the adapter with one that does support NICE. 15 Informational Network adapter is enabled through management interface.
12–Marvell Teaming Services Event Log Messages Virtual Bus Driver (VBD) Table 12-10 lists VBD event log messages. Table 12-10. Virtual Bus Driver (VBD) Event Log Messages Message Number Severity Message Cause Corrective Action 1 Error Failed to allocate memory for the device block. Check system memory resource usage. The driver cannot allocate memory from the operating system. Close running applications to free memory. 2 Informational The network link is down.
12–Marvell Teaming Services Event Log Messages Table 12-10. Virtual Bus Driver (VBD) Event Log Messages (Continued) Message Number Severity Message Cause Corrective Action 8 Informational Network controller configured for 1Gb half-duplex link. The adapter has been manually configured for the selected line speed and duplex settings. No action is required. 9 Informational Network controller configured for 1Gb full-duplex link.
13 NIC Partitioning and Bandwidth Management NIC partitioning and bandwidth management covered in this chapter includes: Overview “Configuring for NIC Partitioning” on page 205 Overview NIC partitioning (NPAR) divides a Marvell 57xx and 57xxx 10-gigabit Ethernet NIC into multiple virtual NICs by having multiple PCI physical functions per port. Each PCI function is associated with a different virtual NIC. To the OS and the network, each physical function appears as a separate NIC port.
13–NIC Partitioning and Bandwidth Management Configuring for NIC Partitioning Supported Operating Systems for NIC Partitioning The Marvell 57xx and 57xxx 10-gigabit Ethernet adapters support NIC partitioning on the following operating systems:
Windows: 2016 Server, 2019 Server, Azure Stack HCI
Linux: RHEL 8.x and later family, RHEL 7.x and later family, SLES 15.x and later family
VMware: ESX 6.x and later family, ESX 7.
13–NIC Partitioning and Bandwidth Management Configuring for NIC Partitioning NOTE In NPAR mode, SR-IOV cannot be enabled on any partition or PF (VNIC) on which storage offload (FCoE or iSCSI) is configured. This does not apply to adapters in Single Function (SF) mode. Configure NPAR mode (and reboot the system) before attempting to configure the SR-IOV settings on any NPAR-ed partitions of an adapter port. The NPAR mode configuration will take precedence over the SR-IOV configuration.
13–NIC Partitioning and Bandwidth Management Configuring for NIC Partitioning Table 13-2 describes the functions available from the PF# X window. Table 13-2. Function Description Function Description Ethernet Protocol Enables and disables the Ethernet protocol. Option Enable Disable iSCSI Offload Protocol Enables and disables the iSCSI protocol. Enable Disable FCoE Offload protocol Enables and disables the FCoE protocol.
13–NIC Partitioning and Bandwidth Management Configuring for NIC Partitioning Consider this example configuration: Four functions (or partitions) are configured with a total of six protocols, as shown in the following. Function 0 Ethernet FCoE Function 1 Ethernet Function 2 Ethernet Function 3 Ethernet iSCSI 1. If Relative Bandwidth Weight is configured as “0” for all four physical functions (PFs), all six offloads share the bandwidth equally.
14 Fibre Channel Over Ethernet Fibre Channel over Ethernet (FCoE) information includes: Overview “FCoE Boot from SAN” on page 210 “Configuring FCoE” on page 238 “N_Port ID Virtualization (NPIV)” on page 240 Overview In today’s data center, multiple networks, including network attached storage (NAS), management, IPC, and storage, are used to achieve the performance and versatility that you require.
14–Fibre Channel Over Ethernet FCoE Boot from SAN Data center bridging (DCB) provides lossless behavior with priority flow control (PFC) DCB allocates a share of link bandwidth to FCoE traffic with enhanced transmission selection (ETS) DCB supports storage, management, computing, and communications fabrics onto a single physical fabric that is simpler to deploy, upgrade, and maintain than in standard Ethernet networks.
14–Fibre Channel Over Ethernet FCoE Boot from SAN Preparing Marvell Multiple Boot Agent for FCoE Boot (CCM) CCM is available only when the system is set to legacy boot mode; it is not available when the systems is set to UEFI boot mode. The UEFI device configuration pages are available in both modes. 1. Invoke the CCM utility during POST. At the QLogic Ethernet Boot Agent banner (Figure 14-1), press the CTRL+S keys. Figure 14-1. Invoking the CCM Utility 2.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 3. Ensure that DCB and DCBX are enabled on the device (Figure 14-3). FCoE boot is only supported on DCBX-capable configurations. As such, DCB and DCBX must be enabled, and the directly attached link peer must also be DCBX-capable with parameters that allow for full DCBX synchronization. Figure 14-3. CCM Device Hardware Configuration 4.
14–Fibre Channel Over Ethernet FCoE Boot from SAN For all other devices, use the CCM MBA Configuration Menu to set the Boot Protocol option to FCoE (Figure 14-4). Figure 14-4. CCM MBA Configuration Menu 5. Configure the boot target and LUN. From the Target Information menu, select the first available path (Figure 14-5). Figure 14-5. CCM Target Information Doc No. BC0054508-00 Rev.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 6. Enable the Connect option, and then the target WWPN and Boot LUN information for the target to be used for boot (Figure 14-6). Figure 14-6. CCM Target Parameters The target information shows the changes (Figure 14-7). Figure 14-7. CCM Target Information (After Configuration) Doc No. BC0054508-00 Rev.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 7. Press the ESC key until prompted to exit and save changes. To exit CCM, restart the system, and apply changes, press the CTRL+ALT+DEL keys. 8. Proceed to OS installation after storage access has been provisioned in the SAN. Preparing Marvell Multiple Boot Agent for FCoE Boot (UEFI) To prepare the Marvell multiple boot agent for FCOE boot (UEFI): 1.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 5. In the FCoE Configuration menu, select FCoE General Parameters. The FCoE General Parameters menu appears (see Figure 14-9). Figure 14-9. FCoE Boot Configuration Menu, FCoE General Parameters 6. In the FCoE General Parameters menu: a. Select the desired Boot to FCoE Target mode (see One-Time Disabled).
14–Fibre Channel Over Ethernet FCoE Boot from SAN Provisioning Storage Access in the SAN Storage access consists of zone provisioning and storage selective LUN presentation, each of which is commonly provisioned per initiator WWPN.
14–Fibre Channel Over Ethernet FCoE Boot from SAN When the initiator boot starts, it begins DCBX sync, FIP Discovery, Fabric Login, Target Login, and LUN readiness checks. As each of these phases completes, if the initiator is unable to proceed to the next phase, MBA presents the option to press the CTRL+R keys. 3. Press the CTRL+R keys. 4.
14–Fibre Channel Over Ethernet FCoE Boot from SAN For OS installation over the FCoE path, you must instruct the Option ROM to bypass FCoE and skip to CD or DVD installation media. As instructed in “Preparing Marvell Multiple Boot Agent for FCoE Boot (CCM)” on page 211, the boot order must be configured with Marvell boot first and installation media second. Furthermore, during OS installation, it is necessary to bypass the FCoE boot and pass through to the installation media for boot.
14–Fibre Channel Over Ethernet FCoE Boot from SAN Windows Server 2016/2019/Azure Stack HCI FCoE Boot Installation Windows Server 2016/2019/Azure Stack HCI boot from SAN installation requires the use of a “slipstream” DVD or ISO image with the latest Marvell drivers injected (see “Injecting (Slipstreaming) Marvell Drivers into Windows Image Files” on page 122). Also, refer to the Microsoft Knowledge Base topic KB974072 at support.microsoft.
14–Fibre Channel Over Ethernet FCoE Boot from SAN e. Click Installation to proceed (Figure 14-11). Figure 14-11. Starting SLES Installation Doc No. BC0054508-00 Rev.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 2. Follow the prompts to choose the driver update medium (Figure 14-12) and load the drivers (Figure 14-13). Figure 14-12. Selecting Driver Update Medium Figure 14-13. Loading the Drivers 3. After the driver update is complete, select Next to continue with OS installation. Doc No. BC0054508-00 Rev.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 4. When requested, click Configure FCoE Interfaces (Figure 14-14). Figure 14-14. Activating the Disk Doc No. BC0054508-00 Rev.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 5. Ensure that FCoE Enable is set to yes on the 10GbE Marvell initiator ports that you want to use as the SAN boot paths (Figure 14-15). Figure 14-15. Enabling FCoE 6. For each interface to be enabled for FCoE boot: a. Click Change Settings. b. On the Change FCoE Settings window (Figure 14-16), ensure that FCoE Enable and Auto_VLAN are set to yes. c. Ensure that DCB Required is set to no. d. Click Next to save the settings. Doc No.
14–Fibre Channel Over Ethernet FCoE Boot from SAN Figure 14-16. Changing FCoE Settings 7. For each interface to be enabled for FCoE boot: a. Click Create FCoE VLAN Interface. b. On the VLAN interface creation dialog box, click Yes to confirm and trigger automatic FIP VLAN discovery. If successful, the VLAN is displayed under FCoE VLAN Interface. If no VLAN is visible, check your connectivity and switch configuration. Doc No. BC0054508-00 Rev.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 8. After completing the configuration of all interfaces, click OK to proceed (Figure 14-17). Figure 14-17. FCoE Interface Configuration 9. Click Next to continue installation. Doc No. BC0054508-00 Rev.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 10. YaST2 prompts you to activate multipath. Answer as appropriate (Figure 14-18). Figure 14-18. Disk Activation 11. Continue installation as usual. Doc No. BC0054508-00 Rev.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 12. On the Expert page on the Installation Settings window, click Booting (Figure 14-19). Figure 14-19. Installation Settings Doc No. BC0054508-00 Rev.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 13. Click the Boot Loader Installation tab, and then select Boot Loader Installation Details. Make sure you have one boot loader entry here; delete all redundant entries (Figure 14-20). Figure 14-20. Boot Loader Device Map 14. Click OK to proceed and complete installation. Booting from RHEL 7.x Installation Media With the FCoE Target Already Connected To install Linux FCoE boot on RHEL 7.x: 1. Boot from the RHEL 7.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 4. Select the kernel line, and then press the E key to edit the line. 5. Add the following parameters to the kernel line, and then press ENTER: inst.dd modprobe.blacklist=bnx2x,bnx2fc,bnx2i,cnic 6. At the Driver disk device selection prompt: a. Refresh the device list by pressing the R key. b. Type the appropriate number for your media. c. Press the C key to continue.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 13. On the Installation Destination window (Figure 14-21) under Other Storage Options, select your Partitioning options, and then click Done. Figure 14-21. Selecting Partitioning Options 14. On the Installation Summary window, click Begin Installation. Linux: Adding Boot Paths RHEL requires updates to the network configuration when adding new boot through an FCoE initiator that was not configured during installation. RHEL 6.2 and Later On RHEL 6.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 3. Create a /etc/fcoe/cfg-<interface> file for each new FCoE initiator by duplicating the /etc/fcoe/cfg-<interface> file that was already configured during initial installation. 4. Issue the following command: nm-connection-editor 5. a. Open Network Connection and choose each new interface. b. Configure each interface as needed, including DHCP settings. c. Click Apply to save.
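For example, in Step 3, if eth4 is the FCoE interface that was configured during installation and eth5 is the new boot interface (both interface names are assumptions), the new configuration file could be created with:
cp /etc/fcoe/cfg-eth4 /etc/fcoe/cfg-eth5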
14–Fibre Channel Over Ethernet FCoE Boot from SAN 4. On the Select a Disk window (Figure 14-22), scroll to the boot LUN for installation, and then press ENTER to continue. Figure 14-22. ESXi Disk Selection 5. On the ESXi and VMFS Found window (Figure 14-23), select the installation method. Figure 14-23. ESXi and VMFS Found 6. Follow the prompts to: a. Select the keyboard layout. b. Enter and confirm the root password. Doc No. BC0054508-00 Rev.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 7. On the Confirm Install window (Figure 14-24), press the F11 key to confirm the installation and repartition. Figure 14-24. ESXi Confirm Install 8. After successful installation (Figure 14-25), press ENTER to reboot. Figure 14-25. ESXi Installation Complete 9. On 57800 and 57810 boards, the management network is not vmnic0.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 10. For 57800 and 57810 boards, the FCoE boot devices must have a separate vSwitch other than vSwitch0. This switch allows DHCP to assign the IP address to the management network rather than to the FCoE boot device. To create a vSwitch for the FCoE boot devices, add the boot device vmnics in vSphere Client on the Configuration page under Networking. Figure 14-27 shows an example. Figure 14-27.
14–Fibre Channel Over Ethernet Booting from SAN After Installation Booting from SAN After Installation After boot configuration and OS installation are complete, you can reboot and test the installation. On this and all future reboots, no other user interactivity is required. Ignore the CTRL+D prompt and allow the system to boot through to the FCoE SAN LUN, as shown in Figure 14-28. Figure 14-28.
14–Fibre Channel Over Ethernet Booting from SAN After Installation 3. 4. 5. Issue the following command to update the ramdisk (not required for SLES 12 and later, or RHEL 7.x and later): On RHEL 6.x systems, issue: dracut --force On SLES 11 SPX systems, issue: mkinitrd If you are using a different name for the initrd under /boot: a. Overwrite it with the default, because dracut/mkinitrd updates the ramdisk with the default original name. b.
14–Fibre Channel Over Ethernet Configuring FCoE To avoid any of the preceding error messages, you must ensure that there is no USB flash drive attached until the setup asks for the drivers. When you load the drivers and see your SAN disks, detach or disconnect the USB flash drive immediately before selecting the disk for further installation. Configuring FCoE By default, DCB is enabled on 57712/578xx FCoE-, DCB-compatible C-NICs. The 57712/578xx FCoE requires a DCB-enabled interface.
14–Fibre Channel Over Ethernet Configuring FCoE To enable and disable the FCoE-offload instance on Windows using QCC GUI: 1. Open QCC GUI. 2. In the tree pane on the left, under the port node, select the port’s virtual bus device instance. 3. In the configuration pane on the right, click the Resource Config tab. The Resource Config page appears (see Figure 14-30). Figure 14-30. Resource Config Page 4.
14–Fibre Channel Over Ethernet N_Port ID Virtualization (NPIV) 5. (optional) To enable or disable FCoE-Offload or iSCSI-Offload in single function or NPAR mode on Windows or Linux using QCS CLI, see the User’s Guide, QLogic Control Suite CLI (part number BC0054511-00). To enable or disable FCoE-Offload or iSCSI-Offload in single function or NPAR mode on Windows or Linux using the QCC PowerKit, see the User’s Guide, PowerShell (part number BC0054518-00).
15 Data Center Bridging This chapter provides the following information about the data center bridging feature: Overview “DCB Capabilities” on page 242 “Configuring DCB” on page 243 “DCB Conditions” on page 243 “Data Center Bridging in Windows Server 2012 and Later” on page 244 Overview Data center bridging (DCB) is a collection of IEEE specified standard extensions to Ethernet to provide lossless data delivery, low latency, and standards-based bandwidth sharing of data center physical
15–Data Center Bridging DCB Capabilities DCB Capabilities DCB capabilities include ETS, PFC, and DCBX, as described in this section. Enhanced Transmission Selection (ETS) Enhanced transmission selection (ETS) provides a common management framework for assignment of bandwidth to traffic classes. Each traffic class or priority can be grouped in a priority group (PG), and it can be considered as a virtual link or virtual interface queue.
15–Data Center Bridging Configuring DCB Data Center Bridging Exchange (DCBX) Data center bridging exchange (DCBX) is a discovery and capability exchange protocol that is used for conveying capabilities and configuration of ETS and PFC between link partners to ensure consistent configuration across the network fabric. In order for two devices to exchange information, one device must be willing to adopt network configuration from the other device.
15–Data Center Bridging Data Center Bridging in Windows Server 2012 and Later In NIC partitioned enabled configurations, ETS (if operational) overrides the Bandwidth Relative (minimum) Weights assigned to each function. Transmission selection weights are per protocol per ETS settings instead. Maximum bandwidths per function are still honored in the presence of ETS.
15–Data Center Bridging Data Center Bridging in Windows Server 2012 and Later To revert to standard QCS CLI or QCC GUI control over the Marvell DCB feature set, uninstall the Microsoft QoS feature or disable quality of service in the QCS CLI, QCC GUI, or Device Manager NDIS advanced properties page. NOTE Marvell recommends that you do not install the DCB feature if SR-IOV will be used.
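If you do choose to let the Windows QoS service manage DCB (and the preceding SR-IOV note does not apply to your configuration), the feature and a typical FCoE priority and ETS traffic class can be configured from PowerShell. The priority value, bandwidth percentage, and adapter name below are examples only:
Install-WindowsFeature -Name Data-Center-Bridging
New-NetQosPolicy "FCoE" -FCOE -PriorityValue8021Action 3
Enable-NetQosFlowControl -Priority 3
New-NetQosTrafficClass "FCoE" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Enable-NetAdapterQos -Name "SLOT 2 Port 1"      # example adapter name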
16 SR-IOV This chapter provides information about single-root I/O virtualization (SR-IOV): Overview Enabling SR-IOV “Verifying that SR-IOV is Operational” on page 250 “SR-IOV and Storage Functionality” on page 250 “SR-IOV and Jumbo Packets” on page 251 NOTE See the VMware documentation for enabling SR-IOV on a pNIC at the hypervisor/driver level.
16–SR-IOV Enabling SR-IOV Enabling SR-IOV Before attempting to enable SR-IOV, ensure that: The adapter hardware supports SR-IOV. SR-IOV is supported and enabled in the system BIOS. Configure NPAR mode (if using). To enable SR-IOV: 1. Enable the feature on the adapter using either QCC GUI, QCS CLI, QCC PowerKit, Dell pre-boot UEFI, or pre-boot CCM. If using Windows QCC GUI: a. Select the network adapter in the Explorer View pane. Click the Configuration tab and select SR-IOV Global Enable. b.
16–SR-IOV Enabling SR-IOV f. If in SR-IOV mode (without NPAR mode), select the desired number of VFs for this port in the Number of VFs Per PF control window. The 2x1G+2x10G 57800 allows up to 64 VFs per 10G port (the 57800's two 1G ports do not support SR-IOV). The 2x10G 57810 allows up to 64 VFs per port. The 4x10G 57840 allows up to 32 VFs per port. g. If in SR-IOV mode (with NPAR mode), each partition has a separate Number of VFs Per PF control window.
16–SR-IOV Enabling SR-IOV 3. In Virtual Switch Manager, create a virtual NIC using the appropriate procedure for either Windows or ESX. In Windows: a. Select Allow Management operating system to share the network adapter if the host will use this vSwitch to connect to the associated VMs. b. Create a vSwitch and select the Enable Single root I/O Virtualization option. c. In Virtual Switch Manager, select the virtual adapter and select Hardware Acceleration in the navigation pane.
16–SR-IOV Verifying that SR-IOV is Operational Verifying that SR-IOV is Operational Follow the appropriate steps for Hyper-V, VMware vSphere, or ESXi CLI. To verify SR-IOV in Hyper-V Manager: 1. Start the VM. 2. In Hyper-V Manager, select the adapter and select the VM in the Virtual Machines list. 3. Click the Networking tab at the bottom of the window and view the adapter status. To verify SR-IOV in VMware vSphere 6.0 U2 Web Client: 1.
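Where the procedure above refers to the ESXi CLI, a quick way to confirm that the host sees the adapter's virtual functions is from the ESXi shell. This is a minimal sketch; the uplink name (vmnic4) is an assumption, so substitute the uplink that corresponds to your 57xxx port.

    # List the SR-IOV capable NICs and the number of VFs enabled on each
    esxcli network sriovnic list
    # List the individual VFs on one uplink, including whether each VF is active
    esxcli network sriovnic vf list -n vmnic4

If the port does not appear in the output of the first command, SR-IOV was not successfully enabled at the adapter or driver level.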
16–SR-IOV SR-IOV and Jumbo Packets This limitation applies only when the adapter is configured in NPAR mode. It is not relevant when the adapter is configured in single-function (SF) mode. In ESX, after enabling SR-IOV in the OS for SF mode, the storage adapter will not be discovered. SR-IOV and Jumbo Packets If SR-IOV is enabled on a virtual function (VF) on the adapter, ensure that the same jumbo packet setting is configured on both the VF and the Microsoft synthetic adapter.
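In a Linux guest, the VF's MTU can be checked and set from the shell, as in the following illustrative sketch; the interface name (eth1) and the 9000-byte value are assumptions, so match them to your VF and to the jumbo packet value configured on the Hyper-V synthetic adapter.

    # Show the current MTU of the VF interface inside the guest
    ip link show dev eth1
    # Set the VF MTU to match the jumbo packet setting used on the synthetic adapter
    ip link set dev eth1 mtu 9000

The same value must be configured end to end (VF, synthetic adapter, virtual switch, and physical switch ports) for jumbo frames to pass without drops.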
17 Specifications Specifications, characteristics, and requirements include: 10/100/1000BASE-T and 10GBASE-T Cable Specifications “Interface Specifications” on page 255 “NIC Physical Characteristics” on page 256 “NIC Power Requirements” on page 256 “Wake on LAN Power Requirements” on page 257 “Environmental Specifications” on page 258 10/100/1000BASE-T and 10GBASE-T Cable Specifications Table 17-1.
17–Specifications 10/100/1000BASE-T and 10GBASE-T Cable Specifications
Table 17-2. 10GBASE-T Cable Specifications
Port Type      Connector   Media          Maximum Distance
10GBASE-T a    RJ45        CAT-6 a UTP    131 ft (40 m)
10GBASE-T a    RJ45        CAT-6A a UTP   328 ft (100 m)
a. 10GBASE-T signaling requires four twisted pairs of CAT-6 or CAT-6A (augmented CAT-6) balanced cabling, as specified in ISO/IEC 11801:2002 and ANSI/TIA/EIA-568-B.
Supported SFP+ Modules Per NIC
Table 17-3.
17–Specifications 10/100/1000BASE-T and 10GBASE-T Cable Specifications Table 17-4. 57810 Supported Modules Module Type Optic Modules (SR) Direct Attach Cables Dell Part Number Module Vendor Module Part Number W365M Avago AFBR-703SDZ-D1 N743D Finisar Corp. FTLX8571D3BCL R8H2F Intel Corp. AFBR-703SDZ-IN2 R8H2F Intel Corp. FTLX8571D3BCV-IT K585N Cisco-Molex Inc. 74752-9093 J564N Cisco-Molex Inc. 74752-9094 H603N Cisco-Molex Inc. 74752-9096 G840N Cisco-Molex Inc.
17–Specifications Interface Specifications Table 17-5. 57840 Supported Modules Module Type Optic Modules (SR) Direct Attach Cables Dell Part Number Module Vendor R8H2F Module Part Number Intel Corp. AFBR-703SDZ-IN2 Intel Corp. FTLX8571D3BCV-IT K585N Cisco-Molex Inc. 74752-9093 J564N Cisco-Molex Inc. 74752-9094 H603N Cisco-Molex Inc. 74752-9096 G840N Cisco-Molex Inc.
17–Specifications NIC Physical Characteristics
NIC Physical Characteristics
Table 17-8. NIC Physical Characteristics
NIC Type                             NIC Length         NIC Width
57810S PCI Express x8 low profile    6.6 in (16.8 cm)   2.54 in (6.5 cm)
NIC Power Requirements
Table 17-9. 957810A1006G NIC Power Requirements
Link               NIC 12V Current Draw (A)   NIC 3.3V Current Draw (A)   NIC Power (W) a
10G SFP Module a   1.00                       0.004                       12.0
a. Power, measured in watts (W), is a direct calculation of total current draw (A) multiplied by voltage (V).
17–Specifications Wake on LAN Power Requirements
Table 17-11. 957840A4006G Mezzanine Card Power Requirements
Link a                 Total Power (12V and 3.3VAUX) (W) a
10G SFP+               12.0
Standby WoL Enabled    5.0
Standby WoL Disabled   0.5
a. Power, measured in watts (W), is a direct calculation of total current draw (A) multiplied by voltage (V). The maximum power consumption for the adapter will not exceed 25W.
Table 17-12. 957840A4007G Mezzanine Card Power Requirements
Link a                 Total Power (3.
17–Specifications Environmental Specifications
Environmental Specifications
Table 17-13. 5709 and 5716 Environmental Specifications
Parameter                                      Condition
Operating Temperature                          32°F to 131°F (0°C to 55°C)
Air Flow Requirement (LFM)                     0
Storage Temperature                            –40°F to 149°F (–40°C to 65°C)
Storage Humidity                               5% to 95% condensing
Vibration and Shock                            IEC 68, FCC Part 68.302, NSTA, 1A
Electrostatic/Electromagnetic Susceptibility   EN 61000-4-2, EN 55024
Table 17-14.
17–Specifications Environmental Specifications
Table 17-16. 957840A4007G Environmental Specifications
Parameter                                      Condition
Operating Temperature                          32°F to 131°F (0°C to 65°C)
Air Flow Requirement (LFM)                     200
Storage Temperature                            –40°F to 149°F (–40°C to 65°C)
Storage Humidity                               5% to 95% condensing
Vibration and Shock                            IEC 68, FCC Part 68.302, NSTA, 1A
Electrostatic/Electromagnetic Susceptibility   IEC 801-2, 3, 4, 5
18 Regulatory Information Regulatory information covered in this chapter includes the following: Product Safety AS/NZS (C-Tick) “FCC Notice” on page 261 “VCCI Notice” on page 263 “CE Notice” on page 268 “Canadian Regulatory Information (Canada Only)” on page 269 “Korea Communications Commission (KCC) Notice (Republic of Korea Only)” on page 271 “BSMI” on page 274 “Certifications for 95709SA0908G, 957710A1023G (E02D001), and 957711A1123G (E03D001)” on page 274 Product Safety
18–Regulatory Information FCC Notice FCC Notice FCC, Class B Marvell 57xx and 57xxx gigabit Ethernet controller 95708A0804F 95709A0907G 95709A0906G 957810A1008G Marvell Semiconductor, Inc. 15485 San Canyon Ave Irvine, CA 92618 USA The equipment complies with Part 15 of the FCC Rules.
18–Regulatory Information FCC Notice FCC, Class A Marvell 57xx and 57xxx gigabit Ethernet controller: 95709A0916G Marvell 57xx and 57xxx 10-gigabit Ethernet controller: 957800 957710A1022G 957710A1021G 957711A1113G 957711A1102G 957810A1006G (BC0410401) 957840A4006G 957840A4007G Marvell Semiconductor, Inc. 15485 San Canyon Ave Irvine, CA 92618 USA This device complies with Part 15 of the FCC Rules.
18–Regulatory Information VCCI Notice Do not make mechanical or electrical modifications to the equipment. NOTE If the device is changed or modified without permission of Marvell, the user may void his or her authority to operate the equipment. VCCI Notice The following tables provide the VCCI notice physical specifications for the Marvell 57xx and 57xxx adapters for Dell. Table 18-1.
18–Regulatory Information VCCI Notice Table 18-2. Marvell 57800S Quad RJ-45, SFP+, or Direct Attach Rack Network Daughter Card Physical Characteristics (Continued) Item Connectors Description Two ports SFP+ (10GbE) Two ports RJ45 (1GbE) Certifications RoHS, FCC A, UL, CE, VCCI, BSMI, C-Tick, KCC, TUV, and ICES-003 Table 18-3. Marvell 57810S Dual 10GBASE-T PCI-e Card Physical Characteristics Item Description Ports Dual 10Gbps BASE-T Ethernet ports Form Factor PCI Express short, low-profile card 6.
18–Regulatory Information VCCI Notice Table 18-4. Marvell 57810S Dual SFP+ or Direct Attach PCIe Physical Characteristics (Continued) Item Supported Servers Description 13th Generation: R630, R730, R730xd, and T630 12th Generation: R220, R320, R420, R520, R620, R720, R720xd, R820, R920, T420, and T620 Certifications RoHS, FCC A, UL, CE, VCCI, BSMI, C-Tick, KCC, TUV, and ICES-003 Table 18-5.
18–Regulatory Information VCCI Notice Table 18-7. Marvell 57840S Quad 10GbE SFP+ or Direct Attach Rack Network Daughter Card Physical Characteristics Item Description Ports Dual 10Gbps Ethernet Form Factor PCI Express short, low-profile card 6.60in×2.71in (67.64mm×68.
18–Regulatory Information VCCI Notice The equipment is a Class B product based on the standard of the Voluntary Control Council for Interference from Information Technology Equipment (VCCI). If used near a radio or television receiver in a domestic environment, it may cause radio interference. Install and use the equipment according to the instruction manual.
18–Regulatory Information CE Notice VCCI Class A Statement (Japan) CE Notice Marvell 57xx and 57xxx gigabit Ethernet controller 95708A0804F 95709A0907G 95709A0906G 95709A0916G 957810A1008G Marvell 57xx and 57xxx 10-gigabit Ethernet controller 957710A1022G 957710A1021G 957711A1113G 957711A1102G 957840A4006G 957840A4007G This product has been determined to be in compliance with 2006/95/EC (Low Voltage Directive), 2004/108/EC (EMC Directive), and amendments of the European Union.
18–Regulatory Information Canadian Regulatory Information (Canada Only) Canadian Regulatory Information (Canada Only) Industry Canada, Class B Marvell 57xx and 57xxx gigabit Ethernet controller 95708A0804F 95709A0907G 95709A0906G Marvell Semiconductor, Inc. 15485 San Canyon Ave Irvine, CA 92618 USA This Class B digital apparatus complies with Canadian ICES-003.
18–Regulatory Information Canadian Regulatory Information (Canada Only) Industry Canada, classe B Marvell 57xx and 57xxx gigabit Ethernet controller 95708A0804F 95709A0907G 95709A0906G Marvell Semiconductor, Inc. 15485 San Canyon Ave Irvine, CA 92618 USA Cet appareil numérique de la classe B est conforme à la norme canadienne ICES-003.
18–Regulatory Information Korea Communications Commission (KCC) Notice (Republic of Korea Only) B Class Device Marvell 57xx and 57xxx gigabit Ethernet controller 95708A0804F 95709A0907G 95709A0906G Marvell Semiconductor, Inc. 15485 San Canyon Ave Irvine, CA 92618 USA
18–Regulatory Information Korea Communications Commission (KCC) Notice (Republic of Korea Only) Note that this device has been approved for non-business purposes and may be used in any environment, including residential areas. A Class Device Marvell 57xx and 57xxx gigabit Ethernet controller 95709A0916G Marvell 57xx and 57xxx 10-gigabit Ethernet controller 957710A1022G 957710A1021G 957711A1113G 957711A1102G 957810A1008G 957840A4006G 957840A4007G
18–Regulatory Information Korea Communications Commission (KCC) Notice (Republic of Korea Only) Marvell Semiconductor, Inc. 15485 San Canyon Ave Irvine, CA 92618 USA
18–Regulatory Information BSMI BSMI Certifications for 95709SA0908G, 957710A1023G (E02D001), and 957711A1123G (E03D001) This section is included on behalf of Dell, and Marvell is not responsible for the validity or accuracy of the information.
18–Regulatory Information Certifications for 95709SA0908G, 957710A1023G (E02D001), and 957711A1123G (E03D001) FCC Notice FCC, Class A Marvell 57xx and 57xxx gigabit Ethernet controller 95709SA0908G Marvell 57xx and 57xxx 10-gigabit Ethernet controller 957710A1023G 957711A1123G (E03D001) E02D001 Dell Inc. Worldwide Regulatory Compliance, Engineering and Environmental Affairs One Dell Way PS4-30 Round Rock, Texas 78682, USA 512-338-4400 This device complies with Part 15 of the FCC Rules.
18–Regulatory Information Certifications for 95709SA0908G, 957710A1023G (E02D001), and 957711A1123G (E03D001) Do not make mechanical or electrical modifications to the equipment. NOTE If the device is changed or modified without permission of Dell Inc, the user may void his or her authority to operate the equipment.
18–Regulatory Information Certifications for 95709SA0908G, 957710A1023G (E02D001), and 957711A1123G (E03D001) 957711A1123G (E03D001) E02D001 Dell Inc. Worldwide Regulatory Compliance, Engineering and Environmental Affairs One Dell Way PS4-30 Round Rock, Texas 78682, USA 512-338-4400 This product has been determined to be in compliance with 2006/95/EC (Low Voltage Directive), 2004/108/EC (EMC Directive), and amendments of the European Union.
18–Regulatory Information Certifications for 95709SA0908G, 957710A1023G (E02D001), and 957711A1123G (E03D001) Industry Canada, classe A Marvell 57xx and 57xxx gigabit Ethernet Controller 95709SA0908G Marvell 57xx and 57xxx 10-gigabit Ethernet Controller 957710A1023G 957711A1123G (E03D001) E02D001 Dell Inc.
18–Regulatory Information Certifications for 95709SA0908G, 957710A1023G (E02D001), and 957711A1123G (E03D001) Dell Inc. Worldwide Regulatory Compliance, Engineering and Environmental Affairs One Dell Way PS4-30 Round Rock, Texas 78682, USA 512-338-4400
19 Troubleshooting Troubleshooting topics cover the following: Hardware Diagnostics “Checking Port LEDs” on page 282 “Troubleshooting Checklist” on page 282 “Checking if Current Drivers Are Loaded” on page 283 “Running a Cable Length Test” on page 284 “Testing Network Connectivity” on page 284 “Microsoft Virtualization with Hyper-V” on page 285 “Removing the Marvell 57xx and 57xxx Device Drivers” on page 288 “Upgrading Windows Operating Systems” on page 289 “Marvell B
19–Troubleshooting Hardware Diagnostics QCS CLI and QCC GUI Diagnostic Tests Failures If any of the following tests fail while running the diagnostic tests from QCS CLI or QCC GUI, this may indicate a hardware issue with the NIC or LOM that is installed in the system. Control Registers MII Registers EEPROM Internal Memory On-Chip CPU Interrupt Loopback - MAC Loopback - PHY Test LED Troubleshooting steps that may help correct the failure: 1.
19–Troubleshooting Checking Port LEDs Checking Port LEDs To check the state of the network link and activity, see “Network Link and Activity Indication” on page 7. Troubleshooting Checklist CAUTION Before you open the cabinet of your server to add or remove the adapter, review “Safety Precautions” on page 19. The following checklist provides recommended actions to take to resolve problems installing the Marvell 57xx and 57xxx adapter or running it in your system. Inspect all cables and connections.
19–Troubleshooting Checking if Current Drivers Are Loaded Checking if Current Drivers Are Loaded Follow the appropriate procedure for your operating system to confirm if the current drivers are loaded. Windows See the QCC GUI online help for information on viewing vital information about the adapter, link status, and network connectivity. Linux To verify that the bnx2.
19–Troubleshooting Running a Cable Length Test
Following is a sample output:
driver: bnx2x
version: 1.78.07
firmware-version: bc 7.8.6
bus-info: 0000:04:00.2
If you loaded a new driver but have not yet booted, the modinfo command does not show the updated driver information.
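If the running driver needs to be confirmed from the shell, the following commands are a typical sequence; the interface name eth0 is a placeholder, so substitute the interface that belongs to the 57xxx adapter.

    # Confirm that the bnx2/bnx2x module is loaded
    lsmod | grep -i bnx2
    # Show the driver name, version, and firmware version bound to a specific interface
    ethtool -i eth0
    # Review recent kernel messages logged by the driver
    dmesg | grep -i bnx2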
19–Troubleshooting Microsoft Virtualization with Hyper-V Linux To verify that the Ethernet interface is up and running, issue ifconfig to check the status of the Ethernet interface. You can also use netstat -i to check the statistics on the Ethernet interface. For information on ifconfig and netstat, see Chapter 7 Linux Driver Software. Ping an IP host on the network to verify that a connection has been established. From the command line, issue the ping command, and then press ENTER.
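The following is an illustrative sequence of commands for this check; the interface name (eth0) and the target address (192.168.1.1) are placeholders, so substitute the interface assigned to the 57xxx port and a reachable host on your network.

    # Confirm the interface is up and has an address
    ifconfig eth0
    # Review per-interface packet and error counters
    netstat -i
    # Send four echo requests to a known host to confirm end-to-end connectivity
    ping -c 4 192.168.1.1

A successful ping confirms both the link and the IP configuration; if it fails, recheck the link LEDs, the IP settings, and any intervening switch configuration.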
19–Troubleshooting Microsoft Virtualization with Hyper-V Table 19-1. Configurable Network Adapter Hyper-V Features (Continued) Feature Supported in Windows Server Version 2012 and Later Comments and Limitations Jumbo frames Yes * OS limitation. RSS Yes * OS limitation. RSC Yes * OS limitation. SR-IOV Yes * OS limitation. NOTE For full functionality, ensure that Integrated Services, which is a component of Hyper-V, is installed in the guest operating system (child partition).
19–Troubleshooting Microsoft Virtualization with Hyper-V Teamed Network Adapters Table 19-2 identifies Hyper-V supported features that are configurable for 57xx and 57xxx teamed network adapters. This table is not an all-inclusive list of Hyper-V features. Table 19-2. Configurable Teamed Network Adapter Hyper-V Features Feature Smart Load Balancing and Failover (SLB) team type Supported in Windows Server Version 2012 Yes Comments and Limitations Multimember SLB team allowed with latest QLASP6 version.
19–Troubleshooting Removing the Marvell 57xx and 57xxx Device Drivers Table 19-2. Configurable Teamed Network Adapter Hyper-V Features (Continued) Feature RSC Supported in Windows Server Version 2012 Yes Comments and Limitations — Configuring VMQ with SLB Teaming When a Hyper-V server is installed on a system configured to use Smart Load Balance and Failover (SLB) type teaming, you can enable virtual machine queue (VMQ) to improve overall network performance.
19–Troubleshooting Upgrading Windows Operating Systems If you manually uninstalled the device drivers with Device Manager and attempted to reinstall the device drivers but could not, run the Repair option from the InstallShield wizard. For information on repairing Marvell 57xx and 57xxx device drivers, see “Repairing or Reinstalling the Driver Software” on page 94. Upgrading Windows Operating Systems This section covers Windows upgrades from Windows Server 2008 R2 to Windows Server 2012.
19–Troubleshooting Linux Problem: Routing does not work for 57xx and 57xxx 10GbE network adapters installed in Linux systems. Solution: For 57xx and 57xxx 10GbE network adapters installed in systems with Linux kernels older than 2.6.26, disable TPA with either ethtool (if available) or the driver parameter (see “disable_tpa” on page 45). Use ethtool to disable TPA (LRO) for a specific 57xx and 57xxx 10GbE network adapter.
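As a sketch of the ethtool approach, the commands below assume the adapter's interface is named eth0; adjust the name for your system, and note that on very old kernels the lro offload toggle may not be exposed, in which case the driver parameter method is required.

    # Check whether large-receive-offload (TPA) is currently enabled
    ethtool -k eth0 | grep large-receive-offload
    # Disable LRO/TPA on this interface only
    ethtool -K eth0 lro off

This setting applies to the running interface and does not persist across a reboot; for a persistent setting, use the disable_tpa driver parameter described below.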
19–Troubleshooting NPAR There is a gap between when the pre-switch-root iscsistart establishes the connection and when iscsid takes over the iSCSI connection. During this time, the OS boot process has no way to recover the iSCSI connection. In some cases, the bnx2x NIC interface's link flaps during this gap, the iSCSI connection is interrupted, and the iSCSI connection recovery or retries fail. Solution: To avoid the bnx2x NIC interface's link flap, load the bnx2x driver with the module parameter disable_tpa=1.
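One common way to make this parameter persistent is a modprobe configuration file. This is a sketch under the assumption that your distribution reads /etc/modprobe.d/ and uses dracut to rebuild the initial ramdisk; for an iSCSI-boot configuration the option must also be present in the initramfs, so consult your distribution's documentation if it uses a different mechanism.

    # Persist the parameter so bnx2x always loads with TPA disabled
    echo "options bnx2x disable_tpa=1" > /etc/modprobe.d/bnx2x.conf
    # Rebuild the initramfs so the option is applied during early (iSCSI) boot
    dracut -f

After the next reboot, confirm that the option was picked up by checking the modprobe configuration and the driver's boot messages.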
19–Troubleshooting Miscellaneous Problem: iSCSI Crash Dump is not working in Windows. Solution: After upgrading the device drivers using the installer, the iSCSI crash dump driver is also upgraded, and iSCSI Crash Dump must be re-enabled from the Advanced section of the QCS Configurations page. Problem: The Marvell 57xx and 57xxx adapter may not perform at an optimal level on some systems if it is added after the system has booted.
A Revision History
Document Revision History
Revision A, February 18, 2015
Revision B, July 29, 2015
Revision C, March 24, 2016
Revision D, April 8, 2016
Revision E, February 2, 2017
Revision F, August 25, 2017
Revision G, December 19, 2017
Revision H, March 15, 2018
Revision J, April 13, 2018
Revision K, October 25, 2018
Revision L, June 7, 2019
Revision M, October 16, 2019
Revision N, April 3, 2020
Revision P, July 7, 2020
Revision R, January 21, 2020
Changes
Added support for the following OSs: RHEL 7.
User’s Guide–Ethernet iSCSI Adapters and Ethernet FCoE Adapters Marvell 5740/57810/57800 Adapters and other 57xx and 57xxx Adapters Revision History
Removed “Downloading Documents” section. (Preface)
Added bullet for VMDirectPath I/O. (“Features” on page 2)
Revised the second paragraph to indicate that the iface file information is for all SLES versions. (“Bind iSCSI Target to Marvell iSCSI Transport Name” on page 59)
Removed sub-section for bnx2x.
Marvell first revolutionized the digital storage industry by moving information at speeds never thought possible. Today, that same breakthrough innovation remains at the heart of the company's storage, networking and connectivity solutions. With leading intellectual property and deep system-level knowledge, Marvell semiconductor solutions continue to transform the enterprise, cloud, automotive, industrial, and consumer markets. For more information, visit www.marvell.com. © 2021 Marvell.