User’s Guide
Ethernet iSCSI Adapters and Ethernet FCoE Adapters
QLogic BCM57xx and BCM57xxx
BC0054508-00 J
Third-party information brought to you courtesy of Dell.
Document Revision History
Revision A, February 18, 2015
Revision B, July 29, 2015
Revision C, March 24, 2016
Revision D, April 8, 2016
Revision E, February 2, 2017
Revision F, August 25, 2017
Revision G, December 19, 2017
Revision H, March 15, 2018
Revision J, April 13, 2018
Changes in this revision include a changed Step 1 and added support for Windows 2016.
Table of Contents (abridged)
Preface: Intended Audience; What Is in This Guide; Related Materials; Documentation Conventions; Downloading Documents; Laser Safety Information
1 Functionality and Features
2 Configuring Teaming in Windows Server
3 Virtual LANs in Windows: VLAN Overview; Adding VLANs to Teams
4 Installing the Hardware: System Requirements; Hardware Requirements
5 Manageability
6 Boot Agent Driver Software
7 Linux Driver Software: Installing Linux Driver Software (Installing the Source RPM Package; Installing the KMP Package; Building the Driver from the Source TAR File; Installing the Binary DKMS RPM Driver Package); Driver Messages (bnx2x Driver Messages; Driver Sign On; C-NIC Driver Sign On (bnx2 only); NIC Detected); Teaming with Channel Bonding; Statistics; Linux iSCSI Offload
8 VMware Driver Software: Link Up and Speed Indication; Link Down Indication; Memory Limitation; Multiqueue and NetQueue; FCoE Support
9 Windows Driver Software
10 iSCSI Protocol: iSCSI Boot (Other iSCSI Boot Considerations; Changing the Speed and Duplex Settings in Windows Environments; Locally Administered Address; Virtual LANs); iSCSI Crash Dump; iSCSI Offload in Windows Server
11 QLogic Teaming Services: Types of Teams (Switch-Independent; Switch-Dependent; LiveLink); Frequently Asked Questions; Event Log Messages (Windows System Event Log Messages; Base Driver (Physical Adapter or Miniport); Intermediate Driver (Virtual Adapter or Team))
14 Fibre Channel over Ethernet: Overview; FCoE Boot from SAN (Preparing System BIOS for FCoE Build and Boot; Modifying System Boot Order)
16 SR-IOV: Overview; Enabling SR-IOV; Verifying that SR-IOV is Operational; SR-IOV and Storage Functionality
Regulatory and other sections: BSMI; Certifications for BCM95709SA0908G, BCM957710A1023G (E02D001), and BCM957711A1123G (E03D001); FCC Notice (FCC, Class A); QLogic Boot Agent; QLASP; Linux; NPAR
List of Figures (partial)
3-1 Example of Servers Supporting Multiple VLANs with Tagging
6-1 CCM MBA Configuration Menu
6-2 System Setup, Device Settings
6-3 Device Settings
14-10 FCoE Boot
14-11 Installing EVBD Driver
List of Tables (partial)
1-1 Network Link and Activity Indicated by the RJ45 Port LEDs
1-2 Network Link and Activity Indicated by the Port LED
17-14 BCM957810A1006G Environmental Specifications
17-15 BCM957810A1008G Environmental Specifications
17-16 BCM957840A4007G Environmental Specifications
Preface This section provides information about this guide’s intended audience, content, document conventions, and laser safety information. NOTE QLogic® now supports QConvergeConsole® (QCC) GUI as the only GUI management tool across all QLogic adapters. QLogic Control Suite (QCS) GUI is no longer supported for the QLogic adapters based on 57xx/57xxx controllers, and has been replaced by the QCC GUI management tool. The QCC GUI provides single-pane-of-glass GUI management for all QLogic adapters.
Related Materials
For additional information, refer to the Migration Guide: QLogic/Broadcom NetXtreme I/II Adapters, document number BC0054606-00. The migration guide presents an overview of QLogic’s acquisition of specific Broadcom® Ethernet assets and its end-user impact, and was written in cooperation between Broadcom and QLogic.
Documentation Conventions
This guide uses the following documentation conventions:
NOTE provides additional information.
CAUTION indicates the presence of a hazard that could cause damage to equipment or loss of data.
Key names and keystrokes are indicated in UPPERCASE: Press the CTRL+P keys. Press the UP ARROW key.
Text in italics indicates terms, emphasis, variables, or document titles. For example: For a complete listing of license agreements, refer to the QLogic Software End User License Agreement. What are shortcut keys? To enter the date, type mm/dd/yyyy (where mm is the month, dd is the day, and yyyy is the year).
Laser Safety Information
This product may use Class 1 laser optical transceivers to communicate over the fiber optic conductors. The U.S. Department of Health and Human Services (DHHS) does not consider Class 1 lasers to be hazardous. The International Electrotechnical Commission (IEC) 825 Laser Safety Standard requires labeling in English, German, Finnish, and French stating that the product uses Class 1 lasers.
1 Functionality and Features This chapter covers the following for the adapters: Functional Description “Features” on page 2 “Supported Operating Environments” on page 5 “Network Link and Activity Indication” on page 6 Functional Description The QLogic BCM57xx and BCM57xxx adapter is a new class of gigabit Ethernet (GbE) and 10GbE converged network interface controller (C-NIC) that can simultaneously perform accelerated data networking and storage networking on a standard Ethernet network.
Using the QLogic teaming software, you can split your network into virtual LANs (VLANs), as well as group multiple network adapters together into teams to provide network load balancing and fault tolerance functionality. For detailed information about teaming, see Chapter 2 Configuring Teaming in Windows Server and Chapter 11 QLogic Teaming Services. For a description of VLANs, see Chapter 3 Virtual LANs in Windows.
Wake on LAN (WoL) support
Universal management port (UMP) support
SMBus controller
Advanced configuration and power interface (ACPI) 1.1a compliant (multiple power modes) (see “Power Management” on page 5)
Intelligent platform management interface (IPMI) support
Advanced network features:
Jumbo frames (up to 9,600 bytes). The OS and the link partner must support jumbo frames.
Virtual LANs
IEEE Std 802.
Virtualization:
Microsoft®
VMware®
Linux®
XenServer®
Single root I/O virtualization (SR-IOV)
iSCSI
The Internet Engineering Task Force (IETF) has standardized iSCSI. SCSI is a popular protocol that enables systems to communicate with storage devices, using block-level transfer (that is, address data stored on a storage device that is not a whole file).
Power Management
When the system is powered down, the adapter links at the speed configured for WoL.
NOTE
Dell™ supports WoL on only one adapter in the system at a time. For specific systems, see your system documentation for WoL support.
Adaptive Interrupt Frequency
The adapter driver intelligently adjusts host interrupt frequency based on traffic conditions to increase overall application throughput.
Network Link and Activity Indication
For copper-wire Ethernet connections, the state of the network link and activity is indicated by the LEDs on the RJ45 connector, as described in Table 1-1.
Table 1-1.
2 Configuring Teaming in Windows Server Teaming configuration in a Microsoft Windows Server® system includes an overview of the QLogic Advanced Server Program (QLASP), load balancing, and fault tolerance. Windows Server 2016 and later do not support QLogic’s QLASP teaming driver. QLASP Overview “Load Balancing and Fault Tolerance” on page 8 NOTE This chapter describes teaming for adapters in Windows Server systems.
For more information on network adapter teaming concepts, see Chapter 11 QLogic Teaming Services.
NOTE
Windows Server 2012 and later provide built-in teaming support, called NIC Teaming. QLogic recommends that users do not enable teams through NIC Teaming and QLASP at the same time on the same adapters. Windows Server 2016 does not support QLogic’s QLASP teaming driver.
Smart Load Balancing and Failover
Smart Load Balancing and Failover is the Broadcom® implementation of load balancing based on IP flow. This feature supports balancing IP traffic across multiple adapters (team members) in a bidirectional manner. In this type of team, all adapters in the team have separate MAC addresses.
Generic Trunking (FEC/GEC)/802.3ad-Draft Static
The Generic Trunking (FEC/GEC)/802.3ad-Draft Static type of team is very similar to the Link Aggregation (802.3ad) type of team in that all adapters in the team are configured to receive packets for the same MAC address. The Generic Trunking (FEC/GEC)/802.3ad-Draft Static type of team, however, does not provide LACP or marker protocol support.
Limitations of Smart Load Balancing and Failover and SLB (Auto-Fallback Disable) Types of Teams
Smart Load Balancing (SLB) is a protocol-specific scheme. The level of support for IP is listed in Table 2-1.
Table 2-1.
Teaming and Large Send Offload and Checksum Offload Support
Large send offload (LSO) and checksum offload are enabled for a team only when all of the members support and are configured for the feature.
3 Virtual LANs in Windows This chapter provides information about VLANs in Windows for teaming. VLAN Overview “Adding VLANs to Teams” on page 16 VLAN Overview Virtual LANs (VLANs) allow you to split your physical LAN into logical parts, to create logical segmentation of work groups, and to enforce security policies for each logical segment.
Although VLANs are commonly used to create individual broadcast domains and separate IP subnets, it is sometimes useful for a server to have a simultaneous presence on more than one VLAN. QLogic adapters support multiple VLANs on a per-port or per-team basis, allowing very flexible network configurations.
Figure 3-1. Example of Servers Supporting Multiple VLANs with Tagging
Figure 3-1 shows an example network that uses VLANs. In this example network, the physical LAN consists of a switch, two servers, and five clients. The LAN is logically organized into three different VLANs, each representing a different IP subnet. Table 3-1 describes the features of this network.
Table 3-1. Example VLAN Network Topology
Component / Description
VLAN #1: An IP subnet consisting of the Main Server, PC #3, and PC #5.
NOTE
VLAN tagging is only required to be enabled on switch ports that create trunk links to other switches, or on ports connected to tag-capable end-stations, such as servers or workstations with QLogic adapters.
For Hyper-V®, create VLANs in the vSwitch-to-VM connection instead of in a team, to allow VM live migrations to occur without having to ensure the future host system has a matching team VLAN setup.
4 Installing the Hardware
This chapter applies to QLogic BCM57xx and BCM57xxx add-in network interface cards. Hardware installation covers the following:
System Requirements
“Safety Precautions” on page 19
“Preinstallation Checklist” on page 19
“Installation of the Add-In NIC” on page 20
NOTE
Service Personnel: This product is intended only for installation in a Restricted Access Location (RAL).
Operating System Requirements
NOTE
Because the Dell Update Packages Version xx.xx.xxx User’s Guide is not updated in the same cycle as this Ethernet adapter user’s guide, consider the operating systems listed in this section as the most current.
This section describes the requirements for each supported OS.
General
The following host interface is required:
PCI Express v1.
VMware ESXi
One of the following versions of vSphere® ESXi:
VMware ESXi 6.7
VMware ESXi 6.5
VMware ESXi 6.5 U2
VMware ESXi 6.5 U1
VMware ESXi 6.0 U3
VMware ESXi 6.0 U2
Citrix XenServer
The following version of XenServer:
Citrix XenServer 6.5
Safety Precautions
WARNING! The adapter is being installed in a system that operates with voltages that can be lethal.
3. If your system is active, shut it down.
4. When system shutdown is complete, turn off the power and unplug the power cord.
5. Remove the adapter from its shipping package and place it on an antistatic surface.
6. Check the adapter for visible signs of damage, especially on the edge connector. Never attempt to install a damaged adapter.
Connecting the Network Cables
The QLogic BCM57xx and BCM57xxx adapters have either an RJ45 connector used for attaching the system to an Ethernet copper-wire segment or a fiber optic connector for attaching the system to an Ethernet fiber optic segment.
NOTE
This section does not apply to blade servers.
Copper Wire
To connect a copper wire:
1. Select an appropriate cable.
Fiber Optic
To connect a fiber optic cable:
1. Select an appropriate cable. Table 4-2 lists the fiber optic cable requirements for connecting to 1000 and 2500BASE-X ports. See also the tables in “Supported SFP+ Modules Per NIC” on page 251.
Table 4-2.
5 Manageability Information about manageability includes: CIM “Host Bus Adapter API” on page 24 CIM The common information model (CIM) is an industry standard defined by the Distributed Management Task Force (DMTF). Microsoft implements CIM on Windows Server platforms. QLogic supports CIM on Windows Server and Linux platforms. NOTE For information on installing a CIM provider on Linux-based systems, see Chapter 13 Linux QCS Installation.
QLASP provides events through event logs. To inspect or monitor these events, use either the Event Viewer provided by Windows Server platforms or the CIM. The QLogic CIM provider also provides event information through the CIM generic event model. These events are __InstanceCreationEvent, __InstanceDeletionEvent, and __InstanceModificationEvent, and are defined by CIM.
6 Boot Agent Driver Software
This chapter covers how to set up MBA in both client and server environments:
Overview
“Setting Up MBA in a Client Environment” on page 26
“Setting Up MBA in a Linux Server Environment” on page 32
Overview
QLogic BCM57xx and BCM57xxx adapters support preboot execution environment (PXE), remote program load (RPL), iSCSI, and bootstrap protocol (BOOTP).
Setting Up MBA in a Client Environment
Setting up MBA in a client environment involves the following steps:
1. Configuring the MBA Driver.
2. Setting Up the BIOS for the boot order.
Configuring the MBA Driver
This section pertains to configuring the MBA driver on add-in NIC models of the QLogic network adapter. For configuring the MBA driver on LOM models of the QLogic network adapter, check your system documentation.
Using Comprehensive Configuration Management
To use CCM to configure the MBA driver:
1. Restart the system.
2. Press the CTRL+S keys within four seconds after you are prompted to do so. A list of adapters appears.
a. Select the adapter to configure, and then press the ENTER key. The Main Menu appears.
b. Select MBA Configuration to view the MBA Configuration Menu, as shown in Figure 6-1.
Figure 6-1. CCM MBA Configuration Menu
3. To access the Boot Protocol item, press the UP ARROW and DOWN ARROW keys. If other boot protocols besides Preboot Execution Environment (PXE) are available, press RIGHT ARROW or LEFT ARROW to select the boot protocol of choice: FCoE or iSCSI.
NOTE
For iSCSI and FCoE boot-capable LOMs, set the boot protocol through the BIOS. See your system documentation for more information.
3. Select the device on which you want to change MBA settings (see Figure 6-3).
Figure 6-3. Device Settings
4. On the Main Configuration Page, select NIC Configuration (see Figure 6-4).
Figure 6-4.
5. In the NIC Configuration page (see Figure 6-5), use the Legacy Boot Protocol drop-down menu to select the boot protocol of choice, if boot protocols other than Preboot Execution Environment (PXE) are available. If available, other boot protocols include iSCSI and FCoE. The BCM57xxx’s 1GbE ports support only PXE and iSCSI remote boot.
Figure 6-5.
Setting Up the BIOS
To boot from the network with the MBA, make the MBA enabled adapter the first bootable device under the BIOS. This procedure depends on the system BIOS implementation. Refer to the user manual for the system for instructions.
Setting Up MBA in a Linux Server Environment
The Red Hat Enterprise Linux distribution has PXE Server support.
7 Linux Driver Software Information about the Linux driver software includes: Introduction “Limitations” on page 34 “Packaging” on page 35 “Installing Linux Driver Software” on page 36 “Unloading or Removing the Linux Driver” on page 41 “Patching PCI Files (Optional)” on page 43 “Network Installations” on page 43 “Setting Values for Optional Properties” on page 44 “Driver Defaults” on page 48 “Driver Messages” on page 50 “Teaming with Channel Bonding” on page 55
Table 7-1. QLogic BCM57xx and BCM57xxx Linux Drivers (Continued)
Linux Driver / Description
cnic: The C-NIC driver provides the interface between QLogic’s upper-layer protocol (for example, storage) drivers and QLogic’s BCM57xx and BCM57xxx 1Gb and 10Gb network adapters. The C-NIC module works with the bnx2 and bnx2x network drivers in the downstream and the bnx2fc (FCoE) and bnx2i (iSCSI) drivers in the upstream.
bnx2fc Driver Limitations
The current version of the driver has been tested on 2.6.x kernels, starting from the 2.6.32 kernel, which is included in the RHEL 6.1 distribution. The bnx2fc driver may not compile on older kernels. Testing was limited to i386 and x86_64 architectures, RHEL 6.1, RHEL 7.0, and SLES 11 SP1 and later distributions.
Installing Linux Driver Software
Procedures for installing the Linux driver software include:
Installing the Source RPM Package
Building the Driver from the Source TAR File
Installing the Binary DKMS RPM Driver Package
Installing the Binary KMOD and KMP Driver Package
NOTE
If a bnx2x, bnx2i, or bnx2fc driver is loaded and the Linux kernel is updated, the driver module must be recompiled if the driver module was installed using the source RPM or the TAR package.
4. For FCoE offload, install the Open-FCoE utility.
For RHEL 6.4 and legacy versions, issue one of the following commands:
yum install fcoe-utils-<version>.rhel.64.brcm.<version>.<arch>.rpm
rpm -ivh fcoe-utils-<version>.rhel.64.brcm.<version>.<arch>.rpm
For versions beyond RHEL 6.4, the version of fcoe-utils or open-fcoe included in your distribution is sufficient, and no out-of-box upgrades are provided.
11. For FCoE offload and iSCSI-offload-TLV, disable lldpad on QLogic Converged Network Adapter interfaces. This step is required because QLogic utilizes an offloaded DCBX client.
lldptool set-lldp -i <ethX> adminStatus=disabled
12. For FCoE offload and iSCSI-offload-TLV, make sure /var/lib/lldpad/lldpad.conf is created and each block does not specify adminStatus, or if specified, it is set to 0 (adminStatus=0) as follows.
To install the KMP package:
1. Install the KMP package:
rpm -ivh <KMP package file>
rmmod bnx2x
2. Load the driver as follows:
modprobe bnx2x
Building the Driver from the Source TAR File
NOTE
The examples used in this procedure refer to the bnx2x driver, but also apply to the bnx2i and bnx2fc drivers.
To build the driver from the TAR file:
1. Create a directory and extract the TAR files to the directory:
tar xvzf netxtreme2-version.tar.gz
2. Build and install the driver, as sketched below.
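A minimal sketch of the remaining build-and-load steps, assuming the standard make-based packaging of the netxtreme2 source archive (the directory name is a placeholder for the extracted version):
cd netxtreme2-<version>    # enter the extracted source tree
make                       # build the modules against the running kernel
make install               # install the modules under /lib/modules/$(uname -r)
rmmod bnx2x                # unload any previously loaded driver
modprobe bnx2x             # load the newly built driver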
Refer to “Load and Run Necessary iSCSI Software Components” on page 41 for instructions on loading the software components required to use the QLogic iSCSI offload feature. To configure the network protocol and address after building the driver, refer to the manuals supplied with your operating system.
Red Hat:
kmod-netxtreme2-...rpm
2. Verify that your network adapter supports iSCSI by checking the message log. If the message, bnx2i: dev eth0 does not support iSCSI, appears in the message log after loading the bnx2i driver, iSCSI is not supported. This message may not appear until the interface is opened, as with:
ifconfig eth0 up
3.
Unloading or Removing the Driver from an RPM Installation
NOTE
The examples used in this procedure refer to the bnx2x driver, but also apply to the bnx2fc and bnx2i drivers.
On 2.6 kernels, it is not necessary to bring down the eth# interfaces before unloading the driver module. If the C-NIC driver is loaded, unload the C-NIC driver before unloading the bnx2x driver.
Where <package> is one of the following:
QCS CLI: QCS-CLI-<version>-<arch>.rpm
RPC agent: qlnxremote-<version>.<arch>.rpm
Patching PCI Files (Optional)
NOTE
The examples used in this procedure refer to the bnx2x driver, but also apply to the bnx2fc and bnx2i drivers.
Setting Values for Optional Properties
Optional properties exist for the different drivers:
bnx2x Driver Parameters
bnx2i Driver Parameters
bnx2fc Driver Parameters
bnx2x Driver Parameters
Parameters for the bnx2x driver are described in the following sections.
disable_tpa
The disable_tpa parameter can be supplied as a command line argument to disable the Transparent Packet Aggregation (TPA) feature.
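For example, assuming the usual module-parameter syntax (a value of 1 disables TPA), the parameter can be passed at load time; on kernels where TPA is exposed as LRO, ethtool can disable it per interface at run time (eth0 is a placeholder):
insmod bnx2x.ko disable_tpa=1    # when loading the module file directly
modprobe bnx2x disable_tpa=1     # when loading through modprobe
ethtool -K eth0 lro off          # run-time alternative for a single interface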
dropless_fc
The dropless_fc parameter can be used to enable a complementary flow control mechanism on BCM57xx and BCM57xxx adapters. The default flow control mechanism is to send pause frames when the on-chip buffer (BRB) is reaching a specific level of occupancy, which is a performance-targeted flow control mechanism.
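A sketch of enabling the parameter at module load, assuming the standard modprobe parameter syntax (1 enables the complementary mechanism, 0 restores the default):
modprobe bnx2x dropless_fc=1    # enable the complementary (dropless) flow control mechanism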
bnx2i Driver Parameters
Optional parameters en_tcp_dack, error_mask1, and error_mask2 can be supplied as command line arguments to the insmod or modprobe command for bnx2i.
error_mask1 and error_mask2
Use Config FW iSCSI Error Mask # to configure a specific iSCSI protocol violation to be treated either as a warning or a fatal error. All fatal iSCSI protocol violations will result in session recovery (ERL 0). These are bit masks.
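As an illustration only (the mask values below are hypothetical; consult the firmware error-mask bit definitions before using real values), the masks are passed at load time like any other module parameter:
insmod bnx2i.ko error_mask1=0x08 error_mask2=0x0F
or
modprobe bnx2i error_mask1=0x08 error_mask2=0x0F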
sq_size
Use Configure SQ size to choose the send queue size for offloaded connections; the SQ size determines the maximum number of SCSI commands that can be queued. SQ size also has a bearing on the number of connections that can be offloaded; as QP size increases, the number of connections supported decreases. With the default values, the BCM5708 adapters can offload 28 connections.
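A hedged example of setting the queue size at load time (64 is an illustrative power-of-two value, not a recommendation):
modprobe bnx2i sq_size=64    # request a 64-entry send queue for offloaded connections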
7–Linux Driver Software Driver Defaults ooo_enable The Enable TCP out-of-order feature enables and disables TCP out-of-order RX handling feature on offloaded iSCSI connections. Default: TCP out-of-order feature is ENABLED. For example: insmod bnx2i.ko ooo_enable=1 or modprobe bnx2i ooo_enable=1 bnx2fc Driver Parameters You can supply the optional parameter debug_logging as a command line argument to the insmod or modprobe command for bnx2fc.
RX Ring Size: 255 (range is 0–4080)
RX Jumbo Ring Size: 0 (range is 0–16320), adjusted by the driver based on MTU and RX Ring Size
TX Ring Size: 255 (range is (MAX_SKB_FRAGS+1)–255). MAX_SKB_FRAGS varies on different kernels and different architectures. On a 2.6 kernel for x86, MAX_SKB_FRAGS is 18.
TSO: Enabled
WoL: Disabled
Driver Messages
The following are the most common sample messages that may be logged in the /var/log/messages file. Issue the dmesg -n command to control the level at which messages appear on the console. Most systems are set to level 6 by default. To see all messages, set the level higher.
bnx2i Driver Messages
The bnx2i driver messages include the following.
BNX2I Driver Sign-on
QLogic BCM57xx and BCM57xxx iSCSI Driver bnx2i v2.1.1D (May 12, 2015)
Network Port to iSCSI Transport Name Binding
bnx2i: netif=eth2, iscsi=bcm570x-050000
bnx2i: netif=eth1, iscsi=bcm570x-030c00
Driver Completes Handshake with iSCSI Offload-enabled C-NIC Device
bnx2i [05:00.
Network Route Is Assigned to Network Interface, Which Is Down
bnx2i: check route, hba not found
SCSI-ML Initiated Host Reset (Session Recovery)
bnx2i: attempting to reset host, #3
C-NIC Detects iSCSI Protocol Violation - Fatal Errors
bnx2i: iscsi_error - wrong StatSN rcvd
bnx2i: iscsi_error - hdr digest err
bnx2i: iscsi_error - data digest err
bnx2i: iscsi_error - wrong opcode rcvd
bnx2i: iscsi_error - AHS len > 0 rcvd
bnx2i: iscsi_error - invalid ITT rcvd
bnx2i: iscsi_error - async pdu len error
bnx2i: iscsi_error - nopin pdu len error
bnx2i: iscsi_error - pend r2t in cleanup
bnx2i: iscsi_error - IP fragments rcvd
bnx2i: iscsi_error - IP options error
bnx2i: iscsi_error - urgent flag error
C-NIC Detects iSCSI Protocol Violation—Non-FATAL, Warning
bnx2i: iscsi_warning - invalid TTT
bnx2i: iscsi_warning - invalid DataSN
bnx2i: iscsi_warning - invalid LUN field
NOTE
You must configure the driver to consider a specific
Driver Completes Handshake with FCoE Offload Enabled C-NIC Device
bnx2fc [04:00.
Unable to Issue I/O Request Due to Session Not Ready
bnx2fc: Unable to post io_req
Drop Incorrect L2 Receive Frames
bnx2fc: FPMA mismatch...
User Application iscsiuio
Install and run the iscsiuio daemon before attempting to create iSCSI connections. The driver cannot establish connections to the iSCSI target without the daemon's assistance.
To install and run the iscsiuio daemon:
1. Install the iscsiuio source package as follows:
# tar -xvzf iscsiuio-<version>.tar.gz
2. CD to the directory where iscsiuio is extracted as follows:
# cd iscsiuio-<version>
3. Compile and install as follows:
# ./configure
# make
# make install
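Once installed, the daemon can be started and checked from the shell; a minimal sketch, assuming the default install location from make install:
# iscsiuio                  # start the userspace daemon
# ps aux | grep iscsiuio    # confirm that the daemon is running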
If you want to switch back to use the software initiator, enter the following:
iscsiadm -m iface -I <iface_file_name> -n iface.transport_name -v tcp -o update
Where the iface file includes the following information:
iface.net_ifacename = ethX
iface.iscsi_ifacename = <name of the iface file>
iface.transport_name = tcp
VLAN Configuration for iSCSI Offload (Linux)
iSCSI traffic on the network may be isolated in a VLAN to segregate it from other traffic.
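As a sketch of one common way to do this, a VLAN interface can be layered on the adapter with the iproute2 tools (the interface name eth0, VLAN ID 100, and address below are placeholders):
ip link add link eth0 name eth0.100 type vlan id 100    # create VLAN 100 on eth0
ip addr add 192.168.100.10/24 dev eth0.100              # address the VLAN interface on the iSCSI subnet
ip link set dev eth0.100 up                             # bring the VLAN interface up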
Making Connections to iSCSI Targets
Refer to Open-iSCSI documentation for a comprehensive list of iscsiadm commands. The following is a sample list of commands to discover targets and to create iSCSI connections to a target.
Add Static Entry
iscsiadm -m node -p <target_IP:port> -T iqn.2007-05.com.
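For reference, a hedged example of the usual dynamic discovery-and-login flow (the portal address 192.168.1.100:3260, the IQN, and the iface file name are placeholders; -I selects the offload interface):
iscsiadm -m discovery -t sendtargets -p 192.168.1.100:3260 -I <iface_file_name>     # discover targets at the portal
iscsiadm -m node -T iqn.2007-05.com.example:target1 -p 192.168.1.100:3260 --login   # log in to a discovered target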
Linux iSCSI Offload FAQ
Not all QLogic BCM57xx and BCM57xxx adapters support iSCSI offload.
The iSCSI session will not recover after a hot remove and hot plug.
For Microsoft Multipath I/O (MPIO) to work properly, you must enable iSCSI noopout on each iSCSI session. For procedures on setting up noop_out_interval and noop_out_timeout values, refer to Open-iSCSI documentation.
8 VMware Driver Software This chapter covers the following for the VMware driver software: Packaging Networking Support, Drivers “FCoE Support” on page 66 “iSCSI Support” on page 68 NOTE Information in this chapter applies primarily to the currently supported VMware versions: ESXi 6.0 U2, ESXi 6.5, and ESXi 6.7. ESXi 6.7 uses native drivers for all protocols. Packaging The VMware driver is released in the packaging formats shown in Table 8-1. Table 8-1.
Download, Install, and Update Drivers
To download, install, or update the VMware ESXi driver for BCM57xx and BCM57xxx 10GbE network adapters, see http://www.vmware.com/support. This package is double zipped; unzip the package once before copying it to the ESXi host.
Driver Parameters
You can supply several optional parameters as a command line argument to the vmkload_mod command. Set these parameters by issuing the esxcfg-module command.
pri_map
Use the optional parameter pri_map to map the VLAN PRI value or the IP DSCP value to a different or the same CoS in the hardware. This 32-bit parameter is evaluated by the driver as 8 values of 4 bits each. Each nibble sets the required hardware queue number for that priority. For example, set the pri_map parameter to 0x22221100 to map priority 0 and 1 to CoS 0, map priority 2 and 3 to CoS 1, and map priority 4 to 7 to CoS 2.
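A hedged example of applying this through the module options (the value repeats the mapping described above; a driver reload or host reboot is assumed to be required for the change to take effect):
esxcfg-module -s "pri_map=0x22221100" bnx2x    # persist the parameter for the bnx2x module
esxcfg-module -g bnx2x                         # read back the currently configured options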
enable_default_queue_filters
Use the optional parameter enable_default_queue_filters to enable the classification filters on the default queue. The hardware supports a total of 512 classification filters that are equally divided among the ports of an adapter. For example, a quad-port adapter has 128 filters per port.
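This parameter is set the same way as the other module options; a minimal sketch (1 is assumed to enable filters on the default queue, 0 to disable them):
esxcfg-module -s "enable_default_queue_filters=1" bnx2x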
TSO: Enabled
WoL: Disabled
Unloading and Removing Driver
To unload the bnx2x VMware ESXi driver, issue the following command:
vmkload_mod -u bnx2x
Driver Messages
The following bnx2x VMware ESXi driver messages are the most common sample messages that may be logged in the file /var/log/vmkernel.log. Issue the dmesg -n command to control the level at which messages appear on the console. Most systems are set to level 6 by default.
Memory Limitation
Messages such as the following in the log file indicate that the ESXi host is severely strained. To relieve the strain, disable NetQueue.
Dec 2 18:24:20 ESX4 vmkernel: 0:00:00:32.342 cpu2:4142)WARNING: Heap: 1435: Heap bnx2x already at its maximumSize. Cannot expand.
Dec 2 18:24:20 ESX4 vmkernel: 0:00:00:32.342 cpu2:4142)WARNING: Heap: 1645: Heap_Align(bnx2x, 4096/4096 bytes, 4096 align) failed.
FCoE Support
This section describes the contents and procedures associated with installation of the VMware software package for supporting QLogic FCoE C-NICs.
Drivers
QLogic BCM57xx and BCM57xxx FCoE drivers include the bnx2x and the bnx2fc. The bnx2x driver manages all PCI device resources (registers, host interface queues, and so on) and also acts as the Layer 2 VMware low-level network driver for QLogic's BCM57xx and BCM57xxx 10G device.
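A quick, hedged way to confirm that both drivers are present on the host (exact package names may vary by release):
esxcli software vib list | grep bnx2    # list the installed bnx2x/bnx2fc driver packages
vmkload_mod -l | grep bnx2              # confirm that the modules are loaded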
2. Enable the FCoE interface as follows:
# esxcli fcoe nic discover -n vmnicX
Where X is the interface number determined in Step 1.
3.
8–VMware Driver Software iSCSI Support Installation Check To verify the correct installation of the driver and to ensure that the host port is seen by the switch, follow these steps. To verify the correct installation of the driver: 1. Verify that the host port shows up in the switch fabric login (FLOGI) database by issuing the one of the following commands: show flogi database (for a Cisco FCF) fcoe -loginshow (for a Brocade FCF) 2.
5. (Optional) On the VM Network Properties, General page, assign a VLAN number in the VLAN ID box. Figure 8-1 and Figure 8-2 show examples.
Figure 8-1. VM Network Properties: Example 1
Figure 8-2. VM Network Properties: Example 2
6. Configure the VLAN on VMkernel.
9 Windows Driver Software Windows driver software information includes the following: Installing the Driver Software “Modifying the Driver Software” on page 76 “Repairing or Reinstalling the Driver Software” on page 77 “Removing the Device Drivers” on page 78 “Viewing or Changing the Properties of the Adapter” on page 78 “Setting Power Management Options” on page 78 “Configuring the Communication Protocol to Use with QCC GUI, QCC PowerKit, and QCS CLI” on page 80 Installing the
The two methods of driver installation include:
Graphical interactive installation mode (see “Using the Installer” on page 72)
Command-line silent mode for unattended installation (see “Using Silent Installation” on page 75)
NOTE
Before installing the driver software, verify that the Windows operating system has been upgraded to the latest version with the latest service pack applied.
To install the QLogic BCM57xx and BCM57xxx drivers and management applications:
1. When the Found New Hardware Wizard appears, click Cancel.
2. From either the driver source media or from the location in which you downloaded the software driver package, do the following:
a. Open the folder for your operating system.
b. Open the MUPS folder, and then extract the folder according to your operating system configuration.
c.
9. Click Finish to close the wizard.
10. The installer determines if a system restart is necessary. Follow the on-screen instructions.
To install the Microsoft iSCSI Software Initiator for iSCSI Crash Dump:
If supported and if you will use the QLogic iSCSI Crash Dump utility, it is important to follow the installation sequence:
1. Run the installer.
2. Install Microsoft iSCSI Software Initiator along with the patch (MS KB939875).
Table 9-1. Windows Operating Systems and iSCSI Crash Dump (Continued)
Operating System / MS iSCSI Software Initiator Required? / Microsoft Patch (MS KB939875) Required?
Offload iSCSI (OIS):
Windows Server 2008: No / No
Windows Server 2008 R2: No / No
Windows Server 2012 and later: No / No
Using Silent Installation
NOTE
All commands are case sensitive. For detailed instructions and information about unattended installs, refer to the silent.txt file.
To perform a silent install by feature:
Use ADDSOURCE to include any of the following features.
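As an illustration only (the authoritative property list and switches ship in the silent.txt file; the feature name and exact switch string below are assumptions in the usual InstallShield/MSI style, not confirmed syntax):
setup.exe /s /v"/qn ADDSOURCE=Driver"    # hypothetical: silently install only the driver feature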
4. Click Modify, Add, or Remove to change program features.
NOTE
This option does not install drivers for new adapters. For information on installing drivers for new adapters, see “Repairing or Reinstalling the Driver Software” on page 77.
5. Click Next to continue.
6. Click on an icon to change how a feature is installed.
7. Click Next.
8. Click Install.
9. Click Finish to close the wizard.
10.
Removing the Device Drivers
When removing the device drivers, any management application that is installed is also removed.
NOTE
Windows Server 2008 and Windows Server 2008 R2 provide the Device Driver Rollback feature to replace a device driver with one that was previously installed. However, the complex software architecture of the BCM57xx and BCM57xxx device may present problems if the rollback feature is used on one of the individual components.
To have the controller stay on at all times:
On the adapter properties’ Power Management page, clear the Allow the computer to turn off the device to save power check box, as shown in Figure 9-2.
NOTE
Power management options are not available on blade servers.
Figure 9-2. Device Power Management Options
NOTE
The Power Management page is available only for servers that support power management.
Configuring the Communication Protocol to Use with QCC GUI, QCC PowerKit, and QCS CLI
There are two main components of the QCC GUI, QCC PowerKit, and QCS CLI management applications: the RPC agent and the client software. An RPC agent is installed on a server, or managed host, that contains one or more Converged Network Adapters.
10 iSCSI Protocol This chapter provides the following information about the iSCSI protocol: iSCSI Boot “iSCSI Crash Dump” on page 110 “iSCSI Offload in Windows Server” on page 110 iSCSI Boot QLogic BCM57xx and BCM57xxx gigabit Ethernet (GbE) adapters support iSCSI boot to enable network boot of operating systems to diskless systems. iSCSI boot allows a Windows, Linux, or VMware operating system boot from an iSCSI target machine located remotely over a standard IP network.
VMware ESXi 5.0 and later for IPv4 (supports only the non-offload path), and ESXi 6.0 and later for IPv6
VMware ESX in Layer 2 path
In addition, the adapters support iSCSI boot for unspecified path types on SLES 11 SP3, SLES 12.0, and later; RHEL 6.5, 6.6, 7, 7.1, and later; Windows 2012, Windows 2012 R2, and later; and ESXi 6.0 U2, 6.5, and later.
Configuring iSCSI Boot Parameters
To configure the iSCSI boot parameters:
1. In the NIC Configuration page, in the Legacy Boot Protocol drop-down menu, select iSCSI (see Figure 10-1).
Figure 10-1. Legacy Boot Protocol Selection
As shown in Figure 10-1, UEFI is not supported for the iSCSI protocol for the BCM57xx and BCM57xxx adapters.
2. Configure the QLogic iSCSI boot software for either static or dynamic configuration in the CCM, UEFI (see Figure 10-2), QCC GUI, or QCS CLI.
Figure 10-2.
The configuration options available on the General Parameters window (see Figure 10-3) are listed in Table 10-1.
Figure 10-3. UEFI, iSCSI Configuration, iSCSI General Parameters
Table 10-1 lists parameters for both IPv4 and IPv6. Parameters specific to either IPv4 or IPv6 are noted.
NOTE
Availability of IPv6 iSCSI boot is platform and device dependent.
Table 10-1. Configuration Options
Option / Description
TCP/IP parameters through DHCP: This option is specific to IPv4.
Table 10-1. Configuration Options (Continued)
Option / Description
IP Autoconfiguration: This option is specific to IPv6. Controls whether the iSCSI boot host software will configure a stateless link-local address and/or stateful address if DHCPv6 is present and used (Enabled). Router Solicit packets are sent out up to three times with 4-second intervals between each retry. Or use a static IP configuration (Disabled).
Table 10-1. Configuration Options (Continued)
Option / Description
LUN Busy Retry Count: Controls the number of connection retries the iSCSI Boot initiator will attempt if the iSCSI target LUN is busy.
IP Version: This option is specific to IPv6. Toggles between the IPv4 and IPv6 protocols. All IP settings will be lost when switching from one protocol version to another.
LUN Busy Retry Count: 0
IP Version: IPv6 (for IPv6, non-offload)
HBA Boot Mode: Disabled (Note: This parameter cannot be changed when the adapter is in Multi-Function mode.)
NOTE
For initial OS installation to a blank iSCSI target LUN from a CD/DVD-ROM or mounted bootable OS installation image, set Boot from Target to One Time Disabled. This setting causes the system not to boot from the configured iSCSI target after establishing a successful login and connection.
4. On the iSCSI Initiator Parameters window (Figure 10-4), type values for the following:
IP Address (unspecified IPv4 and IPv6 addresses should be 0.0.0.0 and ::, respectively)
NOTE
Carefully enter the IP address. There is no error-checking performed against the IP address to check for duplicates or incorrect segment or network assignment.
7. On the iSCSI First Target Parameters window (Figure 10-5):
a. Enable Connect to connect to the iSCSI target.
b. Type values for the following using the values used when configuring the iSCSI target: IP Address, TCP Port, Boot LUN, iSCSI Name, CHAP ID, CHAP Secret
8. Press ESC to return to the Main menu.
9. (Optional) Configure a secondary iSCSI target by repeating these steps in the iSCSI Second Target Parameter window.
10.
If DHCP Option 17 is used, the target information is provided by the DHCP server, and the initiator iSCSI name is retrieved from the value programmed on the Initiator Parameters window. If no value was selected, the controller defaults to the following name:
iqn.1995-05.com.qlogic.<11.22.33.44.55.66>.iscsiboot
Where the string 11.22.33.44.55.66 corresponds to the controller’s MAC address.
Enabling CHAP Authentication
Ensure that CHAP authentication is enabled on the target and initiator.
To enable CHAP authentication:
1. On the iSCSI General Parameters window, set CHAP Authentication to Enabled.
2. On the iSCSI Initiator Parameters window, type values for the following:
CHAP ID (up to 128 bytes)
CHAP Secret (if authentication is required, and must be 12 characters in length or longer)
3. Press the ESC key to return to the Main menu.
4.
DHCP Option 17, Root Path
Option 17 is used to pass the iSCSI target information to the iSCSI client. The format of the root path as defined in IETF RFC 4173 is:
"iscsi:"<servername>":"<protocol>":"<port>":"<LUN>":"<targetname>"
Table 10-2 lists the parameters and definitions.
Table 10-2.
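For reference, a hedged ISC dhcpd configuration fragment that supplies such a root path (every address, LUN, and IQN below is a placeholder for illustration):
# /etc/dhcp/dhcpd.conf excerpt: serve the iSCSI root path to a booting initiator
host iscsi-boot-client {
    hardware ethernet 00:11:22:33:44:55;    # initiator MAC address (placeholder)
    option root-path "iscsi:192.168.1.100::3260:0:iqn.2007-05.com.example:boot-target";
}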
Table 10-3 lists the suboption.
Table 10-3. DHCP Option 43 Suboption Definition
Suboption / Definition
201: First iSCSI target information in the standard root path format
"iscsi:"<servername>":"<protocol>":"<port>":"<LUN>":"<targetname>"
Using DHCP option 43 requires more configuration than DHCP option 17, but it provides a richer environment and more configuration options.
The content of Option 16 should be <2-byte length> .
DHCPv6 Option 17, Vendor-Specific Information
DHCPv6 Option 17 (vendor-specific information) provides more configuration options to the iSCSI client. In this configuration, three additional suboptions are provided that assign the initiator IQN to the iSCSI boot client along with two iSCSI target IQNs that can be used for booting. Table 10-4 lists the suboption.
Table 10-4.
Windows Server 2008 R2 and SP2 iSCSI Boot Setup
Windows Server 2008 R2 and Windows Server 2008 SP2 support booting as well as installing in either the offload or non-offload paths. The following procedure prepares the image for installation and booting in either the offload or non-offload path. The procedure references Windows Server 2008 R2, but is also common to Windows Server SP2.
8. Set HBA Boot Mode to Enabled or Disabled. (Note: This parameter cannot be changed when the adapter is in Multi-Function mode.)
9. Save the settings and reboot the system. The remote system should connect to the iSCSI target and then boot from the DVD-ROM device.
10. Boot to DVD and begin installation.
11. Answer all the installation questions appropriately (specify the operating system you want to install, accept the license terms, and so on).
To prepare the image for installation and booting in either the offload or non-offload path:
1. Remove any local hard drives on the system to be booted (the “remote system”).
2. Load the latest QLogic MBA and iSCSI boot images into the NVRAM of the adapter.
3. Configure the BIOS on the remote system to have the QLogic MBA as the first bootable device and the CDROM as the second device.
4. Configure the iSCSI target to allow a connection from the remote device.
Linux iSCSI Boot Setup
Linux iSCSI boot is supported on Red Hat Enterprise Linux 5.5 and later and SUSE Linux Enterprise Server 11 (SLES 11) SP1 and later in both the offload and non-offload paths. Note that SLES 10.x and SLES 11 have support only for the non-offload path.
To set up Linux iSCSI boot:
1. For driver update, obtain the latest QLogic Linux driver CD.
2.
To create a new customized initrd for any new components update:
1. Update the iSCSI initiator if needed. You must first remove the existing initiator using rpm -e.
2. Make sure all runlevels of network service are on:
chkconfig network on
3. Make sure the 2, 3, and 5 runlevels of the iSCSI service are on:
chkconfig --level 235 iscsi on
4. For Red Hat 6.0, make sure the Network Manager service is stopped and disabled.
5. (Optional) Install iscsiuio (not required for SUSE 10).
6.
17. Continue booting into the iSCSI boot image and select one of the images you created (non-offload or offload). Your choice must correspond with your choice in the iSCSI Boot parameters section. If HBA Boot Mode was enabled in the iSCSI Boot Parameters section, you must boot the offload image.
NOTE
QLogic supports Host Bus Adapter (offload) starting in SLES 11 SP1 and later. QLogic does not support iSCSI boot in Host Bus Adapter (offload) mode for SLES 10.x and SLES 11.
18.
ISCSIUIO=/sbin/iscsiuio
CONFIG_FILE=/etc/iscsid.conf
DAEMON=/sbin/iscsid
ARGS="-c $CONFIG_FILE"
# Source LSB init functions
. /etc/rc.status
#
# This service is run right after booting. So all targets activated
# during mkinitrd run should not be removed when the open-iscsi
# service is stopped.
#
        rc_failed 6
        rc_exit
    fi
fi
case "$1" in
    start)
        echo -n "Starting iSCSI initiator for the root device: "
        iscsi_load_iscsiuio
        startproc $DAEMON $ARGS
        rc_status -v
        iscsi_mark_root_nodes
        ;;
    stop|restart|reload)
        rc_failed 0
        ;;
    status)
        echo -n "Checking for iSCSI initiator service: "
        if checkproc $DAEMON ; then
            rc_status -v
        else
            rc_failed 3
            rc_status -v
        fi
        ;;
    *)
        echo "Usage: $0 {start|stop|status|restart|reload}"
        exit 1
        ;;
esac
rc_exit
Removing Inbox Drivers from Windows OS Image
1.
4. Open the Windows Automated Installation Kit (AIK) command prompt in elevated mode from All Programs, and then issue the following command:
attrib -r D:\Temp\Win2008R2Copy\sources\boot.wim
5. Issue the following command to mount the boot.wim image:
dism /Mount-WIM /WimFile:D:\Temp\Win2008R2Copy\sources\boot.wim /index:1 /MountDir:D:\Temp\Win2008R2Mod
6. The boot.wim image was mounted in the Win2008R2Mod folder.
Finally, inject these drivers into the Windows Image (WIM) files and install the applicable Windows Server version from the updated images.
To inject QLogic drivers into Windows image files:
1. For Windows Server 2008 R2 and SP2, install the Windows Automated Installation Kit (AIK). Or, for Windows Server 2012 and 2012 R2, install the Windows Assessment and Deployment Kit (ADK).
2.
10. Issue the following command to determine the index of the SKU that you want in the install.wim image:
dism /get-wiminfo /wimfile:.\src\sources\install.wim
For example, in Windows Server 2012, index 2 is identified as “Windows Server 2012 SERVERSTANDARD.”
11. Issue the following command to mount the install.wim image:
dism /mount-wim /wimfile:.\src\sources\install.wim /index:X /mountdir:.\mnt
Note: X is a placeholder for the index value that you obtained in the previous step.
3. To boot through an offload path, set the HBA Boot Mode to Enabled. To boot through a non-offload path, set the HBA Boot Mode to Disabled. (This parameter cannot be changed when the adapter is in multi-function mode.)
If CHAP authentication is needed, enable CHAP authentication after determining that booting is successful (see “Enabling CHAP Authentication” on page 92).
6. Install the bibt package on your Linux system. You can get this package from the QLogic CD.
7. Delete all ifcfg-eth* files.
8. Configure one port of the network adapter to connect to the iSCSI target (for instructions, see “Configuring the iSCSI Target” on page 82).
9. Connect to the iSCSI target.
10. Issue the dd command to copy from the local hard drive to the iSCSI target.
11.
Troubleshooting iSCSI Boot
The following troubleshooting tips are useful for iSCSI boot.
Problem: A system blue screen occurs when iSCSI boots Windows Server 2008 R2 through the adapter’s NDIS path with the initiator configured using a link-local IPv6 address and the target configured using a router-configured IPv6 address.
Solution: This problem is a known Windows TCP/IP stack issue.
Problem: Unable to update inbox driver if a non-inbox hardware ID is present.
Solution: Create a custom slipstream DVD image with supported drivers present on the install media.
Problem: In Windows Server 2012, toggling between iSCSI Host Bus Adapter offload mode and iSCSI software initiator boot can leave the machine in a state where the Host Bus Adapter offload miniport bxois will not load.
Solution: Manually edit [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\bxois
Configuring iSCSI Offload
With the proper iSCSI offload licensing, you can configure your iSCSI-capable BCM57xx and BCM57xxx network adapter to offload iSCSI processing from the host processor. The following process enables your system to take advantage of QLogic’s iSCSI offload feature.
Configuring Microsoft Initiator to Use the QLogic iSCSI Offload
After you have configured the IP address for the iSCSI adapter, you must use Microsoft Initiator to configure and add a connection to the iSCSI target using a QLogic iSCSI adapter. See Microsoft’s user guide for more details on the Microsoft Initiator.
1. Open Microsoft Initiator.
2. Configure the initiator IQN name according to your setup.
10–iSCSI Protocol iSCSI Offload in Windows Server 3. In the Initiator Node Name Change dialog box (see Figure 10-7), type the initiator IQN name, and then click OK. Figure 10-7. Changing the Initiator Node Name 4. On the iSCSI Initiator Properties (Figure 10-8), click the Discovery tab, and then under Target Portals, click Add. Figure 10-8.
10–iSCSI Protocol iSCSI Offload in Windows Server 5. On the Add Target Portal dialog box (Figure 10-9), type the IP address of the target, and then click Advanced. Figure 10-9. Add Target Portal Dialog Box 6. On the Advanced Settings dialog box, complete the General page as follows: a. For the Local adapter, select the QLogic BCM57xx and BCM57xxx C-NIC iSCSI adapter. b. For the Source IP, select the IP address for the adapter. c.
10–iSCSI Protocol iSCSI Offload in Windows Server Figure 10-10 shows an example. Figure 10-10.
10–iSCSI Protocol iSCSI Offload in Windows Server 7. On the iSCSI Initiator Properties, click the Discovery tab, and then on the Discovery page, click OK to add the target portal. Figure 10-11 shows an example. Figure 10-11. iSCSI Initiator Properties: Discovery Page 8. On the iSCSI Initiator Properties, click the Targets tab.
10–iSCSI Protocol iSCSI Offload in Windows Server 9. On the Targets page, select the target, and then click Log On to log into your iSCSI target using the QLogic iSCSI adapter. Figure 10-12 shows an example. Figure 10-12. iSCSI Initiator Properties: Targets Page 10. On the Log On To Target dialog box (Figure 10-13), click Advanced. Figure 10-13.
10–iSCSI Protocol iSCSI Offload in Windows Server 11. On the Advanced Settings dialog box, General page, select the QLogic BCM57xx and BCM57xxx C-NIC iSCSI adapters as the Local adapter, and then click OK. Figure 10-14 shows an example. Figure 10-14. Advanced Settings: General Page, Local Adapter 12. Click OK to close the Microsoft Initiator.
10–iSCSI Protocol iSCSI Offload in Windows Server 13. To format your iSCSI partition, use Disk Manager. NOTE Teaming does not support iSCSI adapters. Teaming does not support NDIS adapters that are in the boot path. Teaming supports NDIS adapters that are not in the iSCSI boot path, but only for the SLB team type. iSCSI Offload FAQs Question: How do I assign an IP address for iSCSI offload? Answer: Use the Configurations page in QLogic Control Suite (QCS).
10–iSCSI Protocol iSCSI Offload in Windows Server Table 10-5. Offload iSCSI (OIS) Driver Event Log Messages (Continued) Message Number Severity Message 4 Error MaxBurstLength is not serially greater than FirstBurstLength. Dump data contains FirstBurstLength followed by MaxBurstLength. 5 Error Failed to setup initiator portal. Error status is specified in the dump data. 6 Error The initiator could not allocate resources for an iSCSI connection. 7 Error The initiator could not send an iSCSI PDU.
10–iSCSI Protocol iSCSI Offload in Windows Server Table 10-5. Offload iSCSI (OIS) Driver Event Log Messages (Continued) Message Number Severity Message 22 Error Header digest error was detected for the specified PDU. Dump data contains the header and digest. 23 Error Target sent an invalid iSCSI PDU. Dump data contains the entire iSCSI header. 24 Error Target sent an iSCSI PDU with an invalid opcode. Dump data contains the entire iSCSI header. 25 Error Data digest error was detected.
10–iSCSI Protocol iSCSI Offload in Windows Server Table 10-5. Offload iSCSI (OIS) Driver Event Log Messages (Continued) Message Number Severity Message 40 Error Target requires logon authentication through CHAP, but Initiator is not configured to perform CHAP. 41 Error Target did not send AuthMethod key during security negotiation phase. 42 Error Target sent an invalid status sequence number for a connection.
10–iSCSI Protocol iSCSI Offload in Windows Server Table 10-5. Offload iSCSI (OIS) Driver Event Log Messages (Continued) Message Number Severity Message 59 Error Target dropped the connection before the initiator could transition to Full Feature Phase. 60 Error Target sent data in SCSI Response PDU instead of Data_IN PDU. Only Sense Data can be sent in SCSI Response. 61 Error Target set DataPduInOrder to NO when initiator requested YES. Login will be failed.
11 QLogic Teaming Services This chapter describes teaming for adapters in Windows Server systems (excluding Windows Server 2016 and later). For more information on similar technologies on other operating systems (for example, Linux Channel Bonding), refer to your operating system documentation.
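For reference, a minimal Linux Channel Bonding setup that approximates an SLB-style failover team can be created from the shell as shown below; the interface names, bonding mode, and polling interval are examples only:
modprobe bonding mode=active-backup miimon=100
ip link set eth0 down
ip link set eth1 down
echo +eth0 > /sys/class/net/bond0/bonding/slaves
echo +eth1 > /sys/class/net/bond0/bonding/slaves
ip link set bond0 up
Consult your Linux distribution documentation for the supported method of making such a configuration persistent.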
11–QLogic Teaming Services Executive Summary This section describes the technology and implementation considerations when working with the network teaming services offered by the QLogic software shipped with Dell’s servers and storage products. The goal of QLogic teaming services is to provide fault tolerance and link aggregation across a team of two or more adapters.
11–QLogic Teaming Services Executive Summary Table 11-1. Glossary (Continued) Term Definition LOM LAN on motherboard NDIS Network Driver Interface Specification PXE preboot execution environment RAID redundant array of inexpensive disks Smart Load Balancing and Failover Switch-independent failover type of team in which the primary team member handles all incoming and outgoing traffic while the standby team member is idle until a failover event (for example, loss of link) occurs.
11–QLogic Teaming Services Executive Summary The following information provides a high-level overview of the concepts of network addressing used in an Ethernet network. Every Ethernet network interface in a host platform, such as a computer system, requires a globally unique Layer 2 address and at least one globally unique Layer 3 address. Layer 2 is the data link layer, and Layer 3 is the network layer as defined in the OSI model.
11–QLogic Teaming Services Executive Summary For switch-independent teaming modes, all physical adapters that make up a virtual adapter must use the unique MAC address assigned to them when transmitting data. That is, the frames that are sent by each of the physical adapters in the team must use a unique MAC address to be IEEE compliant. It is important to note that ARP cache entries are not learned from received frames, but only from ARP requests and ARP replies.
11–QLogic Teaming Services Executive Summary Smart Load Balancing and Failover The Smart Load Balancing and Failover type of team provides both load balancing and failover when configured for load balancing, and only failover when configured for fault tolerance. This type of team works with any Ethernet switch and requires no trunking configuration on the switch. The team advertises multiple MAC addresses and one or more IP addresses (when using secondary IP addresses).
11–QLogic Teaming Services Executive Summary SLB receive load balancing attempts to load balance incoming traffic for client machines across physical ports in the team. It uses a modified gratuitous ARP to advertise a different MAC address for the team IP address in the sender physical and protocol address. The G-ARP is unicast with the MAC and IP Address of a client machine in the target physical and protocol address, respectively.
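If you need to confirm this behavior, the G-ARPs can be observed from a client or a switch mirror port with a packet capture tool. For example, on a Linux host (the interface name is an assumption):
tcpdump -i eth0 -e -n arp
The -e option prints the link-level headers, making the sender MAC address advertised in each gratuitous ARP visible.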
11–QLogic Teaming Services Executive Summary Generic Trunking Generic Trunking is a switch-assisted teaming mode and requires configuring ports at both ends of the link: server interfaces and switch ports. This port configuration is often referred to as Cisco Fast EtherChannel or Gigabit EtherChannel. In addition, generic trunking supports similar implementations by other switch OEMs such as Extreme Networks Load Sharing and Bay Networks or IEEE 802.3ad Link Aggregation static mode.
11–QLogic Teaming Services Executive Summary The Link Aggregation control function determines which links may be aggregated and then binds the ports to an Aggregator function in the system and monitors conditions to determine if a change in the aggregation group is required. Link aggregation combines the individual capacity of multiple links to form a high performance virtual link. The failure or replacement of a link in an LACP trunk will not cause loss of connectivity.
11–QLogic Teaming Services Executive Summary Table 11-3 describes the four software components and their associated files for supported operating systems. Table 11-3. QLogic Teaming Software Component Software Component — Miniport Driver Network Adapter or Operating System System Architecture Windows File Name BCM57xx 32-bit bxvbdx.sys BCM57xx 64-bit bxvbda.sys BCM5771x, BCM578xx 32-bit evbdx.sys BCM5771x, BCM578xx 64-bit evbda.sys Windows Server 2008 (NDIS 6.0) 32-bit bxnd60x.
11–QLogic Teaming Services Executive Summary Repeater Hub A Repeater Hub allows a network administrator to extend an Ethernet network beyond the limits of an individual segment. The repeater regenerates the input signal received on one port onto all other connected ports, forming a single collision domain. This domain means that when a station attached to a repeater sends an Ethernet frame to another station, every station within the same collision domain will also receive that message.
11–QLogic Teaming Services Executive Summary Configuring Teaming The QLogic Control Suite (QCS) utility is used to configure teaming in the supported operating system environments. QCS runs on 32-bit and 64-bit Windows family of operating systems. Use QCS to configure VLANs and load balancing and fault tolerance teaming. In addition, it displays the MAC address, driver version, and status information for each network adapter.
11–QLogic Teaming Services Executive Summary Table 11-4. Comparison of Team Types (Continued) Type of Team Fault Tolerance Load Balancing Switch-Dependent Static Trunking Switch-Independent Dynamic Link Aggregation (IEEE 802.
11–QLogic Teaming Services Executive Summary Selecting a Team Type The following flow chart provides the decision flow when planning for Layer 2 teaming. The primary rationale for teaming is the need for additional network bandwidth and fault tolerance. Teaming offers link aggregation and fault tolerance to meet both of these requirements.
11–QLogic Teaming Services Executive Summary Figure 11-1 shows a flow chart for determining the team type. Figure 11-1.
11–QLogic Teaming Services Teaming Mechanisms Teaming Mechanisms This section provides the following information about teaming mechanisms: Architecture Types of Teams Attributes of the Features Associated with Each Type of Team Speeds Supported for Each Type of Team
11–QLogic Teaming Services Teaming Mechanisms Architecture The QLASP is implemented as an NDIS intermediate driver (see Figure 11-2). It operates below protocol stacks such as TCP/IP and IPX and appears as a virtual adapter. This virtual adapter inherits the MAC Address of the first port initialized in the team. A Layer 3 address must also be configured for the virtual adapter.
11–QLogic Teaming Services Teaming Mechanisms Outbound Traffic Flow The QLogic intermediate driver manages the outbound traffic flow for all teaming modes. For outbound traffic, every packet is first classified into a flow, and then distributed to the selected physical adapter for transmission. The flow classification involves an efficient hash computation over known protocol fields. The resulting hash value is used to index into an Outbound Flow Hash Table.
11–QLogic Teaming Services Teaming Mechanisms When an inbound IP Datagram arrives, the appropriate Inbound Flow Head Entry is located by hashing the source IP address of the IP Datagram. Two statistics counters stored in the selected entry are also updated. These counters are used in the same fashion as the outbound counters by the load-balancing engine periodically to reassign the flows to the physical adapter.
11–QLogic Teaming Services Teaming Mechanisms The actual assignment between adapters may change over time, but any protocol that is not TCP/UDP based goes over the same physical adapter because only the IP address is used in the hash. Performance Modern network interface cards provide many hardware features that reduce CPU utilization by offloading specific CPU intensive operations (see “Teaming and Other Advanced Networking Properties” on page 150).
11–QLogic Teaming Services Teaming Mechanisms Network Communications Key attributes of SLB include: Failover mechanism—Link loss detection. Load Balancing Algorithm—Inbound and outbound traffic are balanced through a QLogic proprietary mechanism based on Layer 4 flows. Outbound Load Balancing using MAC Address—No. Outbound Load Balancing using IP Address—Yes. Multivendor Teaming—Supported (must include at least one QLogic Ethernet adapter as a team member).
11–QLogic Teaming Services Teaming Mechanisms The attached switch must support the appropriate trunking scheme for this mode of operation. Both the QLASP and the switch continually monitor their ports for link loss. In the event of link loss on any port, traffic is automatically diverted to other ports in the team.
11–QLogic Teaming Services Teaming Mechanisms Dynamic Trunking (IEEE 802.3ad Link Aggregation) This mode supports link aggregation through static and dynamic configuration through the link aggregation control protocol (LACP). With this mode, all adapters in the team are configured to receive packets for the same MAC address. The MAC address of the first adapter in the team is used and cannot be substituted for a different MAC address.
11–QLogic Teaming Services Teaming Mechanisms LiveLink LiveLink is a feature of QLASP that is available for the Smart Load Balancing (SLB) and SLB (Auto-Fallback Disable) types of teaming. The purpose of LiveLink is to detect link loss beyond the switch and to route traffic only through team members that have a live link. This function is accomplished through the teaming software.
11–QLogic Teaming Services Teaming Mechanisms Table 11-5. Teaming Attributes (Continued) Feature Attribute Failover event Loss of link Failover time <500ms Fallback time 1.
11–QLogic Teaming Services Teaming Mechanisms Table 11-5. Teaming Attributes (Continued) Feature Attribute Hot remove Yes Link speed support Different speeds Frame protocol All Incoming packet management Switch Outgoing packet management QLASP Failover event Loss of link only Failover time < 500ms Fallback time 1.5s (approximate) a MAC address Same for all adapters Multivendor teaming Yes a Make sure that Port Fast or Edge Port is enabled.
11–QLogic Teaming Services Teaming and Other Advanced Networking Properties Teaming and Other Advanced Networking Properties This section covers the following teaming and advanced networking properties: Checksum Offload IEEE 802.1p QoS Tagging Large Send Offload Jumbo Frames IEEE 802.
11–QLogic Teaming Services Teaming and Other Advanced Networking Properties A team does not necessarily inherit adapter properties; rather, various properties depend on the specific capability. For example, flow control is a physical adapter property, has nothing to do with QLASP, and is enabled on a specific adapter if the miniport driver for that adapter has flow control enabled.
11–QLogic Teaming Services Teaming and Other Advanced Networking Properties Jumbo Frames The use of jumbo frames was originally proposed by Alteon Networks, Inc. in 1998 and increased the maximum size of an Ethernet frame to 9600 bytes. Though never formally adopted by the IEEE 802.3 Working Group, support for jumbo frames has been implemented in QLogic BCM57xx and BCM57xxx adapters.
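On Windows, jumbo frames are enabled through the adapter driver's advanced properties rather than from the command line. For comparison, on a Linux host the equivalent is to raise the interface MTU; the interface name and frame size are examples, and the attached switch ports must be configured for the same frame size:
ip link set dev eth0 mtu 9000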
11–QLogic Teaming Services General Network Considerations Wake on LAN Wake on LAN (WoL) is a feature that allows a system to be awakened from a sleep state by the arrival of a specific packet over the Ethernet interface. Because a Virtual Adapter is implemented as a software only device, it lacks the hardware features to implement Wake on LAN and cannot be enabled to wake the system from a sleeping state through the virtual adapter.
11–QLogic Teaming Services General Network Considerations Teaming with Microsoft Virtual Server 2005 The only supported QLASP team configuration when using Microsoft Virtual Server 2005 is with a Smart Load Balancing team-type consisting of a single primary QLogic adapter and a standby QLogic adapter. Make sure to unbind or deselect “Virtual Machine Network Services” from each team member prior to creating a team and prior to creating virtual networks with Microsoft Virtual Server.
11–QLogic Teaming Services General Network Considerations The figures show the secondary team member sending the ICMP echo requests (yellow arrows) while the primary team member receives the respective ICMP echo replies (blue arrows). This send-receive illustrates a key characteristic of the teaming software. The load balancing algorithms do not synchronize how frames are load balanced when sent or received.
11–QLogic Teaming Services General Network Considerations Furthermore, a failover event would cause additional loss of connectivity. Consider a cable disconnect on the Top Switch port 4. In this case, Gray would send the ICMP Request to Red 49:C9, but because the Bottom Switch has no entry for 49:C9 in its CAM Table, the frame is flooded to all its ports but cannot find a way to get to 49:C9. Figure 11-3.
11–QLogic Teaming Services General Network Considerations The addition of a link between the switches allows traffic from and to Blue and Gray to reach each other without any problems. Note the additional entries in the CAM table for both switches. The link interconnect is critical for the proper operation of the team. As a result, QLogic highly advises that you have a link aggregation trunk to interconnect the two switches to ensure high availability for the connection. Figure 11-4.
11–QLogic Teaming Services General Network Considerations Figure 11-5 represents a failover event in which the cable is unplugged on the Top Switch port 4. This event is a successful failover with all stations pinging each other without loss of connectivity. Figure 11-5.
11–QLogic Teaming Services General Network Considerations Spanning Tree Algorithm In Ethernet networks, only one active path may exist between any two bridges or switches. Multiple active paths between switches can cause loops in the network. When loops occur, some switches recognize stations on both sides of the switch. This situation causes the forwarding algorithm to malfunction, allowing duplicate frames to be forwarded.
11–QLogic Teaming Services General Network Considerations Topology Change Notice (TCN) A bridge or switch creates a forwarding table of MAC addresses and port numbers by learning the source MAC address of frames received on a specific port. The table is used to forward frames to a specific port rather than flooding the frame to all ports. The typical maximum aging time of entries in the table is 5 minutes. Only when a host has been silent for 5 minutes would its entry be removed from the table.
11–QLogic Teaming Services General Network Considerations Layer 3 Routing and Switching The switch that the teamed ports are connected to must not be a Layer 3 switch or router. The ports in the team must be in the same network. Teaming with Hubs (for Troubleshooting Purposes Only) SLB teaming can be used with 10Mbps and 100Mbps hubs, but QLogic recommends using it only for troubleshooting purposes, such as connecting a network analyzer in the event that switch port mirroring is not an option.
11–QLogic Teaming Services General Network Considerations SLB Team Connected to a Single Hub SLB teams configured as shown in Figure 11-6 maintain their fault tolerance properties. Either server connection could potentially fail, and network functionality is maintained. Clients could be connected directly to the hub, and fault tolerance would still be maintained; server performance, however, would be degraded. Figure 11-6. Team Connected to a Single Hub Generic and Dynamic Trunking (FEC/GEC/IEEE 802.
11–QLogic Teaming Services Application Considerations Application Considerations Application considerations covered: Teaming and Clustering Teaming and Network Backup Teaming and Clustering Teaming and clustering information includes: Microsoft Cluster Software High-Performance Computing Cluster Oracle Microsoft Cluster Software Dell Server cluster solutions integrate Microsoft Cluster Services (MSCS) with PowerVault™ SCSI or Dell and EMC Fibre Channel-based storage, Dell servers, stora
11–QLogic Teaming Services Application Considerations Figure 11-7 shows a two-node Fibre-Channel cluster with three network interfaces per cluster node: one private and two public. On each node, the two public adapters are teamed, and the private adapter is not. Teaming is supported across the same switch or across two switches. Figure 11-8 on page 166 shows the same two-node Fibre-Channel cluster in this configuration. Figure 11-7.
11–QLogic Teaming Services Application Considerations High-Performance Computing Cluster Gigabit Ethernet is typically used for the following purposes in high-performance computing cluster (HPCC) applications: Inter-process communications (IPC): For applications that do not require low-latency, high-bandwidth interconnects (such as Myrinet™ or InfiniBand®), Gigabit Ethernet can be used for communication between the compute nodes.
11–QLogic Teaming Services Application Considerations Oracle In the QLogic Oracle® solution stacks, QLogic supports adapter teaming in both the private network (interconnect between Real Application Cluster [RAC] nodes) and public network with clients or the application layer above the database layer, as shown in Figure 11-8. Figure 11-8.
11–QLogic Teaming Services Application Considerations Teaming and Network Backup When you perform network backups in a nonteamed environment, overall throughput on a backup server adapter can be easily impacted due to excessive traffic and adapter overloading. Depending on the quantity of backup servers, data streams, and tape drive speed, backup traffic can easily consume a high percentage of the network link bandwidth, thus impacting production data and tape backup performance.
11–QLogic Teaming Services Application Considerations Because there are four client servers, the backup server can simultaneously stream four backup jobs (one per client) to a multidrive autoloader. Because of the single link between the switch and the backup server, however, a 4-stream backup can easily saturate the adapter and link.
11–QLogic Teaming Services Application Considerations The designated path is determined by two factors: Client-Server ARP cache points to the backup server MAC address. This address is determined by the QLogic intermediate driver inbound load balancing algorithm. The physical adapter interface on Client-Server Red transmits the data.
11–QLogic Teaming Services Application Considerations Fault Tolerance If a network link fails during tape backup operations, all traffic between the backup server and client stops and backup jobs fail. If, however, the network topology was configured for both QLogic SLB and switch fault tolerance, this configuration would allow tape backup operations to continue without interruption during the link failure. All failover processes within the network are transparent to tape backup software applications.
11–QLogic Teaming Services Application Considerations To understand how backup data streams are directed during network failover process, consider the topology in Figure 11-10. Client-Server Red is transmitting data to the backup server through Path 1, but a link failure occurs between the backup server and the switch.
11–QLogic Teaming Services Troubleshooting Teaming Problems Troubleshooting Teaming Problems When running a protocol analyzer over a virtual adapter teamed interface, the MAC address shown in the transmitted frames may not be correct. The analyzer does not show the frames as constructed by QLASP and shows the MAC address of the team and not the MAC address of the interface transmitting the frame.
11–QLogic Teaming Services Troubleshooting Teaming Problems A team that requires maximum throughput should use LACP or GEC\FEC. In these cases, the intermediate driver is only responsible for the outbound load balancing while the switch performs the inbound load balancing. Aggregated teams (802.3ad\LACP and GEC\FEC) must be connected to only a single switch that supports IEEE 802.3ad, LACP, or GEC/FEC.
11–QLogic Teaming Services Frequently Asked Questions 5. Check that the adapters and the switch are configured identically for link speed and duplex. 6. If possible, break the team and check for connectivity to each adapter independently to confirm that the problem is directly associated with teaming. 7. Check that all switch ports connected to the team are on the same VLAN. 8. Check that the switch ports are configured properly for Generic Trunking (FEC/GEC)/802.
11–QLogic Teaming Services Frequently Asked Questions Question: Can I connect the teamed adapters to a hub? Answer: Teamed ports can be connected to a hub for troubleshooting purposes only. However, this practice is not recommended for normal operation because the performance would be degraded due to hub limitations. Connect the teamed ports to a switch instead. Question: Can I connect the teamed adapters to ports in a router? Answer: No.
11–QLogic Teaming Services Frequently Asked Questions Question: How do I upgrade the intermediate driver (QLASP)? Answer: The intermediate driver cannot be upgraded through the Local Area Connection Properties. It must be upgraded using the Setup installer. Question: How can I determine the performance statistics on a virtual adapter (team)? Answer: In QLogic Control Suite, click the Statistics tab for the virtual adapter.
11–QLogic Teaming Services Event Log Messages Question: Why does my team lose connectivity for the first 30 to 50 seconds after the primary adapter is restored (fall-back after a failover)? Answer: During a fall-back event, link is restored causing Spanning Tree Protocol to configure the port for blocking until it determines that it can move to the forwarding state. You must enable Port Fast or Edge Port on the switch ports connected to the team to prevent the loss of communications caused by STP.
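For example, on a Cisco IOS switch, Port Fast is enabled on each switch port connected to the team as follows; the interface identifier is an example only:
interface GigabitEthernet0/1
spanning-tree portfast
Other switch vendors offer an equivalent Edge Port setting; consult your switch documentation for the exact syntax.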
11–QLogic Teaming Services Event Log Messages Base Driver (Physical Adapter or Miniport) The base driver is identified by source L2ND. Table 11-8 lists the event log messages supported by the base driver, explains the cause for the message, and provides the recommended action. NOTE In Table 11-8, message numbers 1 through 17 apply to both NDIS 5.x and NDIS 6.x drivers; message numbers 18 through 23 apply only to the NDIS 6.x driver. Table 11-8.
11–QLogic Teaming Services Event Log Messages Table 11-8. Base Driver Event Log Messages (Continued) Message Number Severity Message Cause 6 Informational Network controller configured for 10Mb half-duplex link. The adapter has been manually configured for the selected line speed and duplex settings. No action is required. 7 Informational Network controller configured for 10Mb full-duplex link. The adapter has been manually configured for the selected line speed and duplex settings.
11–QLogic Teaming Services Event Log Messages Table 11-8. Base Driver Event Log Messages (Continued) Message Number Severity Message Cause Corrective Action 15 Error Unable to map IO space. The device driver cannot allocate memory-mapped I/O to access driver registers. Remove other adapters from the system, reduce the amount of physical memory installed, and replace the adapter. 16 Informational Driver initialized successfully. The driver has successfully loaded. No action is required.
11–QLogic Teaming Services Event Log Messages Table 11-8. Base Driver Event Log Messages (Continued) Message Number 23 Severity Error Message Cause Corrective Action Network controller failed to exchange the interface with the bus driver. The driver and the bus driver are not compatible. Update to the latest driver set, ensuring the major and minor versions for both NDIS and the bus driver are the same.
11–QLogic Teaming Services Event Log Messages Table 11-9. Intermediate Driver Event Log Messages (Continued) System Event Message Number Severity Message Cause Corrective Action 7 Error Could not allocate memory for internal data structures. The driver cannot allocate memory from the operating system. Close running applications to free memory. 8 Warning Could not bind to adapter. The driver could not open one of the team physical adapters.
11–QLogic Teaming Services Event Log Messages Table 11-9. Intermediate Driver Event Log Messages (Continued) System Event Message Number Severity Message Cause 14 Informational Network adapter does not support Advanced Failover. The physical adapter does not support the QLogic NIC Extension (NICE). Replace the adapter with one that does support NICE. 15 Informational Network adapter is enabled through management interface.
11–QLogic Teaming Services Event Log Messages Virtual Bus Driver (VBD) Table 11-10 lists VBD event log messages. Table 11-10. Virtual Bus Driver (VBD) Event Log Messages Message Number Severity Message Cause Corrective Action 1 Error Failed to allocate memory for the device block. Check system memory resource usage. The driver cannot allocate memory from the operating system. Close running applications to free memory. 2 Informational The network link is down.
11–QLogic Teaming Services Event Log Messages Table 11-10. Virtual Bus Driver (VBD) Event Log Messages (Continued) Message Number Severity Message Cause Corrective Action 8 Informational Network controller configured for 1Gb half-duplex link. The adapter has been manually configured for the selected line speed and duplex settings. No action is required. 9 Informational Network controller configured for 1Gb full-duplex link.
12 NIC Partitioning and Bandwidth Management NIC partitioning and bandwidth management covered in this chapter includes: Overview “Configuring for NIC Partitioning” on page 187 Overview NIC partitioning (NPAR) divides a QLogic BCM57xx and BCM57xxx 10-gigabit Ethernet NIC into multiple virtual NICs by having multiple PCI physical functions per port. Each PCI function is associated with a different virtual NIC. To the OS and the network, each physical function appears as a separate NIC port.
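For example, on a Linux host, an NPAR-enabled port enumerates as several PCI functions, each bound to its own driver instance. The lspci output below is illustrative only; actual bus addresses and function counts depend on the adapter and configuration:
# lspci | grep -i NetXtreme
03:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM57810 10 Gigabit Ethernet
03:00.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM57810 10 Gigabit Ethernet
03:00.2 Ethernet controller: Broadcom Corporation NetXtreme II BCM57810 10 Gigabit Ethernet
03:00.3 Ethernet controller: Broadcom Corporation NetXtreme II BCM57810 10 Gigabit Ethernet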
12–NIC Partitioning and Bandwidth Management Configuring for NIC Partitioning 2016 Server, Nano Server Linux: RHEL 6.x and later family, RHEL 7.x and later family, SLES 11.x and later family, SLES 12.x and later family VMware: ESX 5.x and later family, ESX 6.x and later family NOTE 32-bit Linux operating systems have a limited amount of memory space available for kernel data structures. Therefore, QLogic recommends that you use only 64-bit Linux to configure NPAR.
12–NIC Partitioning and Bandwidth Management Configuring for NIC Partitioning 3. Change the Multi-Function Mode to NPAR. 4. Configure the NIC parameters for your configuration based on the options shown in Table 12-1, which lists the configuration parameters available on the NIC Partitioning Configuration window. Table 12-1. Configuration Options Parameter Flow Control Description Configures the Flow Control mode for this port.
12–NIC Partitioning and Bandwidth Management Configuring for NIC Partitioning Table 12-2 describes the functions available from the PF# X window. Table 12-2. Function Description Function Description Ethernet Protocol Enables and disables the Ethernet protocol. Option Enable Disable iSCSI Offload Protocol Enables and disables the iSCSI protocol. Enable Disable FCoE Offload protocol Enables and disables the FCoE protocol.
12–NIC Partitioning and Bandwidth Management Configuring for NIC Partitioning Consider this example configuration: Four functions (or partitions) are configured with a total of six protocols, as follows: Function 0: Ethernet, FCoE; Function 1: Ethernet; Function 2: Ethernet; Function 3: Ethernet, iSCSI. 1. If Relative Bandwidth Weight is configured as “0” for all four physical functions (PFs), all six offloads share the bandwidth equally.
13 Linux QCS Installation Installation information for QLogic Control Suite on a Linux platform includes: Overview “Installing WS-MAN or CIM-XML on Linux Server” on page 193 “Installing WS-MAN or CIM-XML on Linux Client” on page 200 “Installing QLogic Control Suite” on page 201 Overview The QLogic Control Suite (QCS) is a management application for configuring the BCM57xx and BCM57xxx family of adapters, also known as Converged Network Adapters.
13–Linux QCS Installation Overview Communication Protocols A communication protocol enables the exchange of information between the provider and the client software. These protocols are proprietary or open-source implementations of the Web-Based Enterprise Management (WBEM) and Common Information Model (CIM) standards from the Distributed Management Task Force (DMTF). Network administrators can choose the best option based on the prevailing standard on their network.
13–Linux QCS Installation Installing WS-MAN or CIM-XML on Linux Server Installing WS-MAN or CIM-XML on Linux Server Follow these steps to install WS-MAN or CIM-XML on a Linux server: Step 1: Install OpenPegasus Step 2: Start CIM Server on the Server Step 3: Configure OpenPegasus on the Server Step 4: Install QLogic CMPI Provider Step 5: Perform Linux Firewall Configuration, if Required Step 6: Install QCS and Related Management Applications Step 1: Install OpenPegasus On the Red Hat Linux OS, two installa
13–Linux QCS Installation Installing WS-MAN or CIM-XML on Linux Server NOTE On SUSE Linux, the Inbox OpenPegasus RPM is not available. OpenPegasus must be installed from source, as described in the following section. Note that in inbox Pegasus, HTTP is not enabled by default. After Inbox OpenPegasus is installed successfully, if no further configuration is required, follow the instructions in “Step 4: Install QLogic CMPI Provider” on page 198. To enable HTTP, see “Enable HTTP” on page 198.
13–Linux QCS Installation Installing WS-MAN or CIM-XML on Linux Server Set the Environment Variables Table 13-2 describes the environment variables for building OpenPegasus. Table 13-2.
13–Linux QCS Installation Installing WS-MAN or CIM-XML on Linux Server For WS-MAN support, add the following environment variable: export PEGASUS_ENABLE_PROTOCOL_WSMAN=true CIM-XML and WSMAN in OpenPegasus use the same ports for HTTP or HTTPS. The default port numbers for HTTP and HTTPS are 5988 and 5989, respectively. NOTE You can add these exports at the end of the .bash_profile. This file is located in the /root directory. The environment variables will be set when a user logs in using PuTTY.
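For example, a minimal block of exports at the end of /root/.bash_profile might look like the following; the source and installation directories are placeholders that must match your build (see Table 13-2 for the complete set of variables):
export PEGASUS_ROOT=/share/pegasus-2.10-src
export PEGASUS_HOME=/pegasus
export PATH=$PATH:$PEGASUS_HOME/bin
export PEGASUS_ENABLE_PROTOCOL_WSMAN=true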
13–Linux QCS Installation Installing WS-MAN or CIM-XML on Linux Server To check whether OpenPegasus has been installed properly, issue the following command: cimcli ei -n root/PG_Interop PG_ProviderModule NOTE For OpenPegasus compiled from source, PEGASUS_HOME must be defined when you start CIM server. Otherwise, CIM server will not load the repository properly. Consider setting PEGASUS_HOME in the .bash_profile file.
13–Linux QCS Installation Installing WS-MAN or CIM-XML on Linux Server If you want the root user to connect remotely, issue the following: cimconfig -s enableRemotePrivilegedUserAccess=true -p User configuration with privilege: The Linux system users are used for OpenPegasus authentication.
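The HTTP and HTTPS listeners are controlled with the same cimconfig syntax. For example, to enable both persistently (enableHttpConnection and enableHttpsConnection are standard OpenPegasus properties):
cimconfig -s enableHttpConnection=true -p
cimconfig -s enableHttpsConnection=true -p
Restart CIM server after changing persistent (-p) properties so that they take effect.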
13–Linux QCS Installation Installing WS-MAN or CIM-XML on Linux Server Uninstall Issue the following command to uninstall QLogic CMPI Provider: % rpm -e QLGC_CMPIProvider Step 5: Perform Linux Firewall Configuration, if Required Follow the appropriate procedure to open the required ports in the firewall. Red Hat To configure the Linux firewall on Red Hat: 1. Click System, select Administration, and then select Firewall. 2. Select Other Ports. 3.
13–Linux QCS Installation Installing WS-MAN or CIM-XML on Linux Client 7. Destination Port: 5988:5989 Source Port: Leave blank Click Next and then click Finish for the firewall rules to take effect. Step 6: Install QCS and Related Management Applications For procedures, see “Installing QLogic Control Suite” on page 201. Installing WS-MAN or CIM-XML on Linux Client No special software components are required on the Linux client system to use the HTTP except installing the QCS management application.
13–Linux QCS Installation Installing QLogic Control Suite Test HTTPS and SSL Connection from Linux Client Issue the following command to test whether the certificate is installed correctly on Linux: # curl -v --capath /etc/ssl/certs https://Hostname or IPAddress:5986/wsman If this fails, the certificate is not installed correctly and an error message appears, indicating that corrective action is required.
14 Fibre Channel Over Ethernet Fibre Channel over Ethernet (FCoE) information includes: Overview “FCoE Boot from SAN” on page 203 “Configuring FCoE” on page 237 “N_Port ID Virtualization (NPIV)” on page 239 Overview In today’s data center, multiple networks, including network attached storage (NAS), management, IPC, and storage, are used to achieve the performance and versatility that you require.
14–Fibre Channel Over Ethernet FCoE Boot from SAN DCB allocates a share of link bandwidth to FCoE traffic with enhanced transmission selection (ETS). DCB consolidates storage, management, computing, and communications fabrics onto a single physical fabric that is simpler to deploy, upgrade, and maintain than standard Ethernet networks. DCB technology allows the capable QLogic C-NICs to provide lossless data delivery, lower latency, and standards-based bandwidth sharing of data center physical links.
14–Fibre Channel Over Ethernet FCoE Boot from SAN Preparing QLogic Multiple Boot Agent for FCoE Boot (CCM) CCM is available only when the system is set to legacy boot mode; it is not available when the system is set to UEFI boot mode. The UEFI device configuration pages are available in both modes. 1. Invoke the CCM utility during POST. At the QLogic Ethernet Boot Agent banner (Figure 14-1), press the CTRL+S keys. Figure 14-1. Invoking the CCM Utility 2.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 3. Ensure that DCB and DCBX are enabled on the device (Figure 14-3). FCoE boot is only supported on DCBX-capable configurations. As such, DCB and DCBX must be enabled, and the directly attached link peer must also be DCBX-capable with parameters that allow for full DCBX synchronization. Figure 14-3. CCM Device Hardware Configuration 4.
14–Fibre Channel Over Ethernet FCoE Boot from SAN For all other devices, use the CCM MBA Configuration Menu to set the Boot Protocol option to FCoE (Figure 14-4). Figure 14-4. CCM MBA Configuration Menu 5. Configure the boot target and LUN. From the Target Information menu, select the first available path (Figure 14-5). Figure 14-5.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 6. Enable the Connect option, and then the target WWPN and Boot LUN information for the target to be used for boot (Figure 14-6). Figure 14-6. CCM Target Parameters The target information shows the changes (Figure 14-7). Figure 14-7.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 7. Press the ESC key until prompted to exit and save changes. To exit CCM, restart the system, and apply changes, press the CTRL+ALT+DEL keys. 8. Proceed to OS installation after storage access has been provisioned in the SAN. Preparing QLogic Multiple Boot Agent for FCoE Boot (UEFI) To prepare the QLogic multiple boot agent for FCOE boot (UEFI): 1.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 5. In the FCoE Configuration menu, select FCoE General Parameters. The FCoE General Parameters menu appears (see Figure 14-9). Figure 14-9. FCoE Boot Configuration Menu, FCoE General Parameters 6. In the FCoE General Parameters menu: a. Select the desired Boot to FCoE Target mode (see One-Time Disabled).
14–Fibre Channel Over Ethernet FCoE Boot from SAN Provisioning Storage Access in the SAN Storage access consists of zone provisioning and storage selective LUN presentation, each of which is commonly provisioned per initiator WWPN.
14–Fibre Channel Over Ethernet FCoE Boot from SAN When the initiator boot starts, it begins DCBX sync, FIP Discovery, Fabric Login, Target Login, and LUN readiness checks. As each of these phases completes, if the initiator is unable to proceed to the next phase, MBA presents the option to press the CTRL+R keys. 3. Press the CTRL+R keys. 4.
14–Fibre Channel Over Ethernet FCoE Boot from SAN For OS installation over the FCoE path, you must instruct the Option ROM to bypass FCoE and skip to CD or DVD installation media. As instructed in “Preparing QLogic Multiple Boot Agent for FCoE Boot (CCM)” on page 204, the boot order must be configured with QLogic boot first and installation media second. Furthermore, during OS installation, it is necessary to bypass the FCoE boot and pass through to the installation media for boot.
14–Fibre Channel Over Ethernet FCoE Boot from SAN Windows Server 2008 R2 and Windows Server 2008 SP2 FCoE Boot Installation To install Windows Server 2008 boot: 1. Before starting the OS installer, ensure that no USB flash drive is attached. The EVBD and BXFCOE drivers must be loaded during installation. 2. Go through the usual procedures for OS installation. 3. When no disk devices are found, Windows prompts you to load additional drivers. 4.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 2. Load the bxfcoe (OFC) driver (Figure 14-12). Figure 14-12. Installing the bxfcoe Driver 3. Select the boot LUN to be installed (Figure 14-13). Figure 14-13. Selecting Installation Disk Partition 4. Continue with the rest of the installation.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 5. After installation is complete and booted to SAN, execute the provided Windows driver installer and reboot. Installation is now complete. NOTE The boot initiator must be configured to point at the installation LUN that you need, and the boot initiator must have successfully logged in and determined the readiness of the LUN prior to starting installation.
14–Fibre Channel Over Ethernet FCoE Boot from SAN SLES 11 SP3 and SLES 12 Installation 1. To start the installation: a. Boot from the SLES installation medium. b. On the installation splash window, press the F6 key for driver update disk. c. Select Yes. d. In Boot Options, type withfcoe=1. e. Click Installation to proceed (Figure 14-14). Figure 14-14.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 2. Follow the prompts to choose the driver update medium (Figure 14-15) and load the drivers (Figure 14-16). Figure 14-15. Selecting Driver Update Medium Figure 14-16. Loading the Drivers 3. After the driver update is complete, select Next to continue with OS installation.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 4. When requested, click Configure FCoE Interfaces (Figure 14-17). Figure 14-17.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 5. Ensure that FCoE Enable is set to yes on the 10GbE QLogic initiator ports that you want to use as the SAN boot paths (Figure 14-18). Figure 14-18. Enabling FCoE 6. For each interface to be enabled for FCoE boot: a. Click Change Settings. b. On the Change FCoE Settings window (Figure 14-19), ensure that FCoE Enable and Auto_VLAN are set to yes. c. Ensure that DCB Required is set to no. d. Click Next to save the settings.
14–Fibre Channel Over Ethernet FCoE Boot from SAN Figure 14-19. Changing FCoE Settings 7. For each interface to be enabled for FCoE boot: a. Click Create FCoE VLAN Interface. b. On the VLAN interface creation dialog box, click Yes to confirm and trigger automatic FIP VLAN discovery. If successful, the VLAN is displayed under FCoE VLAN Interface. If no VLAN is visible, check your connectivity and switch configuration.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 8. After completing the configuration of all interfaces, click OK to proceed (Figure 14-20). Figure 14-20. FCoE Interface Configuration 9. Click Next to continue installation.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 10. YaST2 prompts you to activate multipath. Answer as appropriate (Figure 14-21). Figure 14-21. Disk Activation 11. Continue installation as usual.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 12. On the Expert page on the Installation Settings window, click Booting (Figure 14-22). Figure 14-22.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 13. Click the Boot Loader Installation tab, and then select Boot Loader Installation Details. Make sure you have one boot loader entry here; delete all redundant entries (Figure 14-23). Figure 14-23. Boot Loader Device Map 14. Click OK to proceed and complete installation. RHEL 6 Installation To install Linux FCoE boot on RHEL 6: 1. Boot from the installation medium. Instructions vary for RHEL 6.3 and 6.4. For RHEL 6.3: a.
14–Fibre Channel Over Ethernet FCoE Boot from SAN For details about installing the Anaconda update image, refer to the Red Hat Installation Guide, Section 28.1.3: http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Install ation_Guide/ap-admin-options.html#sn-boot-options-update For RHEL 6.4 and later: No updated Anaconda is required. a. On the installation splash window, press the TAB key. b. Add the dd option to the boot command line, as shown in Figure 14-24. c.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 2. When prompted Do you have a driver disk, select Yes (Figure 14-25). NOTE RHEL does not allow driver update media to be loaded through the network when installing driver updates for network devices. Use local media. Figure 14-25. Selecting a Driver Disk 3. When drivers are loaded, proceed with installation. 4. When prompted, select Specialized Storage Devices. 5. Click Add Advanced Target.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 6. Select Add FCoE SAN, and then click Add drive (Figure 14-26). Figure 14-26. Adding FCoE SAN Drive 7. For each interface intended for FCoE boot, select the interface, clear the Use DCB check box, select Use auto vlan, and then click Add FCoE Disk(s) (Figure 14-27). Figure 14-27. Configuring FCoE Parameters 8. Repeat steps 8 through 10 for all initiator ports.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 9. Confirm all FCoE visible disks are visible on the Multipath Devices or Other SAN Devices pages (Figure 14-28). Figure 14-28. Confirming FCoE Disks 10. Click Next to proceed. 11. Click Next and complete installation as usual. Upon completion of installation, the system reboots. 12. When booted, ensure all boot path devices are set to start on boot. Set onboot=yes under each network interface config file in /etc/sysconfig/network-scripts. 13.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 3. Add the option dd to the boot command line, as shown in Figure 14-29. Figure 14-29. Adding the “dd” Installation Option 4. Press the ENTER key to proceed. 5. At the Driver disk device selection prompt: a. Refresh the device list by pressing the R key. b. Type the appropriate number for your media. c. Press the C key to continue.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 12. On the Installation Destination window (Figure 14-30) under Other Storage Options, select your Partitioning options, and then click Done. Figure 14-30. Selecting Partitioning Options 13. On the Installation Summary window, click Begin Installation. Linux: Adding Boot Paths RHEL requires updates to the network configuration when adding new boot through an FCoE initiator that was not configured during installation.
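Each boot path interface requires an ifcfg file under /etc/sysconfig/network-scripts with ONBOOT enabled. A minimal sketch, assuming the newly added interface is eth2 (substitute the name reported by ifconfig -a), placed in /etc/sysconfig/network-scripts/ifcfg-eth2:
DEVICE=eth2
ONBOOT=yes
BOOTPROTO=none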
14–Fibre Channel Over Ethernet FCoE Boot from SAN RHEL 6.2 and Later On RHEL 6.2 and later, if the system is configured to boot through an initiator port that has not previously been configured in the OS, the system automatically boots successfully, but will encounter problems during shutdown. All new boot path initiator ports must be configured in the OS before updating pre-boot FCoE boot parameters. 1. Identify the network interface names for the newly added interfaces through ifconfig -a. 2.
14–Fibre Channel Over Ethernet FCoE Boot from SAN To install ESXi FCoE boot: 1. Boot from the updated ESXi 6.0 U2 installation image and select ESXi 6.0 U2 installer when prompted. 2. On the Welcome to the VMware ESXi installation window, press the ENTER key to continue. 3. On the EULA window, press the F11 key to accept the agreement and continue. 4. On the Select a Disk window (Figure 14-31), scroll to the boot LUN for installation, and then press ENTER to continue. Figure 14-31.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 7. On the Confirm Install window (Figure 14-33), press the F11 key to confirm the installation and repartition. Figure 14-33. ESXi Confirm Install 8. After successful installation (Figure 14-34), press ENTER to reboot. Figure 14-34. ESXi Installation Complete 9. On 57800 and 57810 boards, the management network is not vmnic0.
14–Fibre Channel Over Ethernet FCoE Boot from SAN 10. For BCM57800 and BCM57810 boards, the FCoE boot devices must have a separate vSwitch other than vSwitch0. This switch allows DHCP to assign the IP address to the management network rather than to the FCoE boot device. To create a vSwitch for the FCoE boot devices, add the boot device vmnics in vSphere Client on the Configuration page under Networking. Figure 14-36 shows an example. Figure 14-36.
14–Fibre Channel Over Ethernet Booting from SAN After Installation Booting from SAN After Installation After boot configuration and OS installation are complete, you can reboot and test the installation. On this and all future reboots, no other user interactivity is required. Ignore the CTRL+D prompt and allow the system to boot through to the FCoE SAN LUN, as shown in Figure 14-37. Figure 14-37.
14–Fibre Channel Over Ethernet Booting from SAN After Installation 3. To update the ramdisk on RHEL 6.x systems, issue: dracut --force 4. To update the ramdisk on SLES 11 SPX systems, issue: mkinitrd 5. If you are using a different name for the initrd under /boot: a. Overwrite it with the default, because dracut/mkinitrd updates the ramdisk with the default original name. b.
14–Fibre Channel Over Ethernet Configuring FCoE To avoid any of the preceding error messages, you must ensure that there is no USB flash drive attached until the setup asks for the drivers. When you load the drivers and see your SAN disks, detach or disconnect the USB flash drive immediately before selecting the disk for further installation. Configuring FCoE By default, DCB is enabled on QLogic BCM57xx and BCM57xxx FCoE-, DCB-compatible C-NICs.
14–Fibre Channel Over Ethernet Configuring FCoE To enable and disable the FCoE-offload instance on Windows using QCC GUI: 1. Open QCC GUI. 2. In the tree pane on the left, under the port node, select the port’s virtual bus device instance. 3. In the configuration pane on the right, click the Resource Config tab. The Resource Config page appears (see Figure 14-39). Figure 14-39. Resource Config Page 4. 5. Complete the Resource Config page for each selected port as follows: a.
14–Fibre Channel Over Ethernet N_Port ID Virtualization (NPIV) NPIV is a Fibre Channel protocol that allows multiple virtual N_Ports to be instantiated on a single physical N_Port. Each NPIV port is provided with a unique identification in the fabric and appears as a distinct initiator port at the operating system level. QLogic NetXtreme II BCM57xx and BCM57xxx FCoE drivers support NPIV by default, without requiring any user input.
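For example, on Linux, a virtual port can be instantiated through the Fibre Channel transport sysfs interface; the host number and the WWPN:WWNN pair below are placeholders:
echo "2001000e1e123456:2000000e1e123456" > /sys/class/fc_host/host5/vport_create
The new virtual port logs in to the fabric with its own WWPN and appears as an additional fc_host instance.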
15 Data Center Bridging This chapter provides the following information about the data center bridging feature: Overview “DCB Capabilities” on page 241 “Configuring DCB” on page 242 “DCB Conditions” on page 242 “Data Center Bridging in Windows Server 2012 and Later” on page 243 Overview Data center bridging (DCB) is a collection of IEEE specified standard extensions to Ethernet to provide lossless data delivery, low latency, and standards-based bandwidth sharing of data center physical
15–Data Center Bridging DCB Capabilities DCB Capabilities DCB capabilities include ETS, PFC, and DCBX, as described in this section. Enhanced Transmission Selection (ETS) Enhanced transmission selection (ETS) provides a common management framework for assignment of bandwidth to traffic classes. Each traffic class or priority can be grouped in a priority group (PG), and it can be considered as a virtual link or virtual interface queue.
15–Data Center Bridging Configuring DCB Data Center Bridging Exchange (DCBX) Data center bridging exchange (DCBX) is a discovery and capability exchange protocol that is used for conveying capabilities and configuration of ETS and PFC between link partners to ensure consistent configuration across the network fabric. In order for two devices to exchange information, one device must be willing to adopt network configuration from the other device.
15–Data Center Bridging Data Center Bridging in Windows Server 2012 and Later In NPAR-enabled configurations, ETS (if operational) overrides the Bandwidth Weights assigned to each function. Transmission selection weights are instead per protocol, per the ETS settings. Maximum bandwidths per function are still honored in the presence of ETS. In the absence of an iSCSI or FCoE application TLV advertised through the DCBX peer, the adapter will use the settings taken from the local Admin MIB.
15–Data Center Bridging Data Center Bridging in Windows Server 2012 and Later To revert to standard QCS control over the QLogic DCB feature set, uninstall the Microsoft QoS feature or disable quality of service in the QCS or Device Manager NDIS advance properties page. NOTE QLogic recommends that you do not install the DCB feature if SR-IOV will be used.
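If you do allow Windows Server 2012 or later to manage DCB, the QoS feature is driven from PowerShell. The following is a minimal sketch; the policy name, priority, and bandwidth percentage are examples only:
Install-WindowsFeature Data-Center-Bridging
New-NetQosPolicy "FCoE" -FCoE -PriorityValue8021Action 3
New-NetQosTrafficClass "FCoE" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Enable-NetQosFlowControl -Priority 3
Set-NetQosDcbxSetting -Willing $false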
16 SR-IOV This chapter provides information about single-root I/O virtualization (SR-IOV): Overview Enabling SR-IOV “Verifying that SR-IOV is Operational” on page 248 “SR-IOV and Storage Functionality” on page 249 “SR-IOV and Jumbo Packets” on page 249 Overview Virtualization of network controllers allows users to consolidate their networking hardware resources and run multiple virtual machines concurrently on consolidated hardware.
16–SR-IOV Enabling SR-IOV To enable SR-IOV: 1. Enable the feature on the adapter using either QCC GUI, QCS CLI, Dell pre-boot UEFI, or pre-boot CCM. If using Windows QCC GUI: a. Select the network adapter in the Explorer View pane. Click the Configuration tab and select SR-IOV Global Enable. b. In the SR-IOV VFs per PF box, configure the quantity of SR-IOV virtual functions (VFs) that the adapter can support per physical function, from 0 to 64 in increments of 8 (default = 16). c.
16–SR-IOV Enabling SR-IOV If using pre-boot CCM: a. During power up, press CTRL+S at the prompt to enter CCM. b. Select the SR-IOV-capable adapter from the Device List. On the Main Menu, select Device Hardware Configuration, and then select SR-IOV Enabled. c. To configure the quantity of VFs that the adapter can support: If Multi-Function Mode is set to SF (Single Function), the Number of VFs per PF box appears, which you can set from 0 to 64 in increments of 8 (default is 16).
16–SR-IOV Verifying that SR-IOV is Operational c. From lspci, select the 10G NIC sequence number for which SR-IOV is required. For example: ~ # lspci | grep -i Broadcom 0000:03:00.0 Network Controllers: Broadcom Corporation NetXtreme II BCM57810 10 Gigabit Ethernet [vmnic0] Following is a sample output. 0000:03:00.1 Network Controllers: Broadcom Corporation NetXtreme II BCM57810 10 Gigabit Ethernet [vmnic1] ~ # d.
16–SR-IOV SR-IOV and Storage Functionality To verify SR-IOV in ESXi CLI: 1. Issue the lspci command ~ # lspci | grep -i ether Following is a sample output. 0000:03:01.0 Network controller: Broadcom Corporation NetXtreme II BCM57810 10 Gigabit Ethernet Virtual Function [PF_0.3.0_VF_0] 2. To list the SR-IOV-enabled NIC, issue the esxcli command: ~ # esxcli network sriovnic list Following is a sample output.
17 Specifications Specifications, characteristics, and requirements include: 10/100/1000BASE-T and 10GBASE-T Cable Specifications “Interface Specifications” on page 253 “NIC Physical Characteristics” on page 254 “NIC Power Requirements” on page 254 “Wake on LAN Power Requirements” on page 255 “Environmental Specifications” on page 256 10/100/1000BASE-T and 10GBASE-T Cable Specifications Table 17-1.
17–Specifications 10/100/1000BASE-T and 10GBASE-T Cable Specifications Table 17-2. 10GBASE-T Cable Specifications Port Type Connector Media Maximum Distance 10GBASE-T a RJ45 CAT-6 a UTP 131ft (40m); CAT-6A a UTP 328ft (100m) a 10GBASE-T signaling requires four twisted pairs of CAT-6 or CAT-6A (augmented CAT-6) balanced cabling, as specified in ISO/IEC 11801:2002 and ANSI/TIA/EIA-568-B Supported SFP+ Modules Per NIC Table 17-3.
Table 17-4. BCM57810 Supported Modules

Module Type            Dell Part Number   Module Vendor      Module Part Number
Optic Modules (SR)     W365M              Avago              AFBR-703SDZ-D1
                       N743D              Finisar Corp.      FTLX8571D3BCL
                       R8H2F              Intel Corp.        AFBR-703SDZ-IN2
                       R8H2F              Intel Corp.        FTLX8571D3BCV-IT
Direct Attach Cables   K585N              Cisco-Molex Inc.   74752-9093
                       J564N              Cisco-Molex Inc.   74752-9094
                       H603N              Cisco-Molex Inc.   74752-9096
                       G840N              Cisco-Molex Inc.
Table 17-5. BCM57840 Supported Modules

Module Type            Dell Part Number   Module Vendor      Module Part Number
Optic Modules (SR)     R8H2F              Intel Corp.        AFBR-703SDZ-IN2
                       R8H2F              Intel Corp.        FTLX8571D3BCV-IT
Direct Attach Cables   K585N              Cisco-Molex Inc.   74752-9093
                       J564N              Cisco-Molex Inc.   74752-9094
                       H603N              Cisco-Molex Inc.   74752-9096
                       G840N              Cisco-Molex Inc.
NIC Physical Characteristics

Table 17-8. NIC Physical Characteristics

NIC Type                                NIC Length       NIC Width
BCM57810S PCI Express x8 low profile    6.6in (16.8cm)   2.54in (6.5cm)

NIC Power Requirements

Table 17-9. BCM957810A1006G NIC Power Requirements

Link             NIC 12V Current Draw (A)   NIC 3.3V Current Draw (A)   NIC Power (W) a
10G SFP Module   1.00                       0.004                       12.0

a Power, measured in watts (W), is a direct calculation of total current draw (A) multiplied by voltage (V).
Wake on LAN Power Requirements

Table 17-11. BCM957840A4006G Mezzanine Card Power Requirements

Link                   Total Power (12V and 3.3VAUX) (W) a
10G SFP+               12.0
Standby WoL Enabled    5.0
Standby WoL Disabled   0.5

a Power, measured in watts (W), is a direct calculation of total current draw (A) multiplied by voltage (V). The maximum power consumption for the adapter will not exceed 25W.

Table 17-12. BCM957840A4007G Mezzanine Card Power Requirements

Link    Total Power (3.
Environmental Specifications

Table 17-13. BCM5709 and BCM5716 Environmental Specifications

Parameter                                      Condition
Operating Temperature                          32°F to 131°F (0°C to 55°C)
Air Flow Requirement (LFM)                     0
Storage Temperature                            –40°F to 149°F (–40°C to 65°C)
Storage Humidity                               5% to 95% non-condensing
Vibration and Shock                            IEC 68, FCC Part 68.302, NSTA, 1A
Electrostatic/Electromagnetic Susceptibility   EN 61000-4-2, EN 55024

Table 17-14.
Table 17-16. BCM957840A4007G Environmental Specifications

Parameter                    Condition
Operating Temperature        32°F to 149°F (0°C to 65°C)
Air Flow Requirement (LFM)   200
Storage Temperature          –40°F to 149°F (–40°C to 65°C)
Storage Humidity             5% to 95% non-condensing
Vibration and Shock          IEC 68, FCC Part 68.
18 Regulatory Information

Regulatory information covered in this chapter includes the following:
• Product Safety
• AS/NZS (C-Tick)
• “FCC Notice” on page 259
• “VCCI Notice” on page 261
• “CE Notice” on page 266
• “Canadian Regulatory Information (Canada Only)” on page 267
• “Korea Communications Commission (KCC) Notice (Republic of Korea Only)” on page 269
• “BSMI” on page 272
• “Certifications for BCM95709SA0908G, BCM957710A1023G (E02D001), and BCM957711A1123G (E03D001)” on page 272

Product Safety
FCC Notice
FCC, Class B
QLogic BCM57xx and BCM57xxx gigabit Ethernet controller:
• BCM95708A0804F
• BCM95709A0907G
• BCM95709A0906G
• BCM957810A1008G

QLogic Corporation
26650 Aliso Viejo Parkway
Aliso Viejo, CA 92656
USA

The equipment complies with Part 15 of the FCC Rules.
FCC, Class A
QLogic BCM57xx and BCM57xxx gigabit Ethernet controller:
• BCM95709A0916G

QLogic BCM57xx and BCM57xxx 10-gigabit Ethernet controller:
• BCM957800
• BCM957710A1022G
• BCM957710A1021G
• BCM957711A1113G
• BCM957711A1102G
• BCM957810A1006G
• BCM957840A4006G
• BCM957840A4007G

QLogic Corporation
26650 Aliso Viejo Parkway
Aliso Viejo, CA 92656
USA

This device complies with Part 15 of the FCC Rules.
Do not make mechanical or electrical modifications to the equipment.

NOTE
If the device is changed or modified without permission of QLogic, the user may void his or her authority to operate the equipment.

VCCI Notice
The following tables provide the VCCI notice physical specifications for the QLogic BCM57xx and BCM57xxx adapters for Dell.

Table 18-1.
Table 18-2. QLogic 57800S Quad RJ-45, SFP+, or Direct Attach Rack Network Daughter Card Physical Characteristics (Continued)

Item             Description
Connectors       Two ports SFP+ (10GbE); two ports RJ45 (1GbE)
Certifications   RoHS, FCC A, UL, CE, VCCI, BSMI, C-Tick, KCC, TUV, and ICES-003

Table 18-3. QLogic 57810S Dual 10GBASE-T PCI-e Card Physical Characteristics

Item          Description
Ports         Dual 10Gbps BASE-T Ethernet ports
Form Factor   PCI Express short, low-profile card 6.
Table 18-4. QLogic 57810S Dual SFP+ or Direct Attach PCIe Physical Characteristics (Continued)

Item                Description
Supported Servers   13th Generation: R630, R730, R730xd, and T630
                    12th Generation: R220, R320, R420, R520, R620, R720, R720xd, R820, R920, T420, and T620
Certifications      RoHS, FCC A, UL, CE, VCCI, BSMI, C-Tick, KCC, TUV, and ICES-003

Table 18-5.
Table 18-7. QLogic 57840S Quad 10GbE SFP+ or Direct Attach Rack Network Daughter Card Physical Characteristics

Item          Description
Ports         Quad 10Gbps Ethernet
Form Factor   PCI Express short, low-profile card 6.60in×2.71in (167.64mm×68.
The equipment is a Class B product based on the standard of the Voluntary Control Council for Interference from Information Technology Equipment (VCCI). If used near a radio or television receiver in a domestic environment, it may cause radio interference. Install and use the equipment according to the instruction manual.
VCCI Class A Statement (Japan)

CE Notice
QLogic BCM57xx and BCM57xxx gigabit Ethernet controller:
• BCM95708A0804F
• BCM95709A0907G
• BCM95709A0906G
• BCM95709A0916G
• BCM957810A1008G

QLogic BCM57xx and BCM57xxx 10-gigabit Ethernet controller:
• BCM957710A1022G
• BCM957710A1021G
• BCM957711A1113G
• BCM957711A1102G
• BCM957840A4006G
• BCM957840A4007G

This product has been determined to be in compliance with 2006/95/EC (Low Voltage Directive), 2004/108/EC (EMC Directive
Canadian Regulatory Information (Canada Only)
Industry Canada, Class B
QLogic BCM57xx and BCM57xxx gigabit Ethernet controller:
• BCM95708A0804F
• BCM95709A0907G
• BCM95709A0906G

QLogic Corporation
26650 Aliso Viejo Parkway
Aliso Viejo, CA 92656
USA

This Class B digital apparatus complies with Canadian ICES-003.
Industry Canada, classe B
QLogic BCM57xx and BCM57xxx gigabit Ethernet controller:
• BCM95708A0804F
• BCM95709A0907G
• BCM95709A0906G

QLogic Corporation
26650 Aliso Viejo Parkway
Aliso Viejo, CA 92656
USA

Cet appareil numérique de la classe B est conforme à la norme canadienne ICES-003. (This Class B digital apparatus complies with Canadian ICES-003.)
Korea Communications Commission (KCC) Notice (Republic of Korea Only)
B Class Device
QLogic BCM57xx and BCM57xxx gigabit Ethernet controller:
• BCM95708A0804F
• BCM95709A0907G
• BCM95709A0906G

QLogic Corporation
26650 Aliso Viejo Parkway
Aliso Viejo, CA 92656
USA
Note that this device has been approved for non-business purposes and may be used in any environment, including residential areas.
BSMI

Certifications for BCM95709SA0908G, BCM957710A1023G (E02D001), and BCM957711A1123G (E03D001)
This section is included on behalf of Dell, and QLogic is not responsible for the validity or accuracy of the information.
• Canadian Regulatory Information, Class A (Canada)
• Korea Communications Commission (KCC) Notice (Republic of Korea)

FCC Notice
FCC, Class A
QLogic BCM57xx and BCM57xxx gigabit Ethernet controller:
• BCM95709SA0908G

QLogic BCM57xx and BCM57xxx 10-gigabit Ethernet controller:
• BCM957710A1023G
• BCM957711A1123G (E03D001)
• E02D001

Dell Inc.
• Move the system away from the receiver.
• Plug the system into a different outlet so that the system and receiver are on different branch circuits.
• Do not make mechanical or electrical modifications to the equipment.

NOTE
If the device is changed or modified without permission of Dell Inc., the user may void his or her authority to operate the equipment.
VCCI Class A Statement (Japan)

CE Notice
Class A
QLogic BCM57xx and BCM57xxx gigabit Ethernet controller:
• BCM95709SA0908G

QLogic BCM57xx and BCM57xxx 10-gigabit Ethernet controller:
• BCM957710A1023G
• BCM957711A1123G (E03D001)
• E02D001

Dell Inc.
QLogic BCM57xx and BCM57xxx 10-gigabit Ethernet controller:
• BCM957710A1023G
• BCM957711A1123G (E03D001)
• E02D001

Dell Inc.
Worldwide Regulatory Compliance, Engineering and Environmental Affairs
One Dell Way PS4-30
Round Rock, Texas 78682, USA
512-338-4400

This Class A digital apparatus complies with Canadian ICES-003.
Korea Communications Commission (KCC) Notice (Republic of Korea Only)
A Class Device
QLogic BCM57xx and BCM57xxx gigabit Ethernet controller:
• BCM95709SA0908G (5709s-mezz)

QLogic BCM57xx and BCM57xxx 10-gigabit Ethernet controller:
• BCM957710A1023G
• BCM957711A1123G (E03D001)
• E02D001

Dell Inc.
19 Troubleshooting

Troubleshooting topics cover the following:
• Hardware Diagnostics
• “Checking Port LEDs” on page 281
• “Troubleshooting Checklist” on page 281
• “Checking if Current Drivers Are Loaded” on page 282
• “Running a Cable Length Test” on page 283
• “Testing Network Connectivity” on page 283
• “Microsoft Virtualization with Hyper-V” on page 284
• “Removing the QLogic BCM57xx and BCM57xxx Device Drivers” on page 289
• “Upgrading Windows Operating Systems” on page 289
• “QLog
QCS Diagnostic Tests Failures
If any of the following tests fail while running the diagnostic tests from QCS, this may indicate a hardware issue with the NIC or LOM that is installed in the system:
• Control Registers
• MII Registers
• EEPROM
• Internal Memory
• On-Chip CPU
• Interrupt
• Loopback - MAC
• Loopback - PHY
• Test LED

Troubleshooting steps that may help correct the failure:
1.
Checking Port LEDs
To check the state of the network link and activity, see “Network Link and Activity Indication” on page 6.

Troubleshooting Checklist

CAUTION
Before you open the cabinet of your server to add or remove the adapter, review “Safety Precautions” on page 19.

The following checklist provides recommended actions to take to resolve problems installing the QLogic BCM57xx and BCM57xxx adapter or running it in your system.
Checking if Current Drivers Are Loaded
Follow the appropriate procedure for your operating system to confirm that the current drivers are loaded.

Windows
See the QCC GUI online help for information on viewing vital information about the adapter, link status, and network connectivity.

Linux
To verify that the bnx2.
Following is a sample output.
driver: bnx2x
version: 1.78.07
firmware-version: bc 7.8.6
bus-info: 0000:04:00.2

If you loaded a new driver but have not yet rebooted, the modinfo command does not show the updated driver information.
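A quick way to cross-check which driver is actually running against what is installed on disk is to compare the loaded module with the driver that ethtool reports. The following is a minimal sketch, assuming a bnx2x-managed interface named eth0:

# lsmod | grep bnx2
(Confirms that the bnx2 or bnx2x module is loaded.)
# ethtool -i eth0
(Reports the driver, version, and firmware of the running interface.)
# dmesg | grep -i bnx2x
(Shows driver load messages, including the version banner.)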
Linux
To verify that the Ethernet interface is up and running, issue ifconfig to check the status of the Ethernet interface. You can use netstat -i to check the statistics on the Ethernet interface. For information on ifconfig and netstat, see Chapter 7 Linux Driver Software.

Ping an IP host on the network to verify that a connection has been established. From the command line, issue the ping command, and then press ENTER.
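For example, a minimal connectivity check might look like the following; the interface name eth0 and the target address are placeholders for your own values:

# ifconfig eth0
(The interface should report UP and RUNNING.)
# netstat -i
(Shows per-interface RX/TX packet and error counters.)
# ping -c 4 192.168.1.1
(Replace 192.168.1.1 with a reachable IP host on your network.)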
Table 19-1. Configurable Network Adapter Hyper-V Features (Continued)

                                       Supported in Windows Server
Feature                                2008   2008 R2   2012 and Later   Comments and Limitations
IPv6 CO (parent and child partition)   No*    Yes       Yes              * When bound to a virtual network; OS limitation.
Jumbo frames                           No*    Yes       Yes              * OS limitation.
RSS                                    No*    No*       Yes              * OS limitation.
RSC                                    No*    No*       Yes              * OS limitation.
SR-IOV                                 No*    No*       Yes              * OS limitation.
Windows Server 2008 R2 and 2012
When configuring a BCM57xx and BCM57xxx network adapter on a Hyper-V system, be aware of the following:
• An adapter that is to be bound to a virtual network must not be configured for VLAN tagging through the driver’s advanced properties. Instead, Hyper-V should manage VLAN tagging exclusively.
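To let Hyper-V own the VLAN tag, leave the adapter’s VLAN ID advanced property disabled and set the tag on the virtual machine’s network adapter instead. The following is a minimal sketch using standard Hyper-V PowerShell cmdlets; the VM name and VLAN ID are examples:

PS C:\> Set-VMNetworkAdapterVlan -VMName "VM1" -Access -VlanId 5   # tag this VM's traffic with VLAN 5
PS C:\> Get-VMNetworkAdapterVlan -VMName "VM1"                     # confirm the VLAN assignment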
Table 19-2. Configurable Teamed Network Adapter Hyper-V Features (Continued)

                           Supported in Windows Server Version
Feature                    2008       2008 R2   2012 and Later   Comments and Limitations
Large send offload (LSO)   Limited*   Yes       Yes              * Conforms to miniport limitations outlined in Table 19-1.
Checksum offload (CO)      Limited*   Yes       Yes              * Conforms to miniport limitations outlined in Table 19-1.
In an IPv6 network, a team that supports CO or LSO and is bound to a Hyper-V virtual network will report CO and LSO as an offload capability in QCS; however, CO and LSO will not work. This issue is a limitation of Hyper-V, which does not support CO and LSO in an IPv6 network.
To create a VMQ-capable SLB team:
1. Create an SLB team. If using the Teaming Wizard, when you select the SLB team type, also select Enable HyperV Mode. If using Expert mode, enable the property on the Create Team or Edit Team pages.
2. Follow these instructions to add the required registry entries in Windows:
http://technet.microsoft.com/en-us/library/gg162696%28v=ws.10%29.aspx
3.
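Once the team is bound to the Hyper-V virtual network, you can confirm from PowerShell (Windows Server 2012 and later) that VMQ is active on the underlying adapters; the adapter name is an example:

PS C:\> Get-NetAdapterVmq -Name "SLOT 2"   # Enabled should read True
PS C:\> Get-NetAdapterVmqQueue             # lists the queues currently assigned to VMs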
3. Perform the Windows upgrade.
4. Reinstall the latest QLogic adapter drivers and the QLogic Control Suite application.

QLogic Boot Agent
Problem: Unable to obtain network settings through DHCP using PXE.
Solution: For proper operation, make sure that the Spanning Tree Protocol (STP) is disabled or that portfast mode (for Cisco) is enabled on the port to which the PXE client is connected. For instance, set spantree portfast 4/12 enable.
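The set spantree command shown uses CatOS syntax. On a switch running Cisco IOS, the equivalent per-interface configuration would look like the following sketch; the interface name is an example:

Switch(config)# interface GigabitEthernet4/12
Switch(config-if)# spanning-tree portfast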
QLASP
Problem: A system containing an 802.3ad team causes a Netlogon service failure in the system event log and prevents it from communicating with the domain controller during boot-up.
Solution: Microsoft Knowledge Base Article 326152 (http://support.microsoft.
Linux
Problem: BCM57xx and BCM57xxx devices with SFP+ Flow Control default to Off rather than Rx/Tx Enable.
Solution: The Flow Control default setting for revision 1.6.x and later has been changed to Rx Off and Tx Off because SFP+ devices do not support auto-negotiation for flow control. Flow control can still be forced on manually, as shown in the sketch after the next item.

Problem: On kernels older than 2.6.
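For the SFP+ flow-control item above, the following is a minimal sketch for forcing flow control with ethtool, assuming an interface named eth0:

# ethtool -A eth0 rx on tx on
(Forces receive and transmit flow control on, since SFP+ media cannot auto-negotiate it.)
# ethtool -a eth0
(Displays the current pause parameters to confirm the change.)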
Problem: A software defect can cause the system to be unable to BFS boot to an iSCSI or FCoE target if an iSCSI personality is enabled on the first partition of one port while an FCoE personality is enabled on the first partition of another port. The MBA driver performs a check for this configuration and prompts the user when it is found.
Solution: If using the 7.6.
Miscellaneous
Problem: Cannot configure Resource Reservations in QCC after SNP is uninstalled.
Solution: Reinstall SNP. Prior to uninstalling SNP from the system, ensure that NDIS is enabled by selecting the check box on the Resource Configuration window, available from the Resource Reservations section of the Configurations page. If NDIS is disabled and SNP is removed, there is no access to re-enable the device.
Corporate Headquarters
Cavium, Inc.    2315 N. First Street    San Jose, CA 95131    408-943-7100

International Offices
UK | Ireland | Germany | France | India | Japan | China | Hong Kong | Singapore | Taiwan | Israel

Copyright © 2015–2018 Cavium, Inc. All rights reserved worldwide. QLogic Corporation is a wholly owned subsidiary of Cavium, Inc. QLogic, FastLinQ, and QConvergeConsole are registered trademarks of Cavium, Inc.