User’s Guide
Converged Network Adapter
QMD8262-k, QLE8262, QME8262-k
CU0354602-00 M
Third party information brought to you courtesy of Dell EMC.
This document is provided for informational purposes only and may contain errors. QLogic reserves the right, without notice, to make changes to this document or in product design or specifications. QLogic disclaims any warranty of any kind, expressed or implied, and does not guarantee that any results or performance described in the document will be achieved by you.
Table of Contents

Introduction
    Overview
    Intended Audience
    User’s Guide Content
    Related Materials
    Functionality and Features
Installing the Linux iSCSI Driver
    Building the iSCSI Adapter Driver for SLES 11 SP4
    Building the iSCSI Adapter Driver for RHEL 6.5 and SLES 12
    Building the iSCSI Adapter Driver for RHEL 6.5 and SLES 11 SP3
Installing the Linux FCoE Driver
Installing the QConvergeConsole VMware vCenter Server Plug-in
    Installation Package Contents
    QConvergeConsole VMware vCenter Server Plug-in Installation
    Plug-in Unregistration from a Manual Install
    Enabling and Disabling the Plug-in
Windows Teaming
    Overview
    Teaming Modes
    Using the CLI for Teaming
    Using the Team Management GUI
Configuring the Switch for iSCSI over DCBX
    Verify the Version of the Switch Firmware
    Create and Configure the iSCSI VLAN on the Switch
    Create and Configure the CEE Map for iSCSI Traffic Bandwidth and PFC
    Configure LLDP/DCBX for the iSCSI TLV
QConvergeConsole CLI
Windows Device Manager
    Configure Switch Independent Partitioning
    Change Personalities
    Manage Bandwidth
Configuring iSCSI Boot Settings
    Boot Device Primary and Alternate
    Adapter Boot Mode
    Primary and Alternate Boot Device Settings
    Configuring the iSCSI Boot Parameters
    Configuring QLogic iSCSI Boot
Environmental Specifications
C  QConvergeConsole GUI
    Introduction to QConvergeConsole
    Downloading QConvergeConsole Documentation
    Downloading and Installing Management Agents
    Installing the Agents from the QLogic Web Site
Introduction

Overview

This user’s guide covers the following products:
- QLogic QMD8262-k blade network daughter card
- QLogic QLE8262 monolithic server standup card
- QLogic QME8262-k blade mezzanine card

NOTE: Throughout this document, the term adapter refers to any or all of these products.

This guide provides technical information about the adapters, including how to install and configure the adapter, as well as detailed descriptions of the adapter’s various uses and functions.
Related Materials

Switch Independent Partitioning covers how to configure Switch Independent Partitioning using utilities such as QConvergeConsole, as well as configuring iSCSI over data center bridging exchange (DCBX) using a Brocade® Series 8000 FCoE switch and a QLogic iSCSI Host Bus Adapter.
Functionality and Features

This section provides the following information:
- Functional Description
- Features
- Supported Operating Systems

Functional Description

Functional descriptions for the adapters are as follows:
- QMD8262-k: This is a network daughter card with FCoE and iSCSI offload for the blade server environment.
- QLE8262: This is a standard form-factor adapter with FCoE and iSCSI offload for the rack and tower server environment.
Advanced stateless offload features include:
- IP, TCP, and user datagram protocol (UDP) checksums
- Large segment offload (LSO)
- Large receive offload (LRO)

Stateful offload features include:
- iSCSI offload
- Fibre Channel and FCoE offload

Advanced management features for Converged Network Adapters and Fibre Channel adapters, including QConvergeConsole (GUI and CLI).

Interrupt management and scalability features, including:
- Receive side scaling (RSS)
Supported Operating Systems

The adapter supports the following operating systems. To view the most complete and current list, refer to the product release notes.

Windows
- Windows Server® 2016 Nano
- Windows Server 2012
- Windows Server 2012 R2
- Windows Server 2008 SP2 and x64 (12G Only)
- Windows Server 2008 R2 with SP1
- Windows PE 5.0 64-bit
- Windows PE 10.0 64-bit

Linux
- Red Hat® Enterprise Linux (RHEL®) 7.3
- Red Hat Enterprise Linux (RHEL) 7.
1  Hardware Installation

Overview

This section provides the hardware and software requirements, safety precautions, a pre-installation checklist, and a procedure for installing the adapter.

Hardware and Software Requirements

Before you install the adapter, verify that your system meets the following hardware and software requirements.
Before you touch internal components, verify that the system is powered OFF and is unplugged. Install or remove adapters in a static-free environment. The use of a properly grounded wrist strap or other personal antistatic devices and an antistatic mat is strongly recommended.

Pre-Installation Checklist

1. Verify that your system meets the hardware and software requirements listed in “Hardware and Software Requirements” on page 1.
2.
7. Close the computer cover.
8. Plug the Ethernet cable into the adapter.
9. Plug in the power cable and turn on the computer.

For more detailed information, refer to the Hardware Owner’s Manual for your Dell PowerEdge server.

Connecting to the Network

Follow the instructions for your adapter.

QMD8262-k, QME8262-k

Refer to the “Guidelines for Installing I/O Modules” section of the Dell PowerEdge Modular Systems Hardware Owner’s Manual: ftp://ftp.
2  Driver Installation and Configuration

Overview

NOTE: If you need to update the Flash memory of multiple adapters simultaneously:
- For the QConvergeConsole GUI, refer to the “Update the Flash Using the Flash Update Wizard” topic in the QConvergeConsole Help System.
- For the QConvergeConsole CLI, use the -flashsupport command to update the Flash memory for all cards supported by the specified file (for example, qaucli -pr nic -flashsupport -i ALL -a p3p11179.bin).
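As a hedged illustration of the CLI note above, the multi-adapter Flash update can be wrapped in a small shell function. The file name p3p11179.bin is the document's own example; the wrapper only echoes the command (a dry run), since actually running qaucli requires the adapter hardware and the QConvergeConsole CLI to be installed.

```shell
#!/bin/sh
# Dry-run sketch: compose the qaucli Flash-update command that updates
# all adapters supported by the given firmware file. Echoed, not executed.
flash_update_cmd() {
  printf 'qaucli -pr nic -flashsupport -i ALL -a %s\n' "$1"
}

flash_update_cmd p3p11179.bin
```

On a live host, piping the composed line to `sh` (or removing the printf wrapper) would perform the actual update.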
Windows Driver Installation and Configuration

Running the DUP in the GUI

To run the DUP in the GUI:

1. Double-click the icon representing the DUP file.

NOTE: The actual file name of the DUP varies.

The Update Package window appears, as shown in Figure 2-1.

Figure 2-1.
2. Click Install to continue. The QLogic Super Installer—InstallShield® Wizard appears, as shown in Figure 2-2.

Figure 2-2.
3. Click Next to continue. The License Agreement dialog box appears, as shown in Figure 2-3.

Figure 2-3.
4. Select I accept the terms of the license agreement and click Next. The Setup Type dialog box appears, as shown in Figure 2-4.

Figure 2-4. Setup Type Dialog Box

a. Select a setup type as follows:
   - Select Complete to install all program features.
   - Select Custom to manually select the features to be installed.
b. Click Next to continue. If you selected Complete, proceed directly to Step 5.
c. The Custom Setup dialog box appears, as shown in Figure 2-5.

Figure 2-5. Custom Setup Dialog Box

d. Select the features to install. By default, all features are selected.
e.
5. The Ready to Install the Program dialog box appears, as shown in Figure 2-6.

Figure 2-6.
6. Click Install so that the InstallShield Wizard installs the QLogic Adapter drivers and Management Software Installer. When the installation is complete, the InstallShield Wizard Completed dialog box appears, as shown in Figure 2-7.

Figure 2-7.
7. Click Finish to dismiss the installer. The Update Package window appears, as shown in Figure 2-8.

Figure 2-8. Update Package Window

8. Click OK to close the window.

Options

The following options can be used to customize the DUP installation behavior.

/drivers=    Extracts only the driver components to a directory.
NOTE: This command requires the /s option.
/passthrough    (Advanced) Sends all text following the /passthrough option directly to the QLogic installation software of the DUP. This mode suppresses any provided GUI, but not necessarily those of the QLogic software.
NOTE: This command requires the /s option.

/capabilities    (Advanced) Returns a coded description of this DUP’s supported features.
NOTE: This command requires the /s option.
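The options above compose with the silent switch. The following sketch builds a full DUP command line from these switches; the DUP file name and extraction directory are hypothetical (the guide notes the actual file name varies), and the command is only echoed, not run, since the DUP is a Windows executable.

```shell
#!/bin/sh
# Hypothetical DUP file name -- the real name varies by release.
DUP="Network_Driver_XXXXX_WN64_X.X.X_A00.EXE"
# Hypothetical extraction directory on the target Windows host.
DRIVER_DIR="C:\\extracted_drivers"
# /drivers= requires /s (silent mode), per the NOTE above, so both
# switches appear together on the composed command line.
printf '%s /s /drivers=%s\n' "$DUP" "$DRIVER_DIR"
```

The same pattern applies to /capabilities, which also requires /s.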
Linux Driver Installation and Configuration

This section provides the following procedures for installing drivers on a Linux system:
- Installation Overview
- Installing the Linux NIC Driver
- Installing the Linux iSCSI Driver
- Installing the Linux FCoE Driver

Installation Overview

To install and configure the adapter drivers on a Linux system, refer to the driver release notes, readme, and installation instructions included in the package.
Installing the Linux iSCSI Driver

Driver installation makes extensive use of the build.sh script located in the driver source (extras/build.sh). This section provides installation instructions for the following Linux versions:
- Building the iSCSI Adapter Driver for SLES 11 SP4
- Building the iSCSI Adapter Driver for RHEL 6.5 and SLES 12
- Building the iSCSI Adapter Driver for RHEL 6.5 and SLES 11 SP3
To load the driver using modprobe, issue the following command:

# modprobe -v qla4xxx

2.
Building the iSCSI Adapter Driver for RHEL 6.5 and SLES 12

Building and Installing the Adapter Driver

1. Issue the following commands from the directory that contains the source driver file, qla4xxx-src-vx.xx.xx.xx.xx.xx-k.tar.gz:

# tar -xzvf qla4xxx-vx.xx.xx.xx.xx.xx-cx.tar.gz
# cd qla4xxx-vx.xx.xx.xx.xx.xx-cx

where x.xx.xx.xx.xx.xx is the applicable version number.

2.
Manually Loading the Adapter Driver

1. To load the driver, use one of the following methods:

To load the driver directly from the local build directories, issue the following commands:

For RHEL 6.5:
# insmod /lib/modules/2.6.../kernel/drivers/scsi/scsi_transport_iscsi.ko
# insmod /lib/modules/2.6.../extra/qlgc-qla4xxx/qla4xxx.ko

For SLES 12:
# insmod /lib/modules/2.6.../kernel/drivers/scsi/scsi_transport_iscsi.
Rebuilding the RAM Disk

To automatically load the driver by rebuilding the RAM disk to include the driver, follow these steps:

1. To create a backup copy of the RAM disk image, issue the following command:

For RHEL 6.5:
# cd /boot
# cp initramfs-[kernel version].img initramfs-[kernel version].img.bak

For SLES 12:
# cd /boot
# cp initrd-[kernel version].img initrd-[kernel version].img.bak

2.
2. Build and install the driver modules from the source code by executing the build.sh script as follows:

# ./extras/build.sh install

The build.sh script does the following:
- Builds the driver .ko files
- Copies the .ko files to the appropriate directory:
  - For RHEL 6.5: /lib/modules/2.6.../extra/qlgc-qla4xxx/
  - For SLES 11 SP3: /lib/modules/2.6.../updates
- Adds the appropriate directive in the modprobe.
Unloading the Adapter Driver

To replace an existing inbox driver with a new out-of-box iSCSI driver, unload the existing driver and load the new driver. To unload the driver, stop all applications using the driver and then unload the driver.

1. If the iqlremote agent is running, stop the agent by issuing the following command:

# service iqlremote stop

2.
Installing the Linux FCoE Driver

This section provides procedures for installing the Linux FCoE driver for the following operating systems:
- Building the Driver for RHEL 6.5 Linux
- Building the Driver for SLES 11 SP4 Linux
- Building the Driver for SLES 12 Linux
- Building the Driver for SLES 11 SP3 Linux

Building the Driver for RHEL 6.5 Linux

1.
c. To load the driver, reboot the host.

Building the Driver for SLES 11 SP4 Linux

1. Issue the following commands from the directory that contains the source driver file, qla2xxx-src-vx.xx.xx.xx.xx.x-k4.tar.gz:

# tar -xzvf qla2xxx-src-vx.xx.xx.xx.xx.x-k4.tar.gz
# cd qla2xxx-x.xx.xx.xx.xx.x-k4

where x.xx.xx.xx.xx.x is the applicable version number.

2.
NOTE: Depending on the server hardware, the RAMDISK file name might be different.

c. To load the driver, reboot the host.

Building the Driver for SLES 12 Linux

1. In the directory that contains the source driver file, qla2xxx-src-vx.xx.xx.xx.11.x-k.tgz, issue the following commands:

# tar -xzvf qla2xxx-src-vx.xx.xx.xx.11.x-k.tgz
# cd qla2xxx-x.xx.xx.xx.xx.xx-k

where x.xx.xx.xx.xx.xx is the applicable version number.

2.
4. To automatically load the driver each time the system boots, rebuild the RAM disk to include the driver. Create a copy of the current RAMDISK by issuing the following commands:

# cd /boot
# cp initrd-[kernel version].img initrd-[kernel version].img.bak
# mkinitrd

NOTE: Depending on the server hardware, the RAMDISK file name might be different.

5. To load the driver, reboot the host.
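The RAM disk steps above copy initrd-[kernel version].img to a .bak file before rebuilding. As a small sketch of how the backup name is derived (the kernel version string here is hypothetical; on a real host it comes from `uname -r`):

```shell
#!/bin/sh
# Hypothetical kernel version for illustration; on the target host use:
#   kernel_version=$(uname -r)
kernel_version="3.12.28-4-default"

img="initrd-${kernel_version}.img"
backup="${img}.bak"
echo "$backup"

# On a real SLES 12 host the actual steps would then be:
#   cd /boot && cp "$img" "$backup" && mkinitrd
```

Keeping the backup lets you restore the original RAM disk from the boot loader if the rebuilt image fails to boot.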
To load the driver using modprobe, issue the following command:

# modprobe -v qla2xxx

To unload the driver using modprobe, issue the following command:

# modprobe -r qla2xxx

4. To automatically load the driver each time the system boots, rebuild the RAM disk to include the driver. Create a copy of the current RAMDISK by issuing the following commands:

# cd /boot
# cp initrd-[kernel version].
VMware Driver Installation and Configuration

This section provides the following procedures for installing drivers on a VMware system:
- Installation Overview
- Installing the ESXi 5.x NIC Driver
- Installing the ESXi 5.x iSCSI Driver
- Installing the ESXi 5.x FCoE Driver
- Installing the ESXi 6.x Fibre Channel Over Ethernet Driver
- Installing the ESXi 6.
Updating an Existing Driver or Installing a New Driver for an Existing ESXi Installation with esxcli (ESXi 5.x Only)

To use the driver bundle ():
1. Copy the driver bundle () to this ESXi host.
2. Install the driver bundle () using the following steps:
   a. Type the following command to make a temporary directory:
      mkdir /install; cd /install
   b.
Installing the ESXi 5.x iSCSI Driver

The operating system manages and controls the driver installation process. To install the ESXi 5.x driver, follow the steps in this section.

NOTE: This section provides the most common ways of installing and upgrading the driver. For other installation procedures, refer to the following:
http://kb.vmware.com/selfservice/microsites/search.
To use the driver VIB:
1. Copy the driver VIB (scsi--.0.0..x86_64.vib) to this ESXi host.
2. Install the driver VIB using the following esxcli commands:
   a. Type the following command to make a temporary directory:
      mkdir /install; cd /install
   b.
Installing the ESXi 5.x FCoE Driver

The operating system manages and controls the driver installation process. To install the ESXi 5.x driver, follow the steps in this section.

NOTE: This section provides the most common ways of installing and upgrading the driver. For other installation procedures, refer to the following:
http://kb.vmware.com/selfservice/microsites/search.
To use the driver VIB:
1. Copy the driver VIB (for ESX 5.0/5.1: scsi-qla2xxx-.0.0..x86_64.vib; for ESX 5.5: qlnativefc-.0.0..x86_64.vib) to this ESXi host.
2. Install the driver VIB using the following esxcli commands:
   a. Type the following command to make a temporary directory:
      mkdir /install; cd /install
   b.
Installing the ESXi 6.x Fibre Channel Over Ethernet Driver

Updating an Existing Driver or Installing a New Driver for an Existing ESXi Installation with esxcli (ESXi 6.x Only)

To use the driver bundle (.zip):
1. Copy the driver bundle (.zip) to this ESXi host.
2. Install the driver bundle (.zip) using the following steps:
   a.
Installing the ESXi 6.x iSCSI Driver

Updating an Existing Driver or Installing a New Driver for an Existing ESXi Installation with esxcli (ESXi 6.x Only)

To use the driver bundle (.zip):
1. Copy the driver bundle (.zip) to this ESXi host.
2. Install the driver bundle (.zip) using the following steps:
   a.
Installing the QConvergeConsole VMware vCenter Server Plug-in

To use the QConvergeConsole VMware vCenter Server Plug-in, install the following software in the order given:
1. QConvergeConsole VMware vCenter Server Plug-in—on the vCenter Server
2.
This file contains the release notes that list changes, fixes, known issues, and release details. For detailed information on installing the QConvergeConsole VMware vCenter Server Plug-in, refer to “QConvergeConsole VMware vCenter Server Plug-in Installation” on page 36. For detailed information on installing the CIM Provider, refer to “Installing the QLogic Adapter CIM Provider” on page 43.
4. The Plug-in Registration Wizard opens, as shown in Figure 2-10. Click Next.

Figure 2-10. QConvergeConsole VMware vCenter Server Plug-in Registration Wizard

5. Wait while the wizard configures the plug-in (see Figure 2-11).

Figure 2-11.
6. Select the installation directory and then click Install (see Figure 2-12).

Figure 2-12. Select the Installation Directory

7. Wait while the wizard performs the installation (see Figure 2-13).

Figure 2-13.
8. Type in the requested information and then click Next (see Figure 2-14).

Figure 2-14. User Input Screen

9. Wait while the wizard finishes configuring the plug-in (see Figure 2-15).

Figure 2-15.
10. Figure 2-16 appears when registration is completed. Click Finish to exit.

Figure 2-16. Successful Registration

11. After the installation completes, restart the Tomcat™ service as follows: If the plug-in is installed on the VMware vCenter Server, restart the VMware Virtual Center Management Web services.
For PowerShell: vSphere PowerCLI
http://communities.vmware.com/community/vmtn/vsphere/automationtools/powercli

After downloading and installing the SDK and the registration script, follow the VMware instructions to unregister the plug-in. For example, the Perl unregister command is:

perl registerPlugin.pl --server="127.0.0.1" --username="administrator" --password="password" --key="com.qlogic.
Figure 2-18. QConvergeConsole vCenter Server in Plug-in Manager

3. If you want to enable or disable the QConvergeConsole plug-in, right-click the plug-in and select Enabled or Disabled (the status toggles between the two), as shown in Figure 2-19.
4. Click Close to close the Plug-in Manager window.

Figure 2-19.
Uninstalling the QConvergeConsole VMware vCenter Server Plug-in

To remove the QConvergeConsole VMware vCenter Server Plug-in:
1. In the Windows Control Panel, select Add or Remove Programs. (Windows Server 2008 or later only: select Programs and Features.)
2. In the Add or Remove Programs dialog box, select the QConvergeConsole VMware vCenter Server Plug-in and then click Change/Remove.
3.
Installing the CIM Provider on an ESXi 5.x Host

1. Copy the provider-adapter.vib file to the root directory (/) of the ESXi 5.x system.
2. Issue the esxcli commands as follows:

# cd /
# esxcli software acceptance set --level=CommunitySupported
# esxcli software vib install -v file:/provider-adapter.vib --maintenance-mode --no-sig-check

3. Reboot the system as required.

Installing the CIM Provider on an ESXi 5.
Remote Installation of the CIM Provider on an ESX/ESXi Host

NOTE: Before performing this procedure, ensure that the remote ESX/ESXi system is in Maintenance Mode. To do so using vSphere Client, select Inventory, select Host, and then select Enter Maintenance Mode.

1. Copy the offline-bundle.zip file to any location on the host where either the vSphere CLI package is installed or the vMA is hosted.
2.
To restart the SFCB CIMOM and the QLogic Adapter CIM Provider:

# /etc/init.d/sfcbd-watchdog restart

After starting the SFCB CIMOM, use a CIM client utility to query the QLogic Adapter CIM Provider for information.

Uninstalling the QLogic Adapter CIM Provider

You can uninstall the QLogic Adapter CIM Provider for your version of VMware.
Installing the vSphere Web Client Plug-in

1.
If you are updating a previous version of the vSphere Web Client Plug-in, restart the vSphere Web Client services. In Windows, go to the Administrative Tools menu, select Services, and restart VMware vSphere Web Client. On the vCenter Server Appliance (Linux), issue the following command:

/etc/init.
3  Adapter Management Applications

Overview

This chapter describes the following adapter management applications:
- General Management with QConvergeConsole
- Switch Independent Partitioning—refer to Chapter 4
- Windows Management Applications
- Linux Management Applications
- VMware Management Applications
General Management with QConvergeConsole

Use the QConvergeConsole GUI and CLI utilities to manage the adapter as follows:
- Configuring the NIC Driver with QConvergeConsole
- Configuring iSCSI with QConvergeConsole
- Configuring FCoE with QConvergeConsole

NOTE: For information on installing and starting the QConvergeConsole GUI, refer to the QConvergeConsole GUI Installation Guide (for download instructions, see “Related Materials”
For information on configuring FCoE using the QConvergeConsole CLI, refer to the “Fibre Channel Interactive Commands” chapter of the QConvergeConsole CLI User’s Guide.

Configuring iSCSI Offload with QConvergeConsole

The iSCSI offload feature provides full iSCSI offloads that include header and data digest, receive protocol data unit (PDU) parsing, and direct data placement.
Example:

$qaucli -pr iscsi -ch

Or:

$qaucli -iscsi -ch

*** hba instance: 0
HBA_Alias : QLogic QLE8262
*** hba instance: 1
HBA_Alias : QLogic QLE8262

Modifying Adapter-Level iSCSI Parameters

Use the -nh command to set the adapter-level parameters for single- or multi-port adapters. The positional parameters become <hba_port_inst> and a series of one or more parameter name-value pairs.
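A hypothetical -nh invocation built from the parameters shown in this section might look like the following. The parameter name HBA_Delayed_ACK appears in the -c display output below, but the exact pairing shown here is illustrative, and the command is echoed rather than executed (running it requires the adapter and the QConvergeConsole CLI):

```shell
#!/bin/sh
# Dry-run sketch: set one adapter-level name-value pair on port instance 0.
# HBA_Delayed_ACK is taken from this section's display output; the
# name-value pairing is an illustrative assumption.
hba_port_inst=0
printf 'qaucli -pr iscsi -nh %s HBA_Delayed_ACK off\n' "$hba_port_inst"
```

Additional name-value pairs would be appended to the same command line.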
3–Adapter Management Applications General Management with QConvergeConsole Example: $qaucli -pr iscsi -c 0 Or: $qaucli -iscsi -c 0 ******************************* *** Displaying Port inst=0 *** ******************************* *** Displaying HBA (Adapter) Level Information inst=0 *** HBA_Alias : QLogic QLE8262 HBA_TCP_Max_Window_Size : 19537 HBA_Default_Fragment_Reass_Timeout : 0 HBA_Reserved_TCP_Config : 0x00000000 HBA_Delayed_ACK : off *** Displaying Port General Summary Information inst=0 *** 0.
3–Adapter Management Applications General Management with QConvergeConsole FW_Fast_Posting : off(*) FW_Sense_Buffer_Desc : off(*) FW_ZIO_Enable_Mode : off AFW_Device_Timeouts : on AFW_Delayed_Ack : off AFW_AutoConnect : on *** Displaying Device Settings inst=0 *** Large_Frames : off DevType : 0(*) ExeThrottle : 0 FirstBurstLen : 32 KeepAliveTO : 30 DefaultTime2Retain : 20(*) DefaultTime2Wait : 2(*) MaxBurstLen : 512 MaxOutstandingR2T : 1 MaxRxDataSegmentLen : 128(*) Port : 3260(*) IPv4TOS : 0 IPv4TTL : 64
3–Adapter Management Applications General Management with QConvergeConsole IP_Fragmentation : on(*) IP_ARP_Redirect : off VLAN_Enable : off VLAN_User_Priority : 0 VLAN_ID : 0 IPv4_TOS_ENABLE : off Force_Negotiate_Main_iSCSI_Keys : off iSCSI_Send_Markers : off(*) iSCSI_Header_Digests : off iSCSI_Data_Digests : off iSCSI_Immediate_Data : on iSCSI_Initial_R2T : off iSCSI_Data_Seq_In_Order : on(*) iSCSI_Data_PDU_In_Order : on(*) iSCSI_CHAP_Auth : off(*) iSCSI_Bidi_CHAP_Auth : off(*) iSCSI_Error_Recovery_Level
3–Adapter Management Applications General Management with QConvergeConsole IPv6_Addr_Routable0 : :: IPv6_Addr_Routable1 : :: Default_IPv6_Router : :: IPv6_Port : 3260 IPv6_Gratuitious_Neighbor_Ad_Enable : off IPv6_Redirect_Enable : off *** Displaying IPv6 TCP Settings inst=0 *** IPv6_Nagle : off IPV6_TCP_Timer_Scale : 3(*) IPv6_TCP_Time_Stamp : on *** Displaying Remaining parameters inst=0 *** ACB_Supported : on(*) Values noted with (*) are read only.
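Output in the format above (one "Name : value" pair per line) is easy to post-process. The following sketch extracts a single parameter from captured qaucli output; the here-document stands in for a real `qaucli -pr iscsi -c 0` run, and the sample lines are taken from the display output above:

```shell
#!/bin/sh
# Extract one parameter value from qaucli-style "Name : value" output.
get_param() {
  awk -F':' -v p="$1" '{
    key = $1; sub(/[ \t]+$/, "", key)                        # trim padding
    if (key == p) { val = $2; sub(/^[ \t]+/, "", val); print val }
  }'
}

# The here-document simulates: qaucli -pr iscsi -c 0
get_param HBA_Delayed_ACK <<'EOF'
HBA_Alias       : QLogic QLE8262
HBA_Delayed_ACK : off
EOF
```

On a live host the here-document would be replaced with a pipe from the real command, e.g. `qaucli -pr iscsi -c 0 | get_param HBA_Delayed_ACK`.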
3–Adapter Management Applications General Management with QConvergeConsole iSCSI_Discovery_Logout iSCSI_Strict_Login KeepAliveTO Large_Frames (not for 4010s) MaxBurstLen MaxOutstandingR2T TCP_DHCP TCP_Nagle TCP_Time_Stamp TCP_Window_Scale VLAN_Enable VLAN_User_Priority VLAN_ID IP_Address IP_Subnet_Mask IP_Gateway ZIO FW_ZIO_Enable_Mode Task_Management_Timeout ENABLE_IPV4 ENABLE_4022IPV4 ENABLE_IPV6 LOC_LINK_AUTO ROUTABLE_AUTO LDROUTER_AUTO IPv6_Addr_Local_link IPv6_Addr_Routable0 IPv6_Addr_Routable1 Defaul
IPv6_ND_Reachable_Timer (router may override)   IPV6NDRT   0 to 4294967295
IPv6_DAD_Count                                  IPV6DAD    0 to 255

Summary of Target Sessions

Use the -ts command to display summary information for both persistent and non-persistent targets. Both [hba_port_inst] and [target_id] are optional parameters. If neither of the parameters is present, the information is displayed for all adapters and all targets.
Target Session-Level iSCSI Negotiated Parameters

Use the -t command to display information for targets. The positional parameter is <hba_port_inst>. The optional parameter is [target_id]. If only the hba_port_inst is entered, target information for all targets on the specified adapter is displayed. If the optional target_id is entered, only information on the specified target is displayed.
3–Adapter Management Applications General Management with QConvergeConsole TGTISCSIO_Discovery_Logout : on TGTISCSIO_Strict_Login : off TGTISCSIO_Error_Recovery_Level : 0(*) TGT_KeepAliveTimeout : 30 TGT_DefaultTimeout : 2 TGT_DefaultTime2Retain : 20(*) TGT_MaxBurstLen : 512 TGT_MaxOutstandingR2T : 1 TGT_MaxRxDataSegmentLen : 128(*) TGT_MaxTxDataSegmentLen : 0(*) TGT_Port : 3260 TGTTCPO_Nagle : off TGTTCPO_Timer_Scale : 0(*) TGTTCPO_Timestamp : on TGT_TaskManagementTimeout : 10 TGT_ExeCount : 0(*) TGT_Targ
Displaying Target Session-Level Persistent iSCSI Parameters

Use the -tp command to view target persistent parameter information (pre-negotiation, from Flash memory). The positional parameter is <hba_port_inst>. The optional parameter is [target_id]. If only the hba_port_inst is entered, target information for all targets on the specified adapter is shown.
TGTISCSIO_Snack : off
TGTISCSIO_Discovery_Logout : on
TGTISCSIO_Strict_Login : off
TGTISCSIO_Error_Recovery_Level : 0(*)
TGT_KeepAliveTimeout : 30
TGT_DefaultTimeout : 2
TGT_DefaultTime2Retain : 20(*)
TGT_MaxBurstLen : 512
TGT_MaxOutstandingR2T : 1
TGT_MaxRxDataSegmentLen : 128(*)
TGT_MaxTxDataSegmentLen : 0(*)
TGT_Port : 3260
TGTTCPO_Nagle : off
TGTTCPO_Timer_Scale : 0(*)
TGTTCPO_Timestamp : on
TGT_TaskManagementTimeout : 10
TGT_Ex
Modifying Target Session-Level iSCSI Parameters

Use the -tc command to modify target-session-level iSCSI parameters. The positional parameters are <hba_port_inst>, <target_id>, and a series of one or more parameter name-value pairs.
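For illustration only (the parameter name is taken from the parameter listings above; the value is an arbitrary example, and valid values depend on your configuration):

```
# Set the maximum burst length for target ID 2 on port instance 0:
qaucli -pr iscsi -tc 0 2 TGT_MaxBurstLen 1024
```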
Configuring iSCSI Initiators with QConvergeConsole

This section provides procedures for configuring the following iSCSI initiators using QLogic’s QConvergeConsole utility:

Configuring the Windows iSCSI Initiator
Configuring the Linux iSCSI Initiator
Configuring the ESX iSCSI Initiator

NOTE
For information on installing and starting the QConvergeConsole GUI, refer to the QConvergeConsole GUI Installation Guide (for download instructions, see “Related Materials” on page xii).
3–Adapter Management Applications General Management with QConvergeConsole d. IP_Subnet_Mask [0.0.0.0]: Type the appropriate subnet mask, and then press ENTER. e. IP_Gateway [0.0.0.0]: Press ENTER to accept the default. f. Enable IPv6 [off]: Press ENTER to accept the default. 9. On the options menu that appears, select 3, Save changes and reset HBA (if necessary). 10. At the prompt for both ports, type Yes. 11.
3–Adapter Management Applications General Management with QConvergeConsole 6. Select the Converged Network Port you want to configure. 7. Select 2, Configure IP Settings. 8. Complete the interactive list of settings as follows: a. Enable IPv4 [on]: Press ENTER to accept the default. b. DHCP to obtain IPv4 Network Information: [off]: Press ENTER to accept the default. c. IP_Address [ ]: Type the IP address of the initiator system and then press ENTER. d. IP_Subnet_Mask [255.255.255.
3–Adapter Management Applications General Management with QConvergeConsole 1. Log in to the vSphere Client. 2. In the inventory panel, select a server to which to connect. 3. Click the Configuration tab. 4. In the Hardware panel, click Storage Adapters. 5. From the list of available storage adapters, select the iSCSI initiator you want to configure and then click Properties. 6. Click Configure. The General Properties dialog box shows the initiator’s status, default name, and alias. 7.
Configuring CHAP with QConvergeConsole CLI

To configure CHAP with QConvergeConsole CLI:

1. To add a primary and local CHAP entry (name and secret), issue the -addchap command to add a CHAP entry to the persistent CHAP table. The positional parameters are <hba_port_inst>, <CHAP name>, and <CHAP secret>. The optional parameter [-BIDI] indicates that the CHAP entry is a bidirectional entry (the default is local CHAP).
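Using the CHAP name and secret that appear in the -dspchap example later in this section, the command might look like the following (illustrative only):

```
# Add a local CHAP entry on port instance 0:
qaucli -pr iscsi -addchap 0 chapdbserver1 k9Q038iaZwlqPplq012
```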
3. To view the CHAP map table and determine the CHAP index used later to link the CHAP entry to a target, issue the -dspchap command. The positional parameter for this command is <hba_port_inst>.

Command line options: -dspchap <hba_port_inst>

In the following examples, the HBA port instance = 0.
$qaucli -pr iscsi -dspchap 0
Or:
$qaucli -iscsi -dspchap 0
CHAP TABLE
Entry: 1
Name: chapdbserver1
Secret: k9Q038iaZwlqPplq012
4.
In the following examples, the HBA port instance is 0, and the Send Target IP is 10.14.64.154.
$qaucli -pr iscsi -ps 0
Or:
$qaucli -iscsi -ps 0
Target ID: 2  hba_no: 0  IP: 10.14.64.154  Port: 3260  TGT  Instance #: 2
ISCSI Name:
Alias:
State: Session Failed

6. Link the CHAP entry to the target by issuing the -linkchap command. The positional parameters are <chap_no> and <target_id>.
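Continuing the example above (CHAP table entry 1, target ID 2), a sketch of the link step; the argument order shown is an assumption:

```
# Link CHAP entry 1 to target ID 2 (argument order assumed):
qaucli -pr iscsi -linkchap 1 2
```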
In the following examples, the HBA port instance is 0.
$qaucli -pr iscsi -ps 0
Or:
$qaucli -iscsi -ps 0
Target ID: 2  hba_no: 0  IP: 10.14.64.154  Port: 3260  TGT  Instance #: 2
ISCSI Name:
Alias:
State: No Connection

3. To view all targets linked to the CHAP, issue the -chapmap command. This command lists the mapping of targets to CHAP table entries. The positional parameter for this command is <hba_port_inst>.
3–Adapter Management Applications
Windows Management Applications

Windows management applications for the adapter include the following:

Windows NIC Driver Management Applications
Windows Teaming
Windows VLAN Configuration
User Diagnostics for Windows NIC Driver Management Applications

Windows NIC Driver Management Applications

Overview
Viewing and Changing Adapter Properties

Overview
In the QConvergeConsole CLI (qaucli) utility, you can view VLAN and teaming
3–Adapter Management Applications Windows Management Applications To view port information: qaucli -nic -pinfo [cna_port_inst] Changing Adapter Properties NOTE For an adapter that is teamed or an adapter with VLANs, do not directly modify the adapter properties. To ensure that the properties of all teamed adapters and adapters with VLANs remain synchronized with the team properties, make property changes only on the Team Management page (see “Modifying a Team” on page 86).
3–Adapter Management Applications Windows Management Applications Table 3-1. Port Adapter Variables and Values (Continued) Variable Values Header_Data_Split_Enable on, off Jumbo_Frames_MTU_9000_Enable on, off Jumbo_Frames_MTU_9000_Enable_Rx on, off Jumbo_Frames_MTU_9000_Enable_Tx on, off LOCAL_Administered_Address_MAC xx:xx:xx:xx:xx:xx Port_Wake_On_LAN_Option 0=Disabled, 1=Wake on Magic Frame VLAN_ID 1.
Teaming Modes

Teaming is designed to improve the reliability and fault tolerance of networks and to enhance performance through efficient load balancing. The following NIC teaming modes are provided:

Failsafe Mode ensures that an alternate standby or redundant adapter becomes active if the primary network connection fails.
Switch-Independent Load Balancing Mode ensures distribution of transmit loads across the teamed adapters.
3–Adapter Management Applications Windows Management Applications None When the preferred primary becomes operational again, the driver does not automatically switch back the primary to the active adapter. Preferred Primary When the preferred primary becomes operational again, the driver automatically switches back the primary as the active adapter. The network traffic resumes to the primary adapter from the standby adapter.
Link Aggregation Mode

Link aggregation provides increased bandwidth and high reliability by combining several NICs into a single logical network interface called a LAG. Link aggregation is scalable, meaning an adapter can be added to or deleted from a team either statically or dynamically. Traffic from all the team ports that form a LAG has the same MAC address, which is the MAC address of the team.
3–Adapter Management Applications Windows Management Applications Link aggregation mode has transmit load balancing and fail safety support. If a link connected through a participant port of a link-aggregated team goes down, LACP provides failover and load balancing across the remaining members of the team.
3–Adapter Management Applications Windows Management Applications NOTE The following applies to configuring teaming and VLAN using the QConvergeConsole CLI: Windows Server 2012 and later: QConvergeConsole CLI does not support teaming and VLAN configuration. Use the native Windows teaming interface instead of QConvergeConsole CLI.
Figure 3-1. Team Management Property Page

On the Team Management page, the Teams and Adapters pane on the left lists the network devices currently present on this system, including:

Teams and virtual adapters, as well as their member physical adapters
QLogic and other vendor adapters

Procedures for creating a team, adding virtual adapters, and more are provided in the How-to box at the bottom of the Team Management page.
Creating a Team

To create a team:

1. Right-click the Teams folder icon and then click Create Team (see Figure 3-2).

Figure 3-2. Creating a Team

2. The software automatically picks a unique team name, or you can enter your own team name. Team names must be unique on a system.
3.
Type—Select the teaming mode by clicking either Failsafe Team, 802.3ad Static Team, 802.3ad Dynamic Team, or Switch Independent Load Balancing. If you select the 802.3ad dynamic option, you must also select one of the following options:

Active LACP: LACP is a Layer 2 protocol that is used to control the teaming of physical ports into an aggregated set.
3–Adapter Management Applications Windows Management Applications Figure 3-3. Creating a Failsafe Team Figure 3-4.
3–Adapter Management Applications Windows Management Applications Figure 3-5. Creating an 802.3ad Static Team Figure 3-6. Creating an 802.
3–Adapter Management Applications Windows Management Applications Figure 3-7. Setting Advanced Team Properties 4. To confirm if a team has been successfully created, view the Team and Adapters pane on the Team Management page. Figure 3-8 shows an example of a newly formed team. The Team Data pane on the right shows the properties, information, and status of the team or adapter that is currently selected in the Teams and Adapters pane on the left. Figure 3-8.
3–Adapter Management Applications Windows Management Applications Modifying a Team A team can be modified by doing the following: Adding or removing one or more team members to a team Modifying the team properties To add team members: 1. On the Team Management property page, right-click the unteamed adapter to add to a team. 2. On the shortcut menu, point to Add to Team and then click the team to which you want to add the adapter (see Figure 3-9).
3–Adapter Management Applications Windows Management Applications To remove an adapter from a team: NOTE A team must include at least one QLogic adapter. A QLogic adapter is allowed to be deleted from a team only if it is not the last QLogic teamed adapter. 1. On the Team Management property page, right-click the adapter to be removed from the team. 2. On the shortcut menu, click Remove from Team. At least two adapters must be present in a team.
3–Adapter Management Applications Windows Management Applications NOTE To ensure that the properties of all teamed adapters and adapters with VLANs remain synchronized with the team properties, do not directly modify the adapter properties on the Advanced page. If an adapter property becomes unsynchronized with its team properties, change either the team or adapter property so that they are the same on each and then reload the team.
3–Adapter Management Applications Windows Management Applications Example 1: For a failsafe team, you can change the team name, assigned team static MAC address, preferred primary adapter, and failback type, as shown in Figure 3-12. Figure 3-12. Modifying Failsafe Team Properties Example 2: You can change the team type and the corresponding team attributes. For example, you can change from failsafe to switch-independent load balancing or from 802.3ad static team to 802.3ad dynamic team.
3–Adapter Management Applications Windows Management Applications Deleting a Team To delete a team: 1. On the Team Management property page, in the left pane under Teams and Adapters, right-click the team name to be deleted. 2. On the shortcut menu, click Delete team. Saving and Restoring Teaming Configuration It is recommended that you periodically save the configuration to prevent any accidental loss of network topology and settings.
Windows VLAN Configuration

The term VLAN refers to a collection of devices that communicate as if they were on the same physical LAN. VLAN information covered in this section includes the following:

VLAN Properties
Using the CLI for VLANs
Using the GUI for VLANs

VLAN Properties
The VLAN protocol permits insertion of a tag into an Ethernet frame to identify the VLAN to which the frame belongs.
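The tag that the VLAN protocol inserts is the 4-byte IEEE 802.1Q header: a 0x8100 tag protocol identifier (TPID) followed by a 3-bit priority, a 1-bit DEI, and a 12-bit VLAN ID. As an illustration independent of the adapter, a sketch of how those fields pack:

```python
import struct

def dot1q_tag(vlan_id: int, priority: int = 0, dei: int = 0) -> bytes:
    """Pack an IEEE 802.1Q tag: TPID 0x8100 followed by the PCP/DEI/VID TCI word."""
    if not 0 <= vlan_id <= 4095:
        raise ValueError("VLAN ID must fit in 12 bits")
    tci = (priority & 0x7) << 13 | (dei & 0x1) << 12 | vlan_id
    return struct.pack("!HH", 0x8100, tci)

# VLAN 5 with user priority 3:
print(dot1q_tag(vlan_id=5, priority=3).hex())  # → 81006005
```

This also shows why valid VLAN IDs stop at 4095: the VID field is only 12 bits wide.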
To preview a VLAN before removing it from a port or team, issue the following command to list the indices to use in the -vlandel command:
qaucli -nic -vlandel_preview

To remove a VLAN from a port or team, issue the following command:
qaucli -nic -vlandel <list_insts> <vlan_id>

where list_insts specifies the comma-separated port indices (for example, 1,2) and vlan_id specifies a comma-separated numeric value (for example, 1...
3–Adapter Management Applications Windows Management Applications To add and configure a VLAN: 1. On the Team Management page under Teams and Adapters, right-click either a team or an unteamed adapter. 2. On the shortcut menu, click Add VLAN (see Figure 3-14). Figure 3-14. Adding a VLAN 3. On the Configure VLAN dialog box (see Figure 3-15), type values in the VLAN Name and VLAN ID boxes, click an appropriate VLAN Type, and then click OK. Figure 3-15.
3–Adapter Management Applications Windows Management Applications When the VLAN addition is complete, the added VLAN is visible as a Virtual Adapter on the Team Management page under Teams and Adapters. 4. Click the added virtual adapter to view all the properties, information, and status of the virtual adapter in the VLAN Data pane (see Figure 3-16). Figure 3-16. Viewing VLAN Data Properties Deleting a VLAN If VLAN is not needed on a team, you can delete it. To delete a VLAN: 1.
3–Adapter Management Applications Windows Management Applications NOTE To allow VLAN deletion, there must be at least one VLAN on the team. Deleting the last VLAN on the team results in deletion of the entire team. Viewing VLAN Statistics Follow these steps to view statistics for a selected VLAN. To view VLAN statistics: 1. On the Team Management page, click a team name in the left pane under the Teams folder. 2.
3–Adapter Management Applications Windows Management Applications Figure 3-17 shows the Diagnostics page. Figure 3-17. Diagnostics Tests on Windows 4. Under Diagnostic Tests, select one or more check boxes indicating the tests you want to run: Hardware Test, Register Test, Interrupt Test, Internal Loopback Test, External Loopback Test, and Link Test. (“Windows Diagnostic Test Descriptions” on page 100 describes each test type.) 5. Click Run Tests. NOTE Only one test can run at a time.
To run user diagnostics in the CLI:

Use QConvergeConsole CLI (qaucli), a unified command line utility, to manage all QLogic adapter models, including running user diagnostics. The overall option (-pr <protocol>) allows you to start the utility with a specific protocol type: NIC, iSCSI, or Fibre Channel. If you do not specify a protocol, all protocols are enabled by default.
3–Adapter Management Applications Windows Management Applications Table 3-5. Getting Help (Continued) Command Description qaucli -pr fc -h Print Fibre Channel and FCoE protocol usage and then exit qaucli -pr iscsi -h Print iSCSI protocol usage and then exit qaucli -npar -h Print NPAR (Switch Independent Partitioning) commands usage and then exit Table 3-6 lists miscellaneous Windows diagnostics commands. Table 3-6.
3–Adapter Management Applications Windows Management Applications Table 3-7. Diagnostic Test Commands (Continued) Command a a Description -nL --noIntLP No internal loopback test (combine –D or –a) -nH --noHw No hardware test (combine –D or –a) -nS --noLinkSt No link status test (combine –D or –a) -h --help View help text All commands must be prefaced by qaucli -pr nic -qldiag.
Table 3-8. Running Windows Diagnostic Tests in the CLI (Continued)

Test Type: Link
Command: qaucli -nic -testlink [cna_port_inst]

Test Type: Ping (IPv4)
Command: qaucli -nic -ping <cna_port_inst> <dest_ip> [optional parameters]
where the optional parameters default to 5, 525, 1000, and 30, respectively.
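Using the command form above with the send-target address from the iSCSI examples earlier in this chapter (all values illustrative):

```
# Ping 10.14.64.154 from CNA port instance 0 using the default parameters:
qaucli -nic -ping 0 10.14.64.154
```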
3–Adapter Management Applications Windows Management Applications NOTE Loopback tests are enabled only when the 8200 and 3200 Series Adapters are running firmware version 4.09.24 or later. When the loopback tests are running at the same time Fibre Channel or iSCSI protocols are running, refresh messages may appear. To avoid these messages, either click Cancel to ignore the messages or stop the qlremote and iqlremote agents while running the loopback tests on a NIC port.
3–Adapter Management Applications Windows Management Applications Windows Diagnostic Test Messages If a test fails, an appropriate error code is generated and displayed, as shown in Table 3-9. Note that this table does not list error messages for the Interrupt and Link tests. Table 3-9.
3–Adapter Management Applications Windows Management Applications Table 3-9.
3–Adapter Management Applications Windows Management Applications For example: qaucli -nic -testlink === Link Test for 1. CNA Port Index === Function is not supported by this hardware/driver/api stack === Link Test for 2. CNA Port Index === Function is not supported by this hardware/driver/api stack === Link Test for 3. CNA Port Index === Function is not supported by this hardware/driver/api stack === Link Test for 4.
3–Adapter Management Applications
Linux Management Applications

Linux management applications for the adapter include the following:

Linux NIC Driver Management Applications
User Diagnostics for Linux NIC Driver Management Applications

Linux NIC Driver Management Applications

The following sections describe how to configure and manage the driver and adapter using Linux management utilities:

Overview
Viewing and Changing Adapter Properties on Linux

Overview
The foll
3. If an older version is found, erase it by issuing the following command:
rpm -e QConvergeConsoleCLI

4. To install the new version, issue the following command:
rpm -ihv QConvergeConsoleCLI-<version>.i386.rpm

The utility is installed in the /opt/QLogic_Corporation/QConvergeConsoleCLI directory. Some software releases require firmware to be updated in the NIC’s Flash memory.
3–Adapter Management Applications Linux Management Applications lro_pkts: 0 rx_bytes: 0 tx_bytes: 468 lrobytes: 0 lso_frames: 0 xmit_on: 0 xmit_off: 0 skb_alloc_failure: 0 null skb: 0 null rxbuf: 0 rx dma map error: 0 In the following example, ethtool eth[n] lists interface settings.
3–Adapter Management Applications Linux Management Applications Running Linux User Diagnostics Linux user diagnostics include QConvergeConsole diagnostics and ethtool diagnostics. QConvergeConsole Diagnostics NOTE For information on installing and starting the QConvergeConsole GUI, refer to the QConvergeConsole GUI Installation Guide (for download instructions, see “Related Materials” on page xii).
QConvergeConsole CLI-based diagnostics include the following commands:

To enable or disable the port beacon, issue the following command:
qaucli -pr nic -beacon [cna_port_inst]

To run an internal loopback test, issue the following command:
qaucli -pr nic -intloopback <cna_port_inst> <tests_num> <on_error>
where tests_num is the number of tests (1–65535) and on_error is either 0=Ignore or 1=Abort

To perform a Flash test, iss
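Hedged examples of the two commands described above (the port instance, pass count, and error mode are illustrative):

```
# Blink the beacon LED on port instance 0:
qaucli -pr nic -beacon 0
# Run 10 internal loopback passes, aborting on the first error:
qaucli -pr nic -intloopback 0 10 1
```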
3–Adapter Management Applications
VMware Management Applications

# ethtool -t eth4
The test result is PASS
The test extra info:
Register_Test_on_offline 0
Link_Test_on_offline 0
Interrupt_Test_offline 0
Loopback_Test_offline 0

Linux Diagnostic Test Descriptions
The internal loopback test performs internal packet loopback. The Flash test verifies the Flash read and write. The hardware test verifies that the hardware is running.
3–Adapter Management Applications VMware Management Applications Using Switch Independent Partitioning Under ESX All Switch Independent Partitioning Ethernet functions are enumerated by the hypervisor, controlled by the driver running in the hypervisor, and configured similar to other Ethernet interfaces. For more details, see “Switch Independent Partitioning” on page 122. You would typically create a virtual switch (vSwitch) for each Switch Independent Partitioning interface.
3–Adapter Management Applications Unified Extensible Firmware Interface firmware-version: bus-info: 0000:10:00.
3–Adapter Management Applications Unified Extensible Firmware Interface The preceding PDF files are included in the boot code release package in the EFI directory. NOTE All bin, uefi, and nsh files are required to update the adapter on a UEFI system. Supported Features The UEFI driver supports the following features: UEFI specification 1.10, 2.
3. At the system’s UEFI shell prompt, issue the map -r command to map the USB device file system. You can check the mapping as follows:
map -b

4. Locate the USB device and change to that device. For example, if the USB device is mapped to fs9 after the map -r:
fs9:
The UEFI Shell prompt changes as follows:
fs9:\>

5. To update the UEFI driver and RISC firmware, run the update.nsh script. For example:
fs9:> update.nsh
The update.
3–Adapter Management Applications Configuring iSCSI over DCBX NOTE iSCSI over DCBX applies only to the iSCSI Host Bus Adapter. It does not apply to iBFT/SW or an iSCSI function type on a NIC port configured with Switch Independent Partitioning. When bandwidth settings exist for both Switch Independent Partitioning and DCBX, DCBX takes precedence over Switch Independent Partitioning.
- qaucli -pr iscsi -c 0

Configuring the Switch for iSCSI over DCBX

Configuring the Brocade 8000 CEE switch involves the steps described in the following sections.
3–Adapter Management Applications Configuring iSCSI over DCBX swd77(conf-ceemap)#priority-table 6 6 6 6 6 6 6 7 swd77(conf-ceemap)#exit Configure LLDP/DCBX for the iSCSI TLV The following commands configure link layer discovery protocol (LLDP) for the iSCSI type-length-value (TLV). 1. Configure the LLDP: swd77(config)#protocol lldp 2. Enable the LLDP: swd77(conf-lldp)#no disable 3. Advertise DCBX TLV in the LLDP: swd77(conf-lldp)#advertise dcbx-tlv 4.
3–Adapter Management Applications Configuring iSCSI over DCBX Configure the CEE Port’s iSCSI Traffic Class The following commands configure the switch port to which the QLogic adapter is connected. In this example, the adapter is connected to port 0/16 of the switch. 1. Set the switching characteristics: swd77(config)#interface tengigabitethernet 0/16 swd77(conf-if-te-0/16)#switchport 2. Set the interface as converged: swd77(conf-if-te-0/16)#switchport mode converged 3.
3–Adapter Management Applications Configuring iSCSI over DCBX swd77(config)#do show lldp interface tengigabitethernet 0/16 LLDP information for Te 0/16 State: Enabled Mode: Receive/Transmit Advertise Transmitted: 30 seconds Hold time for advertise: 120 seconds Re-init Delay Timer: 2 seconds Tx Delay Timer: 1 seconds DCBX Version : CEE Auto-Sense : Yes Transmit TLVs: Chassis ID Port ID TTL IEEE DCBx DCBx FCoE App DCBx FCoE Logical Link Link Prim Brocade Link DCB x iSCSI App DCBx FCoE Priority Bits: 0x8 DCBx
3–Adapter Management Applications Configuring iSCSI over DCBX Interoperation of Bandwidth Settings for DCBX and Switch Independent Partitioning If you want to run iSCSI and NIC traffic together, DCBX can be used to set the bandwidth percentage to be shared among the iSCSI and NIC. If you want to run partitioned NIC traffic, Switch Independent Partitioning should be used to set the percentage of bandwidth that is shared among the multiple NIC partitions.
3–Adapter Management Applications Configuring iSCSI over DCBX NIC traffic only (no iSCSI traffic) without partitioning of the NIC traffic: Neither Switch Independent Partitioning nor DCBX needs to be used. iSCSI traffic only (no NIC traffic): Neither Switch Independent Partitioning nor DCBX needs to be used. Table 3-10 summarizes these guidelines. Table 3-10.
4 Switch Independent Partitioning Overview This chapter provides the following information about the QLogic Switch Independent Partitioning feature: Switch Independent Partitioning Setup Requirements Switch Independent Partitioning Configuration Switch Independent Partitioning Setup and Management Options Switch Independent Partitioning Setup 122 CU0354602-00 M
4–Switch Independent Partitioning Switch Independent Partitioning Setup Requirements Switch Independent Partitioning Setup Requirements This section provides hardware and software requirements for applying Switch Independent Partitioning functionality to QLogic adapters installed in host servers within SANs. Hardware Requirements Table 4-1.
4–Switch Independent Partitioning Switch Independent Partitioning Setup Requirements Table 4-3. Management Tool and Driver Requirements SW Components a File Names and Download Locations Management Tools Dell System Setup, Lifecycle Controller, or other human interface infrastructure (HII) browser http://support.dell.com QLogic OptionROM Pre-installed, written on the adapter’s Flash memory at Dell factory QLogic QConvergeConsole GUI/CLI http://support.dell.
4–Switch Independent Partitioning Switch Independent Partitioning Configuration Switch Independent Partitioning Configuration This section defines Switch Independent Partitioning configuration and describes the configuration options and the management tools you can use to set up Switch Independent Partitioning on QLogic adapters installed in 11th and 12th generation Dell PowerEdge blade servers.
4–Switch Independent Partitioning Switch Independent Partitioning Configuration You can modify the minimum and maximum bandwidth for each switch-independent partition. The changes take effect immediately without rebooting the server. The minimum and maximum bandwidths are specified as percentages of the link bandwidth, where: The minimum bandwidth is the minimum bandwidth guaranteed to a partition. The maximum bandwidth is the maximum value that a partition is permitted to use.
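The guarantee/ceiling behavior described above can be sketched in a few lines. This is not QLogic code, only a simplified model in which each partition is first granted its guaranteed minimum and leftover link bandwidth is then handed out one percent at a time, never exceeding a partition's maximum:

```python
def allocate(partitions, link_pct=100):
    """partitions: list of (min_pct, max_pct) per partition.
    Returns the percentage granted to each partition when all partitions
    are demanding bandwidth at once (a simplified model, not QLogic code)."""
    grant = [mn for mn, mx in partitions]       # every partition gets its guarantee
    leftover = link_pct - sum(grant)
    changed = True
    while leftover > 0 and changed:             # share the rest, respecting ceilings
        changed = False
        for i, (mn, mx) in enumerate(partitions):
            if leftover > 0 and grant[i] < mx:
                grant[i] += 1
                leftover -= 1
                changed = True
    return grant

# Two partitions guaranteed 20% each; one capped at 30%, one uncapped:
print(allocate([(20, 30), (20, 100)]))  # → [30, 70]
```

The capped partition stops at its maximum, and the remaining bandwidth flows to the partition whose ceiling still has room, which matches the guarantee-versus-ceiling semantics described above.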
4–Switch Independent Partitioning Switch Independent Partitioning Configuration Virtual machine (VM)-to-VM Ethernet traffic between VMs on different vSwitches is routed by the eSwitch if the communicating VMs are attached to NIC partitions derived from the same physical port. The eSwitch handles VM-to-VM communication by learning MAC addresses of the virtual NICs (VNICs) of the VMs. This capability enables the eSwitch to switch packets destined to another VM on the same host.
4–Switch Independent Partitioning Switch Independent Partitioning Configuration Figure 4-1 shows the default Switch Independent Partitioning function settings. NOTE In NPAR configurations with teaming on ESXi 5.1 and ESXi 5.5, QLogic recommends setting the driver module parameter defq_filters to 0 by issuing the following command, and then rebooting the system for the setting to take effect.
4–Switch Independent Partitioning Switch Independent Partitioning Configuration Figure 4-2 shows the possible configurations. Figure 4-2. Switch Independent Partitioning Configuration Options (Personalities) Personality Changes Based on your operating environment, you can use your preferred management tool to change or disable PCI functions on either physical port.
Table 4-4. Configuration Options (Continued)

Function Number  Function Type       Physical Port Number (User Label) a  System Number b
2                Disabled/NIC        1                                    0
3                Disabled/NIC        2                                    1
4                iSCSI/NIC/Disabled  1                                    0
5                iSCSI/NIC/Disabled  2                                    1
6                FCoE/NIC/Disabled   1                                    0
7                FCoE/NIC/Disabled   2                                    1

a The physical port number is displayed as Port 1 or Port 2 on the adapter’s port’s label.
Enhanced transmission selection (ETS) controls the actual bandwidth allocation at the network port. The bandwidth allocation under ETS is typically 50 percent for FCoE traffic and 50 percent for non-FCoE traffic (NIC and iSCSI). This means that Switch Independent Partitioning QoS allocations among the NIC partitions for a given port allocate a percentage of the non-FCoE portion of the bandwidth.
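Under the 50/50 split described above, a NIC partition's effective rate is its QoS percentage applied to the non-FCoE half of the link. A worked example of the arithmetic (a sketch; the 10GbE link rate and the 40 percent QoS value are assumptions for illustration):

```python
LINK_GBPS = 10.0      # assumed 10GbE port
ETS_NON_FCOE = 0.50   # ETS: 50% of the link reserved for NIC/iSCSI traffic

def partition_gbps(qos_pct: float) -> float:
    """Effective bandwidth of a NIC partition given its QoS percentage,
    applied to the non-FCoE share of the link."""
    return LINK_GBPS * ETS_NON_FCOE * (qos_pct / 100.0)

# A partition allocated 40% QoS gets 40% of the 5Gbps non-FCoE share:
print(partition_gbps(40))  # → 2.0
```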
4–Switch Independent Partitioning Switch Independent Partitioning Configuration The QLogic drivers download the VM MAC addresses to the firmware. This enables the firmware and hardware to switch the packets destined for VMs on the host. For traffic to flow from one eSwitch to another it must first pass through an external switch or have been forwarded by a VM that has a path through both eSwitches.
4–Switch Independent Partitioning Switch Independent Partitioning Configuration For procedures on setting up Switch Independent Partitioning and eSwitch parameters using the OptionROM while powering up the host server, see “QLogic OptionROM at POST” on page 144. QConvergeConsole GUI The QConvergeConsole Unified Adapter Web Management Interface is a Web-based client/server application that allows for centralized management and configuration of QLogic adapters within the entire network (LAN and SAN).
4–Switch Independent Partitioning Switch Independent Partitioning Configuration QConvergeConsole CLI QConvergeConsole CLI is a management utility that centralizes management and configuration of QLogic adapters within the entire network (LAN and SAN). The QConvergeConsole CLI manages iSCSI, Ethernet, and FCoE functions on QLogic adapters installed on a Dell PowerEdge blade server on either a Linux or Windows environment.
4–Switch Independent Partitioning Switch Independent Partitioning Configuration You would typically create a vSwitch for each Switch Independent Partitioning interface. You can configure VMs to use the standard virtual network devices, such as VMXNET 3 adapters. On each interface, you can configure features such as NetQueue.
4–Switch Independent Partitioning Switch Independent Partitioning Setup and Management Options Switch Independent Partitioning Setup and Management Options This section describes how to configure NIC partitions on QLogic adapters installed in a Dell PowerEdge server (host server) within a SAN. Procedures for establishing QoS for each partition and viewing the eSwitch parameters and statistics are included.
4–Switch Independent Partitioning Switch Independent Partitioning Setup and Management Options NOTE When bandwidth settings exist for both Switch Independent Partitioning and DCBX, DCBX takes precedence over Switch Independent Partitioning. DCBX sets the bandwidth for iSCSI and NIC traffic, and then Switch Independent Partitioning sets the bandwidth for the NIC partitions by dividing the NIC bandwidth allocated by DCBX.
4–Switch Independent Partitioning Switch Independent Partitioning Setup and Management Options 2. Select Device Settings, as shown in Figure 4-3. Figure 4-3.
4–Switch Independent Partitioning Switch Independent Partitioning Setup and Management Options 3. In the Device Settings screen, select the adapter that you want to configure (see Figure 4-4). Figure 4-4.
4–Switch Independent Partitioning Switch Independent Partitioning Setup and Management Options The next screen that appears (see Figure 4-5) is the Main Configuration page, which lists information about the selected adapter and the available setup options for the adapter. Figure 4-5.
4–Switch Independent Partitioning Switch Independent Partitioning Setup and Management Options 4. Select NIC Partitioning (Switch Independent Partitioning) Configuration from the Main Configuration page. The NIC Partitioning Configuration page opens (see Figure 4-6). Figure 4-6. NIC Partitioning (Switch Independent Partitioning) Configuration Page NOTE For a list of Switch Independent Partitioning configuration options, see “Switch Independent Partitioning Setup” on page 171.
4–Switch Independent Partitioning Switch Independent Partitioning Setup and Management Options 5. Select Global Bandwidth Allocation to open the Global Bandwidth Allocation page (see Figure 4-7). Figure 4-7. Global Bandwidth Allocation Page 6. Set the relative and maximum bandwidth (between 0-100 percent) as needed for each partition. The relative bandwidth setting guarantees that at least this much bandwidth is available to the partition.
4–Switch Independent Partitioning Switch Independent Partitioning Setup and Management Options Setting a port’s maximum bandwidth to 100 percent allows that partition to use bandwidth that is not used by other partitions. This would apply if one or more of the other partitions were using less than their relative bandwidth setting.
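As a rough model of these semantics (a sketch only; the adapter firmware's actual scheduler is not documented here), the relative bandwidth acts as a guaranteed floor and the maximum bandwidth as a cap on borrowing idle bandwidth:

```python
def grant_bandwidth(demands, relative, maximum, link=100.0):
    """Grant each partition its guaranteed (relative) share first,
    then let partitions with headroom under their maximum borrow
    whatever the other partitions left unused."""
    # Guaranteed phase: each partition gets up to its relative share.
    grant = [min(d, r) for d, r in zip(demands, relative)]
    spare = link - sum(grant)
    # Borrowing phase: distribute unused bandwidth, never exceeding
    # each partition's maximum cap.
    for i, d in enumerate(demands):
        extra = min(d - grant[i], maximum[i] - grant[i], spare)
        if extra > 0:
            grant[i] += extra
            spare -= extra
    return grant
```

With a maximum of 100 percent, a busy partition can absorb everything an idle partition leaves behind; lowering its maximum caps that borrowing while its relative share stays guaranteed.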
QLogic OptionROM at POST When you first start the host server containing QLogic adapters, the POST starts. Running the POST gives you access to the OptionROM utility. To set up Switch Independent Partitioning using OptionROM: 1. When the screen prompts you to enter the setup menu (see Figure 4-8) during the POST, press CTRL+Q to enter OptionROM setup. Figure 4-8. POST Test Screen Prompt to Enter Setup Menu 2.
4–Switch Independent Partitioning Switch Independent Partitioning Setup and Management Options The screen displays a list of functions available to the selected adapter (see Figure 4-10). Figure 4-10. Function Configuration Screen NOTE For a list of Switch Independent Partitioning configuration options, see “Switch Independent Partitioning Setup” on page 171. 3. Move your cursor to the Type column for any function type you want to change (see Figure 4-11 and Figure 4-12). Figure 4-11.
Figure 4-12. Selecting FCOE Function Type to Change 4. Move your cursor to the MinBW% column to adjust the minimum bandwidth (see Figure 4-13) on each partition (between 0–100 percent).
4–Switch Independent Partitioning Switch Independent Partitioning Setup and Management Options NOTE The minimum bandwidth settings in the OptionROM are equivalent to the relative bandwidth settings in the Dell System Setup. The MaxBW% field is read only in this utility. To adjust the maximum bandwidth, use a different utility, such as the Dell System Setup. When bandwidth settings exist for both Switch Independent Partitioning and DCBX, DCBX takes precedence over Switch Independent Partitioning.
4–Switch Independent Partitioning Switch Independent Partitioning Setup and Management Options QConvergeConsole GUI The QConvergeConsole is a Web-based client/server application that allows for centralized management and configuration of QLogic adapters within the entire network (LAN and SAN). On the server side, QConvergeConsole runs as an Apache Tomcat server Web application.
4–Switch Independent Partitioning Switch Independent Partitioning Setup and Management Options 3. Select the NIC Partitioning (Switch Independent Partitioning) tab. The NIC Partitioning Configuration page displays configuration details that apply to the selected Switch Independent Partitioning configuration and personality options (see Figure 4-15). Figure 4-15. NIC Partitioning (Switch Independent Partitioning) Configuration Page 4.
4–Switch Independent Partitioning Switch Independent Partitioning Setup and Management Options 8. Verify that the configured ports have the most current drivers installed. 9. If necessary, update the driver for the port protocol. Set Up QoS The QConvergeConsole lets you set the QoS for each partition by setting minimum and maximum percentages of the physical port’s bandwidth for each partition.
4–Switch Independent Partitioning Switch Independent Partitioning Setup and Management Options 4. Click the down arrow and select the NIC partition (NPAR0, NPAR1, NPAR2, or NPAR3) from the drop-down list. Information and configuration fields related to the selected NIC partition include: Default MAC Address—The MAC address set at the manufacturer. Location—The logical location in the system: PCI bus number, device number, and function number.
The Security Check dialog box might appear. In the Enter Password box, type the password and then click OK. NOTE The settings are persistent across reboots. View eSwitch Configuration The QConvergeConsole lets you view the current eSwitch offload settings.
6. Click the down arrow next to any of the offload fields provided to change its value to Enabled or Disabled. 7. Select one of the following command buttons to apply or cancel any changes: Save–Saves changes displayed on the screen. Restore Settings–Restores the default settings. Cancel–Cancels any changes made to this screen before you saved them.
4–Switch Independent Partitioning Switch Independent Partitioning Setup and Management Options To set up NIC partitions using the QConvergeConsole CLI: 1. Start the QConvergeConsole CLI interface and select 6: NIC Partitioning Information (see Figure 4-18). Figure 4-18. Selecting 6 to View NPAR Information Options 2. Select 2: NPAR Port Information (see Figure 4-19). Figure 4-19.
4–Switch Independent Partitioning Switch Independent Partitioning Setup and Management Options The NPAR Configuration Selection page displays the current configuration (see Figure 4-20). Figure 4-20. NPAR Configuration Selection Screen 3. Return to the main menu after viewing the Switch Independent Partitioning information and select 7: NIC Partitioning Configuration (see Figure 4-21). Figure 4-21. Selecting NPAR Configuration 4.
4–Switch Independent Partitioning Switch Independent Partitioning Setup and Management Options 5. Configure the bandwidth settings to meet your system requirements. NOTE When bandwidth settings exist for both Switch Independent Partitioning and DCBX, DCBX takes precedence over Switch Independent Partitioning. DCBX sets the bandwidth for iSCSI and NIC traffic, and then Switch Independent Partitioning sets the bandwidth for the NIC partitions by dividing the NIC bandwidth allocated by DCBX.
f. Specify whether you want your bandwidth settings to persist across reboots (see Figure 4-23). Figure 4-23. Setting Bandwidth Changes to Persist 6. Return to the NIC Partitioning Configuration Selection screen. 7. Change the personalities of each function to meet your system requirements. For example: a. Select 2: Change PCI Function Personality. b. Select the port number, 1 or 2. c.
4–Switch Independent Partitioning Switch Independent Partitioning Setup and Management Options Figure 4-24 shows the CLI commands leading to the option for changing a function type on a Linux system. Figure 4-24. Selecting Function Type on Linux System 8. Return to the main menu and select 8: NIC Partitioning Statistics to view the statistics. Navigate through the menu selections to view eSwitch statistics. 9.
4–Switch Independent Partitioning Switch Independent Partitioning Setup and Management Options Configure Switch Independent Partitioning You can use the NIC Partition Management tab in the device properties page to enable Switch Independent Partitioning and configure the 10GbE physical port into a multifunction storage and networking port. To set up Switch Independent Partitioning on a QLogic adapter port: 1. Log in to the server that contains installed QLogic adapters. 2.
4–Switch Independent Partitioning Switch Independent Partitioning Setup and Management Options 4. From the Adapter Properties page, do the following: a. Select the NIC Partition Management tab. b. Right-click on the function number you want to enable. c. Select Enable Partition (see Figure 4-26). Figure 4-26.
4–Switch Independent Partitioning Switch Independent Partitioning Setup and Management Options When partitioning is enabled, the Adapter Properties page appears, as shown in Figure 4-27. Figure 4-27. Partition Enabled 5. Click OK to close the message box that displays the following information: This change requires a reboot. Proceed? 6. Click OK to close the message box that displays the following information: Please reboot the system now 7. Reboot the host server to make the changes take effect.
4–Switch Independent Partitioning Switch Independent Partitioning Setup and Management Options 3. On the NIC Partition Management tab, right-click on one of the enabled functions, select Change Function Type, then select Convert to from the shortcut menu (see Figure 4-28). Figure 4-28. Selecting Convert to NIC from Shortcut Menu 4. Repeat these procedures to change the function types as needed.
4–Switch Independent Partitioning Switch Independent Partitioning Setup and Management Options Manage Bandwidth Using the NIC Partition Management tab in the Windows device properties page, you can allocate minimum and maximum bandwidth for each NIC function. NOTE When bandwidth settings exist for both Switch Independent Partitioning and DCBX, DCBX takes precedence over Switch Independent Partitioning.
4–Switch Independent Partitioning Switch Independent Partitioning Setup and Management Options 3. Use the Configure Function dialog box to set the minimum and maximum bandwidth percentages, New Minimum BW and New Maximum BW (see Figure 4-30). Figure 4-30. Entering New Bandwidth Values NOTE ETS only specifies the division of bandwidth between FCoE and non-FCoE traffic. It does not specify the bandwidth allocated to the NIC or iSCSI partitions.
4–Switch Independent Partitioning Switch Independent Partitioning Setup and Management Options The new bandwidth values appear in the right pane of the NIC Partition Management property sheet (see Figure 4-31). Figure 4-31. NIC Partition Management Property Sheet 6. Click OK at the bottom of the Properties page to close it.
View eSwitch Statistics You can use the Windows Device Manager’s NIC Partition Management window to view eSwitch statistics for enabled partitions. To display eSwitch statistics: 1. From the QLogic Adapter Properties page, select the NIC Partition Management tab. 2. Right-click the function number for the port you want to review and select eSwitch Statistics from the shortcut menu.
4–Switch Independent Partitioning Switch Independent Partitioning Setup and Management Options CIM Provider and vCenter Server Plug-in for VMware ESX/ESXi The QConvergeConsole vCenter Server Plug-in provides a QConvergeConsole tab you can use to manage the QLogic adapter in the VMware ESX/ESXi environment.
The content pane varies depending on which Function is selected: Bandwidth: This setting allows you to display and set the bandwidth allocation for the NIC function. For detailed information, refer to “Bandwidth Allocation” on page 168. Type: This setting displays the current function type and allows you to change the function type. For detailed information, refer to “Function Type” on page 169.
4–Switch Independent Partitioning Switch Independent Partitioning Setup and Management Options Adjusted Overall Bandwidth Assignment: This is a pie chart diagram that shows the amount of the total bandwidth assigned to the NIC function. Current Active Bandwidth Assignment: This lists the current settings for the Bandwidth Assignment and Maximum Bandwidth parameters. A yellow background indicates that the new value (in parentheses) has not been saved yet.
4–Switch Independent Partitioning Switch Independent Partitioning Setup and Management Options Figure 4-36.
4–Switch Independent Partitioning Switch Independent Partitioning Setup Switch Independent Partitioning Setup This section provides Switch Independent Partitioning reference tables you can use when configuring NIC partitions using the various tools available.
4–Switch Independent Partitioning Switch Independent Partitioning Setup Configuration Options Depending on your system requirements and operating environment, you can set up the adapter port partitions to support different function types. Table 4-6 shows the available function types and configurable parameters. Table 4-6.
4–Switch Independent Partitioning Switch Independent Partitioning Setup Switch Independent Partitioning Configuration Parameters and Setup Tools Table 4-7 identifies which parameters you can configure using each of the available management tools. Table 4-7.
4–Switch Independent Partitioning Switch Independent Partitioning Setup NOTE Table 4-8 applies to QME8262-k only. Table 4-8.
5 Boot Configuration Overview This section provides the following information about boot configuration for the QLogic adapter: Boot from SAN–Booting servers from SANs can provide significant benefits in today’s complex data center environments. One of the driving forces behind SANs is the need to deliver mission-critical data quickly, at any time, without interruptions or delays. Dell System Setup–The Dell System Setup allows you to configure a network adapter.
5–Boot Configuration Boot from SAN Boot from SAN This section provides the following information on boot from SAN: General Boot from SAN Windows Boot from SAN Linux Boot from SAN ESX Boot from SAN Additional information can be found in the driver readme and release notes. General Boot from SAN The following high-level boot from SAN instructions apply to all OSs: Linux, Windows, and ESX: 1. Set up the boot order to disable boot from the local disk or disconnect internal hard drives. 2.
5–Boot Configuration Boot from SAN Dell DUP: Issue the following command to extract the drivers to the appropriate path/location: /s /e= Windows 2008 Boot From SAN For Windows 2008, follow these steps to perform an initial OS installation with the adapter as boot or as add-on. NOTE The following procedure requires a USB Flash drive; see “Creating a Driver Disk” on page 176. Ensure that the target SAN device is available and configured before beginning the procedure.
5–Boot Configuration Boot from SAN The Driver Disk message box displays the following prompt: Do you have a driver disk? 3. Click YES and then press ENTER. 4. In the Driver Disk Source window, select the driver source: If the driver file is on a disk, select fd0, then press ENTER. If the driver file is on a CD, select hdx (where x is the CD drive letter) and then press ENTER. The Insert Driver Disk window opens. 5.
5–Boot Configuration Boot from SAN 6. Press ENTER. 7. If the system prompts you to update another drive, click BACK and then press ENTER. The following message appears: Make sure that CD number 1 is in your drive. 8. Insert the SLES CD #1 in the drive and then click OK. 9. Follow the on-screen instructions to complete the installation. ESX Boot from SAN For VMware ESX, follow these steps to install the driver for devices as part of a new ESX installation.
5–Boot Configuration Dell System Setup Dell System Setup The Dell System Setup allows you to configure a network adapter.
5–Boot Configuration Dell System Setup Accessing Dell System Setup When you first start the host server that contains QLogic adapters, the POST starts. Running POST gives you access to the Dell System Setup. To access the Dell System Setup: 1. While running POST, press F2. The Main menu for the Dell System Setup opens. NOTE Depending on your server model and System Setup version, the screens you see might differ from those shown. 2. Select Device Settings (see Figure 5-1). Figure 5-1.
5–Boot Configuration Dell System Setup 3. In the Device Settings screen, select the adapter that you want to configure or display information about (see Figure 5-2). Figure 5-2. Selecting the Device to Configure The next screen that appears (see Figure 5-3) is the Main Configuration page for the selected adapter. Figure 5-3.
5–Boot Configuration Dell System Setup Main Configuration The Main Configuration page (see Figure 5-3 on page 182) displays information about the selected network adapter and provides the following options.
5–Boot Configuration Dell System Setup NIC Configuration The NIC Configuration page (see Figure 5-5) allows the user to set the following: Legacy Boot Protocol: Select PXE, iSCSI, or None to control the network boot protocol. The configuration and enablement of iSCSI and FCoE are controlled separately. Wake on LAN: This option enables or disables server power-on using an in-band magic packet. Link Speed: This option is the link speed of the NIC.
5–Boot Configuration Dell System Setup iSCSI Configuration The iSCSI Configuration page (see Figure 5-6) provides the following choices for iSCSI configuration: iSCSI General Parameters iSCSI Initiator Parameters iSCSI First Target Parameters iSCSI Second Target Parameters Figure 5-6.
5–Boot Configuration Dell System Setup iSCSI General Parameters The iSCSI General Parameters page (see Figure 5-7) lets you set the following: TCP/IP Parameters via DHCP: Select Enabled or Disabled. When set to Enabled, the adapter uses the DHCP to obtain its IP address, subnet mask, and gateway IP address. iSCSI Parameters via DHCP: Select Enabled or Disabled. When set to Enabled, the initiator acquires its IP address from a DHCP server.
5–Boot Configuration Dell System Setup iSCSI Initiator Parameters The iSCSI Initiator Parameters page (see Figure 5-8 and Figure 5-9) lets you set the following: IPv4: This field indicates whether or not the iSCSI initiator uses the IPv4 protocol. If Enabled, the following parameters can be set: IPv4 Address: When TCP/IP Parameter via DHCP is set to Disabled, this field must contain a valid IP address.
5–Boot Configuration Dell System Setup Figure 5-8. iSCSI Initiator Parameters—Start of Page Figure 5-9.
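When TCP/IP Parameters via DHCP is disabled, the address, subnet mask, and gateway entered in the initiator parameters must be consistent with one another. A minimal sketch of such a sanity check follows; the helper name is hypothetical and the check assumes IPv4.

```python
import ipaddress

def static_config_ok(ip, subnet_mask, gateway):
    """Return True when the static initiator address and gateway fall
    inside the same subnet implied by the subnet mask."""
    net = ipaddress.ip_network(f"{ip}/{subnet_mask}", strict=False)
    return ipaddress.ip_address(gateway) in net

# A gateway outside the initiator's subnet would leave the initiator
# able to reach only nodes on its own LAN.
```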
iSCSI First Target Parameters The iSCSI First Target Parameters page (see Figure 5-10) lets you set the following: IP Version: This option indicates whether IPv4 or IPv6 is selected. IPv4 Address: If IPv4 is selected, this field lets you specify the IPv4 address of the intended iSCSI boot target. IPv6 Address: If IPv6 is selected, this field lets you specify the IPv6 address of the intended iSCSI boot target.
iSCSI Second Target Parameters The iSCSI Second Target Parameters page (see Figure 5-11) lets you set the following: IP Version: This option indicates whether IPv4 or IPv6 is selected. IPv4 Address: If IPv4 is selected, this field lets you specify the IPv4 address of the intended iSCSI boot target. IPv6 Address: If IPv6 is selected, this field lets you specify the IPv6 address of the intended iSCSI boot target.
5–Boot Configuration Dell System Setup FCoE Configuration The FCoE Configuration page (see Figure 5-12) lets you set the following: Connect: Select Enabled to enable OS boot from an FCoE storage device, or Disabled to disable OS boot from an FCoE storage device. Boot from LUN: The boot device LUN. This is a 16-bit value. This parameter is selectable only if the Boot parameter is set to Enabled. Boot from Target: The boot device worldwide port name. This is a 64-bit value.
5–Boot Configuration Dell System Setup NIC Partitioning (Switch Independent Partitioning) Configuration The NIC Partitioning (Switch Independent Partitioning) Configuration page (see Figure 5-13) provides the following choices for Switch Independent Partitioning configuration: Global Bandwidth Allocation Partition 1 Configuration Partition 2 Configuration Partition 3 Configuration Partition 4 Configuration Figure 5-13.
5–Boot Configuration Dell System Setup Global Bandwidth Allocation The Global Bandwidth Allocation page (see Figure 5-14) lets you change a partition’s relative bandwidth weighting and maximum bandwidth if it has been enabled. For more information on bandwidth allocation, refer to “Configuration Options” on page 172. Figure 5-14.
5–Boot Configuration Dell System Setup Partition 1 Configuration The Partition 1 Configuration page (see Figure 5-15) has only one selection, Enabled for NIC Mode. Figure 5-15.
5–Boot Configuration Dell System Setup Partition 2 Configuration The Partition 2 Configuration page (see Figure 5-16) lets you set NIC Mode to Enabled or Disabled. Figure 5-16.
5–Boot Configuration Dell System Setup Partition 3 Configuration The Partition 3 Configuration page (see Figure 5-17) lets you set NIC Mode to Enabled or Disabled. If you select Disabled for NIC Mode, you can select Enabled or Disabled for iSCSI Offload Mode. Figure 5-17.
5–Boot Configuration Dell System Setup Partition 4 Configuration The Partition 4 Configuration page (see Figure 5-18 and Figure 5-19) lets you set NIC Mode to Enabled or Disabled. If you select Disabled for NIC Mode, you can select Enabled or Disabled for iSCSI Offload Mode. Figure 5-18. Partition 4 Configuration—Start of Page Figure 5-19.
5–Boot Configuration PXE Boot Setup PXE Boot Setup PXE allows a workstation to boot from a server on a network before booting the operating system on the local hard drive. Configuring PXE Boot This section provides procedures for configuring the adapter to perform PXE boot. The example uses function 1 and NIC 1. To configure PXE boot: 1. During POST, press the CTRL+Q keys to enter the QLogic 8200 Series CNA Function Configuration window. 2.
9. Press the ESC key, and then select Save changes and exit. The system reboots. 10. After the system reboots, follow the PXE boot server prompts to install the OS of your choice. The system attempts to boot via PXE. For example: Attempting Boot From NIC QLogic PXE v2.0.x.x PCI x.x Px Copyright (C) 2009-2014 QLogic Corporation Initializing... CLIENT MAC ADDR: xx xx xx xx xx xx CLIENT IP: xx.xx.xx.xx MASK: xx.xx.xx.xx DHCP IP: xx.xx.xx.
5–Boot Configuration iSCSI Configuration Using Fast!UTIL iSCSI Configuration Using Fast!UTIL QLogic’s Fast!UTIL provides one method of configuring the QMD8262-k/QLE8262/QME8262-k adapter for iSCSI.
5–Boot Configuration iSCSI Configuration Using Fast!UTIL Subnet Mask When DHCP is set to No, this field must contain a valid subnet mask. Gateway IP Address When DHCP is set to No, this field must contain a valid gateway IP address; otherwise, the system under configuration can communicate only with other nodes on its LAN. Initiator iSCSI Name Press ENTER to configure the iSCSI name of the initiator.
5–Boot Configuration iSCSI Configuration Using Fast!UTIL Boot Device Primary and Alternate After configuring a device (through Primary/Alternate Boot Device Settings), press ENTER on these locations to view a list of available devices. To select an iSCSI boot device, highlight the device and then press ENTER. Adapter Boot Mode Disable—Select this option to disable the ROM BIOS on the adapter, freeing space in upper memory.
5–Boot Configuration iSCSI Configuration Using Fast!UTIL Primary and Alternate Boot Device Settings Security Settings—Press ENTER to access Primary Boot Security Settings. Press ENTER to enable or disable CHAP and bidirectional CHAP and to configure the CHAP name and CHAP secret. (Depending on your configuration, it might not be necessary to configure this option.
5–Boot Configuration iSCSI Configuration Using Fast!UTIL To enable the QLogic iSCSI adapter to boot from a SAN: 1. During server POST, press CTRL+Q to enter the QLogic iSCSI Fast!UTIL BIOS. 2. Select the I/O port to configure. By default, the Adapter Boot mode is set to Disable. 3. From the Fast!UTIL Options menu, select Configuration Settings and then select iSCSI Boot Settings. 4. Before you can set SendTargets, set the Adapter Boot mode to Manual. 5. Select Primary Boot Device Settings. 6.
5–Boot Configuration iSCSI Configuration Using Fast!UTIL Boot Protocol Configuration Boot protocol primary and alternate boot device settings include the following: Security Settings—Press ENTER to access Primary Boot Security Settings. Press ENTER to enable or disable CHAP and bidirectional CHAP and to configure the CHAP name and CHAP secret. (Depending on your configuration, it might not be necessary to configure this option.
Configuring Parameters for a Secondary Adapter If login to the primary boot target fails, the BIOS attempts to log in to the secondary target using the same technique. The BIOS attempts to log in to boot targets configured on different ports, depending on their configuration. iSCSI ports can reside on physical interfaces and may exist on separate adapters.
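The retry order described above can be sketched as a simple priority scan. This is a hedged illustration only: `try_login` stands in for the BIOS login attempt and is not a real API.

```python
def select_boot_target(targets, try_login):
    """Walk the configured boot targets in priority order (primary
    first, then secondary) and return the first successful login."""
    for target in targets:
        if try_login(target):
            return target
    return None  # no target reachable; boot falls through

# Example: the primary target is down, so the secondary is chosen.
chosen = select_boot_target(
    ["primary-target", "secondary-target"],
    lambda t: t == "secondary-target",
)
```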
5–Boot Configuration iSCSI Configuration Using Fast!UTIL 2. On the Options menu, select Configuration Settings. The Configuration Settings window opens, as shown in Figure 5-22. Figure 5-22. Fast!UTIL: Configuration Settings Window 3. On the Configuration Settings menu, select Host Adapter Settings. The Host Adapter Settings window opens, as shown in Figure 5-23. Figure 5-23.
5–Boot Configuration iSCSI Configuration Using Fast!UTIL 4. Select Initiator IP Settings. The Initiator IP Settings window opens, as shown in Figure 5-24. Figure 5-24. Fast!UTIL: Initiator IP Settings Window 5.
5–Boot Configuration iSCSI Configuration Using Fast!UTIL 7. Return to the Configuration Settings menu and then select iSCSI Boot Settings to configure the target settings shown in Figure 5-25. Figure 5-25. Fast!UTIL: iSCSI Boot Settings Window 8. a. On the iSCSI Boot Settings window, select Adapter Boot Mode and set it to Manual. b. On the iSCSI Boot Settings window, select Primary Boot Device Settings. On the Primary Boot Device Settings window (see Figure 5-26), specify the target parameters.
5–Boot Configuration iSCSI Configuration Using Fast!UTIL a. To scan for the specified target, highlight the primary LUN Target IP and then press ENTER. b. Select the target from the list of discovered targets on the Select iSCSI Device window, as shown in Figure 5-27. Figure 5-27. Fast!UTIL: Select iSCSI Device Window c. On the Select LUN window, select the LUN to set the target as the primary iSCSI boot device. 9. Press ESC and then select Save changes. 10.
5–Boot Configuration iSCSI Configuration Using Fast!UTIL 11. During POST, press F2 to enter the Dell System Setup menu, as shown in Figure 5-28. Figure 5-28. Dell System Setup 12. Select System BIOS Settings, then select Boot Settings, then select BIOS Boot Settings, and then select Hard-Disk Drive Sequence, as shown in Figure 5-29. Figure 5-29.
5–Boot Configuration iSCSI Configuration Using Fast!UTIL 13. In the pop-up window, use the UP ARROW and DOWN ARROW or the + and – buttons to move the iSCSI target to the top of the list, as shown in Figure 5-30 (where the iSCSI target is configured on Port 1, Partition 3). Then click OK. Figure 5-30. Setting the iSCSI Boot Sequence 14. Select Save changes and exit. 15. Follow the manufacturer’s OS installation instructions.
5–Boot Configuration iBFT Boot Setup iBFT Boot Setup For an alternate method of iSCSI boot from SAN, use the fields in the iBFT. iBFT is a component of the Advanced Configuration and Power Interface Specification 3.0b standard that provides operating systems a standard way to boot from software-initiated iSCSI protocol. To view the iBFT specification, visit the following URL: http://www.microsoft.com/whdc/system/platform/firmware/ibft.
5–Boot Configuration iBFT Boot Setup 2. Check that the protocol for functions 0 and 1 is set to iBFT. If necessary, change the setting(s), as shown in Figure 5-32, and then press ENTER. Figure 5-32. Enabling iBFT Boot 3. Press ESC and save the settings. 4. Reboot the system.
5–Boot Configuration iBFT Boot Setup Booting to a Target Disk To boot to the target disk, see the boot target vendor’s instructions for the hardware setup. 1. During POST, press F2 to enter the Dell System Setup menu, as shown in Figure 5-33. Figure 5-33.
5–Boot Configuration iBFT Boot Setup 2. Select System BIOS Settings, then select Boot Settings, then select BIOS Boot Settings, and then select Hard-Disk Drive Sequence, as shown in Figure 5-34. Figure 5-34. Selecting iSCSI Boot Sequence 3. In the pop-up window, use the UP ARROW and DOWN ARROW or the + and – buttons to move the iSCSI target to the top of the list, as shown in Figure 5-35 (where the iSCSI target is configured on Port 1, Partition 3). Then click OK. Figure 5-35.
4. Select Save changes and exit. 5. Reboot the system. 6. The OptionROM shows the iSCSI target login information, as shown in Figure 5-36. Figure 5-36. Connecting to the iSCSI Target 7. Continue with OS installation (refer to the OS documentation).
5–Boot Configuration DHCP Boot Setup (iSCSI) DHCP Boot Setup (iSCSI) To configure the DHCP server to support iSCSI boot, first ensure that your DHCP server is set up and then refer to the following procedure. NOTE This release does not support DHCP iSCSI boot for IPv6. Refer to future readme and release notes for IPv6 support notification.
5–Boot Configuration DHCP Boot Setup (iSCSI) 8. When presented with the various Boot Modes, select DHCP using VendorID and then press ENTER. 9. Select DHCP Boot Settings and then press ENTER. 10. On the DHCP Boot Settings screen, select Vendor ID and then press ENTER. 11. Enter the Vendor ID (class) that you defined earlier in the DHCP server configuration steps and then press ENTER. The vendor ID name is case sensitive and is limited to 10 characters in length. 12.
Example string value (no spaces): iscsi:192.168.95.121:6:3260:7:iqn.1984-05.com.dell:powervault.md3000i.6a4badb0000e7ab4000000004b854c83 DHCP Vendor Class Option 202, Secondary Boot Target IQN and Boot Parameters Format the data as a string using the DHCP vendor-defined Secondary Boot Target IQN and Boot Parameters Option (Option 202): iscsi:<IP address>:<protocol>:<port>:<LUN>:<IQN> Example string value (no spaces): iscsi:192.168.95.
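The layout of these vendor-class strings (inferred here from the example value above, not from a published format specification) can be unpacked with a short parser. Note that the final field must absorb the colons inside the IQN itself:

```python
def parse_boot_target_option(value):
    """Split an 'iscsi:<ip>:<protocol>:<port>:<lun>:<iqn>' option
    string. The IQN contains colons of its own, so split at most
    five times and keep the remainder intact."""
    prefix, ip, protocol, port, lun, iqn = value.split(":", 5)
    if prefix != "iscsi":
        raise ValueError("not an iscsi boot option: " + value)
    return {"ip": ip, "protocol": int(protocol),
            "port": int(port), "lun": int(lun), "iqn": iqn}
```

Applied to the example value, this yields the target IP 192.168.95.121, protocol 6 (TCP), port 3260, and LUN 7.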
A Troubleshooting This appendix provides the following troubleshooting information: Diagnosing Problems NIC Troubleshooting iSCSI Troubleshooting FCoE Troubleshooting ESX Troubleshooting Diagnosing Problems Network activity indicators and diagnostic utilities help you to verify that the hardware and software are working properly. If the installed adapter cannot communicate over the network, the flowcharts shown in this appendix can help diagnose the problem with the adapter.
A–Troubleshooting NIC Troubleshooting NIC Troubleshooting Figure A-1.
A–Troubleshooting iSCSI Troubleshooting iSCSI Troubleshooting Figure A-2.
A–Troubleshooting FCoE Troubleshooting FCoE Troubleshooting NOTE If most of the IP packet traffic is not TCP or UDP, the FCoE FIP session might be dropped. If you experience this problem, turn off RSS.
A–Troubleshooting FCoE Troubleshooting Figure A-3.
A–Troubleshooting ESX Troubleshooting ESX Troubleshooting For debugging and troubleshooting networking issues on ESX, refer to the VMware document, VI3 Networking: Advanced Troubleshooting, located here: http://www.vmware.com/files/pdf/technology/vi_networking_adv_troubleshooting.pdf If the troubleshooting procedures in this document do not resolve the problem, please contact Dell for technical assistance (refer to the “Getting Help” section in your Dell system documentation).
B Specifications This appendix provides specifications for the following products: QMD8262-k Specifications QLE8262 Specifications QME8262-k Specifications 227 CU0354602-00 M
B–Specifications QMD8262-k Specifications QMD8262-k Specifications Physical Characteristics Power Requirements Standards Specifications Interface Specifications Environmental Specifications Physical Characteristics Table B-1. Physical Characteristics Adapter Description Type Blade network daughter card Length 3.00 inches Width 2.45 inches Power Requirements Table B-2. Power Requirements Voltage Rail Voltage Current 12V 12.0V 2mA 12V Aux 12.0V 0.784A 3.3V N/A N/A 3.3V Aux 3.
Standards Specifications

Fibre Channel Tape (FC-TAPE) Profile
SCSI Fibre Channel Protocol-2 (FCP-2)
Second Generation FC Generic Services (FC-GS-2)
Third Generation FC Generic Services (FC-GS-3)

Interface Specifications

Table B-3. Interface Specifications
Port Type   10G-BASE-KR
Media       Dell PE M1000e KR Midplane Revision 1.
Environmental Specifications

Table B-4. Environmental Specifications
Condition                                            Operating                               Non-Operating
Temperature ranges (altitude 900 m or 2952.75 ft)    10°C to 35°C (50°F to 95°F)             –40°C to 65°C (–40°F to 149°F)
Temperature ranges (altitude >900 m or 2952.75 ft)   10°C to Note a °C (50°F to Note b °F)   –40°C to 65°C (–40°F to 149°F)
Temperature gradient (max. per 60 min.)              10°C                                    20°C
Humidity percent ranges, noncondensing               20% to 80%*                             5% to 95%+ (Max.
QLE8262 Specifications

Physical Characteristics
Power Requirements
Standards Specifications
Interface Specifications
Environmental Specifications

Physical Characteristics

Table B-5. Physical Characteristics
Adapter   Description
Type      Low-profile PCIe card
Length    6.6 inches
Width     2.54 inches

Power Requirements

Table B-6. Power Requirements
Voltage Rail   Voltage   Current
12V            12V       1.4A
3.3V           3.3V      0A
3.3V AUX       3.
QME8262-k Specifications

Physical Characteristics
Power Requirements
Standards Specifications
Interface Specifications
Environmental Specifications

Physical Characteristics

Table B-8. Physical Characteristics
Adapter   Description
Type      Mezzanine card
Length    3.307 inches
Width     3.465 inches

Power Requirements

Table B-9. Power Requirements
Voltage Rail   Voltage   Current
12V            12V       1.3A
3.3V           3.3V      0A
3.3V AUX       3.
C QConvergeConsole GUI

This appendix provides the following information about the QConvergeConsole GUI:

Introduction to QConvergeConsole
Downloading QConvergeConsole Documentation
Downloading and Installing Management Agents
Installing the QConvergeConsole GUI
What Is in the QConvergeConsole Help System

NOTE: For information on installing the QConvergeConsole GUI, refer to the QConvergeConsole GUI Installation Guide.
Introduction to QConvergeConsole

The QConvergeConsole GUI is a Web-based client and server GUI management tool that provides centralized management and configuration of QLogic adapters across the entire network (LAN and SAN). On the server side, the QConvergeConsole GUI runs as an Apache Tomcat™ application server.
Downloading QConvergeConsole Documentation

To download the QConvergeConsole GUI Installation Guide, go to http://driverdownloads.qlogic.com and click Downloads.

Downloading and Installing Management Agents

To manage the adapters on a local or remote host, the management agents (also called agents) used by the host's adapters must already be installed on the host.
Installing the Agents from the QLogic Web Site

To obtain the agents from the QLogic Web site and install them (Windows and Linux, all versions):

1. Go to the QLogic Downloads page at http://driverdownloads.qlogic.com and download the following for each adapter on the host server:
   SuperInstaller
   Readme and Release Notes
2. Install the agents by running the SuperInstaller.
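As a hedged sketch of step 2 on a Linux host (the archive and script names below are placeholders, not from this guide; use the actual file names from your download, and consult the downloaded Readme for the authoritative procedure):

```shell
# Unpack the downloaded SuperInstaller bundle
# (the archive name below is a placeholder)
tar -xzf QLogic_LinuxSuperInstaller.tar.gz
cd LinuxSuperInstaller

# Run the SuperInstaller with root privileges
# (the script name below is a placeholder)
sudo ./superinstaller.sh
```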
Installing the QConvergeConsole GUI

Refer to the installation procedure for your operating system:

Installing QConvergeConsole in a Windows Environment
Installing QConvergeConsole in a Linux Environment
Installing QConvergeConsole in Silent Mode

Installing QConvergeConsole in a Windows Environment

The QConvergeConsole Installer for Windows is a self-extracting utility that installs QConvergeConsole and related files.
6. To enable the SSL feature, click Yes. To disable SSL, click No.
7. On the Install Complete dialog box, click Done to exit the installer.

You have installed QConvergeConsole on your server.

Installing QConvergeConsole in a Linux Environment

You can install QConvergeConsole in a Linux environment using either a GUI or a CLI method. To install from the CLI, see “Installing QConvergeConsole in Silent Mode” on page 239.
NOTE: The localhost-only option installs QConvergeConsole locally, so you must run it locally (remote connection is not possible). To disable the option, you must uninstall QConvergeConsole and then reinstall it, selecting No in this step.

8. On the Pre-Installation Summary dialog box, read the information, and then click Install. During the installation, the installer notifies you of the status.
9.
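Silent-mode (CLI) installation with InstallAnywhere-based installers such as this one typically follows the convention below. The installer file name is a placeholder; substitute the actual name of the binary you downloaded:

```shell
# Run the Linux installer in silent mode (InstallAnywhere convention);
# no GUI dialogs are shown and defaults are applied
sudo ./QConvergeConsole_Installer_Linux_x64.bin -i silent
```

See the section “Installing QConvergeConsole in Silent Mode” for the exact command-line options supported by this installer.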
What Is in the QConvergeConsole Help System

To access the QConvergeConsole help system while the GUI utility is running, click the Help menu, and then click Browse Contents. The help system provides topics containing details of the following:

Getting Started shows how to start using QConvergeConsole and the help system.
Managing FabricCache Adapters and Ports shows and describes how to display and edit information parameters for 10000 Series FabricCache Adapters (FCA) and ports, as well as how to configure port parameters.
D Regulatory Information

This appendix provides the following information for the QMD8262-k, QLE8262, and QME8262-k products:

Warranty
Regulatory and Compliance Information

Warranty

For information about your Dell warranty, see your system documentation.

Regulatory and Compliance Information

Laser Safety, FDA Notice

This product complies with DHHS Rules 21 CFR Chapter I, Subchapter J. This product has been designed and manufactured according to IEC 60825-1, as indicated on the safety label of the laser product.
Agency Certification

The following sections summarize the EMI and EMC test specifications performed on the models listed below to comply with emission, immunity, and product safety standards:

QMD8262-k (CU0310419)
QLE8262 (CU0310414)
QME8262-k (CU0310410)

EMI and EMC Requirements

FCC Part 15 compliance: Class A

FCC compliance information statement: This device complies with Part 15 of the FCC Rules.
KCC: Class A

Korea RRA Class A Certified
Product Name/Model: Fibre Channel Adapter
Certification holder: QLogic Corporation
Manufactured date: Refer to date code listed on product
Manufacturer/Country of origin: QLogic Corporation/USA

A class equipment (business purpose info/telecommunications equipment): As this equipment has undergone EMC registration for business purpose, the seller and/or the buyer is asked to beware of this point and in ca
Corporate Headquarters
Cavium, Inc.
2315 N. First Street
San Jose, CA 95131
408-943-7100

International Offices
UK | Ireland | Germany | France | India | Japan | China | Hong Kong | Israel | Singapore | Taiwan

© 2011–2017 QLogic Corporation. QLogic Corporation is a wholly owned subsidiary of Cavium, Inc. All rights reserved worldwide. QLogic, the QLogic logo, FabricCache, and QConvergeConsole are trademarks or registered trademarks of QLogic Corporation.