Active System Manager Version 8.
Notes, Cautions, and Warnings

NOTE: A NOTE indicates important information that helps you make better use of your computer.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

Copyright © 2014 Dell Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws.
1 Overview

Active System Manager (ASM) is Dell’s unified management product that provides a comprehensive infrastructure and workload automation solution for IT administrators and teams. ASM simplifies and automates the management of heterogeneous environments, enabling IT to respond more rapidly to dynamic business needs.
After you log in to the ASM user interface, you can access the online help in any of the following ways:
• To open context-sensitive online help for the active page, click ?, and then click Help.
• To open context-sensitive online help for a dialog box, click ? in the dialog box.
Additionally, in the online help, use the Enter search items option in the Table of Contents to search for a specific topic or keyword.

Other Documents You May Need

Go to http://www.dell.
ASM Port and Protocol Information

The following table lists the ports and communication protocols used by ASM to transfer and receive data. Table 1.
2 Installation and Quick Start

The following sections provide installation and quick start information, including step-by-step instructions for deploying and configuring ASM in a VMware vSphere or Microsoft virtualization environment. Only one instance of ASM should be installed within a network environment; exceeding this limit can cause conflicts in device communication.
Specification: Dell PowerEdge Servers

Prerequisites:
• The virtual appliance is able to communicate with the PXE network in which the appliance is deployed. It is recommended to configure the virtual appliance directly on the PXE network, and not on the external network.
• The virtual appliance is able to communicate with the hypervisor management network.
• The DHCP server is fully functional, with appropriate PXE settings to PXE boot images from ASM in your deployment network.
Specification Prerequisite
• Server-facing ports must be in switchport mode.
• Server-facing ports must be in hybrid mode.
• Server-facing ports must be configured for spanning tree portfast.
• If DCB settings are used, they must be properly configured on the switch for converged traffic.
• Any VLAN that is dynamically provisioned by ASM must exist on the switch.
Specification Prerequisite
• Appropriate licenses are deployed on the VMware vCenter.

System Center Virtual Machine Manager (SCVMM)
• See System Center Virtual Machine Manager (SCVMM) Prerequisites.

PXE Setup
• Either use Active System Manager as the PXE responder, configured through the ASM user interface from the Getting Started page, or follow the instructions in Configuring ASM Virtual Appliance as PXE Responder.
Resource Prerequisites
protocol lldp
 dcbx port-role auto-downstream
 no shut
exit

S5000
The following are the prerequisites for the S5000.
• Enable Fibre Channel capability and Full Fabric mode.
feature fc
fc switch-mode fabric-services
• Enable the FC ports connecting to the Compellent storage array and the FC ports connecting to the other S5000 switch via ISL links.
interface range fi 0/0 - 7
 no shut
• Create a DCB map.
Prerequisites for Rack Server, S5000, and Compellent

The following table describes the prerequisites for the FCoE solution offered using Rack Server, S5000, and Compellent. For more information, see http://en.community.dell.com/techcenter/extras/m/white_papers/20387203

Resource Prerequisites
S5000
• DCB needs to be enabled.
• VLT needs to be disabled.
• Enable Fibre Channel capability and Full Fabric mode.
Resource Prerequisites
MXL
• DCB needs to be enabled.
• VLT needs to be disabled.
• The FIP Snooping feature needs to be enabled on the MXL.
conf
feature fip-snooping
• Port-channel member interfaces need to have the following configuration.
Resource Prerequisites
• Create an FCoE VLAN.
interface vlan [Create VLAN for FCoE]
exit
• Create an FCoE map.
fcoe-map default_full_fabric
 fabric-id vlan
 fc-map
exit
• Apply the FCoE map to the interface.
interface fibrechannel 0/0
 fabric default_full_fabric
 no shutdown

NOTE: The following is the process for generating the FC map. To generate the fc-map, use the Ruby code below, where the VLAN ID is the FCoE VLAN ID:
fc_map = vlanid.to_i.to_s(16).upcase[0..1]
fc_map.
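The Ruby snippet above is truncated in this document after the first assignment. The following is a hedged sketch of how such a value might be completed, assuming the 24-bit fc-map is formed by prefixing the first two hex digits of the VLAN ID with the common 0EFC FCoE prefix; the helper name `fc_map_for` and the "0EFC" prefix are assumptions not present in the original, so verify the resulting value against your fabric configuration before use.

```ruby
# Hedged sketch: derive a 24-bit FC map from the FCoE VLAN ID.
# Only the first hex-conversion line appears in the original document;
# the "0EFC" prefix and the helper name are assumptions.
def fc_map_for(vlanid)
  # Convert the VLAN ID to uppercase hex and keep the first two digits,
  # exactly as the documented line does (e.g. 1000 -> "3E8" -> "3E").
  digits = vlanid.to_i.to_s(16).upcase[0..1]
  "0EFC" + digits
end

puts fc_map_for(1000)  # prints 0EFC3E
```

For example, FCoE VLAN 1000 would yield the fc-map 0EFC3E under this assumed scheme.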
Resource Prerequisites
• Enable the FC ports connecting to the Compellent storage array and the FC ports connecting to the other S5000 switch via ISL links.
interface range fi 0/0 - 7
 no shut
• Create a DCB map.
dcb-map SAN_DCB_MAP
 priority-group 0 bandwidth 60 pfc off
 priority-group 1 bandwidth 40 pfc on
 priority-pgid 0 0 0 1 0 0 0 0
exit
• Create an FCoE VLAN.
interface vlan [Create VLAN for FCoE]
exit
• Create an FCoE map.
Resource Prerequisites
port-channel-protocol lacp
 port-channel 128 mode active
exit
protocol lldp
 no advertise dcbx-tlv ets-reco
 dcbx port-role auto-upstream
no shut
exit
• The port-channel connecting to the Cisco Nexus switch needs to have the following configuration.
interface port-channel 128
 portmode hybrid
 switchport
 fip-snooping port-mode fcf
• Server-facing ports need to have the following configuration.
Resource Prerequisites
Example:
interface vfc101
 bind mac-address 5C:F9:DD:16:EF:07
 no shutdown
interface vfc102
 bind mac-address 5C:F9:DD:16:EF:21
 no shutdown
• Move back into the VSAN database, create entries for the new VFCs just created, and create entries for the FC port(s) that will be used.
vsan database
 vsan 2 interface vfc101
 vsan 2 interface vfc102
 vsan 2 interface fc2/1
 vsan 2 interface fc2/2

NOTE: All the Compellent ports need to be part of the same VSAN.
Resource Prerequisites
• Instantiate, but do not configure, the upstream port-channel (LAG) to the core/aggregation switch.
• Instantiate, but do not configure, the downstream port-channel (LAG) to the IOA4.
• Create the VFC interface to bind to the server's CNA FIP MAC address. This address can be located in the CMC WWN table or on the iDRAC page for the server.
Resource Prerequisites
• Port-channel member interfaces need to have the following configuration.
interface range tengigabitethernet 0/33 - 36
 port-channel-protocol lacp
 port-channel 128 mode active
exit
protocol lldp
 no advertise dcbx-tlv ets-reco
 dcbx port-role auto-upstream
no shut
exit
• The port-channel connecting to the Cisco Nexus switch needs to have the following configuration.
Resource Prerequisites
• Instantiate, but do not configure, the upstream port-channel (LAG) to the core/aggregation switch.
• Instantiate, but do not configure, the downstream port-channel (LAG) to the IOA4.
• Create the VFC interface to bind to the server's CNA FIP MAC address. This address can be located in the CMC WWN table or on the iDRAC page for the server.
Resource Prerequisites
configuration. (Ensure that you back up the configuration before enabling the feature.)
conf
 feature npv
• Enable the required features.
feature fcoe
feature npiv
feature lacp
• Create a VSAN and instantiate it in the VSAN database.
conf
 vsan database
 vsan
• Configure the regular Ethernet VLANs, and then create the FCoE VLAN with an assignment to its respective VSAN.
Resource Prerequisites
NOTE: All the Dell Compellent ports need to be part of the same VSAN. A Brocade alias containing the Compellent fault domain WWPN needs to be created and accessible on the Brocade switch.

Dell Compellent
Create a fault domain as per Dell Compellent best practices.

Prerequisites for M1000e (with MXL and FC FlexIOM), Brocade, and Dell Compellent

The following table describes the prerequisites for the FCoE solution offered using M1000e (with MXL and FC FlexIOM), Brocade, and Dell Compellent.
Resource Prerequisites
Dell Compellent
Create a fault domain as per Dell Compellent best practices.

System Center Virtual Machine Manager (SCVMM) Prerequisites

ASM manages resources on Microsoft System Center Virtual Machine Manager through Windows Remote Management (WinRM). WinRM must be enabled on the SCVMM server, as well as on the Active Directory and DNS servers used in SCVMM/Hyper-V deployments. ASM deployments support Active Directory and DNS servers that exist on the same machine.
Click Next to continue.
8. If there is more than one datastore available on the host, the Datastore page is displayed. Select the location to store the virtual machine (VM) files, and then click Next to continue.
9. On the Disk Format page, choose one of the following options:
• To allocate storage space to virtual machines as required, click thin provisioned format.
• To pre-allocate physical storage space to virtual machines at the time a disk is created, click thick provisioned format.
4. Click Next.
h. On the Select Destination page, select the destination host group that contains the Hyper-V server where you want to deploy the ASM VM. Click Next.
i. On the Select Host page, select the host on which you want to deploy ASM, and then click Next.
j. On the Configuration Settings page, make the changes required for your environment, if any.
k. On the Select networks page, select your PXE network and configure it appropriately.
l.
3 Configuring ASM Virtual Appliance

You must configure the following settings in the virtual appliance console before you start using ASM:
• Change the Dell administrator password. For detailed information, see Changing Delladmin Password.
• Configure a static IP address in the virtual appliance. For detailed information, see Configuring Static IP Address in the Virtual Appliance.
• Configure the ASM virtual appliance as a PXE boot responder.
10. For Hyper-V only, reboot the ASM virtual appliance.

Configuring ASM Virtual Appliance as PXE Boot Responder

ASM requires both PXE and DHCP network services to function. If a DHCP server is not present in the environment, ASM may be configured to act as the DHCP server and PXE responder on a PXE network. This can be configured through the Getting Started menu for appliance setup in the ASM user interface.
4 Customizing Virtual Machine Templates for VMware and Hyper-V

ASM supports cloning virtual machines (VMs) or virtual machine templates in VMware, and cloning virtual machine templates in Hyper-V. For ASM cloning, the virtual machine or virtual machine template must be customized to make sure that it has a unique identifier and can communicate back to the ASM appliance upon completion of the cloning process.
• To install the puppet agent on the virtual machine, copy the puppet agent install files to the virtual machine. The puppet agent is available on the ASM appliance for both Windows and Linux in the /var/lib/razor/repo-store directory. If the virtual machine being customized has network access to the ASM appliance, you can connect to this directory as a network share using the address: \\\razor\puppet-agent .
NOTE: Additional lines may be present in the puppet.conf file for your system. It is not necessary to delete any information from this file; you only need to ensure that the previously noted section is present in the file.

Customizing Linux Template

Perform the following tasks to customize the Linux template:
1. Ensure that all instructions have been completed for VMware or Hyper-V virtual machines, as noted in the previous section.
2.
b. To verify that the crontab is updated as expected, run the following command and ensure that you see the line added above:
crontab -l
6. After completing the customization, turn off the virtual machine. To create a virtual machine template, follow the appropriate steps for your virtualization environment.

NOTE: After preparing the base virtual machine, if the virtual machine is restarted, the puppet verification file must be deleted from the system. This file can be found in Windows at C:\ProgramData\puppe
NOTE: To create a virtual machine template in SCVMM, make sure that the virtual machine template OS Configuration has an administrator password and, if necessary, a Windows product key set. To do this, right-click the virtual machine template and select "Properties", then select "OS Configuration" and enter a password in the Admin Password setting and a product key in the Product Key setting.
5 Configuring ASM Virtual Appliance for NetApp Storage Support

For ASM to support NetApp, perform the following tasks:
• Add the NetApp Ruby SDK libraries to the appliance. For more information about adding the SDK libraries, see Adding NetApp Ruby SDK.
• Enable HTTP/HTTPs for the NFS share. For more information, see Enabling HTTP or HTTPs for NFS Share.

Make sure the NFS license is enabled on the NetApp storage. To obtain and install the license, refer to the NetApp documentation.
sudo patch /etc/puppetlabs/puppet/modules/netapp/lib/puppet/util/network_device/netapp/NaServer.rb < /tmp/NaServer.patch
8. Update the permissions on the NetApp module. To update the permissions, run the following command:
sudo chmod 755 /etc/puppetlabs/puppet/modules/netapp/lib/puppet/util/network_device/netapp/*
9. Change the owner of the files.
• The Percentage of Space to Reserve for Snapshot
• Auto-increment
• Persistent
• NFS Target IP
6 Completing Initial Configuration

Log in to ASM using the appliance IP address. After logging in to ASM, you need to complete the basic configuration setup in the Initial Setup wizard. For more information about completing the initial setup, see the Active System Manager Version 8.0 User's Guide.
A Deploying WinPE on the Virtual Appliance

You need to perform the following configuration tasks before using ASM to deploy a Windows OS.

NOTE: You should use Microsoft ADK 8.1 or ADK 8.0 installed in the default location.

1. Create a Windows .iso that has been customized for use with ASM, using the ADK and the build-razorwinpe.ps1 script. You will need to locate the appropriate drivers for your server hardware or virtual machines for the operating system you are trying to install.
NOTE:
• If any additional drivers are required, add the drivers under the "Drivers" folder in the build directory you created on your ADK machine. The drivers are installed into the Windows image, if applicable. Drivers that do not apply to the OS being processed are ignored.
• If you want to deploy Windows to VMware VMs, the WinPE drivers for the VMXNET3 virtual network adapter from VMware are required.
a. In the Repository Name box, enter the name of the repository.
b. In the Image Type box, enter the image type.
c. In the Source File or Path Name box, enter the path of the OS image file in a file share.
d. If using a CIFS share, enter the User Name and Password to access the share. These fields are enabled only when entering a CIFS share.
For more information about firmware repositories, see the ASM Online Help.
B Configuring DHCP or PXE on External Servers

The PXE service requires a DHCP server configured to provide boot server (TFTP PXE server) information and specific start-up file information. The ASM PXE implementation uses the iPXE specification, so the configuration details include instructions that allow legacy PXE servers and resources to boot properly to this iPXE implementation. This section provides information about configuring DHCP on the following servers.
Create the DHCP Policy 1. Open the Windows 2012 DHCP Server DHCP Manager. 2. In the console tree, expand the scope that will service your ASM PXE network. Right-click Policies and select New Policy. The DHCP Policy Configuration Wizard is displayed. 3. Next to Policy Name, type iPXE and enter the description as iPXE Client. Click Next. 4. On the Configure Conditions for the policy page, click Add. 5. In the Add/Edit Condition dialog box, perform the following actions, and then click OK.
Create the DHCP User Class

You must create the user class for the DHCP server before creating the DHCP policy.
1. Open the Windows 2008 DHCP Server DHCP Manager.
2. In the console tree, navigate to IPv4. Right-click IPv4, and then click Define User Classes from the drop-down menu.
3. In the DHCP User Class dialog box, click Add to create a new user class.
4. In the New Class dialog box, enter the following information, and click OK to create the user class.
5. a. In the Display Name box, enter iPXE.
Configuring DHCP for Linux

You can manage the configuration of the Linux DHCPD service by editing the dhcpd.conf configuration file. The dhcpd.conf file is located in the /etc/dhcp directory on most Linux distributions. If DHCP is not installed on your Linux server, install the Network Infrastructure Server or similar services. Before you start editing the dhcpd.conf file, it is recommended to back up the file. After you install the appropriate network services, you must configure the dhcpd.
After you modify the dhcpd.conf file based on your environment, you need to start or restart your DHCPD service. For more information, see http://ipxe.org/howto/dhcpd

Sample DHCP Configuration

# dhcpd.conf
#
# Sample configuration file for ISC dhcpd
#
# option definitions common to all supported networks...
#option domain-name "example.org";
#option domain-name-servers 192.168.203.46;
#filename "pxelinux.0";
next-server 192.168.123.
#subnet 10.254.239.32 netmask 255.255.255.224 {
#  range dynamic-bootp 10.254.239.40 10.254.239.60;
#  option broadcast-address 10.254.239.31;
#  option routers rtr-239-32-1.example.org;
#}

# A slightly different configuration for an internal subnet.
#subnet 10.5.5.0 netmask 255.255.255.224 {
#  range 10.5.5.26 10.5.5.30;
#  option domain-name-servers ns1.internal.example.org;
#  option domain-name "internal.example.org";
#  option routers 10.5.5.1;
#  option broadcast-address 10.5.5.
#}
#pool {
#  allow members of "foo";
#  range 10.17.224.10 10.17.224.250;
#}
#pool {
#  deny members of "foo";
#  range 10.0.29.10 10.0.29.
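The commented sample above stops short of the iPXE-specific logic that this appendix describes. The fragment below is a hedged sketch of that logic, adapted from the iPXE project's DHCP how-to; the boot file names are placeholders rather than ASM's actual boot files, so substitute the files served in your environment.

```
# Hedged sketch: serve an iPXE boot script to clients that are already
# running iPXE, and chain-load the iPXE binary for legacy PXE clients.
# File names below are placeholders, not ASM's actual boot files.
if exists user-class and option user-class = "iPXE" {
    # Client is already iPXE: hand it the boot script
    filename "bootstrap.ipxe";
} else {
    # Legacy PXE client: chain-load the iPXE binary over TFTP
    filename "undionly.kpxe";
}
```

This conditional is what prevents the chain-loaded iPXE client from being handed the chain-loader again in an endless loop.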