Active System Manager Release 8.1.
Notes, Cautions, and Warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
Copyright © 2015 Dell Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws.
Contents
1 Overview ............ 5
  About this Document ............ 5
  What's New in this Release ............ 5
  Accessing Online Help ............
  Customizing Linux Template ............ 37
  Customizing Windows Template ............ 38
5 Configuring ASM Virtual Appliance for NetApp Storage Support ............ 40
  Adding NetApp Ruby SDK to ASM Virtual Appliance ............
Overview 1 Active System Manager (ASM) is Dell’s unified management product that provides a comprehensive infrastructure and workload automation solution for IT administrators and teams. ASM simplifies and automates the management of heterogeneous environments, enabling IT to respond more rapidly to dynamic business needs.
Log in to the ASM user interface with the user name admin and the password admin, and then press Enter. After you log in to the ASM user interface, you can access the online help in any of the following ways: • To open context-sensitive online help for the active page, click ?, and then click Help. • To open context-sensitive online help for a dialog box, click ? in the dialog box.
• Standard License — A Standard license grants full access. You will receive an e-mail from customer service with the instructions for downloading ASM. The license file is attached to that email. If you are using ASM for the first time, you must upload the license file through the Initial Setup wizard. To upload and activate subsequent licenses, click Settings → Virtual Appliance Management. 1. On the Virtual Appliance Management page, under the License Management section, click Add.
ASM Port and Protocol Information The following table lists the ports and communication protocols used by ASM to send and receive data. Table 1.
Installation and Quick Start 2 The following sections provide installation and quick start information, including step-by-step instructions for deploying and configuring ASM in a VMware vSphere or Microsoft virtualization environment. Only one instance of ASM should be installed within a network environment. Exceeding this limit can cause conflicts in device communication.
Specification: Dell PowerEdge Servers
Prerequisites:
• The virtual appliance is able to communicate with the PXE network in which the appliance is deployed. It is recommended to configure the virtual appliance directly on the PXE network, and not on the external network.
• The virtual appliance is able to communicate with the hypervisor management network.
• The DHCP server is fully functional with appropriate PXE settings to PXE boot images from ASM in your deployment network.
• The network must be configured on the top-of-rack switches that are connected to the C-Series servers.
• The necessary VLANs must be configured on the server-facing ports of that top-of-rack switch.
NOTE: The PXE VLAN must be untagged for any type of OS deployment. For Windows and Linux bare-metal OS installations, you must also configure the workload network; for ESXi deployments, you must configure the hypervisor management network.
Specification: Dell PowerEdge M I/O Aggregator
Prerequisites:
• Server-facing ports must be in switchport mode.
• Server-facing ports must be configured for spanning-tree portfast.
• If ASM is used to perform the initial configuration of credentials and IPs on the IOM in the blade chassis, make sure that no enable password is configured on the switches.
• Any VLAN that is dynamically provisioned by ASM must already exist on the switch.
Specification: Compellent iSCSI on MXL with Hyper-V
Prerequisites:
• Discovery of the EM must be done with the same credentials that are used to add the Storage Center in Element Manager.
• Enable LLDP and its corresponding attributes.
• Enable DCB (with the no-PFC option) on the participating interfaces (server-facing ports and port-channel members).
!
interface TenGigabitEthernet 0/41
 no ip address
 mtu 12000
 dcb-map DCB_MAP_PFC_OFF
 !
 port-channel-protocol LACP
  port-channel 1 mode active
 !
 protocol lldp
  advertise management-tlv management-address system-name
  no advertise dcbx-tlv ets-reco
  dcbx port-role auto-upstream
 no shutdown
FTOSA1#
Specification: VMware vCenter 5.1, 5.5, or 6.0
Prerequisites:
• VMware vCenter 5.1, 5.5, or 6.0 is configured and accessible through the management and hypervisor management network.
NOTE: Prior to deploying an M1000e server, you must disable FlexAddress on every server in the chassis. To disable FlexAddress, follow the path CMC > Server Overview > Setup > FlexAddress. The server must be turned off to disable FlexAddress. Ideally, this should be done prior to discovering the server. This setting applies to the chassis and the servers in the chassis, not to the IOM switches such as the MXL or IOA.
Dell(config-fcoe-name)# fc-map 0efc00
Dell(config-fcoe-name)# keepalive
Dell(config-fcoe-name)# fcf-priority 128
Dell(config-fcoe-name)# fka-adv-period 8
Prerequisites for M1000e (with MXL), S5000, and Compellent
The following table describes the prerequisites for the FCoE solution offered using M1000e (with MXL), S5000, and Compellent. For more information, see http://en.community.dell.com/techcenter/extras/m/white_papers/20387203
Resource: MXL
Prerequisites:
• DCB must be enabled.
no shut
exit
Resource: S5000
Prerequisites:
The following are the prerequisites for S5000.
• Enable Fibre Channel capability and Full Fabric mode.
feature fc
fc switch-mode fabric-services
• Enable the FC ports connecting to the Compellent storage array and the FC ports connecting to the other S5000 switch via ISL links.
interface range fi 0/0 - 7
no shut
• Create a DCB map.
Resource: S5000
Prerequisites:
• DCB must be enabled.
• VLT must be disabled.
• Enable Fibre Channel capability and Full Fabric mode.
feature fc
fc switch-mode fabric-services
• Enable the FC ports connecting to the Compellent storage array and the FC ports connecting to the other S5000 switch via ISL links.
interface range fi 0/0 - 7
no shut
• Create a DCB map.
• The FIP snooping feature must be enabled on the MXL.
conf
feature fip-snooping
• Port-channel member interfaces must have the following configuration.
interface range tengigabitethernet 0/33 - 36
port-channel-protocol lacp
port-channel 128 mode active
exit
protocol lldp
no advertise dcbx-tlv ets-reco
dcbx port-role auto-upstream
no shut
exit
• The port-channel connecting the S5000 switch must have the following configuration.
• Create a FCoE VLAN.
interface vlan [Create VLAN for FCoE]
exit
• Create a FCoE map.
fcoe-map default_full_fabric
fabric-id vlan
fc-map
exit
• Apply the FCoE map to the interface.
interface fibrechannel 0/0
fabric default_full_fabric
no shutdown
NOTE: The fc-map is generated from the FCoE VLAN ID using the following Ruby code (here, the VLAN ID is the FCoE VLAN ID):
fc_map = vlanid.to_i.to_s(16).upcase[0..1]
fc_map.
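The fc-map generation snippet in the note above is truncated in this guide. The following Ruby sketch shows one plausible completion: the 0EFC prefix is the standard FC-MAP range (and matches the fc-map 0efc00 example shown earlier in this section), but the exact suffix logic is an assumption, not the documented ASM implementation.

```ruby
# Hypothetical sketch of fc-map generation from an FCoE VLAN ID.
# Assumption: the final byte is taken from the first two hex digits of
# the VLAN ID, as in the truncated snippet above; "0EFC" is the
# standard FC-MAP prefix range. Verify against your switch before use.
def fc_map_for_vlan(vlanid)
  suffix = vlanid.to_i.to_s(16).upcase[0..1].rjust(2, '0')
  "0EFC#{suffix}"
end

puts fc_map_for_vlan(1003)  # VLAN 1003 (0x3EB) -> "0EFC3E"
```

Whatever value is generated, the fcoe-map applied on the switch must use the same fc-map value, so check the result against the running configuration before deploying.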
• Enable the FC ports connecting to the Compellent storage array and the FC ports connecting to the other S5000 switch via ISL links.
interface range fi 0/0 - 7
no shut
• Create a DCB map.
dcb-map SAN_DCB_MAP
priority-group 0 bandwidth 60 pfc off
priority-group 1 bandwidth 40 pfc on
priority-pgid 0 0 0 1 0 0 0 0
exit
• Create a FCoE VLAN.
interface vlan [Create VLAN for FCoE]
exit
• Create a FCoE map.
port-channel-protocol lacp
port-channel 128 mode active
exit
protocol lldp
no advertise dcbx-tlv ets-reco
dcbx port-role auto-upstream
no shut
exit
• The port-channel connecting the Cisco Nexus switch must have the following configuration.
interface port-channel 128
portmode hybrid
switchport
fip-snooping port-mode fcf
• Server-facing ports must have the following configuration.
Example:
interface vfc101
bind mac-address 5C:F9:DD:16:EF:07
no shutdown
interface vfc102
bind mac-address 5C:F9:DD:16:EF:21
no shutdown
• Move back into the VSAN database, create entries for the new VFCs just created, and create entries for the FC port(s) that will be used.
vsan database
vsan 2 interface vfc101
vsan 2 interface vfc102
vsan 2 interface fc2/1
vsan 2 interface fc2/2
NOTE: All the Compellent ports need to be part of the same VSAN.
• Instantiate, but do not configure, the upstream port-channel (LAG) to the core/aggregation switch.
• Instantiate, but do not configure, the downstream port-channel (LAG) to the IOA4.
• Create the VFC interface to bind to the server's CNA FIP MAC address. This address can be found in the CMC WWN table or on the iDRAC page for the server.
• Port-channel member interfaces must have the following configuration.
interface range tengigabitethernet 0/33 - 36
port-channel-protocol lacp
port-channel 128 mode active
exit
protocol lldp
no advertise dcbx-tlv ets-reco
dcbx port-role auto-upstream
no shut
exit
• The port-channel connecting the Cisco Nexus switch must have the following configuration.
• Instantiate, but do not configure, the upstream port-channel (LAG) to the core/aggregation switch.
• Instantiate, but do not configure, the downstream port-channel (LAG) to the IOA4.
• Create the VFC interface to bind to the server's CNA FIP MAC address. This address can be found in the CMC WWN table or on the iDRAC page for the server.
configuration. (Ensure that you back up the configuration before enabling the feature.)
conf
feature npv
• Enable the required features.
feature fcoe
feature npiv
feature lacp
• Create the VSAN and instantiate it in the VSAN database.
conf
vsan database
vsan
• Configure the regular Ethernet VLANs, and then create the FCoE VLAN with an assignment to its respective VSAN.
NOTE: All the Dell Compellent ports need to be part of the same VSAN. A Brocade alias containing the Compellent fault domain WWPN must be created and accessible on the Brocade switch.
Resource: Dell Compellent
Prerequisites: Create the fault domain as per Dell Compellent best practices.
Prerequisites for M1000e (with MXL and FC FlexIOM), Brocade, and Dell Compellent
The following table describes the prerequisites for the FCoE solution offered using M1000e (with MXL and FC FlexIOM), Brocade, and Dell Compellent.
Resource: Dell Compellent
Prerequisites: Create the fault domain as per Dell Compellent best practices.
System Center Virtual Machine Manager (SCVMM) Prerequisites
ASM manages resources on Microsoft System Center Virtual Machine Manager (SCVMM) through Windows Remote Management (WinRM). WinRM must be enabled on the SCVMM server, as well as on the Active Directory and DNS servers used in SCVMM/Hyper-V deployments. ASM deployments support Active Directory and DNS servers that exist on the same machine.
Click Next to continue.
8. If there is more than one datastore available on the host, the Datastore page is displayed. Select the location to store the virtual machine (VM) files, and then click Next to continue.
9. On the Disk Format page, choose one of the following options:
• To allocate storage space to virtual machines as required, click Thin provisioned format.
• To pre-allocate physical storage space to virtual machines at the time a disk is created, click Thick provisioned format.
4. Click Next.
h. On the Select Destination page, select the destination host group that contains the Hyper-V server where you want to deploy the ASM VM. Click Next.
i. On the Select Host page, select the host on which you want to deploy ASM, and then click Next.
j. On the Configuration Settings page, make any changes required for your environment.
k. On the Select Networks page, select your PXE network and configure it appropriately.
l.
Retry the deployment so that ASM can attempt to mount the volume(s) again.
Error 02: SCVMM reports a DNS error during mounting of the available storage on the SCVMM cluster.
Cause: Trying to reuse an existing volume that is used in another Hyper-V cluster.
Resolution: Hyper-V and SCVMM do not allow mounting a volume that is used in another cluster (active or inactive). ASM does not format an already formatted volume, to avoid any data loss.
Configuring ASM Virtual Appliance 3
You must configure the following settings in the virtual appliance console before you start using ASM:
• Change the Dell administrator (delladmin) password. For detailed information, see Changing Delladmin Password.
• Configure a static IP address in the virtual appliance. For detailed information, see Configuring Static IP Address in the Virtual Appliance.
• Configure the ASM virtual appliance as the PXE boot responder.
Configuring Static IP Address in the Virtual Appliance
1. In VMware vSphere, click the Console tab to open the console of the virtual appliance, or use SSH to connect to the ASM virtual appliance IP address (SSH must be enabled on the appliance).
2. Log in to the console with the user name delladmin, enter the current password, and then press Enter.
NOTE: The default password for the delladmin account is delladmin.
3. At the command line interface, run the command asm_init_shell.
4.
Customizing Virtual Machine Templates for VMware and Hyper-V 4 ASM supports cloning virtual machines (VM) or virtual machine templates in VMware, and cloning virtual machine templates in Hyper-V and in Red Hat Enterprise Linux.
• To install the puppet agent on the virtual machine, copy the puppet agent install files to the virtual machine. The puppet agent installers for both Windows and Linux are available on the ASM appliance in the /var/lib/razor/repo-store directory. If the virtual machine being customized has network access to the ASM appliance, you can connect to this directory as a network share using the address: \\\razor\puppet-agent .
NOTE: Additional lines may be present in the puppet.conf file on your system. It is not necessary to delete any information from this file; just ensure that the previously noted section is present.
Customizing Linux Template
Perform the following tasks to customize a Linux template:
1. Ensure all instructions have been completed for VMware or Hyper-V virtual machines as noted in the previous section.
a. b. c. d. e.
2.
Debian/Ubuntu:
rm /lib/udev/rules.d/75-persistent-net-generator.rules
5. Configure a cron job to execute the puppet_certname.sh script and restart or start the puppet service. Type the following command:
crontab -e
a. Add the following line to this file, and then save and exit the file.
@reboot /usr/local/bin/puppet_certname.sh; /etc/init.d/puppet restart
RHEL 7:
@reboot /usr/local/bin/puppet_certname.sh
b.
5. Specify that the task runs the script C:\puppet_certname.bat.
6. Specify that the task runs in the C:\ directory. This is an optional parameter, but it is required for ASM clone customization.
7. Make sure that the task can run even when you are not logged in, and that it runs with the highest privileges. To enable this option, right-click puppet_certname.bat and click Properties. In the puppet_certname Properties dialog box, under Security options, select Run whether user is logged on or not.
8.
Configuring ASM Virtual Appliance for NetApp Storage Support 5
For ASM to support NetApp storage, perform the following tasks:
• Add the NetApp Ruby SDK libraries to the appliance. For more information about adding the SDK libraries, see Adding NetApp Ruby SDK.
• Enable HTTP/HTTPS for the NFS share. For more information, see Enabling HTTP or HTTPs for NFS Share.
Make sure the NFS license is enabled on the NetApp array. To obtain and install the license, refer to the NetApp documentation.
5. Apply the patch(es) to update the NetApp SDK Ruby files.
# sudo patch /etc/puppetlabs/puppet/modules/netapp/lib/puppet/util/network_device/netapp/NaServer.rb
• Snapshot percentage
• The percentage of space to reserve for Snapshot
• Auto-increment
• Persistent
• NFS Target IP
Completing Initial Configuration 6
Log in to ASM using the appliance IP address. After logging in to ASM, you need to complete the basic configuration setup in the Initial Setup wizard. After that, four other wizards allow you to define networks, discover resources, configure resources, and publish a template. For more information, see the Active System Manager Release 8.1.1 User's Guide.
NOTE: If you use the ASM 8.1.
A Installing Windows ADK 8.1.1 for OS Prep for Windows
You need to perform the following configuration tasks before using ASM to deploy a Windows OS.
NOTE: Use Microsoft ADK 8.1.1 installed in the default location. Make sure to install all options during the ADK installation process.
1. Create a Windows .iso that has been customized for use with ASM, using the ADK and the build-razorwinpe.ps1 script.
NOTE:
• If any additional drivers are required, add the drivers under the Drivers folder in the build directory you created on your ADK machine. The drivers are installed into the Windows image, if applicable. Drivers that do not apply to the OS being processed are ignored.
• If you want to deploy Windows to VMware VMs, the WinPE drivers for the VMware VMXNET3 virtual network adapter are required.
To add an OS image repository, perform the following tasks in the ASM GUI:
1. In the left pane, click Settings > Repositories.
2. On the Repositories page, click the OS Image Repositories tab, and then click Add.
3. In the Add OS Image Repository dialog box, perform the following actions:
a. In the Repository Name box, enter the name of the repository.
b. In the Image Type box, enter the image type.
c. In the Source File or Path Name box, enter the path of the OS image file name in a file share.
d.
B Configuring DHCP or PXE on External Servers
The PXE service requires a DHCP server configured to provide boot server (TFTP PXE server) information and specific start-up file information. The ASM PXE implementation uses the iPXE specification, so the configuration details include instructions that allow legacy PXE servers and resources to boot properly to this iPXE implementation. This section provides information about configuring DHCP on the following servers.
NOTE: The hexadecimal byte values of the ASCII string "iPXE" are (69 50 58 45).
b. In the Description box, enter iPXE Clients.
c. In the data pane, under ASCII, enter iPXE.
5. Click Close.
Create the DHCP Policy
1. Open the Windows 2012 DHCP Server DHCP Manager.
2. In the console tree, expand the scope that will service your ASM PXE network. Right-click Policies and select New Policy. The DHCP Policy Configuration Wizard is displayed.
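The byte values in the note above can be double-checked with a short Ruby one-liner (illustrative only; Ruby is also the scripting language used elsewhere in this guide):

```ruby
# Print each byte of the ASCII string "iPXE" in hexadecimal.
puts "iPXE".bytes.map { |b| format("%02X", b) }.join(" ")
# => 69 50 58 45
```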
3. Create the Boot File Scope Option.
For additional information, see http://ipxe.org/howto/msdhcp
Create the DHCP User Class
You must create the user class for the DHCP server before creating the DHCP policy.
1. Open the Windows 2008 DHCP Server DHCP Manager.
2. In the console tree, navigate to IPv4. Right-click IPv4, and then click Define User Classes from the drop-down menu.
3. In the DHCP User Class dialog box, click Add to create a new user class.
4.
• 006 Name Server (DNS server IP address)
Configuring DHCP for Linux
You can manage the configuration of the Linux DHCPD service by editing the dhcpd.conf configuration file. The dhcpd.conf file is located in the /etc/dhcp directory on most Linux distributions. If DHCP is not installed on your Linux server, install the Network Infrastructure Server role or similar services. Before you start editing the dhcpd.conf file, it is recommended that you back up the file.
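For Linux, the iPXE-aware logic described in this section is typically expressed in dhcpd.conf as a user-class conditional. The fragment below is an illustration adapted from the iPXE project's dhcpd how-to (http://ipxe.org/howto/dhcpd); the file names follow the iPXE project's conventional examples and are placeholders, not values mandated by ASM:

```
# Illustrative dhcpd.conf fragment: distinguish iPXE clients from
# legacy PXE clients by user class. File names are placeholders.
if exists user-class and option user-class = "iPXE" {
    # iPXE clients receive an iPXE boot script
    filename "boot.ipxe";
} else {
    # legacy PXE clients chain-load the iPXE binary over TFTP
    filename "undionly.kpxe";
}
```

This conditional prevents the boot loop that would otherwise occur when a legacy PXE client chain-loads iPXE and then asks the DHCP server for a boot file a second time.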
After you modify the dhcpd.conf file based on your environment, you need to start or restart your DHCPD service. For more information, see http://ipxe.org/howto/dhcpd
Sample DHCP Configuration
# dhcpd.conf
#
# Sample configuration file for ISC dhcpd
#
# option definitions common to all supported networks...
#option domain-name "example.org";
#option domain-name-servers 192.168.203.46;
#filename "pxelinux.0";
next-server 192.168.123.
#subnet 10.254.239.32 netmask 255.255.255.224 {
#  range dynamic-bootp 10.254.239.40 10.254.239.60;
#  option broadcast-address 10.254.239.31;
#  option routers rtr-239-32-1.example.org;
#}

# A slightly different configuration for an internal subnet.
#subnet 10.5.5.0 netmask 255.255.255.224 {
#  range 10.5.5.26 10.5.5.30;
#  option domain-name-servers ns1.internal.example.org;
#  option domain-name "internal.example.org";
#  option routers 10.5.5.1;
#  option broadcast-address 10.5.5.
#}
#  pool {
#    allow members of "foo";
#    range 10.17.224.10 10.17.224.250;
#  }
#  pool {
#    deny members of "foo";
#    range 10.0.29.10 10.0.29.