Dell EqualLogic Best Practices Series

Sizing and Best Practices for Deploying Virtual Desktops with Dell EqualLogic Virtual Desktop Deployment Utility in a VMware Environment

A Dell Technical Whitepaper

This document has been archived and will no longer be maintained or updated. For more information, go to the Storage Solutions Technical Documents page on Dell TechCenter or contact support.
THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND. © 2012 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell.
Table of contents

1       Executive summary ............................................... 5
        1.1  Audience .................................................... 5
2       Introduction
        7.3.1  Login storm I/O ......................................... 27
        7.3.2  Steady state I/O ........................................ 30
        7.3.3  Desktop host server performance ......................... 31
8       Sizing guidelines for EqualLogic SANs
Acknowledgements This whitepaper was produced by the PG Storage Infrastructure and Solutions team based on testing conducted between August 2011 and February 2012 at the Dell Labs facility in Austin, Texas.
1 Executive summary The Dell™ EqualLogic® Host Integration Toolkit VMware Edition (HIT/VE) provides a suite of tools including Datastore Manager, Automatic Snapshot Manager, EqualLogic VASA Provider, and Virtual Desktop Deployment Utility for VMware environments. The purpose of this paper is to demonstrate how the Virtual Desktop Deployment Utility leverages the thin clone feature introduced in the EqualLogic PS Series Firmware version 5.
2 Introduction Desktop virtualization is an important strategy for organizations seeking to reduce the cost and complexity of managing an expanding variety of client desktops, laptops, and mobile handheld devices. Virtual Desktop Infrastructure (VDI) provides a method for achieving greater management efficiency and increasing the reliability of existing desktop computing resources in larger organizations.
performance demands on the storage infrastructure. There may also be situations, such as an unexpected power outage, that require booting all of the virtual desktops at the same time. This boot storm creates significantly higher IOPS on the underlying storage platform. To be successful, storage designs for a VDI deployment must take these demands into account.
In this paper, we demonstrate the benefits of the EqualLogic platform – specifically, the Dell EqualLogic Virtual Desktop Deployment Utility and the Dell EqualLogic PS6100XS array – for deploying VDI in the enterprise. 2.4 Separation of user data and virtual desktop data The centralized virtual desktop model comprises two data types: VM data and user data. VM data consists of the virtual desktop image, including the OS and common applications.
3 VMware View solution infrastructure VMware View is a VDI solution that includes a complete suite of tools for delivering desktops as a secure, managed service from a centralized infrastructure.
3.3 VMware View desktop pools A Desktop Pool is a VMware term that is used to describe the entity that is managed by the View Administration interface. View Desktop Pools allow you to group users depending on the type of service the user requires. There are two types of pools – Automated Pools and Manual Pools. In View, an Automated Pool is a collection of VMs cloned from a base template, while a Manual Desktop pool is created by the View Manager from existing desktop sources.
The core View infrastructure components used in our test configuration are shown in Figure 1: Figure 1 VDI architecture using Unified Storage: Dell EqualLogic FS7500 and EqualLogic PS Series BP1022 Dell EqualLogic Virtual Desktop Deployment Utility – Sizing and Best Practices 11
4 EqualLogic Virtual Desktop Deployment Utility The Virtual Desktop Deployment Utility is installed as part of the Dell EqualLogic HIT/VE virtual appliance and is launched from vSphere client for vCenter. The EqualLogic Virtual Desktop Deployment Utility is available under the “Solutions and Applications” tab on the home page of the vSphere client. The Virtual Desktop Deployment Utility, first released as part of HIT/VE 3.
The virtual desktop deployment process using the EqualLogic Virtual Desktop Deployment Utility and linked clones is shown in Figure 2 below: Figure 2 Virtual desktop deployment process with EqualLogic Virtual Desktop Deployment Tool In step 1, the administrator creates a gold master desktop image that is optimized for VDI deployment. As part of step 2, the administrator provides the gold master image to the Virtual Desktop Deployment Utility.
In step 3, the EqualLogic Virtual Desktop Deployment Utility creates thin clones of the template volume created in the previous step. The tool creates the appropriate number of thin clones to deploy the required number of virtual desktops. VMware linked clones are then created from the eight gold master images contained in each thin clone to provide the required number of VMs per datastore. These VMs are registered and then customized on the vCenter server.
Figure 3 Template volume considerations Figure 4 shows the capacity planning needed to deploy 512 desktops using the template volume created above. Each datastore contains 32 VMs, so 16 datastores will be needed to deploy 512 VMs. Each datastore is a thin clone volume which is expected to grow up to 160 GB in this example for differential data of all the VMs contained within it. Thus, the total volume reserve is approximately 16 * 160 GB = 2.5 TB for this 512 virtual desktop deployment scenario.
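The capacity arithmetic above generalizes in a straightforward way. A minimal sketch (illustrative only; the function and variable names are our own, not part of the utility):

```python
import math

def capacity_plan(total_vms, vms_per_datastore, reserve_per_clone_gb):
    """Sketch of the capacity planning arithmetic described above.

    Each datastore is one thin clone volume that is expected to grow up to
    reserve_per_clone_gb to hold the differential data of the VMs it hosts.
    """
    datastores = math.ceil(total_vms / vms_per_datastore)
    total_reserve_gb = datastores * reserve_per_clone_gb
    return datastores, total_reserve_gb

# The 512-desktop example from the text: 32 VMs per datastore,
# each thin clone volume growing up to 160 GB.
datastores, reserve_gb = capacity_plan(512, 32, 160)
print(datastores, reserve_gb)  # 16 2560  (i.e., ~2.5 TB total volume reserve)
```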
Figure 4 Capacity considerations A detailed step-by-step sample deployment using the wizard available in EqualLogic Desktop Deployment Utility is presented in Appendix A. A complete discussion on various sizing considerations is presented in section 8 of this document.
5 Infrastructure and test configuration In this chapter we provide information about the test setup used for hosting View virtual desktops, infrastructure components, networking, and storage subsystems. 5.1 Component design The entire infrastructure and test configuration was held within a Dell PowerEdge M1000e blade chassis. As shown in Figure 5, the 16 PowerEdge M610 servers were divided into three ESXi clusters.
o Switches in fabrics B1 and C1 were uplinked to a pair of PowerConnect 7048 switches for dedicated access to the iSCSI SAN.
o Switches in fabrics B2 and C2 were connected to the VDI LAN and provided client connectivity to the virtual desktops.

There are three physical networks that allow for segregation of different types of traffic:

• Management LAN – This network provides a separate management network for all the physical ESXi hosts, network switches, and EqualLogic arrays.
Note: For this whitepaper we used only 2 NIC ports per mezzanine card. Figure 6 iSCSI SAN connectivity 5.3 ESXi host network configuration VMware ESXi 5.0 hypervisor was installed on all 16 blades. The network configuration on each of those hosts is described below. Each ESXi host was configured with three virtual switches: vSwitch0, vSwitch1, and vSwitch2 to separate the different types of traffic on the system.
5.4 VMware View configuration

View was installed by following the documentation provided by VMware. View 5.0 Documentation: http://pubs.vmware.com/view-50/index.jsp

Here are the specific configuration choices we used in our setup:

• Two View Servers were set up to provide load balancing and high availability.
• The View Servers were installed as VMs on two separate hosts, each with two virtual CPUs, 10 GB of RAM, and a 40 GB virtual hard drive.
6 VMware View test methodology

6.1 Test objectives

• Develop best practices and sizing guidelines for a View-based VDI solution deployed on EqualLogic PS Series storage, Dell PowerConnect switches, and Dell PowerEdge blade servers with VMware ESXi 5.0 as the server virtualization platform, while utilizing the EqualLogic Virtual Desktop Deployment Utility.
6.3.2 Monitoring tools

We used the following monitoring tools:

• Dell EqualLogic SAN Headquarters (SAN HQ) for monitoring storage array performance
• VMware vCenter statistics for ESXi performance
• Login VSI Analyzer
• A custom script polling array counters for monitoring TCP retransmissions

Detailed performance metrics were captured from the storage arrays, hypervisors, virtual desktops, and the load generators while the tests were running.
6.5 Test setup

We set up two virtual desktop pools using the EqualLogic Virtual Desktop Deployment Utility. Each pool was built from a Microsoft Windows 7 base image, which was optimized for VDI deployment. View Optimization Guide for Windows 7: http://www.vmware.
7 Test results and analysis

This section shows the different View VDI characterization tests executed as well as the key findings from each test. The task worker user type represents the majority of VDI users in the industry today, and we focused our testing on this workload profile.

7.1 Test scenarios

• Boot storm – represents the worst-case scenario where many virtual desktops are started at the same time and all contend for the system resources simultaneously.
Figure 7 SAN HQ data showing storage performance during boot storm The spike seen in the figure was caused primarily by read I/O – the boot process of the virtual desktops creates many simultaneous reads to the master gold image(s). In this test, the capacity needed for all the VM data was less than the SSD capacity provided by the PS6100XS array; therefore all the IOPS were delivered by SSDs and no data movement was involved.
Figure 8 SAN HQ data showing network performance during boot storm

The above results demonstrate that a boot storm places a heavy load on system resources, and that the PS6100XS array handled the simulated boot storm workload well. Boot storms are rare, typically occurring only after a power failure or a major system outage. To alleviate extra load on the various systems, we recommend booting the desktops over a period of time rather than all at once.
7.3 Task worker – Login storm and steady state 7.3.1 Login storm I/O Login VSI was configured to launch 700 virtual desktops over a period of about one hour after prebooting the virtual desktops. The peak IOPS observed during the login storm was about 6400 IOPS.
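From the measured numbers above, a rough per-desktop figure can be derived (an observation at this scale, not a general rule):

```python
# Per-desktop login storm I/O implied by the measurements above:
# ~6400 peak IOPS across 700 concurrently logging-in desktops.
peak_iops = 6400
desktops = 700
iops_per_desktop = peak_iops / desktops
print(round(iops_per_desktop, 1))  # 9.1
```

About 9 IOPS per desktop during the login storm, for this task worker profile.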
Figure 9 below shows the IOPS and latency observed during the login storm.
The following table shows the overall usage of the disks in the array during the login storm as collected by SAN HQ for the individual disks.
Table 2 (excerpt) – SAN HQ per-disk statistics during the login storm, group DepToolPS6100XS-VDI, volume VDI-Images:

Disk 22, 10K 600GB SAS: 0 IOPS, 0 KB/sec read, 0 KB/sec write
Disk 23, 10K 600GB SAS: 0 IOPS, 0 KB/sec read, 0 KB/sec write

Table 2 clearly shows that most of the I/O is handled by the SSD drives during the login storm, which provides the best performance.

7.3.2 Steady state I/O

To support the 700 virtual desktops, we used thirteen M610 blade servers hosting around 58 VMs on each server. A single PS6100XS array was used to host all the virtual desktops.
Figure 10 SAN HQ data showing steady state I/O 7.3.3 Desktop host server performance During the steady state portion of the test, CPU, memory, network, and storage system performance were measured on all ESXi servers hosting the virtual desktops. The performance of one ESXi server is presented here. The other ESXi servers had similar performance characteristics.
Statistics for the ESXi hosts were captured using VMware vCenter server. The figures below show the CPU, memory, and network utilization on one of the ESXi servers hosting the virtual desktops. The key observations from the statistics were:

• CPU utilization was well below 80% throughout the entire test (see Figure 11).
• Active memory usage was about 35%, and no memory ballooning was observed.
Figure 12 ESXi host memory utilization (granted, swap used, balloon, consumed, shared common, and active memory over the test period)

Figure 13 ESXi host network performance (throughput in KBps over the test period)
Figure 14 ESXi storage adapter read and write latencies (read and write latency for vmhba35, in milliseconds, over the test period)
8 Sizing guidelines for EqualLogic SANs

The storage array should be able to handle the various I/O patterns that occur throughout the day in a VDI solution. These include the login storm that occurs at the beginning of a shift or workday, when the majority of employees log in to their virtual desktops in a relatively short period of time. Once they are logged in, the virtual desktops reach a steady state in which they generate predictable IOPS as the employees go about their work day.
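Per-desktop I/O figures like those measured in section 7 can be scaled to a first-order array requirement. A rough sketch; the per-desktop IOPS value and headroom factor below are illustrative assumptions, not Dell recommendations:

```python
def required_iops(desktops, iops_per_desktop, headroom=1.2):
    """First-order estimate of the array IOPS a desktop pool needs.

    headroom is a hypothetical safety margin for bursts above the
    measured workload; size it for the worst pattern you expect
    (login storm rather than steady state).
    """
    return desktops * iops_per_desktop * headroom

# Illustrative only: ~9 IOPS/desktop was observed during the login storm
# in section 7.3.1; scaling to a hypothetical 1000-desktop deployment:
print(required_iops(1000, 9))  # 10800.0
```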
A trade-off needs to be made between maximizing the number of virtual desktops in a template volume and maximizing the number of thin clone volumes (for the most space savings), so that the total number of connections created stays below the 1024-connections-per-pool limit and the end user receives the best possible performance from the virtual desktop.
It is important to note that the projected total volume reserve is an estimate based on the assumption that each virtual machine uses no more than the space reserved for its differential data (see Equation 4).
Table 3 Capacity requirements for various deployment scenarios

Total desktops | VMs per template volume | Thin clone volumes | Template volume size (GB) | Projected volume reserve for pool (GB) | Total capacity required (GB)
80  | 8  | 10 | 240 | 400  | 640
80  | 16 | 5  | 280 | 400  | 680
80  | 32 | 3  | 360 | 480  | 840
80  | 64 | 2  | 520 | 640  | 1160
160 | 8  | 20 | 240 | 800  | 1040
160 | 16 | 10 | 280 | 800  | 1080
160 | 32 | 5  | 360 | 800  | 1160
160 | 64 | 3  | 520 | 960  | 1480
320 | 8  | 40 | 240 | 1600 | 1840
320 | 16 | 20 | 280 | …    | …
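The figures in Table 3 can be reproduced with a simple model inferred from the worked examples earlier in this paper: roughly 200 GB of base image data per template volume plus a 5 GB per-VM differential reserve (20% of a nominal 25 GB gold image). This model is our inference from the numbers, not a formula published for the utility; a sketch:

```python
import math

# Assumed model (inferred from the worked examples, not an official formula):
# template volume = 200 GB of base image data + 5 GB reserve per VM in it;
# each thin clone volume grows by 5 GB per VM it hosts.
BASE_GB = 200
RESERVE_PER_VM_GB = 5

def table3_row(total_desktops, vms_per_volume):
    clones = math.ceil(total_desktops / vms_per_volume)
    template_gb = BASE_GB + vms_per_volume * RESERVE_PER_VM_GB
    reserve_gb = clones * vms_per_volume * RESERVE_PER_VM_GB
    return clones, template_gb, reserve_gb, template_gb + reserve_gb

print(table3_row(80, 32))   # (3, 360, 480, 840)  -- matches Table 3
print(table3_row(160, 64))  # (3, 520, 960, 1480) -- matches Table 3
```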
During our tests, we found no noticeable performance difference based on the number of VMs in a template volume for a given number of virtual desktops deployed.
9 Best practices

9.1 Application layer

9.1.1 Implement roaming profiles and folder redirection

It is highly recommended that all users in the VDI deployment be configured with roaming profiles and folder redirection. This allows the virtual desktops to be non-persistent by preserving user profiles across boots.
• Use separate virtual switches to segregate iSCSI SAN traffic, VDI traffic, and management network traffic.
• Assign each network path a minimum of two physical NICs for high availability.

More information is available at the following links:

• VMware KB article on best practices for installing ESXi 5.0: http://kb.vmware.com/kb/2005099
• Installing and configuring the Dell EqualLogic MEM for VMware vSphere 5: http://www.equallogic.com/WorkArea/DownloadAsset.aspx?id=10798
9.5 EqualLogic Virtual Desktop Deployment Utility

• Limit the total number of desktops per pool to no more than 512. This is a limitation of the current HIT/VE version.
• Provide at least 20% VM space reserve per virtual desktop when deploying the pool.
• If using VMware View 5.0 and ESXi 5.0, make sure to use the VMFS-5 file system when formatting datastores.

Information on known issues and limitations of the EqualLogic HIT/VE is available here: https://support.equallogic.com/support/download_file.
Appendix A EqualLogic Virtual Desktop Deployment Utility

Here is an example deployment of a virtual desktop pool using the EqualLogic Virtual Desktop Deployment Utility.

1. Launch the EqualLogic Virtual Desktop Deployment Utility in the VMware vSphere Client.

Figure 15 vCenter Home screen

2.
Figure 16 EqualLogic Virtual Desktop Deployment Utility – Login screen

3. Create a Desktop Pool

Click the “Create desktop pool” button in either of the two locations to launch the Virtual Desktop Deployment Utility.
Figure 17 Creating a virtual desktop pool

4. Desktop Pool Settings

The following table provides a short description of the fields on this screen:

Table 4 Desktop pool settings options

Field | Description
Display name | Name of the virtual desktop pool
Description | Optional description for the virtual desktop pool
Desktop persistence |
Figure 18 Desktop pool settings 5. Resource Pool Selection Select the resource pool (cluster or host) that will be used to host the deployed VMs. NOTE: View limits the maximum hosts in a VMware cluster to eight. See the following publication for more information: http://pubs.vmware.com/view-50/topic/com.vmware.view.planning.doc/GUID-E5BEA591-D4744CEE-9646-E9FB3CAF87B4.
Figure 19 Resource pool selection

6. File System Properties

The following table contains a short description of each of the fields on this screen:

Table 5 File system options

Field | Description
File system version | VMware file system version that the datastores will be formatted with. (VMware ESXi 5.0 supports VMFS-3 and VMFS-5; hosts running older versions of ESXi do not support VMFS-5.)
File system block size | Block size for each file on the datastore. Use the pull-down to select the desired block size.
Figure 20 File system options 7. Select the Virtual Machine to use as Deployment Template Select the VM that was customized to be the base image for the virtual desktops to be created. In this example we are using the template named “Win7Image”.
Figure 21 Base image selection

8. Deployment Unit Layout
Table 6 Deployment unit layout options

Field | Range | Description
Virtual machine space reserve | 10 - 100% | Each VM requires reserve space, in addition to the base OS image, to store its unique data and other VM runtime files. These are stored in the reserve space allocated on the created volume. The recommended minimum is 20% per VM.
VMs per datastore | 8 - 64 | The number of VMs that will be provisioned on a single datastore.
VM cloning | |
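The 20% minimum space reserve from Table 6 translates directly into capacity numbers. A sketch, assuming a hypothetical 25 GB Windows 7 base image (the image size is our assumption for illustration):

```python
def vm_reserve_gb(base_image_gb, reserve_pct):
    # Per-VM reserve expressed as a fraction of the base image size.
    # 0.20 (20%) is the recommended minimum from Table 6.
    return base_image_gb * reserve_pct

def datastore_reserve_gb(vms_per_datastore, base_image_gb, reserve_pct=0.20):
    # Total reserve the thin clone volume behind one datastore can grow into.
    return vms_per_datastore * vm_reserve_gb(base_image_gb, reserve_pct)

# Hypothetical 25 GB base image, 32 VMs per datastore:
print(datastore_reserve_gb(32, 25))  # 160.0
```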
9. Capacity Planning

The following table provides a short description of each of the fields on this screen:

Table 7 Capacity planning options

Field | Description
VMs per datastore | This value is carried forward from the previous screen.
Number of provisioned deployment units (datastores) | Specifies the number of datastores that need to be provisioned.
Maximum number of provisioned desktops |
Initial number of deployed desktops |
Figure 23 Capacity planning for desktop pool

10. iSCSI Access Control for EqualLogic Volumes

This screen allows for the auto-generation of ACL records for the volumes created by the Virtual Desktop Deployment Utility. Note that a datastore volume can have a maximum of 16 ACL records; if more are required, consider alternative access control methods such as CHAP authentication.
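The 16-record limit noted above amounts to a simple decision rule. A sketch (assuming one ACL record per host initiator, which is a simplification of real deployments):

```python
MAX_ACL_RECORDS = 16  # per-volume ACL record limit noted above

def access_method(host_initiators):
    """Pick per-host ACL records when they fit, otherwise fall back to CHAP.

    Simplified illustration: assumes exactly one ACL record per host
    initiator accessing the datastore volume.
    """
    return "acl" if host_initiators <= MAX_ACL_RECORDS else "chap"

print(access_method(13))  # 'acl'  (the 13-host desktop cluster here fits)
print(access_method(24))  # 'chap' (more hosts than available ACL records)
```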
Figure 24 iSCSI access control 11. Naming the Virtual Machines and storing them in vCenter This screen allows the administrator to select where the generated virtual desktops will be seen on VMware vCenter. The utility uses the virtual desktop pool name to group all desktops that are part of a desktop pool into a logical folder structure. This screen also allows administrators to pick a naming pattern for the virtual desktops that will be generated.
Figure 25 Virtual machine options 12. Desktop Pool Settings All the generated virtual desktops need to be initialized for Microsoft Windows with the sysprep utility. This screen allows the selection of a pre-configured customization specification that VMware will use to prepare these desktops. VMware vCenter Customization Specifications Manager can be used to create or modify the customization specifications used here.
Figure 26 Virtual desktop customization 13. Summary of Desktop Pool This last screen allows the administrator to see all the settings and options for the virtual desktop pool being generated. This is a summary screen and has no options that the administrator can change.
Figure 27 Virtual desktop pool summary 14. Job History Screen The job history screen is available any time by clicking the highlighted icon on the top right side of the screen in Figure 28. This screen shows the status of the current job with detailed information available in the lower half of the screen.
Figure 28 Job history
Appendix B VMware View solution configuration

Solution configuration – hardware components:

Virtual desktops:
• 13 x Dell PowerEdge M610 servers:
  o ESXi 5.0
  o BIOS version: 3.0.0
  o 2 x hexa-core Intel® Xeon® X5680 3.33 GHz processors
  o 96 GB RAM
  o 2 x 146 GB 10K SAS internal disk drives
  o 1 x dual-port Broadcom 5709 1GbE NIC (LAN on motherboard)
  o 2 x quad-port Broadcom NetXtreme II 5709s 1GbE NICs

VMware View servers:
• 3 x Dell PowerEdge M610 servers:
  o ESXi 5.
Ethernet switches:
• Firmware: 4.1.0.19

Storage:
• 1 x Dell EqualLogic PS6100XS:
  o 7 x 400GB SSD
  o 17 x 600GB 10K SAS disks
  o Dual 4-port 1GbE controllers
  o Firmware: 5.2.1
• 1 x Dell EqualLogic PS6500E:
  o 48 x 1TB 7.2K SATA disks
  o Dual 4-port 1GbE controllers
  o Firmware: 5.2.1

Performance monitoring:
• SAN Headquarters 2.2.0
• vCenter performance monitoring

Pools of desktops are stored on the PS6100XS array.
Appendix C Network design and VLAN configuration

C.1 Management LAN configuration

• Each PowerEdge M610 server has an onboard Broadcom 5709 dual-port 1GbE NIC.
• Dual PowerConnect M6220 switches are installed in fabric A of the blade chassis. The onboard LOM NICs are connected to each of the M6220 switches.
• The two PowerConnect M6220 switches are interconnected using 2 x 10GbE stacking interconnects.
C.2 VDI LAN configuration

• Users wanting to access the virtual desktops hosted by View connect on this LAN.
• The client network connections of the FS7500 are connected to this switch to provide file share capabilities to the network.
• These switches may be uplinked to external switches to provide connectivity to the rest of the organization.
Appendix D ESXi network configuration Each ESXi host was configured with three virtual switches, vSwitch0, vSwitch1, and vSwitch2. Figure 31 ESXi vSwitch logical connection paths D.1 vSwitch0 vSwitch0 provides connection paths for all management LAN traffic. The physical adapters from the two onboard NICs (Fabric A) were assigned to this switch. VLANs are used to segregate network traffic into different classes (tagged packets) within this LAN.
Figure 32 vSwitch0 D.2 vSwitch1 This virtual switch provided paths for all the iSCSI SAN traffic. Two physical adapters were assigned, one each from the mezzanine cards on Fabric B and Fabric C. In our configuration we used the software iSCSI initiator provided by the ESXi host. To take advantage of EqualLogic-aware multi-path I/O, the EqualLogic Multipathing Extension Module (MEM) for VMware vSphere was installed on each ESXi host.
D.3 vSwitch2 Two physical adapters, one each from mezzanine cards on Fabric B and C were assigned to this switch. This virtual switch carries all traffic for the Server LAN.
Related Publications

The following Dell publications are referenced in this document or are recommended sources for additional information.

• Dell EqualLogic PS Series Network Performance Guidelines: http://www.equallogic.com/resourcecenter/assetview.aspx?id=5229
• Dell EqualLogic HIT/VE Documentation (EqualLogic Support login required): https://support.equallogic.com/support/download_file.aspx?id=1268
• Dell EqualLogic PS Series arrays – Scalability and Growth in Virtual Environments: http://en.