Best Practices for Virtualizing & Managing Exchange 2013 v1.
Copyright Information © 2013 Microsoft Corporation. All rights reserved. This document is provided "as-is." Information and views expressed in this document, including URL and other Internet Web site references, may change without notice. You bear the risk of using it. This document does not provide you with any legal rights to any intellectual property in any Microsoft product. You may copy and use this document for your internal, reference purposes.
Table of Contents
Introduction
    Executive Summary
    Target Audience
    Scope
Single Exchange 2013 Virtual Machine on a Hyper-V Cluster
Resilient Exchange Configuration on a Hyper-V Cluster
System Center 2012 SP1
    Comprehensive Management Capabilities
Introduction This guide provides high-level best practices and considerations for deploying and managing Microsoft Exchange 2013 on a Windows Server 2012 Hyper-V-based virtualization infrastructure. The recommendations and guidance in this document aim to: Complement the architectural design of an organization’s specific environment.
Target Audience This guide is intended for IT professionals and technical decision makers (TDMs), including IT consultants and architects, IT managers, and messaging administrators. With this guide, IT professionals can better understand how to set up an environment for virtualizing Exchange 2013 using an integrated virtualization platform built on some of the latest Microsoft technologies, including Windows Server 2012 Hyper-V and System Center 2012 SP1.
Why Virtualize Exchange? The demand to virtualize tier-1 applications such as Exchange Server continuously increases as IT organizations push toward completely virtualized environments to improve efficiency, reduce operational and capital costs, and improve the management of IT infrastructure. By using Windows Server 2012 Hyper-V to virtualize Exchange application workloads, organizations can overcome potential scalability, reliability, and performance concerns of virtualizing such a workload.
Why Microsoft Virtualization and Management? Organizations today want the ability to consistently and coherently develop, deploy, and manage their services and applications across on-premises and cloud environments. Microsoft offers a consistent and integrated platform that spans from on-premises to cloud environments. This platform is based on key Microsoft technologies, including Windows Server 2012 Hyper-V and System Center 2012 SP1.
Fabric Configuration With Windows Server 2012 Hyper-V, customers can make the best use of new and existing server hardware investments by consolidating multiple workloads as separate virtual machines, reducing the number of physical machines in the infrastructure and improving utilization. Windows Server 2012 provides a number of compelling capabilities to help organizations build scalable, high-performing, and reliable virtualized infrastructure for their mission-critical workloads like Exchange 2013.
Hardware-enforced Data Execution Prevention (DEP) must be available and enabled. Specifically, you must enable the Intel XD bit (execute disable bit) or AMD NX bit (no execute bit). The minimum system requirements for Windows Server 2012 are as follows:
Processor: Minimum of 1.4 GHz 64-bit processor
Memory: Minimum of 512 MB
Disk: Minimum of 32 GB
Note The above are minimum requirements only.
Scalability Maximums of Windows Server 2012 Hyper-V Windows Server 2012 Hyper-V provides significant scalability improvements over Windows Server 2008 R2 Hyper-V. Hyper-V in Windows Server 2012 greatly expands support for the number of host processors and memory for virtualization—up to 320 logical processors and 4 TB physical memory, respectively.
Compute Considerations Organizations need virtualization technology that can support the massive scalability requirements of a demanding Exchange 2013 deployment. One of the key requirements for virtualizing such workloads is a large amount of processing and memory power. Therefore, when planning to virtualize mission-critical, high-performance workloads, you must properly plan for these compute resources.
SLAT technologies also help to reduce CPU and memory overhead, thereby allowing more virtual machines to run concurrently on a single Hyper-V host. The Intel SLAT technology is known as Extended Page Tables (EPT); the AMD SLAT technology is known as Rapid Virtualization Indexing (RVI), formerly Nested Paging Tables (NPT). Best Practices and Recommendations For optimal performance of demanding workloads like Exchange 2013, run Windows Server 2012 Hyper-V on SLAT-capable processors/hardware.
Figure 3: Weights and reserves in Windows Server 2012 Best Practices and Recommendations The Weights and Reserves feature, when used properly, can be a great tuning mechanism for Exchange 2013 virtual machines. If CPU resources are overcommitted by other workloads, you can set weights and reserves to optimize how these resources are used so that the Exchange 2013 VMs have priority. Ideally, you should not oversubscribe CPU resources in environments where Exchange 2013 is virtualized.
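As a rough illustration (not Hyper-V's actual scheduler, and the weight values are only assumptions mirroring the "relative weight" setting, which defaults to 100), the sketch below shows how relative weights translate into each VM's share of CPU time under contention:

```python
# Illustrative sketch, NOT a Hyper-V API: under contention, each VM's share
# of CPU time is proportional to its configured relative weight.

def cpu_share(weights: dict) -> dict:
    """Return each VM's fraction of contended CPU time, proportional to weight."""
    total = sum(weights.values())
    return {vm: w / total for vm, w in weights.items()}

# An Exchange 2013 VM given twice the default weight of two other workloads:
shares = cpu_share({"exchange-mbx1": 200, "file-srv": 100, "web-srv": 100})
print(shares["exchange-mbx1"])  # 0.5 -- Exchange gets half the CPU under contention
```

The point of the sketch is that weights only matter when hosts are contended; with spare CPU capacity, every VM gets what it asks for regardless of weight.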
NUMA is a memory design architecture that delivers significant advantages over the single system bus architecture and provides a scalable solution to memory access problems. In a NUMA-supported operating system, CPUs are arranged in smaller systems called nodes (Figure 4). Each node has its own processors and memory, and is connected to the larger system through a cache-coherent interconnect bus.
Page File Guidance When a machine runs low on memory and needs more immediately, the operating system uses hard disk space to supplement system RAM through a procedure called paging. Too much paging degrades overall system performance. However, you can optimize paging by using the following best practices and recommendations for page file placement. Best Practices and Recommendations Let the Windows Server 2012 Hyper-V host operating system handle the page file sizing. It is well optimized in this release.
Storage Considerations Storage configuration is one of the critical design considerations for any Mailbox Server role in Exchange 2013. With a growing number of physical storage devices resulting in increased power use, organizations want to reduce energy consumption and hardware maintenance costs through virtualization.
Thinly provisioned virtual disks can be provisioned from the available capacity. Thin provisioning optimizes use of the available capacity by reclaiming space when files are deleted or no longer in use. Figure 6: Conceptual deployment model for storage spaces and storage pools Types of Storage Spaces There are three key types of storage spaces: simple/striped spaces, mirror spaces, and parity spaces. Each is discussed in more detail below.
Striped storage spaces can be used for the following: Delivering the overall best performance in terms of reads and writes. Balancing the overall storage load across all physical drives. Backing up disks to increase backup throughput or to distribute the use of space across disks. Mirror spaces: This data layout process uses the concept of mirroring to create copies of data on multiple physical disks. A logical virtual disk is created by combining two or more sets of mirrored disks.
Figure 9: Parity storage space across four disks Parity storage spaces are used for the following: Providing data recovery of failed disks. Offering efficient capacity utilization. Delivering faster read operations. Providing bulk backups by writing data in large sequential append blocks. The graphs in Figure 10 show the performance scaling of a simple storage space with up to 32 disks, which achieved 1.4 million random read IOPS and 10.9 GB/sec of sequential throughput.
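To make the capacity trade-off between the three layouts concrete, here is a hypothetical comparison assuming N identical disks, a 2-way mirror, and a single-parity layout (simplified assumptions; real Storage Spaces also supports 3-way mirroring and reserves capacity for metadata):

```python
# Hypothetical capacity-efficiency comparison for the three storage space types.
# Assumes N identical disks, 2-way mirroring, and single parity (simplified).

def usable_capacity(layout: str, disks: int, disk_tb: float) -> float:
    if layout == "simple":   # striped, no resiliency: all raw capacity usable
        return disks * disk_tb
    if layout == "mirror":   # 2-way mirror: half the raw capacity
        return disks * disk_tb / 2
    if layout == "parity":   # single parity: roughly one disk's worth reserved
        return (disks - 1) * disk_tb
    raise ValueError(layout)

for layout in ("simple", "mirror", "parity"):
    print(layout, usable_capacity(layout, disks=4, disk_tb=2.0))
# simple 8.0, mirror 4.0, parity 6.0
```

This is why parity spaces are described as capacity-efficient: on four 2 TB disks they yield 6 TB usable versus 4 TB for a mirror, at the cost of slower random writes.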
Server Message Block 3.0 The Server Message Block (SMB) protocol is a network file sharing protocol that allows applications to read, create, update, and access files or other resources at a remote server. The SMB protocol can be used on top of TCP/IP or other network protocols. Windows Server 2012 introduces the new 3.0 version of the SMB protocol, which greatly enhances the reliability, availability, manageability, and performance of file servers. SMB 3.
Dual-Node File Server: In a Dual-Node File Server configuration, two file servers are clustered and use clustered storage spaces, with file shares used for VHD storage (Figure 12). This configuration provides flexibility for shared storage, fault-tolerant storage, and low acquisition and operating costs. It also offers continuous availability but with limited scalability.
Best Practices and Recommendations Fixed VHDs may be stored on SMB 3.0 file shares that are backed by block-level storage if the virtual machine is hosted on Windows Server 2012 Hyper-V (or a later version of Hyper-V). The only supported use of SMB 3.0 file shares is for storage of fixed VHDs. Such file shares cannot be used for direct storage of Exchange data. When using SMB 3.
Best Practices and Recommendations SMB Direct works with SMB Multichannel to transparently provide exceptional performance and failover resiliency when multiple RDMA links between clients and SMB file servers are detected. Also, because RDMA bypasses the kernel stack, it does not work with Network Interface Card (NIC) Teaming, but does work with SMB Multichannel (because SMB Multichannel is enabled at the application layer).
Internet SCSI The Internet Small Computer System Interface (iSCSI) protocol is based on a storage networking standard that facilitates data transfers over the Internet and manages storage over long distances, all while enabling hosts to operate as if the disks were attached locally. An iSCSI target is available as a built-in option in Windows Server 2012; it allows sharing block storage remotely by using the Ethernet network without any specialized hardware.
Best Practices and Recommendations Standard 10/100, 1 Gb, or 10 GbE Ethernet does not support FCoE. FCoE runs on versions of Ethernet that have been enhanced to provide low latency, quality of service, guaranteed delivery, and other functionality traditionally associated with channel interfaces. Fibre Channel, OM3, and OM4 cabling are suitable for FCoE and 10 GbE.
With MPIO, Windows Server 2012 efficiently manages up to 32 paths between storage devices and the Windows host operating system, and provides fault-tolerant connectivity to storage. Further, as more data is consolidated on SANs, the potential loss of access to storage resources is unacceptable. To mitigate this risk, high availability solutions like MPIO have become a requirement. MPIO provides the logical facility for routing I/O over redundant hardware paths connecting servers to storage.
To eliminate the inefficient and unnecessary steps required by traditional host-based file transfers, ODX uses a token-based mechanism for reading and writing data within or between intelligent storage arrays (Figure 16). Instead of routing the data through the host, a small token is copied between the source and destination. The token serves as a point-in-time representation of the data.
Figure 17: Faster SAN-attached virtual machine migrations with ODX Best Practices and Recommendations If you are using SAS or FC in all clustered servers, all elements of the storage stack should be identical. It is required that the MPIO and DSM software be identical. It is recommended that the mass storage device controllers (that is, the HBA, HBA drivers, and HBA firmware attached to cluster storage) be identical.
Host Resiliency with NIC Teaming NIC Teaming gives the ability to bond multiple high-speed network interfaces together into one logical NIC to support workload applications that require heavy network I/O and redundancy (Figure 18). Windows Server 2012 offers fault tolerance of network adapters with inbox NIC Teaming.
Best Practices and Recommendations We recommend that you use host-level NIC Teaming to increase resiliency and bandwidth. NIC Teaming supports up to 32 NICs from mixed vendors. NICs within a team should operate at the same speed. Hyper-V Extensible Switch Shown in Figure 19, the Hyper-V Extensible Switch is a layer-2 virtual switch that provides programmatically managed and extensible capabilities to connect virtual machines to the physical network.
In this path, the outgoing and incoming packets can be modified or examined before additional processing occurs. By accessing the TCP/IP processing path at different layers, ISVs can easily create firewalls, antivirus software, diagnostic software, and other types of applications and services. Extensions can extend or replace the following three aspects of the switching process: ingress filtering, destination lookup and forwarding, and egress filtering.
Figure 20: PVLAN in Windows Server 2012 Best Practices and Recommendations VLANs and PVLANs can be a useful mechanism to isolate different Exchange infrastructures—for instance, a service provider hosting multiple unrelated Exchange infrastructures. For customers with VLAN constraints, PVLANs enable extra levels of isolation granularity within the same VLAN. PVLANs can be configured through PowerShell.
With VMQ, interrupts were spread across more processors. However, network load can vary over time, and a fixed number of processors may not be suitable in all traffic regimes. Windows Server 2012, on the other hand, dynamically distributes the processing of incoming network traffic to host processors, based on processor use and network load. In times of heavy network load, Dynamic VMQ (D-VMQ) automatically uses more processors. In times of light network load, D-VMQ relinquishes those same processors.
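Conceptually, D-VMQ behaves like a feedback loop that grows or shrinks the set of processors handling network interrupts. The sketch below illustrates that idea only; the thresholds and bounds are assumptions, not the values Windows actually uses:

```python
# Conceptual sketch of Dynamic VMQ behavior: expand or shrink the set of host
# processors servicing network interrupts as load changes. Thresholds are
# illustrative assumptions, not Windows internals.

def adjust_vmq_processors(current: int, cpu_util: float,
                          min_procs: int = 1, max_procs: int = 8) -> int:
    if cpu_util > 0.80 and current < max_procs:   # heavy load: spread interrupts wider
        return current + 1
    if cpu_util < 0.30 and current > min_procs:   # light load: relinquish processors
        return current - 1
    return current                                # steady state: no change

print(adjust_vmq_processors(2, 0.90))  # 3 -- scales up under heavy traffic
print(adjust_vmq_processors(4, 0.10))  # 3 -- scales down when idle
```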
Host Resiliency & VM Agility For mission-critical workloads, high availability and scalability are becoming increasingly important to ensure that all users can access data and applications whenever they want. Windows Server 2012 Hyper-V provides enhanced capabilities that help to ensure that Exchange 2013 workloads are agile, easy to manage, and highly available at the hypervisor level.
Best Practices and Recommendations All hardware should be certified for Windows Server 2012, and the failover cluster solution should pass all tests in the Validate a Configuration Wizard. For more information about validating a failover cluster, see Validate Hardware for a Windows Server 2012 Failover Cluster. Windows Server 2012 Hyper-V supports scaling clusters up to 64 nodes and 8,000 virtual machines per cluster.
Figure 23: Cluster Shared Volumes Best Practices and Recommendations To provide high availability, use at least one separate CSV for Exchange 2013 databases/logs so that each mailbox database copy of a particular mailbox database is stored on separate infrastructure. In other words, in a four-copy deployment, a minimum of four CSVs should be used to isolate database copies, and the storage and networking infrastructure used to provide access to those CSVs should be completely isolated, as well.
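The "one CSV per database copy" guidance above can be expressed as a simple placement plan. This is an illustrative sketch of the mapping only (the names `DB01`, `DB02`, and `CSVn` are hypothetical), not an Exchange or VMM API:

```python
# Sketch of the "at least one CSV per mailbox database copy" guidance:
# with N copies of each database, use N isolated CSVs so that no two copies
# of the same database ever share the same storage infrastructure.

def csv_placement(copies: int, databases: list) -> dict:
    """Map each of `copies` CSVs to the database copies it should hold."""
    return {f"CSV{i + 1}": [f"{db} (copy {i + 1})" for db in databases]
            for i in range(copies)}

plan = csv_placement(copies=4, databases=["DB01", "DB02"])
print(len(plan))      # 4 -- a four-copy deployment needs at least four CSVs
print(plan["CSV1"])   # ['DB01 (copy 1)', 'DB02 (copy 1)']
```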
Cluster Networking Before you begin to construct the failover cluster that will form the resilient backbone for key virtualized workloads, it is important to ensure that the networking is optimally configured. Clusters, as a minimum, require 2 x 1 GbE network adapters; however, for a traditional production Hyper-V failover cluster, it is recommended that a greater number of adapters be used to provide increased performance, isolation, and resiliency.
Figure 24: High-level overview of cluster member converged networking configuration This converged approach can significantly reduce the number of physical NICs required in each host and, subsequently, the number of overall switch ports. Yet at the same time, the approach provides resiliency and high levels of bandwidth for key virtual machines and workloads. More information about converged infrastructure options can be found here on TechNet.
Figure 25: Cluster-Aware Updating Wizard CAU can perform the cluster updating process in two different modes: self-updating mode and remote-updating mode. In self-updating mode, the CAU clustered role is configured as a workload on the failover cluster that is to be updated. In remote-updating mode, a remote computer running Windows Server 2012 or Windows 8 is configured with the CAU clustered role. This remote computer, also called the Update Coordinator, is not part of the cluster being updated.
CAU supports an extensible architecture that helps to update the cluster node with node-updating tools and software updates that are not available from Microsoft or through Windows Update or Microsoft Update. Examples include custom software installers, updates for non-Microsoft device drivers, and network adapter/HBA firmware updating tools. This is beneficial for publishers who want to coordinate the installation of non-Microsoft software updates.
Domain controllers running on the same Hyper-V host can be migrated to different nodes to prevent loss of the domain in case of failure. Windows Server 2012 Hyper-V provides a cluster group property called AntiAffinityClassNames that can be applied to any virtual machine in the Hyper-V cluster group. This property allows preferences to be set to keep a virtual machine off the same node as other virtual machines of a similar kind.
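The placement intent behind AntiAffinityClassNames can be sketched as follows. This is an illustrative model of the preference, not the cluster service's actual algorithm, and the class name "Exchange-DAG" and node names are hypothetical:

```python
# Illustrative anti-affinity check, mirroring the intent of the
# AntiAffinityClassNames cluster group property: prefer a node that hosts
# no VM of the same class, falling back only when no such node exists.

def pick_node(vm_class: str, nodes: dict) -> str:
    """nodes maps node name -> list of anti-affinity class names already hosted."""
    empty = [n for n, classes in nodes.items() if vm_class not in classes]
    # Fall back to any node if every node already hosts this class (soft preference).
    return sorted(empty)[0] if empty else sorted(nodes)[0]

nodes = {"node1": ["Exchange-DAG"], "node2": [], "node3": ["SQL"]}
print(pick_node("Exchange-DAG", nodes))  # node2 -- keeps DAG members apart
```

Note that the real cluster property is a soft preference: if no suitable node exists, the VM is still placed rather than left offline, which the fallback branch models.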
Figure 26: Live migration with Hyper-V In addition to improvements made with the Live Migration feature, Windows Server 2012 now allows the live migration of virtual machine storage—independent of the virtual machine itself and without any downtime. This is known as live storage migration and can be initiated using the Hyper-V Manager console, Failover Cluster console, Microsoft System Center Virtual Machine Manager (SCVMM) console, or PowerShell.
Virtual Machine Configuration In addition to configuring and establishing the host server as a virtualization server with Hyper-V, it is important to design detailed architecture and system specifications for building virtual machines for expected workloads. It is also necessary to plan for needed resources for the virtual machines. The number of virtual machines you can run on any individual server depends on the server’s hardware configuration and the anticipated workloads.
The Microsoft Assessment and Planning (MAP) Toolkit reports client access and server usage information for Exchange Server deployments. The reports and proposals provided by the MAP Toolkit include a server consolidation report, server consolidation proposal, workload discovery report, and cost savings and ROI assessment. The information provided in these reports and proposals can be used to consolidate Exchange workloads, better utilize hardware resources, and determine licensing needs.
In Windows Server 2012 Hyper-V, the default virtual NUMA topology is optimized to match the NUMA topology of the host/physical computer, as shown in Figure 27. Figure 27: Guest NUMA topology by default matching host NUMA topology The best practices below provide more guidance around managing varying CPU demand, reducing overhead on the CPU, and optimizing processor performance for Exchange workloads.
Best Practices and Recommendations Crossing the NUMA boundary can reduce virtual machine performance by as much as 8 percent. Therefore, configure a virtual machine to use resources from a single NUMA node. For Exchange Server, make sure that allocated memory is equal to or smaller than a NUMA boundary. When setting NUMA node preferences (NUMA node balancing) for virtual machines, ensure that not all virtual machines are assigned to the same NUMA node.
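A quick sizing check for the "stay within a NUMA boundary" guidance might look like the following sketch. The even split of host memory across nodes is a simplifying assumption; consult the actual hardware topology when sizing real deployments:

```python
# Hypothetical sizing check: keep an Exchange VM's memory within a single
# NUMA node so the guest avoids remote-memory access penalties.
# Assumes host memory is split evenly across NUMA nodes (a simplification).

def fits_numa_node(vm_memory_gb: float, host_memory_gb: float,
                   numa_nodes: int) -> bool:
    """True if the VM's memory fits inside one NUMA node's share of host RAM."""
    node_memory = host_memory_gb / numa_nodes
    return vm_memory_gb <= node_memory

# A 256 GB host with 2 NUMA nodes has 128 GB per node:
print(fits_numa_node(96, 256, 2))   # True  -- stays within one node
print(fits_numa_node(160, 256, 2))  # False -- would cross the NUMA boundary
```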
Best Practices and Recommendations Microsoft does not support Dynamic Memory for virtual machines that run any of the Exchange 2013 roles. Exchange 2013 uses in-memory data caching to provide better performance and faster I/O operations. To achieve this, Exchange 2013 needs a substantial amount of memory at all times and full control over the memory.
Figure 28: Hyper-V Smart Paging Hyper-V Smart Paging can lead to some performance degradation due to slower disk access speeds. Therefore, to ensure that the performance impact of Smart Paging is minimized, this feature is used only when all of the following are true: The virtual machine is being restarted. No physical memory is available. No memory can be reclaimed from other virtual machines that are running on the host.
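The three conditions listed above must all hold at once before Hyper-V resorts to Smart Paging, which the following small sketch captures directly from that list:

```python
# Sketch of the conditions under which Hyper-V uses Smart Paging, taken from
# the list above: a restarting VM, no free physical memory, and no memory
# reclaimable from other running VMs. All three must hold simultaneously.

def smart_paging_used(vm_restarting: bool, physical_memory_free: bool,
                      reclaimable_from_other_vms: bool) -> bool:
    return (vm_restarting
            and not physical_memory_free
            and not reclaimable_from_other_vms)

print(smart_paging_used(True, False, False))   # True  -- restart under memory pressure
print(smart_paging_used(False, False, False))  # False -- only a restarting VM qualifies
```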
The best practices below provide more guidance around planning and managing memory for virtual machines running Exchange 2013 workloads. Best Practices and Recommendations For any virtual machine that is running Exchange 2013 roles, detailed and accurate capacity planning and sizing should be performed to determine the correct amount of minimum memory that should be assigned to the Exchange virtual machine.
VHDX File Format Hyper-V in Windows Server 2012 introduces VHDX, a new version of the virtual hard disk format that is designed to handle current and future workloads. VHDX has a much larger storage capacity than the older VHD format. It also provides protection from data corruption during power failures and optimizes structural alignments to prevent performance degradation on new, large sector physical disks.
Best Practices and Recommendations Using differencing disks and dynamically expanding disks is not supported in a virtualized Exchange 2013 environment. The thin-provisioned nature of the dynamically expanding VHDX file means that the underlying storage can become overcommitted. As each dynamically expanding VHDX file grows in size toward its configured maximum, the underlying storage could run out of space if not carefully monitored.
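The overcommitment risk described above is easy to quantify: the sum of the configured maximum sizes of dynamically expanding VHDX files can exceed the physical capacity backing them. This hypothetical monitor (the sizes and LUN capacity are made-up examples) shows the calculation:

```python
# Hypothetical monitor for thin-provisioning overcommitment: compare the
# total promised (maximum) VHDX capacity against the real backing capacity.

def overcommit_ratio(vhdx_max_sizes_gb: list, lun_capacity_gb: float) -> float:
    """Ratio > 1.0 means the storage could run out as the files grow."""
    return sum(vhdx_max_sizes_gb) / lun_capacity_gb

ratio = overcommit_ratio([500, 500, 500], lun_capacity_gb=1000)
print(ratio)        # 1.5 -- 1500 GB promised against 1000 GB of real capacity
print(ratio > 1.0)  # True -- this is exactly why Exchange requires fixed VHDX
```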
Guest Storage In addition to presenting VHD or VHDX files to Exchange 2013 virtual machines, administrators can choose to connect the guest operating system of an Exchange virtual machine directly to existing storage investments. Two methods provided in Windows Server 2012 Hyper-V are In-Guest iSCSI and Virtual Fibre Channel.
Exchange 2013 Virtual Machine Network Considerations Networking and network access are critical to the success of an Exchange deployment. Windows Server 2012 Hyper-V provides a number of capabilities, technologies, and features that an administrator can use to drive the highest levels of networking performance for the virtualized Exchange infrastructure.
Using Hyper-V Manager, you can enable SR-IOV in Windows Server 2012 when you create a virtual switch (Figure 30). Figure 30: Enabling SR-IOV in the Virtual Switch Properties window Once the virtual switch is created, SR-IOV should also be enabled while configuring a virtual machine in the Hardware Acceleration node (Figure 31).
Figure 32 shows how SR-IOV attaches a physical NIC to an Exchange 2013 virtual machine. This provides the Exchange 2013 virtual machine with a more direct path to the underlying physical network adapter, increasing performance and reducing latency—both of which are important considerations for the Exchange 2013 workload.
If the Exchange virtual machine is configured to use SR-IOV, but the guest operating system does not support it, SR-IOV VFs are not allocated to the virtual machine. We recommend that you disable SR-IOV on all virtual machines that run guest operating systems that do not support SR-IOV. Best Practices and Recommendations SR-IOV can provide the highest levels of networking performance for virtualized Exchange virtual machines.
Hyper-V Remote Desktop Virtualization Service (vmicrdv)
Hyper-V Volume Shadow Copy Requestor Service (vmicvss)
Integration Services in a child partition communicate over a VMBus with components in the parent partition virtualization stack that are implemented as virtual devices (VDevs). The VMBus supports high-speed, point-to-point channels for secure interpartition communication between child and parent partitions.
Exchange 2013 Resiliency As the backbone of business communication, messaging solutions require high availability. Exchange 2013 provides advanced capabilities to deliver a messaging solution that is always available. Moreover, the data in mailbox databases is one of the most critical business elements of any Exchange-based organization. These mailbox databases can be protected by configuring them for high availability and site resilience.
Single Exchange 2013 Virtual Machine on a Hyper-V Cluster For smaller organizations that require only a single Exchange 2013 server but still need a high level of availability, a good option is to run the Exchange 2013 server as a virtual machine on top of a Hyper-V physical cluster. Figure 33 shows two Hyper-V cluster nodes connected, in this case, to some centralized SAN storage. Note that this storage could be iSCSI or FC, or with Windows Server 2012 Hyper-V, it could also be SMB 3.0-based storage.
This configuration ensures that even under contention, the Exchange 2013 virtual machine, upon failover, will successfully start and receive the resources it needs to perform at the desired levels, taking resources from other currently running virtual machines if required. Resilient Exchange Configuration on a Hyper-V Cluster A Database Availability Group (DAG) is the base component of the high availability and site resilience framework built into Exchange 2013.
With DAGs, Exchange 2013 can work as a cluster-aware application to provide better availability and resiliency for Exchange 2013 mailboxes in a virtualized environment (Figure 36). (For more information, see the Mailbox Server Role subsection above.) Figure 36: Creating Exchange 2013 Mailbox servers In this example, a three-node Hyper-V cluster is connected to some shared storage. This storage could be an iSCSI or FC SAN, or with Windows Server 2012 Hyper-V, it could be an SMB 3.
Continuing this example, even though Mailbox Server 1 was down for a short period of time, the level of built-in high availability in Exchange 2013, at the DAG level, ensures that users are still able to access information and connect to their mailboxes via Mailbox Server 2, which was still running during the outage of Hyper-V Host 1 (Figure 37).
Deploy as much bandwidth as possible for the Live Migration network to ensure that the live migration completes as quickly as possible. In the prior example, placing Mailbox Server 1 on a separate host from Mailbox Server 2 was ideal because it ensured that if a host were lost, the entire DAG would not be lost as well. To help enforce this kind of configuration, you can use some of the features within the Hyper-V cluster, such as Preferred and Possible Owners.
System Center 2012 SP1 System Center 2012 SP1 provides several components that give IT the ability to streamline infrastructure management and—as discussed in this guide specifically—to better deploy, manage, maintain, and protect Exchange 2013 in a virtualized environment. Comprehensive Management Capabilities Cloud computing is transforming the way organizations provide and consume IT services with the promise of more productive infrastructure and more predictable applications.
Centralized Fabric Configuration System Center 2012 SP1 Virtual Machine Manager (VMM) enables the IT administrator to quickly and easily configure and manage virtualization hosts, networking, and storage resources in order to create and deploy virtual machines to host key Exchange 2013 components (Figure 38).
During an online physical-to-virtual (P2V) conversion, VMM captures data from VSS-aware applications. VMM uses the Volume Shadow Copy Service (VSS) to ensure that data is backed up consistently while the server continues to service user requests. VMM uses this read-only snapshot to create a VHD. For a busy Exchange Server, however, the point-in-time local copy for the online P2V will be out of date very quickly. Therefore, an automated offline conversion may be more appropriate.
Guest operating system profile: A guest operating system profile defines the operating system settings that will be applied to a virtual machine created with the template. This profile defines common operating system settings, including type of operating system, roles and features to be enabled, computer name, administrator password, domain name, product key, time zone, answer file, and run-once file (Figure 40).
Once the template is created, you can start to use some other profiles to enhance the template and accelerate deployment of virtual machines that have specific applications within them. One such key profile is the application profile. Application profiles provide instructions for installing Microsoft Application Virtualization (Server App-V) applications, Microsoft Web Deploy applications, and Microsoft SQL Server data-tier applications (DACs).
Figure 41: System Center 2012 SP1 Virtual Machine Manager – Intelligent Placement Intelligent Placement in VMM inputs host system data, workload performance history, and administratordefined business requirements into sophisticated algorithms. This provides easy-to-understand, ranked results that can take the guesswork out of the placement task and help to ensure that workloads are spread across physical resources for optimal performance.
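In the spirit of Intelligent Placement, host ranking can be pictured as a weighted scoring of capacity metrics. The metrics, weights, and host names below are assumptions for illustration only; VMM's actual algorithms are more sophisticated:

```python
# Illustrative host-ranking sketch in the spirit of Intelligent Placement:
# combine free-capacity metrics into a weighted score and rank hosts by it.
# Metrics and weights here are assumptions, not VMM's real algorithm.

def rank_hosts(hosts: dict, weights: dict) -> list:
    """hosts maps host name -> metric dict; returns names best-first."""
    def score(metrics: dict) -> float:
        return sum(weights[m] * metrics[m] for m in weights)
    return sorted(hosts, key=lambda h: score(hosts[h]), reverse=True)

hosts = {
    "host1": {"free_cpu": 0.2, "free_mem": 0.3, "free_disk": 0.9},
    "host2": {"free_cpu": 0.7, "free_mem": 0.6, "free_disk": 0.5},
}
ranking = rank_hosts(hosts, weights={"free_cpu": 0.5, "free_mem": 0.4, "free_disk": 0.1})
print(ranking[0])  # host2 -- the most headroom for a new Exchange VM
```

The ranked-list output mirrors what the administrator sees in the VMM placement dialog: every eligible host, ordered by suitability, rather than a single opaque choice.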
Dynamic Optimization Once Exchange 2013 virtual machines have been deployed onto the Hyper-V cluster, VMM actively monitors key cluster and host metrics—such as CPU, Memory, Disk, and Network—to see if it can better balance the virtual machine workloads across different hosts (Figure 42). For example, you may have a number of hosts in a cluster, and one of the hosts has some Exchange 2013 virtual machines that are exhibiting higher levels of demand than some others on other hosts.
Virtual Machine Priority and Affinity If you deploy virtual machines on a host cluster, you can use VMM to configure priority settings for them. With these settings, the cluster starts or places high-priority virtual machines before medium-priority or low-priority virtual machines. This ensures that the high-priority virtual machines, like those running Exchange Server, are allocated memory and other resources first, for better performance.
Figure 44: System Center 2012 SP1 Virtual Machine Manager – Availability Set for Exchange Virtual Machines Best Practices and Recommendations When creating a DAG or CAS array virtualized on top of a Hyper-V host cluster, consider keeping the individual Exchange 2013 roles on separate hosts.
Self-service: Administrators can delegate management and use of the private cloud while retaining the opaque usage model. Self-service users do not need to ask the private cloud provider for administrative changes beyond increasing capacity and quotas. Elasticity: Administrators can add resources to a private cloud to increase capacity. Optimization: Use of underlying resources is continually optimized without affecting the overall private cloud user experience.
Users who are part of the newly created group can then access the cloud and associated virtual machine templates and service templates through the VMM console or, for a true self-service experience, through System Center 2012 SP1 App Controller. App Controller Among the advantages of a private cloud is the ability to quickly provision and deprovision compute, networking, and storage resources through virtual machines.
Figure 48: Virtual machine view in App Controller for an Exchange administrator In Figure 49, the Exchange administrator chooses a particular cloud. From here, the Exchange administrator can choose from a list of provided virtual machine templates and service templates to determine the final pieces of configuration (such as Service Name, VM Name, and OS Name) for a customized deployment.
Once the virtual machine is deployed, the Exchange administrator can access it through App Controller and perform the tasks and actions that the IT administrator has enabled (Figure 50). The Exchange administrator also can connect to the virtual machine through remote desktop to perform Exchange-specific actions.
Service Manager
IT service management: System Center 2012 SP1 Service Manager provides an integrated platform for automating and adapting your organization’s IT service management best practices, such as those found in Microsoft Operations Framework (MOF) and Information Technology Infrastructure Library (ITIL). It provides built-in processes for incident and problem resolution, change control, and asset lifecycle management.
Orchestrator
Custom automation: System Center 2012 SP1 Orchestrator provides tools to build, test, debug, deploy, and manage automation in your environment. These automated procedures, called runbooks, can function independently or start other runbooks (Figure 52). The standard activities defined in every installation of Orchestrator provide a variety of monitors, tasks, and runbook controls, which you can integrate with a wide range of system processes.
Runbooks help reduce unanticipated errors and shorten service delivery time by automating the common tasks associated with enterprise tools and products.
End-to-end orchestration: Orchestration is the collective name for the automated arrangement, coordination, and management of systems, software, and practices. It enables the management of complex cross-domain processes. Orchestrator provides the tools for orchestration to combine software, hardware, and manual processes into a seamless system.
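The idea of runbooks composed of activities, with one runbook able to start another, can be sketched conceptually (illustrative Python; the names are hypothetical and this is not the Orchestrator API):

```python
# Illustrative sketch of runbook orchestration: a runbook runs a sequence
# of activities and may invoke other runbooks, as Orchestrator runbooks do.
def runbook(name, activities):
    def run(log):
        log.append(f"start {name}")
        for activity in activities:
            activity(log)
        log.append(f"end {name}")
    return run

provision_vm = runbook("Provision-VM", [lambda log: log.append("create VM")])
deploy_exchange = runbook("Deploy-Exchange", [
    provision_vm,  # one runbook starting another runbook
    lambda log: log.append("install Exchange role"),
])

log = []
deploy_exchange(log)
print(log)
```

Running the outer runbook produces the nested sequence of steps, showing how a complex cross-domain process decomposes into reusable, individually testable runbooks.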
Self-service users can request infrastructure; once the request is approved, integrated automation orchestrates delivery of and access to the infrastructure, reducing the need for IT involvement and accelerating time to market. Using the key components of System Center and the Cloud Services Process Pack, IT administrators can define a rich self-service experience for Exchange administrators who want to request infrastructure to run their Exchange workloads.
The Cloud Services Process Pack includes standard request offerings such as:
- Cloud Resources Subscription
- Cloud Resources Update Subscription
- Virtual Machine
- Virtual Machine Update
- Tenant Registration Cancellation
- Cloud Resources Subscription Cancellation

From an Exchange perspective, IT can define specific requests that relate to Exchange Server.
Operations Manager
Microsoft has a long history of defining and refining monitoring capabilities for its products. System Center 2012 SP1 Operations Manager continues this tradition as a solution with deeper insight and improved scalability. With Operations Manager, your organization can gain visibility into its infrastructure at every level of the stack, helping to ensure that the infrastructure is optimized and running efficiently.
The Microsoft Exchange 2013 Management Pack provides comprehensive service health information for an Exchange infrastructure and has been engineered for organizations that include servers running Exchange 2013. The key feature of this management pack is user-focused monitoring. The simplified dashboard focuses on the user experience and makes it easier for an administrator to quickly determine exactly what users are experiencing.
The Server Health view provides details about individual servers in your organization. Here, in Figure 58, you can see the individual health of all your servers. Using this view, you can narrow down any issues to a particular server. Figure 58: Exchange 2013 Server Health view in Operations Manager While going through the three views in the Exchange 2013 dashboard, you will notice that in addition to the State column, you have four additional health indicators (Figure 59).
The Exchange 2013 Management Pack provides simple but powerful views that make it easy and fast to determine whether your organization is healthy. The views are also robust and structured in a way that quickly guides you to the root of the problem, should an alert be triggered.
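The dashboard behavior described above relies on a health roll-up: each top-level indicator reflects the worst state reported by the monitors beneath it. A minimal conceptual sketch (illustrative Python, not the Operations Manager health model itself):

```python
# Illustrative health roll-up: a dashboard column shows the worst state
# reported by any underlying monitor, so problems surface at the top level.
SEVERITY = {"healthy": 0, "warning": 1, "critical": 2}

def rollup(states):
    """Return the most severe state in the list."""
    return max(states, key=lambda s: SEVERITY[s])

servers = {"EXCH01": ["healthy", "healthy"],
           "EXCH02": ["healthy", "warning", "critical"]}
org_state = rollup([rollup(s) for s in servers.values()])
print(org_state)  # "critical": one bad monitor bubbles up to the org view
```

Because a single critical monitor surfaces at the organization level, an administrator can start at the dashboard and drill down through the Server Health view to the specific server and monitor that raised the alert.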
Figure 61: Key capabilities of Data Protection Manager DPM provides continuous data protection for Exchange Server. DPM performs replication, synchronization, and recovery point creation to provide reliable protection and rapid recovery of Exchange data by both system administrators and end users. DPM also allows users to exclude virtual machine page files from incremental backups to improve storage usage and backup performance.
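Recovery from a loss event means rolling back to the newest recovery point taken before the failure, so recovery point frequency bounds the data you can lose. A conceptual sketch (illustrative Python; the 15-minute interval is an assumed example schedule, not a DPM default):

```python
# Illustrative model of recovery point selection: given periodic
# synchronization, find the newest recovery point at or before a failure.
def latest_recovery_point(points, failure_time):
    """Return the most recent recovery point not after the failure,
    or None if no usable point exists."""
    candidates = [p for p in points if p <= failure_time]
    return max(candidates) if candidates else None

points = [0, 15, 30, 45, 60]              # minutes past the hour
print(latest_recovery_point(points, 52))  # 45: at most 15 minutes of loss
```

With a 15-minute synchronization interval in this sketch, the worst-case data loss is bounded by that interval, which is why shorter synchronization frequencies yield tighter recovery point objectives.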
With System Center 2012 SP1, DPM can now back up data from the DPM server to offsite storage managed by the Windows Azure Backup service. (Your organization must sign up for the service, and you must download and install the Windows Azure Backup agent on the DPM server; the agent transfers the data between the server and the service.)
Conclusion
Windows Server 2012 Hyper-V is a great fit for virtualizing Exchange 2013 workloads. As demand for virtualization technology grows, Microsoft has continued to make it easier for organizations to virtualize workloads that were not previously considered good candidates. Virtualizing Exchange 2013 is a valid option for organizations looking to reclaim resources wasted by running Exchange deployments on underutilized hardware.
Additional Resources
For more information, please visit the following links:
Exchange 2013 for IT Pros: http://technet.microsoft.com/en-us/exchange/fp179701.aspx
Exchange 2013 Virtualization: http://technet.microsoft.com/en-us/library/jj619301(v=exchg.150).aspx
Windows Server 2012: http://www.microsoft.com/en-us/server-cloud/windows-server/default.aspx
Windows Server 2012 TechNet: http://technet.microsoft.com/en-us/windowsserver/hh534429.aspx
Microsoft System Center 2012: http://www.microsoft.
References
Microsoft. “Lab Validation Report: Microsoft Windows Server 2012 with Hyper-V and Exchange 2013.” http://download.microsoft.com/download/C/2/A/C2A36672-19B9-4E96-A1E0-8B99DED2DC77/ESG_Windows_Server_2012_with_Hyper-V_and%20Exchange_2013_Lab_Validation_Report.pdf
1. Gartner. “Magic Quadrant for x86 Server Virtualization Infrastructure.” Jun 2012. http://www.gartner.com/technology/reprints.do?id=1-1AVRXJO&ct=120612&st=sb
2. Microsoft. “System Center 2012 - Infrastructure Management.” http://www.
Microsoft TechNet. “Storage Spaces - Designing for Performance.” http://social.technet.microsoft.com/wiki/contents/articles/15200.storage-spaces-designing-for-performance.aspx
21. Barreto, Jose (TechNet Blog). “Hyper-V over SMB - Sample Configurations.” http://blogs.technet.com/b/josebda/archive/2013/01/26/hyper-v-over-smb-sample-configurations.aspx
22. Microsoft. “Lab Validation Report: Microsoft Windows Server 2012: Storage and Networking Analysis.” Dec 2012. http://download.microsoft.
Microsoft. “Running SQL Server with Hyper-V Dynamic Memory.” Jul 2011. http://download.microsoft.com/download/D/2/0/D20E1C5F-72EA-4505-9F26-FEF9550EFD44/Best%20Practices%20for%20Running%20SQL%20Server%20with%20HVDM.docx
43. Rabeler, Carl. “Running Microsoft SQL Server 2008 Analysis Services on Windows Server 2008 vs. Windows Server 2003 and Memory Preallocation: Lessons Learned.” Jul 2008. http://sqlcat.