Dell EMC HCI Solutions for Microsoft Windows Server
Deployment Guide
Part Number: H17977
Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2019–2021 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Contents

Chapter 1: Introduction
    Document overview
    Audience and scope
    Known issues
Chapter 2: Solution Overview
Chapter 3: Solution Deployment
Chapter 4: References
    Dell Technologies documentation
    Microsoft documentation
Appendix A: Persistent Memory for Azure Stack HCI
    Configuring persistent memory for Azure Stack HCI
    Configuring Azure Stack HCI persistent memory hosts
1 Introduction

Topics:
• Document overview
• Audience and scope
• Known issues

Document overview

This deployment guide provides an overview of Dell EMC HCI Solutions for Microsoft Windows Server, guidance on how to integrate solution components, and instructions for preparing and deploying the solution infrastructure. For end-to-end deployment steps, use the information in this guide with the information in Network Configuration and Host Configuration Options.
Known issues Before starting the cluster deployment, review the known issues and workarounds. For information about known issues and workarounds, see https://www.dell.com/support/article/sln313305.
2 Solution Overview

Topics:
• Solution introduction
• Deployment models
• Solution integration and network connectivity

Solution introduction

Dell EMC HCI Solutions for Microsoft Windows Server include various configurations of AX nodes. These AX nodes power the primary compute cluster that is deployed as a hyperconverged infrastructure (HCI). The HCI uses a flexible solution architecture rather than a fixed component design.
Figure 1. Switchless storage networking

Scalable infrastructure

The scalable offering within Dell EMC HCI Solutions for Microsoft Windows Server encompasses various AX node configurations. In this Windows Server HCI solution, as many as 16 AX nodes power the primary compute cluster. The following figure illustrates one of the flexible solution architectures.
Figure 2. Scalable solution architecture

Dell EMC HCI Solutions for Microsoft Windows Server do not include management infrastructure components, such as a cluster for hosting management VMs and services like Microsoft Active Directory, Domain Name System (DNS), Windows Server Update Services (WSUS), and Microsoft System Center components, including Operations Manager (SCOM).
If you are using a read-only domain controller (RODC) at the remote site, connectivity to the central management infrastructure with a writeable domain controller is mandatory during deployment of the Azure Stack HCI cluster.

NOTE: Dell Technologies does not support expansion of a two-node cluster to a larger cluster size. A three-node cluster provides fault tolerance only for simultaneous failure of a single node and a single drive.
Nonconverged network connectivity In the nonconverged network configuration, storage traffic uses a dedicated set of network adapters either in a SET configuration or as physical adapters. A separate set of network adapters is used for management, VM, and other traffic classes. In this connectivity method, DCB configuration is optional because storage traffic has its own dedicated fabric.
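As a rough illustration of the nonconverged layout described above, the following PowerShell sketch creates a SET-based virtual switch for management and VM traffic while leaving the storage adapters as dedicated physical ports. The adapter names, switch name, and IP addresses are placeholders, not values from this guide; run Get-NetAdapter to find the correct names on your nodes.

```powershell
# Sketch only: "NIC1"/"NIC2" and "SLOT 3 PORT 1"/"SLOT 3 PORT 2" are assumed
# adapter names. Create a SET-enabled switch for management and VM traffic.
New-VMSwitch -Name "MgmtCompute" -NetAdapterName "NIC1", "NIC2" `
    -EnableEmbeddedTeaming $true -AllowManagementOS $true

# Storage traffic stays on its own dedicated adapters; assign each port a
# static address on a separate storage subnet.
New-NetIPAddress -InterfaceAlias "SLOT 3 PORT 1" -IPAddress 192.168.100.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "SLOT 3 PORT 2" -IPAddress 192.168.101.11 -PrefixLength 24
```

Because the storage fabric is physically separate in this configuration, DCB tuning on the storage ports remains optional, as noted above.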
3 Solution Deployment

Topics:
• Introduction to solution deployment
• Deployment prerequisites
• Predeployment configuration
• Operating system deployment
• Installing roles and features
• Verifying firmware and software compliance with the support matrix
• Updating out-of-box drivers
• Changing the hostname
• Configuring host networking
• Joining cluster nodes to an Active Directory domain
• Deploying and configuring a host cluster
• Best practices and recommendations
• Recommended next steps

Deployment
Table 2. Management services

Management service                     Purpose                                            Required/optional
Active Directory                       User authentication                                Required
Domain Name System (DNS)               Name resolution                                    Required
Windows Server Update Services (WSUS)  Local source for Windows updates                   Optional
SQL Server                             Database back end for System Center Virtual        Optional
                                       Machine Manager (VMM) and System Center
                                       Operations Manager (SCOM)

Predeployment configuration

Before deploying AX nodes, complete the required predeployment configuration tasks.
Configuring BIOS settings, including the IPv4 address for iDRAC

Perform these steps to configure the IPv4 address for iDRAC. You can also follow these steps to configure any additional BIOS settings.

Steps
1. During the system boot, press F12.
2. On the System Setup Main Menu, select iDRAC Settings.
3. Under iDRAC Settings, select Network.
4. Under IPV4 SETTINGS, set Enable IPv4 to Enabled.
5. Enter the static IPv4 address details.
6. Click Back, and then click Finish.
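If the RACADM utility is available on the node, the same iDRAC IPv4 settings can also be applied from the command line instead of the BIOS menus. This is a sketch under that assumption; the address values are placeholders for your environment, and the attribute names below are from the iDRAC RACADM attribute registry.

```shell
# Sketch, assuming racadm is installed and run locally on the node.
# Disable DHCP and assign a static IPv4 address to the iDRAC.
racadm set iDRAC.IPv4.DHCPEnable 0
racadm set iDRAC.IPv4.Address 192.168.10.21
racadm set iDRAC.IPv4.Netmask 255.255.255.0
racadm set iDRAC.IPv4.Gateway 192.168.10.1
```
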
NOTE: The command output that is shown in the subsequent sections might show only Mellanox ConnectX-4 LX adapters as physical adapters. The output is shown only as an example. NOTE: For the PowerShell commands in this section and subsequent sections that require a network adapter name, run the Get-NetAdapter cmdlet to retrieve the correct value for the associated physical network port.
NOTE: Install the Storage Replica feature if the Azure Stack HCI operating system is being deployed for a stretched cluster.

NOTE: Hyper-V and the optional roles installation require a system restart. Because subsequent procedures also require a restart, the required restarts are combined into one (see the Note in the "Changing the hostname" section).
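The role and feature installation described above can be sketched with a single Install-WindowsFeature call. The exact feature set depends on your configuration; this example omits -Restart so that the reboot can be combined with the hostname change, as the Note suggests.

```powershell
# Install Hyper-V, Failover Clustering, Data Center Bridging, and the
# clustering PowerShell tools; restart deferred to a later step.
Install-WindowsFeature -Name Hyper-V, Failover-Clustering, Data-Center-Bridging, RSAT-Clustering-PowerShell `
    -IncludeAllSubFeatures -IncludeManagementTools

# For a stretched cluster on the Azure Stack HCI operating system, also:
# Install-WindowsFeature -Name Storage-Replica -IncludeManagementTools
```
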
Changing the hostname

By default, the operating system deployment assigns a random name as the host computer name. For easier identification and uniform configuration, Dell Technologies recommends that you change the hostname to something that is relevant and easily identifiable. Change the hostname by using the Rename-Computer cmdlet:

Rename-Computer -NewName S2DNode01 -Restart

NOTE: This command induces an automatic restart at the end of the rename operation.
Deploying and configuring a host cluster After joining the cluster nodes to an Active Directory domain, you can create a host cluster and configure it for Storage Spaces Direct. Creating the host cluster Verify that the nodes are ready for cluster creation, and then create the host cluster. Steps 1.
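As a sketch of the validation and creation flow described above (node names, cluster name, and cluster IP are placeholders):

```powershell
# Nodes to validate and cluster; replace with your actual node names.
$nodes = "S2DNode01", "S2DNode02", "S2DNode03", "S2DNode04"

# Validate the nodes for Storage Spaces Direct and review the generated report.
Test-Cluster -Node $nodes -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"

# Create the cluster without automatically adding eligible storage,
# then enable Storage Spaces Direct on it.
New-Cluster -Name "S2DCluster" -Node $nodes -StaticAddress 192.168.10.50 -NoStorage
Enable-ClusterStorageSpacesDirect
```

Creating the cluster with -NoStorage and enabling Storage Spaces Direct afterward lets the feature claim and pool the eligible drives itself.
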
Configuring the host management network as a lower-priority network for live migration After you create the cluster, live migration is configured by default to use all available networks. During normal operations, using the host management network for live migration traffic might impede the overall cluster role functionality and availability.
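One common way to keep live migration off the host management network is to exclude that cluster network from migration. This is a sketch; the network name filter is an assumption, so match it against the output of Get-ClusterNetwork in your cluster.

```powershell
# Find the cluster network that carries host management traffic
# ("*Management*" is a placeholder name pattern).
$mgmtNet = Get-ClusterNetwork | Where-Object { $_.Name -like "*Management*" }

# Exclude that network from live migration for virtual machine resources.
Get-ClusterResourceType -Name "Virtual Machine" |
    Set-ClusterParameter -Name MigrationExcludeNetworks -Value ([string]::Join(";", $mgmtNet.Id))
```
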
Configuring a cluster witness A cluster witness must be configured for a two-node cluster. Microsoft recommends configuring a cluster witness for a four-node Azure Stack HCI cluster. Cluster witness configuration helps maintain a cluster or storage quorum when a node or network communication fails and nodes continue to operate but can no longer communicate with one another. A cluster witness can be either a file share or a cloud-based witness.
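Both witness types mentioned above are configured with Set-ClusterQuorum. In this sketch the cluster name, file share path, and storage account details are placeholders for your environment.

```powershell
# Option 1: file share witness on a server outside the cluster.
Set-ClusterQuorum -Cluster "S2DCluster" -FileShareWitness "\\witness-server\S2DWitness"

# Option 2: cloud witness backed by an Azure storage account.
Set-ClusterQuorum -Cluster "S2DCluster" -CloudWitness `
    -AccountName "mystorageaccount" -AccessKey "<storage-account-key>"
```
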
Recommended next steps Before proceeding with operational management of the cluster, Dell Technologies recommends that you validate the cluster deployment, verify that the infrastructure is operational, and, if needed, activate the operating system license. 1.
4 References

Topics:
• Dell Technologies documentation
• Microsoft documentation

Dell Technologies documentation

These links provide more information from Dell Technologies:
● iDRAC documentation
● Support Matrix for Microsoft HCI Solutions
● Dell EMC HCI Solutions for Microsoft Windows Server—Managing and Monitoring the Solution Infrastructure Life Cycle

Microsoft documentation

The following link provides more information about Storage Spaces Direct: Storage Spaces Direct overview
Appendix A: Persistent Memory for Azure Stack HCI

Topics:
• Configuring persistent memory for Azure Stack HCI
• Configuring Azure Stack HCI persistent memory hosts

Configuring persistent memory for Azure Stack HCI

Intel Optane DC persistent memory is designed to improve overall data center system performance and lower storage latencies by placing storage data closer to the processor on nonvolatile media.
Configuring persistent memory BIOS settings

Configure the BIOS to enable persistent memory.

Steps
1. During system startup, press F12 to enter System BIOS.
2. Select BIOS Settings > Memory Settings > Persistent Memory.
3. Verify that System Memory is set to Non-Volatile DIMM.
4. Select Intel Persistent Memory.
   The Intel Persistent Memory page provides an overview of the server's Intel Optane DC persistent memory capacity and configuration.
5. Select Region Configuration.
Configuring Azure Stack HCI persistent memory hosts

Three types of device objects are related to persistent memory on Windows Server 2019: the NVDIMM root device, physical INVDIMMs, and logical persistent memory disks. In Device Manager, physical INVDIMMs are displayed under Memory devices, while logical persistent memory disks are under Persistent memory disks. The NVDIMM root device is under System devices. The scmbus.sys driver controls the NVDIMM root device, and the nvdimm.sys driver controls the physical INVDIMMs.
Managing persistent memory using Windows PowerShell

Windows Server 2019 provides a PersistentMemory PowerShell module that enables user management of the persistent storage space.

PS C:\> Get-Command -Module PersistentMemory

CommandType Name                          Version Source
----------- ----                          ------- ------
Cmdlet      Get-PmemDisk                  1.0.0.0 PersistentMemory
Cmdlet      Get-PmemPhysicalDevice        1.0.0.0 PersistentMemory
Cmdlet      Get-PmemUnusedRegion          1.0.0.0 PersistentMemory
Cmdlet      Initialize-PmemPhysicalDevice 1.0.0.0 PersistentMemory
DeviceId DeviceType                    HealthStatus OperationalStatus PhysicalLocation FirmwareRevision Persistent memory size Volatile memory size
-------- ----------                    ------------ ----------------- ---------------- ---------------- ---------------------- --------------------
1111     008906320000 INVDIMM device  Healthy      {Ok}              B11              102005395        126 GB                 0 GB
1121     008906320000 INVDIMM device  Healthy      {Ok}              B12              102005395        126 GB                 0 GB
121      008906320000 INVDIMM device  Healthy      {Ok}              A12              102005395        126 GB                 0 GB
21       008906320000 INVDIMM device  Healthy      {Ok}              A9               102005395        126 GB                 0 GB

2. Run Get-PmemUnusedRegion to verify that two unused Pmem regions are available, one region for each physical CPU.

PS C:\> Get-PmemUnusedRegion

RegionId TotalSizeInBytes DeviceId
-------- ---------------- --------
       1
       3