Storage Manager 2020 R1 Administrator's Guide
Part Number: 680-017-029
May 2021 Rev.
Preface
This guide describes how to use Storage Manager to manage and monitor your storage infrastructure. For information about installing and configuring required Storage Manager components, see the Storage Manager Installation Guide.

How to Find Information

To Find | Action
A description of a field or option in the user interface | In Storage Manager, click Help. In Unisphere, select Help from the ? dropdown menu.
Tasks that can be performed from a particular area of the user interface | 1.
Contains installation and setup information.
● Storage Manager Administrator's Guide
  Contains in-depth feature configuration and usage information.
● Unisphere and Unisphere Central for SC Series Administrator's Guide
  Contains instructions and information for managing storage devices using Unisphere and Unisphere Central for SC Series.
● Storage Manager Release Notes
  Provides information about Storage Manager releases, including new features and enhancements, open issues, and resolved issues.
Provides information about deploying an FS8600 appliance, including cabling the appliance to the Storage Center(s) and the network, and deploying the appliance using the Storage Manager software. The target audience for this document is Dell installers and certified business partners who perform FS8600 appliance installations.
● FluidFS FS8600 Appliance CLI Reference Guide
  Provides information about the FS8600 appliance command-line interface. The target audience for this document is customers.
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2020 - 2021 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Contents Chapter 1: Storage Manager Overview......................................................................................... 24 Environmental and System Requirements...................................................................................................................24 Storage Manager Components...................................................................................................................................... 24 Default Ports Used by Storage Manager...................
Charting Tab................................................................................................................................................................. 48 Alerts Tab...................................................................................................................................................................... 49 Logs Tab...................................................................................................................................................
Validate the SupportAssist Connection................................................................................................................. 65 Update Storage Center..............................................................................................................................................65 Complete Configuration and Perform Next Steps..............................................................................................
Deploy the Storage Center....................................................................................................................................... 84 Enter Key Management Server Settings............................................................................................................... 84 Create a Storage Type...............................................................................................................................................85 Configure Ports...............
Managing Storage Profiles............................................................................................................................................ 135 Create a Storage Profile (Storage Center 7.2.1 and Earlier)...........................................................................135 Create a Storage Profile (Storage Center 7.2.10 and Later).......................................................................... 136 Apply a Storage Profile to One or More Volumes.....................
Chapter 7: Managing Virtual Volumes With Storage Manager..................................................... 174 Configuring VVols in Storage Manager...................................................................................................................... 174 Safeguarding VVols Data..........................................................................................................................................174 VMware Virtual Volume Concepts............................................
Create a Snapshot Schedule...................................................................................................................................201 Modify Snapshot Properties...................................................................................................................................202 Control Snapshot Space Borrowing .................................................................................................................... 202 Set a Snapshot Online or Offline....
Managing Local Storage Center User Groups................................................................................................... 236 Enabling Directory Services Authentication....................................................................................................... 238 Managing Directory Service Users........................................................................................................................ 241 Managing Directory User Groups..............................
Create Secure Data Disk Folder............................................................................................................................ 280 Managing Data Redundancy........................................................................................................................................ 280 Redundancy Requirements..................................................................................................................................... 280 Managing RAID..............
Using the Front End IO Summary Plugin.............................................................................................................306 Using the Current Alerts Plugin............................................................................................................................. 307 Using the Replication Validation Plugin................................................................................................................
File Metadata Protection......................................................................................................................................... 341 Load Balancing and High Availability..................................................................................................................... 341 Ports Used by the FluidFS Cluster........................................................................................................................
Moving a NAS Volume Between Tenants........................................................................................................... 394 Managing NAS Volumes.......................................................................................................................................... 395 Managing SMB Shares............................................................................................................................................ 406 Managing NFS Exports..................
Change the Link Speed for a QoS Definition..................................................................................................... 498 Enable or Disable Bandwidth Limiting for a QoS Definition............................................................................ 498 Modify the Bandwidth Limit Schedule for a QoS Definition...........................................................................498 Delete a QoS Definition....................................................................
Disaster Recovery Activation Limitations............................................................................................................547 Planned vs Unplanned Disaster Recovery Activation.......................................................................................547 Disaster Recovery Activation Procedures...........................................................................................................547 Activating Disaster Recovery for PS Series Group Replications.........
Filter Threshold Alerts by Storage Center.......................................................................................................... 575 Filter Threshold Alerts by Threshold Definition Properties.............................................................................575 View the Threshold Definition that Generated an Alert.................................................................................. 576 Delete Historical Threshold Alerts...............................................
Setting Up Departments......................................................................................................................................... 599 Managing Department Line Items.........................................................................................................................600 Assigning Volumes to Chargeback Departments............................................................................................... 601 Perform a Manual Chargeback Run...................
Managing Available FluidFS Clusters..........................................................................................................................644 Delete an Available FluidFS Cluster...................................................................................................................... 644 Remove a FluidFS Cluster from a Data Collector User Account...................................................................645 Managing the Storage Manager Virtual Appliance.................
Manually Send Diagnostic Data for Multiple Storage Centers........................................................................671 Send Diagnostic Data for a Single Storage Center .......................................................................................... 671 Save SupportAssist Data to a File.........................................................................................................................672 Saving SupportAssist Data to a USB Flash Drive .............................
1 Storage Manager Overview Storage Manager allows you to monitor, manage, and analyze Storage Centers, FluidFS clusters, and PS Series Groups from a centralized management console. ● The Storage Manager Data Collector stores data and alerts it gathers from Storage Centers in an external database or an embedded database. Some functions of the Data Collector are managed by the web application Unisphere Central.
Default Ports Used by Storage Manager The Storage Manager components use network connections to communicate with each other and with other network resources. The following tables list the default network ports used by the Storage Manager Data Collector, Storage Manager Client, and Storage Manager Server Agent. Many of the ports are configurable. NOTE: Some ports might not be needed for your configuration. For details, see the Purpose column in each table.
Client Ports
Storage Manager clients use the following ports:
Inbound Ports
The Storage Manager Client and Unisphere Central do not use any inbound ports.
Outbound Ports
The Storage Manager Client and Unisphere Central initiate connections to the following port:

Port | Protocol | Name | Purpose
3033 | TCP | Web Server Port | Communicating with the Storage Manager Data Collector

Server Agent Ports
The following tables list the ports used by the Storage Manager Server Agent.
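Before troubleshooting Storage Manager itself, it can help to confirm that a firewall is not blocking the outbound connection to the Data Collector. The following sketch is not part of Storage Manager; the hostname in the example is hypothetical, and only the default port 3033 comes from the table above.

```python
import socket

def is_port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds.

    Useful for confirming that the Storage Manager Client can reach
    the Data Collector web server port (3033 by default).
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical Data Collector hostname):
# is_port_open("dc.example.local", 3033)
```

The same probe works for any of the configurable ports listed in these tables.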
Storage Manager Features Storage Manager provides the following features. Storage Management Features Storage Manager provides the following storage management features. Storage Center Management Storage Manager allows you to centrally manage multiple Storage Centers. For each Storage Center, you can configure volumes, snapshot profiles, and storage profiles. You can also present configured storage to servers by defining server objects and mapping volumes to them.
VVols Storage Manager supports the VMware virtual volumes (VVols) framework. VMware administrators use vCenter to create virtual machines and VVols. You must be connected to a Data Collector to use VVols. When properly configured, you can use Storage Manager to manage and view VVols, storage containers, datastores, and other aspects of VMware infrastructure.
Remote Data Collector A remote Data Collector is installed at a remote site and connected to the primary Data Collector to provide access to disaster recovery options when the primary Data Collector is unavailable. If the primary Data Collector is down, you can connect to the remote Data Collector at another site to perform disaster recovery.
Log Monitoring The Log Monitoring feature provides a centralized location to view Storage Center alerts, indications, and logs collected by the Storage Manager Data Collector and system events logged by Storage Manager. Related concepts Storage Center Monitoring on page 609 Performance Monitoring The Performance Monitoring feature provides access to summary information about the managed Storage Centers and historical/current I/O performance information.
Callout | Client Elements | Description
| | ● About – When clicked, opens a dialog box that displays the software version of the Storage Manager Client.
2 | Navigation pane | Displays options specific to the view that is currently selected. For example, when the Storage view is selected, the view pane displays the Storage Centers, PS Groups, and FluidFS clusters that have been added to Storage Manager.
3 | Views | Displays the view buttons.
2 Getting Started
Start the Storage Manager Client and connect to the Data Collector. When you are finished, review the next steps for suggestions on how to proceed. For instructions on setting up a new Storage Center, see Storage Center Deployment on page 50.
Topics:
• Use the Client to Connect to the Data Collector
• Next Steps
Use the Client to Connect to the Data Collector
Start the Storage Manager Client and use it to connect to the Data Collector.
3. To change the language displayed in the Storage Manager Client, select a language from the Display Language drop-down menu.
4. Specify your credentials.
● If you want to log on as a local Storage Manager user, Active Directory user, or OpenLDAP user, type the user name and password in the User Name and Password fields.
○ For OpenLDAP, the user name format is supported (example: user).
Related concepts Storage Manager User Management on page 651 Add Storage Centers to Storage Manager Use the Storage Manager Client to add Storage Centers to Storage Manager. Related concepts Adding and Organizing Storage Centers on page 92 Configure Storage Center Volumes After you have added Storage Centers to the Data Collector or connected directly to a single Storage Center, you can create and manage volumes on the Storage Centers.
Configuring Email Notifications for Threshold Alerts on page 583 Configure Storage Manager to Email Reports on page 594 Set up Remote Storage Centers and Replication QoS If you want to protect your data by replicating volumes from one Storage Center to another, set up connectivity between your Storage Centers. Create Replication Quality of Service (QoS) definitions on each Storage Center to control how much bandwidth is used to transmit data to remote Storage Centers.
3 Storage Center Overview
Storage Center is a storage area network (SAN) that provides centralized, block-level storage that can be accessed by Fibre Channel, iSCSI, or Serial Attached SCSI (SAS).
Topics:
• How Storage Virtualization Works
• User Interface for Storage Center Management
How Storage Virtualization Works
Storage Center virtualizes storage by grouping disks into pools of storage called Storage Types, which hold small chunks (pages) of data.
Switches Switches provide robust connectivity to servers, allowing for the use of multiple controllers and redundant transport paths. Cabling between controller I/O cards, switches, and servers is referred to as front-end connectivity. Enclosures Enclosures house and control drives that provide storage. Enclosures are connected directly to controller I/O cards. These connections are referred to as back-end connectivity.
○ 7K (RPM)
○ 10K (RPM)
○ 15K (RPM)
● Solid State Drives (SSDs) – SSDs are differentiated by read or write optimization.
○ Write-intensive (SLC SSD)
○ Read-intensive (MLC SSD)

Drive Spares
Drive spares are drives or drive space reserved by the Storage Center to compensate for a failed drive. When a drive fails, Storage Center restripes the data across the remaining drives.

Distributed Sparing
When updating to Storage Center version 7.3, a banner message prompts you to optimize disks.
Disk Types The type of disks present in a Storage Center determines how Data Progression moves data between tiers. Storage Center supports write-intensive SSDs, and 7K, 10K, and 15K HDDs. A minimum number of disks are required, which may be installed in the controller or in an expansion enclosure: ● An all-flash array requires a minimum of four SSDs of the same disk class, for example four write-intensive SSDs.
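The all-flash minimum above (at least four SSDs of the same disk class) can be expressed as a small check. This is an illustrative sketch of the stated rule only, not the Storage Center's own validation logic:

```python
def meets_all_flash_minimum(disks_by_class):
    """Return True if at least one disk class has four or more drives,
    matching the all-flash minimum described above.

    disks_by_class maps a disk class name to a drive count,
    e.g. {"write-intensive SSD": 4}.
    """
    return any(count >= 4 for count in disks_by_class.values())
```

For example, four write-intensive SSDs satisfy the rule, while two write-intensive plus two read-intensive SSDs do not, because the four drives must share a disk class.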
Table 4. SSD Redundancy Recommendations and Requirements

Disk Size | Level of Redundancy Recommended or Enforced
Up to 18 TB | Dual redundant is the recommended level. NOTE: Non-redundant storage is not an option for SCv2000 Series storage systems.
18 TB and higher | Dual redundant is required and enforced

Data Progression
Storage Center uses Data Progression to move data within a virtualized storage environment.
Emergency Mode
Storage Center enters Emergency mode when the system can no longer operate because it does not have enough free space. In Emergency mode, Storage Center responds with the following actions:
● Generates an Emergency Mode alert.
● Expires snapshots at a faster rate than normal (Storage Center version 7.2 and earlier).
● Prevents new volume creation.
● Volumes become either inaccessible or read-only.
Storage Center Operation Modes
Storage Center operates in four modes: Install, Pre-Production, Production, and Maintenance.

Name | Description
Install | Storage Center is in Install mode before completing the setup wizard for the Storage Center. Once setup is complete, Storage Center switches to Pre-Production mode.
Pre-Production | During Pre-Production mode, Storage Center suppresses alerts sent to support so that expected test scenarios do not generate support alerts.
When a volume uses the Recommended profile, all new data is written to Tier 1 RAID level 10 storage. Data Progression moves less active data to Tier 1 RAID 5/RAID 6 or a slower tier based on how frequently the data is accessed. In this way, the most active blocks of data remain on high-performance drives, while less active blocks automatically move to lower-cost, high-capacity SAS drives.
Flash-Optimized with Progression (Tier 1 to All Tiers) The Flash Optimized with Progression storage profile provides the most efficient storage for an enclosure containing both read-intensive and write-intensive SSDs. When a storage type uses this profile, all new data is written to write-intensive Tier 1 drives. Snapshot data is moved to Tier 2, and less-active data progresses to Tier 3.
Maximize Efficiency Maximize Efficiency writes new data to RAID 5/6 and keeps snapshot data on RAID 5/6. Use Maximize Efficiency for volumes with less-important data and infrequently used data. User Interface for Storage Center Management Most storage configuration and management for an individual Storage Center is performed from the Storage view in the Storage Manager Client. Select a Storage Center in the Storage navigation pane to view and manage it.
Storage Tab
The Storage tab of the Storage view allows you to view and manage storage on the Storage Center. This tab is made up of two elements: the navigation pane and the right pane.
Figure 5. Storage Tab

Call Out | Name
1 | Navigation pane
2 | Right pane

Navigation Pane
The Storage tab navigation pane shows the following nodes:
● Storage Center: Shows a summary of current and historical storage usage on the selected Storage Center.
Right Pane The right pane shows information and configuration options for the node or object selected in the navigation pane. The information and configuration options displayed for each node are described in the online help.
IO Usage Tab The IO Usage tab of the Storage view displays historical performance statistics for the selected Storage Center and associated storage objects. This tab is visible only when connected to the Storage Center through the Data Collector. Figure 7. Storage View IO Usage Tab Related concepts Viewing Historical IO Performance on page 312 Charting Tab The Charting tab of the Storage view displays real-time IO performance statistics for the selected storage object. Figure 8.
Alerts Tab The Alerts tab displays alerts for the Storage Center. Figure 9. Alerts Tab Logs Tab The Logs tab displays logs from the Storage Center. Figure 10.
4 Storage Center Deployment Use the Discover and Configure Uninitialized Storage Centers or Configure Storage Center wizard to set up a Storage Center to make it ready for volume creation and storage management. After configuring a Storage Center, you can set up a localhost, or a VMware vSphere or vCenter host.
Steps 1. Click the Storage view. 2. In the Storage pane, click Storage Centers. 3. In the Summary tab, click Discover and Configure Uninitialized Storage Centers . The Discover and Configure Uninitialized Storage Centers wizard opens. Select a Storage Center to Initialize The next page of the Discover and Configure Uninitialized Storage Centers wizard provides a list of uninitialized Storage Centers discovered by the wizard. Steps 1. Select the Storage Center to initialize. 2.
4. Type the subnet mask of the management network in the Subnet Mask field. 5. Type the gateway address of the management network in the Gateway IPv4 Address field. 6. Type the domain name of the management network in the Domain Name field. 7. Type the DNS server addresses of the management network in the DNS Server and Secondary DNS Server fields. 8. Click Next. Set Administrator Information The Set Administrator Information page allows you to set a new password and an email address for the Admin user.
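Before entering the values in the steps above, it can help to confirm that the management IP address, subnet mask, and gateway are mutually consistent. A sketch using Python's standard ipaddress module; the sample addresses are hypothetical:

```python
import ipaddress

def check_mgmt_settings(ip, netmask, gateway):
    """Verify that the management IP and gateway share a subnet.

    Raises ValueError if the gateway falls outside the network
    implied by the IP address and subnet mask.
    """
    network = ipaddress.ip_network(f"{ip}/{netmask}", strict=False)
    if ipaddress.ip_address(gateway) not in network:
        raise ValueError(f"gateway {gateway} is outside {network}")
    return network

# Hypothetical management network:
# check_mgmt_settings("192.168.10.20", "255.255.255.0", "192.168.10.1")
```

A gateway outside the subnet is one of the more common reasons a newly initialized Storage Center cannot be reached over the management network.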
Deploy the Storage Center
The Storage Center sets up the controller using the information provided on the previous pages.
Steps
1. The Storage Center performs system setup tasks. The Deploy Storage Center page displays the status of these tasks.
● To learn more about the initialization process, click More information about Initialization.
● If one or more of the system setup tasks fails, click Troubleshoot Initialization Error to learn how to resolve the issue.
Configure SMTP Server Settings If you have an SMTP server, configure the SMTP email settings to receive information from the Storage Center about errors, warnings, and events. Steps 1. By default, the Enable SMTP Email checkbox is selected and enabled. If you do not have an SMTP server you can disable SMTP email by clearing the Enable SMTP Email checkbox. 2. Alternatively, if you have an SMTP server, configure the SMTP server settings. a.
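The sender address and Common Subject Line configured in these steps shape every notification email the Storage Center sends. The sketch below illustrates how such a message might be assembled; the addresses and subject text are hypothetical, and Storage Center builds its notifications internally:

```python
from email.message import EmailMessage

def build_notification(sender, recipient, common_subject, body):
    """Assemble a notification email with an optional common
    subject-line prefix, mirroring the SMTP settings described above."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    subject = "Storage Center notification"
    if common_subject:
        # The Common Subject Line is prepended to every message.
        subject = f"{common_subject}: {subject}"
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

# Sending would use smtplib.SMTP against the server from step 2,
# e.g. smtplib.SMTP("smtp.example.local").send_message(msg)
```

Testing delivery through the configured SMTP server with a message like this can confirm the settings before relying on them for error and warning notifications.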
Provide Contact Information Specify contact information for technical support to use when sending support-related communications from SupportAssist. Steps 1. Specify the contact information. 2. (Storage Center 7.2 or earlier) To receive SupportAssist email messages, select the Send me emails from SupportAssist when issues arise, including hardware failure notifications check box. 3. Select the preferred contact method, language, and available times. 4. (Storage Center 7.
1. Click Accept SupportAssist Data Collection and Storage Agreement to review the agreement. 2. Select By checking this box you accept the above terms. 3. Click Next. The Storage Center attempts to contact the SupportAssist Update Server to check for updates. ● The Setup SupportAssist Proxy Settings dialog box appears if the Storage Center cannot connect to the SupportAssist Update Server. If the site does not have direct access to the Internet but uses a web proxy, configure the proxy settings: 1.
Related concepts Creating Volumes on page 98 Related tasks Create a Server from the localhost on page 147 Create a Server from a VMware vSphere Host on page 148 Create a Server from a VMware vCenter Host on page 148

Configure Embedded iSCSI Ports
Configure the embedded Ethernet ports on the Storage Center for use as iSCSI ports.
Prerequisites
The storage system must be one of the following:
● SCv2000
● SCv2020
● SCv2080
● SC4020
Steps 1.
● The Storage Manager Client must be run using Windows Administrator privileges. Steps 1. Open the Storage Manager Client welcome screen. 2. Click Discover and Configure Uninitialized Storage Centers. The Discover and Configure Uninitialized Storage Centers wizard opens. Open the Discover and Configure Uninitialized Storage Centers Wizard from the Storage Manager Client Open the wizard from the Storage Manager Client to discover and configure a Storage Center.
b. If the cable is not properly connected or the host cannot access the controller, an Error setting up connection message is displayed. Correct the connection, and click OK. c. If the deployment wizard is closed, click Discover and Configure Uninitialized Storage Centers to relaunch the deployment wizard. d. Type Admin in the User Name field, type the password entered on the Set Administrator Information page in the Password field, and click Next.
Set Administrator Information The Set Administrator Information page allows you to set a new password and an email address for the Admin user. Steps 1. Enter a new password for the default Storage Center administrator user in the New Admin Password and Confirm Password fields. 2. Enter the email address of the default Storage Center administrator user in the Admin Email Address field. 3. Click Next. ● For a Fibre Channel or SAS storage system, the Confirm Configuration page appears.
6. If the key management server requires a password to validate the Storage Center certificate, enter the password in the Password field. 7. Click Browse next to the Root CA Certificate. Navigate to the location of the root CA certificate on your computer and select it. 8. Click Browse next to the certificate fields for the controllers. Navigate to the location of the controller certificates on your computer and select them. 9. Click Next.
Configure Fibre Channel Ports For a Storage Center with Fibre Channel front-end ports, the Review Fault Domains page displays information about the fault domains that were created by the Storage Center. Prerequisites One port from each controller within the same fault domain must be cabled. NOTE: If the Storage Center is not cabled correctly to create fault domains, the Cable Ports page opens and explains the issue. Click Refresh after cabling more ports. Steps 1.
● The ports for each fault domain must be cabled to the same server. Steps 1. Review the information on the SAS - Cable Ports page. If the Storage Center is not cabled correctly to create fault domains, fix the cabling and click Refresh. 2. Click Next. The SAS – Review Fault Domains page opens. 3. Review the fault domains that have been created. 4. (Optional) Click Copy to clipboard to copy the fault domain information. 5. (Optional) Review the information on the Hardware and Cabling Diagram tabs. 6.
f. (Optional) In the Common Subject Line field, enter a subject line to use for all emails sent by the Storage Center. 3. Click Next. Set Up SupportAssist If the storage system is running Storage Center 7.3 or later, the Set Up SupportAssist page opens. About this task Use the Set Up SupportAssist page to enable SupportAssist. Steps 1.
2. Type a shipping address where replacement Storage Center components can be sent. 3. Click Next. The Confirm Enable SupportAssist dialog box opens. 4. Click Yes. Validate the SupportAssist Connection If the storage system is running Storage Center 7.3 or later, the Validate SupportAssist Connection page opens. About this task The Validate SupportAssist Connection page displays a summary of the SupportAssist contact information and confirms that the Storage Center is connected to SupportAssist.
Related concepts Creating Volumes on page 98 Related tasks Create a Server from the localhost on page 147 Create a Server from a VMware vSphere Host on page 148 Create a Server from a VMware vCenter Host on page 148 Discover and Configure Uninitialized SC5020 and SC7020 Storage Systems Use the Discover and Configure Uninitialized Storage Centers wizard to find and configure new SC5020, SC5020F, SC7020, or SC7020F storage systems.
Introduction to Storage Center Initial Configuration The Storage Center Initial Configuration page provides a list of prerequisite actions that must be performed and information that is required to initialize a Storage Center. Prerequisites ● The host server, on which the Storage Manager software is installed, must be on the same subnet or VLAN as the Storage Center. ● Layer 2 multicast must be allowed on the network.
c. If the deployment wizard is closed, click Discover and Configure Uninitialized Storage Centers to relaunch the deployment wizard. d. Type Admin in the User Name field, type the password entered on the Set Administrator Information page in the Password field, and click Next. Welcome Page Use the Welcome page to verify the Storage Center information, and optionally change the name of the Storage Center. Steps 1. Verify that the Service Tag and serial number match the Storage Center to set up. 2.
Set Administrator Information The Set Administrator Information page allows you to set a new password and an email address for the Admin user. Steps 1. Enter a new password for the default Storage Center administrator user in the New Admin Password and Confirm Password fields. 2. Enter the email address of the default Storage Center administrator user in the Admin Email Address field. 3. Click Next. ● For a Fibre Channel or SAS storage system, the Confirm Configuration page appears.
6. If the key management server requires a password to validate the Storage Center certificate, enter the password in the Password field. 7. Click Browse next to the Root CA Certificate. Navigate to the location of the root CA certificate on your computer and select it. 8. Click Browse next to the certificate fields for the controllers. Navigate to the location of the controller certificates on your computer and select them. 9. Click Next.
Configure Fibre Channel Ports Create a Fibre Channel fault domain to group FC ports for failover purposes. Steps 1. On the first Configure Fibre Channel Fault Tolerance page, select a transport mode: Virtual Port or Legacy. 2.
5. Click Next. ● If you are setting up SAS back-end ports, the Configure Back-End Ports page opens. ● If you are not setting up SAS back-end ports, the Inherit Settings or Time Settings page opens. Configure SAS Ports For a Storage Center with SAS front-end ports, the Review Fault Domains page displays information about the fault domains that were created by the Storage Center. Prerequisites ● One port from each controller within the same fault domain must be cabled.
Configure SMTP Server Settings If you have an SMTP server, configure the SMTP email settings to receive information from the Storage Center about errors, warnings, and events. Steps 1. By default, the Enable SMTP Email checkbox is selected and enabled. If you do not have an SMTP server, you can disable SMTP email by clearing the Enable SMTP Email checkbox. 2. Alternatively, if you have an SMTP server, configure the SMTP server settings. a.
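The alert messages the Storage Center relays through the SMTP server are ordinary email. As a minimal sketch of what such a message looks like — the server name and addresses below are hypothetical examples, not values from this guide — it can be built and sent with Python's standard email and smtplib modules:

```python
from email.message import EmailMessage

def build_alert_email(sender, recipient, subject, body):
    """Build an alert email of the kind a Storage Center relays via SMTP."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

# Hypothetical addresses for illustration only.
msg = build_alert_email(
    "storagecenter@example.com",
    "admin@example.com",
    "Storage Center alert test",
    "Test message to confirm the SMTP relay settings.",
)

# Sending would use smtplib and requires a reachable SMTP server:
# import smtplib
# with smtplib.SMTP("smtp.example.com", 25) as smtp:
#     smtp.send_message(msg)
```

Sending a test message like this from a host on the same network is a quick way to confirm the SMTP server accepts relay before entering its address in the wizard.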
Provide Contact Information Specify contact information for technical support to use when sending support-related communications from SupportAssist. Steps 1. Specify the contact information. 2. (Storage Center 7.2 or earlier) To receive SupportAssist email messages, select the Send me emails from SupportAssist when issues arise, including hardware failure notifications check box. 3. Select the preferred contact method, language, and available times. 4. (Storage Center 7.
Create a Server from a VMware vSphere Host on page 148 Create a Server from a VMware vCenter Host on page 148 Configuring SC4020 and SC8000 Storage Centers Use the Configure Storage Center wizard to set up a new SC4020 or SC8000 Storage Center. The wizard helps configure the Storage Center to make it ready for volume creation. Prerequisites ● The Storage Manager Client must be running on a system with a 64-bit operating system.
Submit the Storage Center License Use the Submit Storage Center License page to type the name and title of the approving customer and to select the Storage Center license file. Steps 1. Click Browse. The Select License File window opens. 2. Browse to the location of the license file, select the file, and then click Select. 3. Verify the approving customer information and license file path, then click Next. The Create Disk Folder page opens.
Set System Information The Set System Information page allows you to enter Storage Center and storage controller configuration information to use when connecting to the Storage Center using Storage Manager. Steps 1. Type a descriptive name for the Storage Center in the Storage Center Name field. 2. If the storage system is running Storage Center 7.3 or later, select the network configuration option from the Network Configuration Source drop-down menu. ● DHCP IPv4 – Selected by default.
3. Select Configure Fault Domains next to SAS (Front-End) to set up front-end SAS ports to connect directly to hosts. 4. Click Next. Configure Fibre Channel Ports Create a Fibre Channel fault domain to group FC ports for failover purposes. Steps 1. On the first Configure Fibre Channel Fault Tolerance page, select a transport mode: Virtual Port or Legacy. 2.
● Click Create Fault Domain to create a new fault domain. ● Click Edit Fault Domain to edit the current fault domain. ● Click Remove to delete a fault domain. 5. Click Next. ● If you are setting up SAS back-end ports, the Configure Back-End Ports page opens. ● If you are not setting up SAS back-end ports, the Inherit Settings or Time Settings page opens. Configure SAS Fault Domains Specify the number of fault domains to create for front-end SAS ports on SC4020 controllers. Steps 1.
Configure Time Settings Configure an NTP server to set the time automatically, or set the time and date manually. Steps 1. From the Region and Time Zone drop-down menus, select the region and time zone used to set the time. 2. Select Use NTP Server and type the host name or IPv4 address of the NTP server, or select Set Current Time and set the time and date manually. 3. Click Next.
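The region and time zone selected here determine how timestamps are interpreted and displayed. As a sketch of what that conversion involves — the zone name below is an arbitrary example, not a value from this guide — a UTC event time maps into the configured zone like this:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def localize_event(utc_time, zone_name):
    """Convert a UTC event timestamp to the configured display time zone."""
    return utc_time.astimezone(ZoneInfo(zone_name))

# "America/Chicago" stands in for whatever region/time zone is chosen.
event = datetime(2021, 5, 1, 12, 0, tzinfo=timezone.utc)
local = localize_event(event, "America/Chicago")
print(local.isoformat())  # 2021-05-01T07:00:00-05:00
```

Using an NTP server rather than setting the time manually keeps timestamps like these consistent across controllers and hosts.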
5. (Storage Center 7.2 or earlier) Click Finish. 6. (Storage Center 7.3 or later) Click Next. Provide Site Contact Information If the storage system is running Storage Center 7.3 or later, specify the site contact information. Steps 1. Select the Enable Onsite Address checkbox. 2. Type a shipping address where replacement Storage Center components can be sent. 3. Click Next. The Confirm Enable SupportAssist dialog box opens. 4. Click Yes.
● SCv2020 ● SCv2080 ● SC4020 Steps 1. Configure the fault domain and ports (embedded fault domain 1 or Flex Port Domain 1). NOTE: The Flex Port feature allows both Storage Center system management traffic and iSCSI traffic to use the same physical network ports. However, for environments where the Storage Center system management ports are mixed with network traffic from other devices, separate the iSCSI traffic from management traffic using VLANs. a.
11. Make sure that you have the required information that is listed on the first page of the wizard. This information is needed to configure the Storage Center. 12. Click Next. The Submit Storage Center License page opens. Welcome Page Use the Welcome page to verify the Storage Center information, and optionally change the name of the Storage Center. Steps 1. Verify that the Service Tag and serial number match the Storage Center to set up. 2. (Optional) Type a name for the Storage Center. 3. Click Next.
Create a Disk Folder Create a disk folder to manage unassigned disks. Steps 1. Type a name for the disk folder. 2. (Optional) To create a secure disk folder, select the Create as a Secure Data Folder checkbox. NOTE: This option is available only if all drives support Secure Data. 3. Click Change to open a dialog box for selecting the disks to assign to the folder. 4. Click Next. The Create Storage Type page opens. 5. Select the redundancy level from the drop-down menu for each disk tier. 6.
4. To add alternate key management servers, type the host name or IP address of another key management server in the Alternate Hostnames area, and then click Add. 5. If the key management server requires a user name to validate the Storage Center certificate, enter the name in the Username field. 6. If the key management server requires a password to validate the Storage Center certificate, enter the password in the Password field. 7. Click Browse next to the Root CA Certificate.
4. Click Next. Configure Fibre Channel Ports Create a Fibre Channel fault domain to group FC ports for failover purposes. Steps 1. On the first Configure Fibre Channel Fault Tolerance page, select a transport mode: Virtual Port or Legacy. 2.
● Click Edit Fault Domain to edit the current fault domain. ● Click Remove to delete a fault domain. 5. Click Next. ● If you are setting up SAS back-end ports, the Configure Back-End Ports page opens. ● If you are not setting up SAS back-end ports, the Inherit Settings or Time Settings page opens. Inherit Settings Use the Inherit Settings page to copy settings from a Storage Center that is already configured. Prerequisites You must be connected through a Data Collector. Steps 1.
Set Up SupportAssist If the storage system is running Storage Center 7.3 or later, the Set Up SupportAssist page opens. About this task Use the Set Up SupportAssist page to enable SupportAssist. Steps 1. To allow SupportAssist to collect diagnostic data and send this information to technical support, select the Receive proactive notifications, notices, and other predictive support checkbox. 2. Click Next.
4. Click Yes. Validate the SupportAssist Connection If the storage system is running Storage Center 7.3 or later, the Validate SupportAssist Connection page opens. About this task The Validate SupportAssist Connection page displays a summary of the SupportAssist contact information and confirms that the Storage Center is connected to SupportAssist. Steps ● To complete the SupportAssist setup, click Finish. Complete Configuration and Perform Next Steps The Storage Center is now configured.
Steps 1. On the Configuration Complete page of the Discover and Configure Storage Center wizard, click Configure this host to access a Storage Center. The Set up localhost on Storage Center wizard appears. 2. Click Next. ● If the Storage Center has iSCSI ports and the host is not connected to an iSCSI interface, the Log into Storage Center via iSCSI page appears. Select the target fault domains, and then click Next. ● In all other cases, the Verify localhost Information page appears. 3.
Set Up a VMware vCenter Host from Initial Setup Configure a VMware vCenter host to access block-level storage on the Storage Center. Prerequisites ● The Storage Manager Client must be running on a system with a 64-bit operating system. ● The Storage Manager Client must be run by a Storage Manager Client user with the Administrator privilege. ● The Storage Center must be added to Storage Manager using a Storage Center user with the Administrator or Volume Manager privilege.
5 Storage Center Administration Storage Center provides centralized, block-level storage that can be accessed by Fibre Channel, iSCSI, or SAS connections.
User Privilege Levels Each user is assigned a single privilege level. Storage Center has three levels of user privilege. Table 5. Storage Center User Privilege Levels Privilege Level Allowed Access Administrator Read and write access to the entire Storage Center (no restrictions). All Administrators have the same predefined privileges. Only Administrators can manage users and user groups. Volume Manager Read and write access to the folders associated with the assigned user groups.
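Because the levels strictly widen in scope, they can be thought of as an ordered hierarchy. The sketch below models that ordering, assuming the third level (truncated from the table above) is the read-only Reporter role:

```python
from enum import IntEnum

class Privilege(IntEnum):
    """Storage Center privilege levels, ordered by scope of access."""
    REPORTER = 1        # read-only access (assumed third level)
    VOLUME_MANAGER = 2  # read/write within the assigned user groups' folders
    ADMINISTRATOR = 3   # unrestricted read/write access

def can_manage_users(privilege):
    # Only Administrators can manage users and user groups.
    return privilege == Privilege.ADMINISTRATOR
```

A check like `can_manage_users` mirrors the rule stated above: user and user-group management is reserved for the Administrator level alone.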
● If one or more Storage Centers are mapped to another user, the dialog box displays a list of available Storage Centers. ● If no Storage Centers are mapped to another user, the dialog box allows you to enter a new Storage Center. 4. (Conditional) If the dialog box is displaying a list of Storage Centers, select a Storage Center from the list or add a new one.
a. Select the Storage Center from which you want to inherit settings, then click Next. The wizard advances to the next page. b. Select the check box for each category of settings that you want to inherit. For user interface reference information, click Help. c. When you are done, click Finish. ● If passwords are not configured for the SupportAssist proxy, Secure Console proxy, or SMTP server, the dialog box closes.
Create a Storage Center Folder Use folders to group and organize Storage Centers. Steps 1. Click the Storage view. 2. In the Storage pane, select Storage Centers. 3. In the Summary tab, click Create Folder. The Create Folder dialog box opens. 4. If a Storage Center is selected from the drop-down list in Unisphere Central, click (Home). The Unisphere Central Home page is displayed. 5. In the Name field, type a name for the folder. 6. In the Parent field, select a parent folder. 7. Click OK.
4. In the Parent area, select the Storage Centers node or a parent folder. 5. Click OK. Delete a Storage Center Folder Delete a Storage Center folder if it is no longer needed. Prerequisites The Storage Center folder must be empty. Steps 1. Click the Storage view. 2. In the Storage pane, select the Storage Center folder to delete. 3. In the Summary tab, click Delete. The Delete dialog box opens. 4.
Icon Description The volume is the source for a replication to a remote Storage Center. NOTE: This icon is also displayed for volumes that have been configured to Copy, Mirror, or Migrate in the Storage Center Manager. These operations are not available in the Storage Manager Client. The volume is the destination for a replication from a remote Storage Center. The volume is the primary or secondary volume in a Live Volume. The volume is the source or destination of Live Migration.
● To adjust the Read/Write Cache, enter the desired size of the cache. ● To configure Replications and Live Volumes if they are licensed, select Replications and Live Volumes. 9. Click OK. Create a Volume Using the Multiple-step Wizard The multiple-step wizard is the default method of creating volumes for SCv2000 series controllers. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3.
The Replication Tasks page appears. This step appears only if Replication is licensed. 15. Set the Replication options for the new volume. ● To create the volume without setting up a replication, select No Replication or Live Volume. ● To create a volume as a replication, select Replication Volume to Another Storage Center. ● To create the volume as a Live Volume, select Create as Live Volume. 16. Click Next. The Volume Summary page opens. 17. Click Finish.
7. When you are finished, click OK. Create Multiple Volumes Simultaneously Using the Multiple-Step Wizard If you need to create many volumes, you can streamline the process by creating multiple volumes at a time. The multiple-step wizard is the default way to create volumes for the SCv2000 series controllers, and the only method available for direct connect SCv2000 series controllers to create multiple volumes simultaneously. Steps 1.
NOTE: The storage options vary based on the features the Storage Center supports. 10. Click Next. The Set Snapshot Profiles page opens. 11. Select a Snapshot Profile. ● (Optional) To create a new Snapshot Profile, click Create New Snapshot Profile. 12. Click Next. The Map to Server page opens. 13. Select a server. For more detailed options, click Advanced Mapping. To create a volume without selecting a server, click Yes to the No Server Specified dialog. To create a new server, click New Server. 14.
Rename a Volume A volume can be renamed without affecting its availability. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the Storage tab navigation pane, select the volume you want to modify. 4. In the right pane, click Edit Settings. The Edit Volume dialog box opens. 5. In the Name field, type a new name for the volume. 6. Click OK.
NOTE: A volume that is expanded to a configured size greater than half of the supported maximum volume size, as defined in the Storage Center Release Notes, no longer supports view volumes. Enable or Disable Read/Write Caching for a Volume Read and write caching generally improves performance. However, on volumes that use SSD storage, disabling write caching can improve performance. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2.
4. In the right pane, select the volumes that you want to modify. ● To select contiguous volumes, select the first volume, then hold down Shift and select the last volume. ● To select individual volumes, hold down Control while selecting them. 5. Right-click the selection, then select Set Snapshot Profiles. The Set Snapshot Profiles dialog box opens. 6. In the upper table, select the check box for each Snapshot Profile you want to assign to the volume. 7.
Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the Storage tab navigation pane, select the volume to modify. 4. In the right pane, click Edit Settings. The Edit Volume Settings dialog box appears. 5. Click Edit Advanced Volume Settings. The Edit Advanced Volume Settings dialog box appears. 6. Select the Import to lowest tier checkbox. 7. Click OK.
5. Click Edit Advanced Volume Settings. The Edit Advanced Volume Settings dialog box appears. 6. In the OpenVMS Unique Disk ID field, type a new disk ID. 7. Click OK to close the Edit Advanced Volume Settings dialog box, then click OK to close the Edit Volume dialog box. Configure Related View Volume Maximums for a Volume For a given volume, you can configure the maximum number of view volumes, including the original volume, that can be created for volumes that share the same snapshot.
Create a Mirroring Volume A mirroring volume is a copy of a volume that also dynamically changes to match the source volume. The source and destination volumes are continuously synchronized. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the Storage tab navigation pane, select a volume. 4. In the right pane, select Local Copy > Mirror Volume. The Mirror Volume dialog box opens. 5.
View Copy/Mirror/Migrate Information The Summary tab displays information for any copy, mirror, or migrate relationship involving the selected volume. Copy and migrate information is displayed in the Summary tab only during the copy or migrate operation. Prerequisites The volume must be in a copy, mirror, or migrate relationship. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Storage tab navigation pane, select a volume.
Migrating Volumes With Live Migrate Live Migration moves a volume from one Storage Center to another Storage Center with no downtime. Live Migration Requirements To create Live Migrations, the requirements listed in the following table must be met: Requirement Description Storage Center version The source and destination Storage Centers must be running version 7.1 or later. NOTE: Dell recommends that both Storage Centers run the same version of Storage Center software.
Before Live Migration Before a Live Migration, the server sends I/O requests only to the volume to be migrated. Figure 11. Example of Configuration Before Live Migration 1. Server 2. Server I/O request to volume over Fibre Channel or iSCSI 3. Volume to be migrated Live Migration Before Swap Role In the following diagram, the source Storage Center is on the left and the destination Storage Center is on the right. Figure 12. Example of Live Migration Configuration Before Swap Role 1. Server 2.
Live Migration After Swap Role In the following diagram, a role swap has occurred. The destination Storage Center is on the left and the new source Storage Center is on the right. Figure 13. Example of Live Migration Configuration After Swap Role 1. Server 2. Server I/O request to destination volume (forwarded to source Storage Center by destination Storage Center) 3. Destination volume 4. New source volume Live Migration After Complete In the following diagram, the Live Migration is complete.
Create a Live Migration for a Single Volume Use Live Migration to move a volume from one Storage Center to another Storage Center with limited or no downtime. Prerequisites ● The volume to be migrated must be mapped to a server. ● The volume cannot be part of a replication, Live Volume, or Live Migration. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the Storage tab navigation tree, select the volume.
7. (Optional) Modify Live Migration default settings. ● In the Replication Attributes area, configure options that determine how replication behaves. ● In the Destination Volume Attributes area, configure storage options for the destination volume and map the destination volume to a server. ● In the Live Migration Attributes area, enable or disable automatic role swap. When automatic role swap is enabled, Live Migrate swaps the roles immediately after the volume is synced.
Steps 1. Click the Replications & Live Volumes view. 2. On the Live Migrations tab, select the Live Migration you want to modify, and then click Edit Settings. The Edit Live Migration dialog box opens. 3. Select the Automatically Swap Roles After In Sync checkbox, and then click OK. Complete a Live Migration Complete a Live Migration to stop server I/O requests to the old source Storage Center and send all I/O requests only to the destination Storage Center.
3. From the Source Replication QoS Node drop-down menu, select the QoS definition that will be used to control bandwidth usage between the local and remote Storage Centers. 4. Click OK. Delete a Live Migration Use the Live Migrations tab to delete a Live Migration whose source and destination Storage Center have not been swapped.
Creating and Managing Volume Folders Use volume folders to organize volumes or to restrict access to volumes. NOTE: For user interface reference information, click Help. Create a Volume Folder Create a volume folder either to organize volumes or to restrict access to volumes. About this task NOTE: Members of a user group can only access volume folders that have been assigned to their user group, regardless of how the folders are organized.
Associate a Chargeback Department with a Volume Folder If Chargeback is enabled, you can assign a Chargeback Department to a folder to make sure the department is charged for the storage used by all volumes in the folder. Steps 1. Click the Storage view. 2. In the Storage pane, select a Storage Center. 3. Click the Storage tab. 4. In the Storage tab navigation pane, select the volume folder you want to modify. 5. In the right pane, click Edit Settings. The Edit Settings dialog box appears. 6.
● Table View displays all of the information for a snapshot on one screen. This information includes Freeze Time, Expire Time, Size, Create Volume, and Snapshot Profile. ● Tree View displays a single field for each snapshot: Freeze Time, Expire Time, Size, or Description. To change the field displayed, click Select Display Field and then select a new field. Assign Snapshot Profiles to a Volume Assign one or more snapshot profiles to a volume if you want snapshots to be created on an automated schedule.
Pause Snapshot Creation for a Volume Pause snapshot creation for a volume to temporarily prevent snapshot profiles from creating automatic snapshots for the volume. When snapshot creation is paused, the Create Snapshot option is not available when you right-click any volume on the Storage Center. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3.
Expire a Snapshot Manually If you no longer need a snapshot and you do not want to wait for it to be expired based on the snapshot profile, you can expire it manually. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the Storage tab navigation pane, select the volume you want to modify. 4. Click the Storage tab. 5.
● To select individual volumes, hold down Control while selecting them. 5. In the right pane, click Map Volume to Server. The Map Volume to Server wizard opens. 6. Select the server to which you want to map the volumes, then click Next. The wizard advances to the next page. 7. (Optional) Click Advanced Mapping to configure LUN settings, restrict mapping paths, or present the volume as read-only. 8. Click Finish.
Demote a Mapping from a Server Cluster to an Individual Server If a volume is mapped to a server cluster, you can demote the mapping so that it is mapped to one of the servers that belongs to the cluster. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the Storage tab navigation pane, select the volume. 4. In the right pane, click the Mappings tab. 5.
9. Configure the LUN settings: ● To specify a specific LUN number, clear the Use next available LUN checkbox, then type the LUN in the LUN to use when mapping to Volume field. ● To assign the next unused LUN for the server, select the Use next available LUN checkbox. ● To make the volume bootable, select the Map volume using LUN 0 checkbox. 10. Click OK.
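The three LUN options above reduce to a simple selection rule. The sketch below is an illustration of that rule, not the Storage Center's actual assignment code; it assumes "next available" means the lowest unused LUN, with LUN 0 reserved for bootable volumes:

```python
def next_available_lun(used_luns, boot_volume=False):
    """Pick a LUN per the mapping options: LUN 0 for a bootable volume,
    otherwise the lowest LUN not already in use (assumed to start at 1)."""
    if boot_volume:
        return 0  # "Map volume using LUN 0" makes the volume bootable
    lun = 1
    while lun in used_luns:
        lun += 1
    return lun

print(next_available_lun({1, 2, 4}))                 # 3
print(next_available_lun(set(), boot_volume=True))   # 0
```

Clearing the Use next available LUN checkbox and typing a specific number corresponds to bypassing this selection entirely.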
7. Click OK. Deleting Volumes and Volume Folders Delete volumes and volume folders when they are no longer needed. NOTE: For user interface reference information, click Help. Delete a Volume A deleted volume is moved to the Recycle Bin by default. Prerequisites Delete all associated replications, Live Volumes, or Live Migrations before deleting a volume. CAUTION: You can recover a deleted volume that has been moved to the Recycle Bin. However, after the Recycle Bin is emptied, data on the volume cannot be recovered.
Delete a Volume Folder A volume folder must be empty before it can be deleted. If the deleted volumes from the folder are in the Recycle Bin, the volume folder is not considered empty and cannot be deleted. Steps 1. Click the Storage view. 2. In the Storage pane, select a Storage Center. 3. Click the Storage tab. 4. In the Storage tab navigation pane, select the volume folder you want to delete. 5. In the right pane, click Delete. The Delete dialog box opens. 6. Click OK to delete the folder.
Supported Hardware Platforms The following controller series support Data Reduction: ● SCv3000 Series (Supports Compression only) ● SC4020 ● SC5020 ● SC5020F ● SC7020 ● SC7020F ● SC8000 ● SC9000 Compression Compression reduces the amount of space used by a volume by encoding data. Compression runs daily with Data Progression. To change the time at which compression runs, reschedule Data Progression. Compression does not run with an on-demand Data Progression.
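Compression saves space by encoding each page of data more compactly. As a rough illustration of the idea — Storage Center's actual codec is not documented here, so zlib stands in as an assumed example — the savings on a page can be measured like this:

```python
import zlib

def compression_savings(data, level=6):
    """Bytes saved by encoding a page of data with zlib (illustrative only)."""
    compressed = zlib.compress(data, level)
    return max(len(data) - len(compressed), 0)

# Highly repetitive data compresses well; incompressible data saves nothing.
page = b"All work and no play makes Jack a dull boy. " * 100
print(compression_savings(page) > 0)  # True
```

As in the sketch, savings depend entirely on the data: pages that do not compress are left unencoded rather than grown.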
Deduplication Deduplication reduces the space used by a volume by identifying and deleting duplicate pages. Deduplication requires SSD drives. Apply Deduplication With Compression to a Volume Apply Deduplication with Compression to reduce the size of the volume. Deduplication and compression run during daily Data Progression. Prerequisites Allow Data Reduction Selection must be enabled in the Preferences tab of the Edit Storage Center Settings dialog box.
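Deduplication's core mechanism is page fingerprinting: identical pages are detected by their fingerprints, stored once, and referenced thereafter. A minimal sketch of the idea, assuming a hypothetical 4 KiB page size and SHA-256 fingerprints (neither is specified by this guide):

```python
import hashlib

PAGE_SIZE = 4096  # hypothetical page size for illustration

def deduplicated_pages(volume_bytes):
    """Count the unique pages in a byte stream by fingerprint.
    Duplicate pages would be stored once and referenced thereafter."""
    seen = set()
    for offset in range(0, len(volume_bytes), PAGE_SIZE):
        page = volume_bytes[offset:offset + PAGE_SIZE]
        seen.add(hashlib.sha256(page).digest())
    return len(seen)

# Three identical pages plus one unique page: 4 logical, 2 stored.
data = b"A" * PAGE_SIZE * 3 + b"B" * PAGE_SIZE
print(deduplicated_pages(data))  # 2
```

The difference between logical pages and stored pages (4 versus 2 here) is the space deduplication reclaims.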
View Amount of Space Saved by Data Reduction on a Volume The percentage of space saved by data reduction for a volume is an estimate, calculated by dividing the total amount of space saved by compression and deduplication by the total amount of space processed by data reduction on the volume. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the Storage tab navigation pane, select a volume. 4.
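The estimate described above is a simple ratio, which can be sketched as:

```python
def data_reduction_percent(space_saved, space_processed):
    """Percent saved: total space saved by compression and deduplication
    divided by the total space processed by data reduction."""
    if space_processed == 0:
        return 0.0
    return 100.0 * space_saved / space_processed

# e.g. 250 GB saved out of 1000 GB processed (example figures only)
print(data_reduction_percent(250, 1000))  # 25.0
```

Because the denominator is the space processed by data reduction, not the volume's full configured size, the percentage reflects savings only on the data that data reduction has actually touched.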
Pause or Resume Data Reduction for all Volumes Pausing Data Reduction from the Storage Center Edit Settings dialog box pauses compression and deduplication for all volumes in that Storage Center. About this task NOTE: Pause Data Reduction cannot be applied to other Storage Centers from the Storage Center Edit Settings dialog box using inherit settings. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2.
Default Snapshot Profiles By default, Storage Center provides two standard snapshot profiles that cannot be deleted. ● Daily – Creates a snapshot every day at 12:01 AM, and expires the snapshot in one week. ● Sample – Applies three schedule rules: ○ Creates a snapshot every 12 hours between 12:05 AM and 6 PM, expiring in five days. ○ Creates a snapshot on the first day of every month at 11:30 PM, expiring in 26 weeks. ○ Creates a snapshot every Saturday at 11:30 PM, expiring in 5 weeks.
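Every schedule rule above pairs a creation time with a retention period, and a snapshot's expire time follows directly from its freeze time. A small sketch of that relationship, using the Daily profile's rule as the example:

```python
from datetime import datetime, timedelta

def snapshot_expiry(freeze_time, retention):
    """A snapshot's expire time is its freeze time plus the rule's retention."""
    return freeze_time + retention

# The Daily profile: a snapshot frozen at 12:01 AM expires one week later.
freeze = datetime(2021, 5, 1, 0, 1)
print(snapshot_expiry(freeze, timedelta(weeks=1)))  # 2021-05-08 00:01:00
```

The Sample profile's three rules work the same way, each with its own frequency and retention (five days, 26 weeks, and 5 weeks respectively).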
6. Add a rule to the Snapshot Profile. a. Click Add Rule. The Add Rule dialog box opens. b. From the drop-down menu, select the frequency at which the rule runs. c. Configure the dates and times at which you want snapshots to be created. d. In the Expiration field, type the length of time to keep snapshots before deleting them. e. Click OK. The Add Rule dialog box closes. 7. (Optional) Create additional rules as necessary. 8.
Create a Snapshot for all Volumes Associated with a Snapshot Profile You can create a snapshot for all volumes associated with a Snapshot Profile instead of manually creating a snapshot for each volume. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the Storage tab navigation pane, select the Snapshot Profile. 4. In the right pane, click Create Snapshot. The Create Snapshot dialog box opens. 5.
d. In the Expiration field, type the length of time to keep snapshots before deleting them. e. Click OK. 6. (Optional) Modify the existing rules as needed. ● To modify a rule, select the rule, then click Edit Rule. ● To remove a rule, select the rule, then click Remove Rule. 7. Click OK. Change the Snapshot Creation Method for a Snapshot Profile The snapshot creation method controls how snapshots triggered by the snapshot profile are created. Steps 1.
2. Click the Storage tab. 3. In the Storage tab navigation pane, select the Snapshot Profile. 4. In the Schedule Rules pane, right-click the schedule and select Edit Remote Snapshot Expiration. The Edit Remote Snapshot Expiration dialog box opens. 5. Configure the remote snapshot expiration rule. a. Select one or more Storage Centers for which you want to specify an expiration rule for remote snapshots. b.
The Create Storage Profile dialog box opens. 5. Configure the storage profile. a. Type a name for the storage profile in the Name field. b. Select the RAID levels to use for volumes associated with the storage profile from the RAID Type Used drop-down menu. c. In the Storage Tiers area, select the checkboxes of the storage tiers (disk classes) that can be used for volumes associated with the storage profile. 6. Click OK. Create a Storage Profile (Storage Center 7.2.
Apply a Storage Profile to a Server Apply a storage profile to a server to specify the RAID level and tiers used by all volumes that are mapped to the server. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the Storage tab navigation pane, select the storage profile to apply to a server. 4. In the right pane, click Apply to Server. The Apply to Server dialog box opens. 5.
Create a QoS Profile A QoS profile includes a set of attributes that control the QoS behavior of any volume to which the profile is applied. Prerequisites ● To enable users to set QoS profiles for a Storage Center, the Allow QoS Profile Selection option must be selected on the Storage Center Preferences settings. ● To enable QoS profiles to be enforced, the QoS Limits Enabled and Server Load Equalizer Enabled options must be selected on the Storage Center Storage settings. Steps 1.
Apply a QoS Profile to a Volume Apply a previously defined QoS profile to a volume. Prerequisites The QoS profile must already exist. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Expand the QoS Profiles navigation tree. Right-click the name of the QoS profile. 3. Select Apply to Volumes. The Apply to Volumes dialog box opens. 4. Select the checkbox next to each volume to which you want to apply the QoS profile. 5. Click OK.
5. In the Remote IPv4 Address field, type the IPv4 address of the external device. 6. From the iSCSI Network Type drop-down menu, select the speed of the iSCSI network. 7. Click Finish. A confirmation dialog box appears. 8. Click OK. PS Series Storage Array Import Requirements A PS Series storage array must meet the following requirements to import data to a Storage Center storage system. Component Requirement PS Series Firmware Version 6.0.
Import Data from an External Device (Offline) Importing data from an external device copies data from the external device to a new destination volume in Storage Center. Complete the following task to import data from an external device. Prerequisites ● An external device must be connected into the Storage Center. ● The destination volume must be unmapped from the server.
Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. From the External Devices node in the Storage tab navigation pane, select an external device. 4. Click Online Import from External Device. The Online Import from External Device dialog box opens. 5. Modify the Destination Volume Attributes as needed. NOTE: For more information, click Help. 6.
6 Storage Center Server Administration Storage Manager allows you to allocate storage on a Storage Center to the servers in your SAN environment. Servers that are connected to Storage Centers can also be registered to Storage Manager to streamline storage management. To present storage to a server, a server object must be added to the Storage Center.
Managing Servers Centrally Using Storage Manager Servers that are registered to Storage Manager are managed from the Servers view. Registered servers are centrally managed regardless of the Storage Centers to which they are connected. Figure 16. Servers View The following additional features are available for servers that are registered to Storage Manager: ● ● ● ● Storage Manager gathers operating system and connectivity information from registered servers.
● iSCSI – Configure the iSCSI initiator on the server to use the Storage Center HBAs as the target. ● Fibre Channel – Configure Fibre Channel zoning to allow the server HBAs and Storage Center HBAs to communicate. ● SAS – Directly connect the controller to a server using SAS ports configured as front-end connections. 2. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 3. Click the Storage tab. 4. Select Servers in the Storage tab navigation pane.
Steps 1. Make sure the server HBAs have connectivity to the Storage Center HBAs. ● iSCSI – Configure the iSCSI initiator on the server to use the Storage Center HBAs as the target. ● Fibre Channel – Configure Fibre Channel zoning to allow the server HBAs and Storage Center HBAs to communicate. ● SAS – Directly connect the controller to a server using SAS ports configured as front-end connections. 2.
2. Click the Storage tab. 3. Select Servers in the Storage tab navigation pane. 4. In the right pane, click Create Server Cluster. The Create Server Cluster dialog box opens. Figure 19. Create Server Cluster Dialog Box 5. Configure the server cluster attributes. The server attributes are described in the online help. a. Enter a name for the server in the Name field. b. To add the server cluster to a server folder, click Change, select a folder, and click OK. c.
The Set up localhost for Storage Center wizard opens. ● If the Storage Center has iSCSI ports and the host is not connected to any interface, the Log into Storage Center via iSCSI page appears. Select the target fault domains, and then click Log In. ● In all other cases, proceed to the next step. 5. On the Verify localhost Information page, verify that the information is correct. Then click Create Server.
● On a Storage Center with Fibre Channel IO ports, configure Fibre Channel zoning before starting this procedure. About this task NOTE: VMware vCenter is not supported on servers connected to the Storage Center over SAS. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the Storage tab, click Servers. 4. Click Create Server from a VMware vSphere or vCenter.
4. In the right pane, click Add Server to Cluster. The Add Server to Cluster dialog box opens. 5. Select the server cluster to which to add the server. 6. Click OK. Remove a Server from a Server Cluster You can remove a server object from a server cluster at any time. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. Select the server to remove from the server cluster in the Storage tab navigation pane. 4.
4. Select the server in the Storage tab navigation pane. 5. In the right pane, click Edit Settings. The Edit Server Settings dialog box opens. 6. Type a name for the server in the Name field. 7. Click OK. Change the Operating System of a Server If you installed a new operating system or upgraded the operating system on a server, update the corresponding server object accordingly. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2.
Related tasks Configure Front-End I/O Ports (Fibre Channel and SAS) on page 255 Configure Front-End I/O Ports (iSCSI) on page 255 Remove One or More HBAs from a Server If a server HBA has been repurposed and is no longer used to communicate with the Storage Center, remove it from the server object. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. Select the server in the Storage tab navigation pane. 4.
6. Click Next. The Map Volume to Server wizard advances to the next page. 7. (Optional) Click Advanced Options to configure LUN settings, restrict mapping paths, or present the volume as read-only. 8. Click Finish. Unmap One or More Volumes From a Server If a server no longer uses a volume, you can unmap the volume from the server. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3.
NOTE: When a volume is preallocated, the Storage Center allocates all of the space on the volume to the server. The Free Space of the volume is 0 MB and the Used/Active Space of the volume is equal to the size of the volume on the Storage Center. To keep the volume preallocated when it is formatted on the server, the SCSI UNMAP feature must be disabled on the server. 9. Click OK.
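The space accounting in the note above can be sketched as follows. This is a conceptual model only, not Storage Center code, and the function name is illustrative:

```python
# Conceptual sketch (not Storage Center code): how reported space differs
# between a preallocated volume and a thin-provisioned volume.

def volume_space(size_mb, written_mb, preallocated):
    """Return (used_active_mb, free_mb) as the volume would report them."""
    if preallocated:
        # All space is allocated to the server up front:
        # Used/Active equals the volume size, Free Space is 0 MB.
        return size_mb, 0
    # Thin provisioning: only written space counts as used.
    return written_mb, size_mb - written_mb

# A 1024 MB preallocated volume reports 0 MB free regardless of writes.
print(volume_space(1024, 100, preallocated=True))   # (1024, 0)
print(volume_space(1024, 100, preallocated=False))  # (100, 924)
```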
● To modify a previous volume, select it from the list and click Edit Volume. ● To remove a previous volume, select it from the list and click Remove Volume. 12. Click OK. The volumes are created and mapped to servers. Related concepts Modifying Volumes on page 102 Creating and Managing Server Folders Use server folders to group and organize servers defined on the Storage Center. NOTE: For user interface reference information, click Help.
5. Select a new parent folder in the Parent navigation tree. 6. Click OK. Deleting Servers and Server Folders Delete servers and server folders when they no longer utilize storage on the Storage Center. NOTE: For user interface reference information, click Help. Delete a Server Delete a server if it no longer utilizes storage on the Storage Center. When a server is deleted, all volume mappings to the server are also deleted. Steps 1.
Server Type Supported Versions/Models VMware ● ESXi 6.5 and later ● vCenter Server 6.5 and later NOTE: SAS protocol for host connections is supported beginning in VMware ESXi version 6.5, and VMware vCenter Web Client Server version 6.5. Storage Manager Server Agent for Windows Servers To register a Windows server to Storage Manager, the Storage Manager Server Agent must be installed on the server.
NOTE: If the server has physical iSCSI HBAs, Storage Manager may not automatically recognize the WWNs for the server. In this situation, configure the iSCSI HBA(s) to target the Storage Center, create a server on the Storage Center, then manually map the Storage Center server to the Server Agent. 7. Select a parent folder for the server in the Folder navigation tree. 8. Click OK.
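iSCSI initiators and targets such as those discussed above are identified by iSCSI qualified names (IQNs). As a hedged illustration, the structural format defined in RFC 3720 (`iqn.yyyy-mm.reversed-domain[:unique-string]`) can be checked like this; the pattern and the sample names are illustrative, not taken from Storage Manager:

```python
import re

# Loose structural check for an iSCSI qualified name (IQN), per RFC 3720.
# Illustrative only; Storage Manager performs its own validation.
IQN_PATTERN = re.compile(
    r"^iqn\.\d{4}-\d{2}"                          # iqn. + year-month
    r"\.[a-z0-9](?:[a-z0-9-]*[a-z0-9])?"          # first domain label
    r"(?:\.[a-z0-9](?:[a-z0-9-]*[a-z0-9])?)+"     # remaining labels
    r"(?::.+)?$"                                  # optional :unique-string
)

def is_valid_iqn(name: str) -> bool:
    """Return True if the name follows the IQN structure."""
    return bool(IQN_PATTERN.match(name))

print(is_valid_iqn("iqn.2002-03.com.compellent:5000d31000123456"))  # True
print(is_valid_iqn("eui.02004567A425678D"))  # False (EUI-64 format, not IQN)
```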
Results NOTE: After a Storage Manager update, the VASA version number displayed in vCenter is not updated unless the VASA provider is unregistered and reregistered with that vCenter. NOTE: If the VASA provider loses network access to the external database, the VASA provider needs to be unregistered and reregistered to continue with VVols operations. Organizing and Removing Registered Servers Use server folders to organize servers into groups.
Move a Server to a Different Folder Use the Edit Settings dialog box to move a server to a different folder. Steps 1. Click the Servers view. 2. In the Servers pane, select the server that you want to move. 3. In the right pane, click Edit Settings. The Edit Settings dialog box appears. 4. In the Folder navigation tree, select a folder. 5. Click OK.
5. Click OK. Updating Server Information You can retrieve current information from servers and scan for new volumes on servers. Retrieve Current Information from a Single Server Refresh the view to see the most current server data. Steps 1. Click the Servers view. 2. Select a server in the Servers pane. The Summary tab appears. 3. In the right pane, click Update Information. The Update Information dialog box appears.
Change the Connection Timeout for a Windows Server You can configure the maximum time in seconds that Storage Manager waits for a response to queries sent to the Server Agent. Steps 1. Click the Servers view. 2. In the Servers pane, select a Windows server. 3. In the right pane, click Edit Settings. The Edit Settings dialog box appears. 4. In the Connection Timeout field, type a new timeout in seconds. ● The default is 300 seconds. ● The minimum value is 180 seconds. ● The maximum value is 1200 seconds. 5.
Creating Server Volumes and Datastores Creating a volume on a Windows server or creating a datastore on a VMware server automatically creates a Storage Center volume and maps it to the server in one operation. Related tasks Create a Datastore and Map it to VMware ESX Server on page 164 Create a Volume and Map it to a Windows Server You can create a volume, map it to a Windows server, format it, and mount it on the server in one operation. Steps 1. Click the Servers view. 2.
Create an RDM Volume You can create a volume, map it to a VMware virtual machine, and create a raw device mapping to the virtual machine in one operation. Prerequisites In order for the Create RDM Volume option to appear in Storage Manager, the virtual machine must be powered on. If Storage Manager determines that the VM is not powered on, the Create RDM Volume menu option is not displayed. Steps 1. Click the Servers view. 2.
● VVol Datastore 6. Click Next. 7. If you selected Standard Datastore (VMFS), complete the following steps: a. Select a unit of storage from the drop-down menu and type the size for the datastore in the Total Space field. The available storage units are bytes, kilobytes (KB), megabytes (MB), gigabytes (GB), and terabytes (TB). b. Select the size limit for virtual disks within the datastore from the Max File Size drop‑down. c.
Delete a Volume or Datastore Delete a volume or datastore if it is no longer needed by the server. Volumes that are not hosted on a Storage Center cannot be deleted. Steps 1. Click the Servers view. 2. Select the volume or datastore to delete in the Servers pane. 3. In the right pane, click Delete. The Delete Objects dialog box appears. 4. Click OK.
Manually Mapping a Windows Server to a Storage Center Server If the WWNs of a server are not correctly associated with the appropriate Storage Center server objects, you can manually create the mappings. Add a Mapping Between a Windows Server and a Storage Center Server If Storage Manager did not automatically recognize the WWNs of a Windows server when it was registered, manually associate the server with a Storage Center server. Steps 1. Click the Servers view. 2.
View HBA Connectivity Information for a Windows-Based NAS Appliance The Connectivity tab displays information about the HBAs installed in the appliance. For each HBA, the Storage Center Server Ports pane displays the corresponding Storage Center server objects. Steps 1. Click the Servers view. 2. In the Servers pane, select a Windows-based NAS appliance. The Summary tab appears. 3. Click the Connectivity tab.
3. Click the IPMI tab. 4. Click Power Off. The Power Off dialog box appears. 5. Click OK. The appliance is powered off. Reset the Power for a Windows-Based NAS Appliance If the IPMI card is configured correctly, you can remotely reset power for a Windows-based NAS appliance. Prerequisites ● The IPMI card in the appliance must be configured. ● IPMI card information must be configured in Storage Manager. Steps 1. Click the Servers view. 2. In the Servers pane, select a Windows-based NAS appliance.
Install and Register the Server Agent Install the Storage Manager Server Agent on a Windows server to collect information and display information about the server. If you are using Microsoft Hyper-V virtualization, the Server Agent can be installed on the host server and virtual machines running Windows. If you are using VMware virtualization, the Server Agent can be installed on virtual machines running Windows. Install the Server Agent on a Server Core Installation of Windows Server Install Microsoft .
Install the Server Agent on a Full Installation of Windows Server Install the Server Agent and register it to the Data Collector. Prerequisites
● The Server Agent must be downloaded.
● The server must meet the Server Agent requirements in the Storage Manager 2020 R1 Release Notes.
● The server must have network connectivity to the Storage Manager Data Collector.
● The firewall on the server must allow TCP port 27355 inbound and TCP port 8080 outbound.
Manage the Server Agent with Server Agent Manager Use the Server Agent Manager to manage and configure the Server Agent service. Figure 21. Server Agent Manager Dialog Box The following table lists the objects in the Server Agent window.
Callout Name
1 Minimize/Close
2 Status Message Area
3 Control Buttons
4 Version and Port
5 Commands
Start the Server Agent Manager Under normal conditions, the Server Agent Manager is minimized to the Windows system tray.
Modify the Connection to the Data Collector If the Data Collector port, host name, or IP address has changed, use the Server Agent Manager to update the information. Steps 1. In Server Agent Manager, click Properties. The Properties dialog box appears. 2. Specify the address and port of the Storage Manager Data Collector. ● Host/IP Address: Enter the host name or IP address of the Data Collector. ● Web Services Port: Enter the Legacy Web Service Port of the Data Collector. The default is 8080. 3.
7 Managing Virtual Volumes With Storage Manager VVols is VMware’s storage management and integration framework, which is designed to deliver a more efficient operational model for attached storage. This framework encapsulates the files that make up a virtual machine (VM) and natively stores them as objects on an array. The VVols architecture enables granular storage capabilities to be advertised by the underlying storage.
The internal database should be considered for lab deployments only. Depending upon the protection model used in the deployment, failure to use the external database could result in the loss of some or all VVols metadata when the Data Collector is uninstalled or deleted. Using the external database negates this risk during uninstall or delete. The external database is expected to be deployed in a highly available manner, including redundant switching connectivity.
vSphere recognizes them as protocol endpoints after the VASA provider is registered and a Storage Container is created using Storage Manager. ● Storage container — A storage container is a quantity of storage made available for the placement of virtual volumes-based VMs. Each array has at least one storage container. Each storage container has one or more protocol endpoints associated with it. NOTE: Storage containers are not supported outside of the virtual volumes context.
Thick provisioning is not supported for operations such as creating or cloning a VVol VM. Only thin provisioning is supported. VASA Provider The VASA provider enables support for VMware VVols operations. A VASA provider is a software interface between the vSphere vCenter server and vendor storage arrays. Dell provides its own VASA provider that enables vCenter to work with Dell storage. This VASA provider supports the VMware VASA 2.0 API specifications.
● Changing the Storage Manager IP address Unregistering the VASA provider affects control plane operations on virtual volume VMs and datastores that are in use. It does not affect data transfer between an ESXi host and the respective SAN storage. Unregistering the VASA provider results in powered-off VVol VMs being shown as inaccessible and datastores as inactive. To avoid prolonged control plane downtime, minimize the period during which the VASA provider remains unregistered.
IP Change / Action Required (table continued): networking properties on the host. Then follow the Dell Storage Manager procedure for deleting existing certificates and restart the Storage Manager. After the restart, re-register the VASA Provider.
FQDN changes on Windows or Virtual Appliance: If certificates are already using FQDN and you want to change the FQDN, unregister the VASA Provider first. Then make changes to the name lookup service or Storage Manager host (or both) for the new FQDN.
Data Reduction Options for VVols You can specify data reduction options when creating storage containers. These options are advertised (made available) to the VMware administrator during VM Storage Profile creation. When you use Storage Manager to create storage containers, you can optionally set these data reduction options: ● Deduplication Allowed ● Compression Allowed Specifying one or both of these options indicates the data reduction preferences for VMs that are then created.
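As a conceptual sketch of how these advertised options interact with VM Storage Profiles, compatibility can be modeled as a subset check. This is an assumed simplification for illustration, not the actual VASA provider logic:

```python
# Assumed model (illustrative, not the VASA provider implementation):
# a VM Storage Profile is compatible with a storage container when every
# data reduction capability the profile requires is advertised by the container.

def is_compatible(container_caps: set, profile_reqs: set) -> bool:
    """Profile requirements must be a subset of container capabilities."""
    return profile_reqs <= container_caps

container = {"deduplication", "compression"}  # options set at container creation
profile = {"compression"}                     # requested in the VM Storage Profile

print(is_compatible(container, profile))                  # True
print(is_compatible({"compression"}, {"deduplication"}))  # False
```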
NOTE: When applying a VM Storage Policy containing rules for the ScStorageProfile capability, the vCenter administrator can ignore the datastore compatibility warning Datastore does not satisfy required properties. The VASA provider overrides the datastore's configured value and applies the user-provided value of ScStorageProfile for VVols of the VM.
Table 9. Expected Behavior for Compression and Deduplication Checkboxes on Storage Container Old Checkbox Value New Checkbox Value Expected Behavior Compression Enabled Compression Disabled Data Reduction Profile of existing volumes remains unchanged. Compliance check warns that the VM is not compliant with storage container.
Table 10. Expected Behavior Related to Migration (continued) Source Datastore Destination Datastore Expected Behavior Storage Container Compression = Supported Storage Container Compression = Not Supported Migration fails because the source VM Storage Policy is invalid on the destination.
Edit a Storage Container Using the Storage View Modify a storage container to edit its settings. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the navigation pane, select Volumes. 4. Right-click on the storage container to modify and select Edit Settings. The Edit Storage Container Settings dialog box opens. 5. Modify the fields of the storage container as needed. 6. Click OK.
3. In the right pane, click Create Datastore. The Create Datastore dialog box opens. 4. Type a name for the datastore in the Datastore Name field. 5. Select the type of datastore to create: ● Standard Datastore (VMFS) ● VVol Datastore 6. Click Next. 7. If you selected Standard Datastore (VMFS), complete the following steps: a. Select a unit of storage from the drop-down menu and type the size for the datastore in the Total Space field.
If the datastore was created with the type VVOL, the VVols tab identifies the virtual volumes stored in the storage container. Protocol Endpoint Monitoring You can view details about protocol endpoints that are associated with virtual volumes (VVols). Protocol endpoints are automatically created when an ESXi 6.0 server is created in Storage Manager. Storage Manager exposes protocol endpoints in the Storage view. You can use Storage Manager to view protocol endpoint details for vSphere hosts.
If the host contains VVols, the Storage view for that host includes the following details about the protocol endpoints:
● Device ID
● Connectivity status
● Server HBA
● Mapped Via
● LUN Used
● Read Only (Yes or No)
Managing Virtual Volumes With Storage Manager 187
8 PS Series Storage Array Administration PS Series storage arrays optimize resources by automating performance and network load balancing. Additionally, PS Series storage arrays offer all-inclusive array management software, host software, and free firmware updates. To manage PS Series storage arrays using Dell Storage Manager, the storage arrays must be running PS Series firmware version 7.0 or later.
Table 11. PS Series Group (continued) Callout Description 3 PS Series storage pools Containers for storage resources (disk space, processing power, and network bandwidth). A pool can have one or more members assigned to it. A group can provide both block and file access to storage data. Access to block-level storage requires direct iSCSI access to PS Series arrays (iSCSI initiator).
● Hostname or IP Address – Type the group or management IP address of the PS Series group. NOTE: Do not type the member IP address in this field. ● User Name and Password – Type the user name and password for a PS Series group user account. ● Folder – Select the PS Groups node or the folder to which to add the PS Series group.
Organizing PS Series Groups Use folders to organize PS Series groups in Storage Manager. Create a PS Group Folder Use folders to group and organize PS Series groups. Steps 1. Click the Storage view. 2. In the Storage pane, select the PS Groups node. 3. In the Summary tab, click Create Folder. The Create Folder dialog box opens. 4. In the Name field, type a name for the folder. 5. In the Parent field, select the PS Groups node or a parent folder. 6. Click OK.
Delete a PS Group Folder Delete a PS Group folder if it is no longer needed. Prerequisites The PS Group folder must be empty to be deleted. Steps 1. Click the Storage view. 2. In the Storage pane, select the PS Group folder to delete. 3. In the Summary tab, click Delete. The Delete PS Group Folders dialog box opens. 4. Click OK. Remove a PS Series Group Remove a PS Series group when you no longer want to manage it from Storage Manager.
Figure 23. PS Series Volumes Table 12. PS Series Volumes Callout Description 1 PS Series group Storage area network (SAN) comprising one or more PS Series arrays connected to an IP network. Arrays are high-performance (physical) block storage devices. 2 PS Series members Each PS Series array is a member in the group and is assigned to a storage pool. 3 PS Series storage pools Containers for storage resources (disk space, processing power, and network bandwidth).
Table 12. PS Series Volumes (continued) Callout Description Thin provisioning allocates space based on how much is actually used, but gives the impression the entire volume size is available. (For example, a volume with 100GB storage can be allocated to use only 20GB, while the rest is available for other uses within the storage pool.) An offline volume indicates that it can no longer be accessed by the iSCSI initiator until it has been set online.
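The thin-provisioning behavior described in the table, together with the in-use warning and maximum in-use limits, can be sketched conceptually. The names and the 60% warning default below are assumptions for illustration, not PS Series firmware values:

```python
# Conceptual sketch of thin-provisioning thresholds (illustrative defaults,
# not PS Series firmware code).

def check_thin_volume(size_gb, in_use_gb, warn_pct=60, max_pct=100):
    """Return the actions triggered by in-use space against the limits."""
    pct = 100.0 * in_use_gb / size_gb
    return {
        "in_use_percent": pct,
        "warning": pct >= warn_pct,     # may generate a warning event message
        "set_offline": pct >= max_pct,  # volume is set offline when exceeded
    }

# A 100 GB thin volume using 20 GB consumes 20% of its reported size;
# the remaining pool space stays available for other uses.
print(check_thin_volume(100, 20))
```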
○ To generate a warning event message when the in-use warning limit is exceeded, select the Generate initiator error when in-use warning limit is exceeded checkbox. ○ In the Maximum In-Use Space field, type the maximum in-use space percentage of the volume. ○ To set the volume offline when the maximum in-use space is exceeded, select the Set offline when maximum in-use space is exceeded checkbox. 11. Click OK. Modify a Volume You can rename, move, or expand a volume after it has been created.
4. In the Storage tab navigation pane, select the Volumes node. 5. In the right pane, click Create Volume Folder. The Create Volume Folder dialog box opens. 6. In the Name field, type a name for the folder. 7. (Optional) In the Notes field, type a description for the folder. 8. Click OK. Edit a Volume Folder After a volume folder is created, you can edit the settings of the folder. Prerequisites To use volume folders in Storage Manager, the PS Series group members must be running PS Series firmware version 8.
Steps 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Storage tab. 4. In the Storage tab navigation pane, select the volume to move. 5. In the right pane, click Move to Folder. The Move to Folder dialog box opens. 6. In the navigation pane, select a new volume folder. 7. Click OK. Move Multiple Volumes to a Folder Multiple volumes can be organized by moving a selection of volumes to a volume folder.
3. Click the Storage tab. 4. In the Storage tab navigation pane, select a volume to clone. 5. In the right pane, click Clone. The Clone Volume dialog box opens. 6. In the Name field, type a name for the clone. 7. Click OK. Modify Volume Access Settings The read-write permission for a volume can be set to read-only or read-write. In addition, access to the volume from multiple initiators with different IQNs can be enabled or disabled. Steps 1. Click the Storage view. 2.
4. In the Storage tab navigation pane, select a volume. 5. In the right pane, click Add Access Policy Groups. The Add Access Policy Groups to Volume dialog box opens. 6. In the Access Policy Groups area, select the access policy groups to apply to the volume. 7. In the Access Policy Group Targets area, select whether the access policy groups apply to volumes and snapshots, volumes only, or snapshots only. 8. Click OK.
4. In the Storage tab navigation pane, expand the Volumes node and select the volume to delete. 5. Click Delete. The Delete dialog box opens. 6. Click OK. ● If the volume does not contain data, the volume is permanently deleted. ● If the volume does contain data, the volume is moved to the recycle bin. Restore a Volume from the Recycle Bin If you need to access a recently deleted volume, you can restore the volume from the recycle bin.
About Snapshots Snapshots enable you to capture volume data at a specific point in time without disrupting access to the volume. A snapshot represents the contents of a volume at the time of creation. If needed, a volume can be restored from a snapshot. Creating a snapshot does not prevent access to a volume, and the snapshot is instantly available to authorized iSCSI initiators.
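The point-in-time semantics described above can be shown with a toy model. This is purely illustrative; a real array uses copy-on-write rather than actually copying data, which is why snapshot creation is instant:

```python
# Toy model (not the PS Series implementation) of point-in-time snapshots:
# writes made after the snapshot do not change the snapshot's contents.

class Volume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)

    def snapshot(self):
        # Capture the volume's contents at this instant. Here we copy for
        # clarity; a copy-on-write design defers copying until a block changes.
        return dict(self.blocks)

    def write(self, block, data):
        self.blocks[block] = data

vol = Volume({0: "alpha", 1: "beta"})
snap = vol.snapshot()
vol.write(1, "gamma")          # change made after the snapshot
print(vol.blocks[1], snap[1])  # gamma beta
```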
Modify Snapshot Properties After a snapshot is created, you can modify the settings of the snapshot. Steps 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Storage tab. 4. In the Storage tab navigation pane, expand the Volumes node and select a volume that contains a snapshot. 5. From the Snapshots tab, select a snapshot to modify. 6. Click Edit Settings. The Modify Snapshot Properties dialog box opens. 7. In the Name field, type a name for the snapshot. 8.
7. Click OK. Restore a Volume from a Snapshot You can restore a volume to the state of a snapshot. Steps 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Storage tab. 4. In the Storage tab navigation pane, select a volume that contains a snapshot. 5. From the Snapshots tab, select a snapshot to restore. 6. Click Restore Volume. The Restore Volume dialog box opens. 7.
6. Click the Enable Schedule checkbox. 7. In the Name field, type a name for the schedule. 8. From the Frequency drop-down menu, select Hourly Schedule. 9. Select the Replication Schedule radio button. 10. From the Start Date drop-down menu, select the start date of the schedule. 11. To enable an end date for the schedule, select the checkbox next to End Date then select a date from the End Date drop-down menu. 12. Specify when to start the replication.
6. Click the Enable Schedule checkbox. 7. In the Name field, type a name for the schedule. 8. From the Frequency drop-down menu, select Run Once. 9. From the Date field, select the start date of the replication. 10. In the Time field, specify the start time of the replication. 11. In the Replica Settings field, type the maximum number of replications the schedule can initiate.
4. From the Storage tab navigation pane, select a volume. The volume must be the source of a replication relationship. 5. From the Schedules tab, select the replication schedule to delete. 6. Click Delete. A confirmation dialog box appears. 7. Click OK. About Access Policies In earlier versions of the PS Series firmware, security protection was accomplished by individually configuring an access control record for each volume to which you wanted to control access.
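One authentication method used by access policies is CHAP. As a conceptual sketch of the challenge-response defined in RFC 1994 (values here are illustrative, not PS Series code), the initiator proves knowledge of the shared secret without ever sending the secret itself:

```python
import hashlib

# Illustrative sketch of CHAP (RFC 1994): the responder hashes the shared
# secret with a one-time challenge, so the secret never crosses the wire.

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """MD5(identifier || secret || challenge), the CHAP response value."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

good = chap_response(1, b"chap-secret", b"random-challenge")
bad = chap_response(1, b"wrong-secret", b"random-challenge")
print(len(good), good != bad)  # 16 True
```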
8. In the Password field, type a password (otherwise known as a CHAP secret). 9. To enable the local CHAP account, select the Enable checkbox. To disable the local CHAP account, clear the Enable checkbox. 10. Click OK. Modify Target Authentication A PS Series group automatically enables target authentication using a default user name and password. If needed, you can change these credentials. Steps 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Storage tab. 4.
Edit an Access Policy Group After an access policy group is created, you can edit the settings of the access policy group. Steps 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Storage tab. 4. In the Storage tab navigation pane, expand the Access node and select an access policy group. 5. In the right pane, click Edit Settings. The Edit Access Policy Group dialog box opens. 6. In the Name field, type a name for the access policy group. 7.
2. In the Storage pane, select a PS Series group. 3. Click the Storage tab. 4. In the Storage tab navigation pane, expand the Access node and select the access policy group to delete. 5. In the right pane, click Delete. The Delete Access Policy Group dialog box opens. 6. Click OK. Create an Access Policy Access policies associate one or more authentication methods to available volumes. Steps 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Storage tab. 4.
6. In the Access Points area, click Create. The Create Access Point dialog box opens. 7. (Optional) In the Description field, type a description for the basic access point. 8. In the CHAP Account field, type the user name of the CHAP account that a computer must supply to access a volume. 9. In the iSCSI Initiator field, type the iSCSI initiator name of a computer to which you want to provide access to a volume. 10.
4. In the Storage tab navigation pane, expand the Access node and select an access policy. 5. In the right pane, click Add Volumes. The Add Volumes to Access Policy dialog box opens. 6. In the Volumes area, select the checkboxes of the volumes to associate with the access policy. 7. In the Access Policy Targets area, select whether the access policy applies to volumes and snapshots, volumes only, or snapshots only. 8. Click OK.
View Event Logs You can view event logs for the last day, last 3 days, last 5 days, last week, last month, or a specified period of time. Steps 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Monitoring tab. 4. In the Monitoring tab navigation pane, select the Event Logs node. 5. Select the date range of the event log data to display.
View Replication History You can view the replication history for a PS Series group. Steps 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Monitoring tab. 4. In the Monitoring tab navigation pane, select the Replication History node. Information about past replications is displayed in the right pane. View Alerts You can view the current alerts for a PS Series group. Steps 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3.
9 Storage Center Maintenance Storage Manager can manage Storage Center settings, users and user groups, and apply settings to multiple Storage Centers.
Rename a Storage Center Rename a Storage Center when the purpose of the Storage Center has changed or the name no longer applies. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the General tab. 4. In the Name field, type a new name. 5. Click OK.
4. Click OK. Apply a New License to a Storage Center If you add applications, or increase the number of disks licensed for your Storage Center, you may need to apply a new license. You can submit multiple licenses in a zip file. Prerequisites ● You must be able to access a Storage Center license file from the computer from which you are running Storage Manager. About this task NOTE: Applying the Flex Port license requires the Storage Center to restart.
5. (Optional) In the IP Settings area, type the IPv6 addresses for the management IP. 6. (Optional) In the Network Information area, type the server addresses and domain name. 7. Click OK. Modify Management Interface Settings for a Controller The IP address, net mask, and gateway can be modified for the controller management interface. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Hardware tab. 3.
Configuring Storage Center User Preferences Storage Center user preferences establish defaults for the Storage Center user account that was used to add the Storage Center to Storage Manager. Storage Manager honors these preferences. NOTE: For user interface reference information, click Help. Set the Default Size for New Volumes The default volume size is used when a new volume is created unless the user specifies a different value. Steps 1.
Set Default Cache Settings for New Volumes The default cache settings are used when a new volume is created unless the user changes them. You can prevent the default cache settings from being changed during volume creation by clearing the Allow Cache Selection checkbox. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3.
2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the Preferences tab. 4. From the Operating System drop-down menu, select the default operating system for new servers. 5. Click OK. Set the Default Storage Profile for New Volumes The default Storage Profile is used when a new volume is created unless the user selects a different Storage Profile.
Allow QoS Profile Selection To enable users to select QoS Profiles, enable the Allow QoS Profile Selection option. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the Preferences tab. 4. In the Quality of Service Profiles area, select the Allow QoS Profile Selection checkbox. 5. Click OK.
Set RAID Stripe Width The RAID stripe width controls the number of disks across which RAID data is striped. The stripe widths for RAID 5 and RAID 6 are independently configured. Prerequisites The Storage Center must be added to Storage Manager using a Storage Center user with the Administrator privilege. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings.
Configuring Storage Center Secure Console Settings The secure console allows support personnel to access the Storage Center console without connecting through the serial port. NOTE: Do not modify the secure console configuration without the assistance of technical support. Enable Secure Console Access Enable the secure console to allow support personnel to access the Storage Center console without connecting through the serial port. Steps 1.
Apply Secure Console Settings to Multiple Storage Centers Secure Console settings that are assigned to a single Storage Center can be applied to other Storage Centers. Prerequisites The Storage Center must be added to Storage Manager using a Storage Center user with the Administrator privilege. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens.
● To use ESMTP, select the Send Extended Hello (EHLO) check box, then type the Storage Center fully qualified domain name in the Extended Hello Message (EHLO) field. 5. Click OK. Apply SMTP Settings to Multiple Storage Centers SMTP settings that are assigned to a single Storage Center can be applied to other Storage Centers. Prerequisites The Storage Center must be added to Storage Manager using a Storage Center user with the Administrator privilege. Steps 1.
To create a new user: a. Click Create SNMP v3 User. The Create SNMP v3 User dialog box opens. b. In the Name field, type a user name. c. In the Password field, type a password. d. Select an authentication method from the Authentication Type drop-down menu. e. Select an encryption type from the Encryption Type drop-down menu. f. Click OK. g. Select the user from the SNMP v3 Settings table. 8. Specify settings for the network management system to which Storage Center will send SNMP traps. a.
2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the Time Settings tab. 4. From the Region drop-down menu, select the region where the Storage Center is located. 5. From the Time Zone drop-down menu, select the time zone where the Storage Center is located. 6. Set the date and time. ● To set the date and time manually, clear Use NTP Server, then select Set Current Time and set the date and time in the Current Time fields.
5. Select the Storage Center user or user privilege level to allow. ● To allow access to a Storage Center user privilege level, select User Privilege Level, then select a privilege level from the drop-down menu. ● To allow access to an individual Storage Center user, select Specific User, then select a user from the drop-down menu. 6. Specify which source IP addresses to allow.
View and Delete Access Violations for a Storage Center View access violations to determine who has unsuccessfully attempted to log in. A maximum of 100 access violations are recorded and displayed for a Storage Center. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the IP Filtering tab. 4. Click Show Access Violations.
Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Inherit Settings. The Inherit Settings wizard opens. 3. Select the Storage Center from which you want to inherit settings, then click Next. The wizard advances to the next page. 4. Select the checkbox for each category of settings that you want to inherit. For user interface reference information, click Help. 5. Click Finish.
Create a Local Storage Center User Create a local Storage Center user to assign privileges to a new user. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the Users and User Groups tab. 4. On the Local Users subtab, click Create Local User. The Create Local User dialog box opens. 5. In the Name field, type a name for the user.
7. Click OK. Related tasks Configure Preferences for a Local Storage Center User on page 234 Increase the Privilege Level for a Local Storage Center User The privilege level can be increased for local users that have the Volume Manager or Reporter privilege. The privilege level for a user cannot be decreased. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings.
3. Click the Users and User Groups tab. 4. On the Local Users subtab, select the user, then click Edit Settings. The Edit Local User Settings dialog box opens. 5. From the Session Timeout drop-down menu, select the maximum length of time that the local user can be idle while logged in to the Storage Center before the connection is terminated. 6. Click OK. The Edit Settings dialog box closes. 7. Click OK.
Configure Preferences for a Local Storage Center User By default, each Storage Center user inherits the default user preferences. If necessary, the preferences can be individually customized for a user. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the Users and User Groups tab. 4.
3. Click the Users and User Groups tab. 4. On the Local Users subtab, select the user, then click Change Password. The Change Password dialog box opens. 5. Type the old password. 6. Type and confirm a new password for the local user, then click OK. 7. Click OK. Delete a Local Storage Center User Delete a Storage Center user who no longer requires access. The local user that was used to add the Storage Center to Storage Manager cannot be deleted.
Managing Local Storage Center User Groups User groups grant access to volume, server, and disk folders. NOTE: For user interface reference information, click Help. Create a Local User Group Create a local Storage Center user group to grant access to specific volume, server, and disk folders. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3.
7. Click OK. Manage Directory User Group Membership for a Local Storage Center User Group Add a directory user group to a local user group to grant access to all directory users in the directory user group. Prerequisites ● The Storage Center must be configured to authenticate users with an external directory service. ● The directory user group(s) you want to add to a local Storage Center user group must have been granted Volume Manager or Reporter access to the Storage Center.
The wizard closes. 8. Click OK. Delete a Local Storage Center User Group Delete a local Storage Center user group if it is no longer needed. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the Users and User Groups tab. 4. On the Local User Groups subtab, select the local user group, then click Delete. The Delete dialog box opens. 5.
Example URIs for two servers: ldap://server1.example.com ldap://server2.example.com:1234 NOTE: Adding multiple servers ensures continued authorization of users in the event of a resource outage. If Storage Center cannot establish contact with the first server, Storage Center attempts to connect to the remaining servers in the order listed.
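The connect-in-order failover behavior described in the note can be sketched as a small helper that tries each listed URI in turn. This is an illustrative sketch only; the function names and the reachability callback are placeholders, not part of Storage Manager or Storage Center.

```python
def parse_ldap_uri(uri, default_port=389):
    """Split an ldap:// URI into (host, port); the port defaults to 389
    when the URI does not specify one."""
    hostport = uri[len("ldap://"):] if uri.startswith("ldap://") else uri
    if ":" in hostport:
        host, port = hostport.rsplit(":", 1)
        return host, int(port)
    return hostport, default_port

def first_reachable(uris, is_reachable):
    """Return the first server, in the listed order, that the reachability
    check accepts -- mirroring how contact falls through to later servers
    when an earlier one is down."""
    for uri in uris:
        host, port = parse_ldap_uri(uri)
        if is_reachable(host, port):
            return uri
    return None
```

With the two example URIs above, if `server1.example.com` is unreachable the loop falls through to `server2.example.com:1234`, matching the ordering guarantee the note describes.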
The Directory Service Manual Configuration wizard opens. 5. From the Directory Type drop-down menu, select Active Directory or OpenLDAP. 6. Type the settings for the directory server. ● In the URI field, type the uniform resource identifier (URI) for one or more servers to which Storage Center connects. NOTE: Use the fully qualified domain name (FQDN) of the servers. Example URIs for two servers: ldap://server1.example.com ldap://server2.example.com:1234
10. (Optional) Select the Enabled checkbox to enable Kerberos authentication. 11. To change any of the Kerberos settings, clear the Auto-Discover checkbox, and then type a new value into that field. ● Kerberos Domain Realm: Kerberos domain realm to authenticate against. In Windows networks, this is the domain name in uppercase characters. ● KDC Hostname or IP Address: Fully qualified domain name (FQDN) or IP address of the Key Distribution Center (KDC) to which Storage Center will connect.
b. (Optional) To create a new local user group, click Create Local User Group , then complete the Create Local User Group wizard. For user interface reference information, click Help. c. Select the checkbox for each local user group you want to associate with the user. d. Click OK. The Select Local User Groups dialog box closes. 11. (Optional) Specify more information about the user in the Details area. For user interface reference information, click Help. 12. Click OK.
Enable or Disable Access for a Directory Service User When a directory service user is disabled, the user is not allowed to log in. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the Users and User Groups tab. 4. On the Directory Users subtab, select the user, then click Edit Settings. The Edit Settings dialog box opens. 5.
2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the Users and User Groups tab. 4. On the Directory Users subtab, select the user, then click Edit Settings. The Edit Settings dialog box opens. 5. Click Configure User Preferences. The Configure User Preferences dialog box opens. 6. Modify the user preferences as needed, then click OK. NOTE: For user interface reference information, click Help. 7. Click OK. The Edit Settings dialog box closes.
Restore a Deleted Directory Service User If you are restoring a deleted user with the Volume Manager or Reporter privilege, the user must be added to one or more local user groups. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the Users and User Groups tab. 4. On the Directory Users subtab, click Actions > Restore Deleted User.
● Administrator: When selected, directory users in the group have full access to the Storage Center. ● Volume Manager: When selected, directory users in the group have read and write access to the folders associated with the assigned user groups. ● Reporter: When selected, directory users in the group have read-only access to the folders associated with the assigned user groups. 8. (Volume Manager and Reporter only) Add one or more local user groups to the directory user group. a.
b. (Optional) To create a new local user group, click Create Local User Group, then complete the Create Local User Group wizard. For user interface reference information, click Help. c. Select the checkbox for each local user group you want to associate with the directory user group. d. To remove the directory user group from a local group, clear the checkbox for the local group. e. Click OK. The Select Local User Groups dialog box closes. 6. Click OK. The Edit Settings dialog box closes. 7. Click OK.
● To set the number of days before a user can change his or her password, type a value in the Minimum Age field. To disable the minimum age requirement, type 0. ● To set the number of days after which a password expires, type a value in the Maximum Age field. To disable the maximum age requirement, type 0. ● To set the number of days before a password expires when the expiration warning message is issued, type a value in the Expiration Warning Time field. To disable the expiration warning message, type 0.
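The interaction of the three aging settings above (a value of 0 disables each requirement) can be sketched as a small classifier. The function names and return strings are illustrative only, not Storage Center values.

```python
def password_status(age_days, max_age=0, warn_days=0):
    """Classify a password by age under the aging rules above.
    A max_age or warn_days of 0 disables that requirement."""
    if max_age and age_days >= max_age:
        return "expired"
    if max_age and warn_days and age_days >= max_age - warn_days:
        return "expiring-soon"  # within the expiration warning window
    return "ok"

def can_change_password(age_days, min_age=0):
    """A user may change the password only once it is at least
    min_age days old; min_age of 0 disables the restriction."""
    return min_age == 0 or age_days >= min_age
```

For example, with a 90-day maximum age and a 14-day warning time, a password 80 days old is in the warning window but not yet expired.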
Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the Password Configuration tab. 4. Select the Enabled checkbox. 5. Select the Requires Password Change checkbox. 6. Click OK. Managing Front-End I/O Ports Fibre Channel (FC), iSCSI, and SAS ports on a storage system may be designated for use as front-end I/O ports.
ALUA Port Mode Asymmetric Logical Unit Access (ALUA) provides port and controller redundancy for SAS front-end connections. Volumes mapped to a server using SAS front-end also have port and controller redundancy. Volumes mapped over SAS are mapped to both controllers. The volume mapping is Active/Optimized on one controller and Standby on the other controller. If the port or controller fails on the active controller, the paths to the other controller become Active/Optimized.
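As a rough illustration (not Storage Center code), the Active/Optimized and Standby behavior described above can be modeled as follows; controller names and state strings are assumptions for the sketch.

```python
def alua_path_states(owner, controllers=("A", "B"), failed=()):
    """Return each controller's ALUA path state for a volume.
    The volume is Active/Optimized on its owning controller and Standby
    on the other; if the owner fails, the surviving controller's paths
    become Active/Optimized."""
    surviving = [c for c in controllers if c not in failed]
    if not surviving:
        return {c: "Unavailable" for c in controllers}
    active = owner if owner in surviving else surviving[0]
    states = {}
    for c in controllers:
        if c in failed:
            states[c] = "Unavailable"
        elif c == active:
            states[c] = "Active/Optimized"
        else:
            states[c] = "Standby"
    return states
```

The model shows the failover the text describes: when controller A (the owner) fails, controller B's Standby paths take over as Active/Optimized.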
NOTE: Additional front-end fault domains cannot be created on SCv3000 series storage systems. In addition, existing fault domains cannot be modified or deleted on SCv3000 series storage systems. Fault Domains for Front-End SAS Ports for SC4020 Storage Systems Users can select the number of fault domains to create for front-end SAS ports on SC4020 Storage Systems.
Table 14. Front-End I/O Ports Failover Behavior (continued) Scenario Virtual Port Mode Legacy Mode ALUA Port Mode A controller fails in a dual-controller Storage Center Virtual ports on the failed controller move to physical ports on the functioning controller. Primary ports on the failed controller fail over to reserved ports on the functioning controller. Active/Optimized ports on the failed controller fail over to the Standby ports on the functioning controller.
7. Click OK. The Edit Controller Port Settings dialog box closes. Reset a Front-End I/O Port Name to the WWN Reset a physical or virtual I/O port name to the World Wide Name if you no longer need the descriptive name defined by an administrator. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Hardware tab. 3.
7. Click OK. The Edit Controller Port Settings dialog box closes. Test Network Connectivity for an iSCSI Port Test connectivity for an iSCSI I/O port by pinging a port or host on the network. About this task NOTE: If multiple virtual fault domains (VLANs) are associated with the port, the physical fault domain is used for ping tests issued from the Hardware tab. To test network connectivity for a VLAN, initiate a ping test from a physical port in a fault domain on the Storage tab. Steps 1.
The threshold alert is automatically set for the selected alert definition. 9. (Optional) To remove the Threshold Alert Definition, hold Ctrl and click the alert definition in the Selected Alert Definitions table, then hold Ctrl and click the threshold alert in the Available Alert Definitions table. The alert threshold is removed from the alert definition. 10. Click OK.
Converting Front-End Ports to Virtual Port Mode Using the Convert to Virtual Port Mode tool converts all front-end iSCSI or Fibre Channel I/O ports to virtual port mode. After the conversion is complete, the ports cannot be converted back to legacy mode. Convert Fibre Channel Ports to Virtual Port Mode Use the Convert to Virtual Port Mode tool to convert all Fibre Channel ports on the Storage Center controllers to virtual port mode. Prerequisites The Fibre Channel ports must be in legacy port mode.
Managing Back-End I/O Port Hardware Back-end ports can be renamed and monitored with threshold definitions. Configure Back-End Ports Use the Generate Default Back End Port Configuration dialog box to configure back-end ports. After the ports are configured, they can be used to connect enclosures. Prerequisites ● Supported only on CT-SC040, SC8000, or SC9000 storage systems. ● Back-end ports have not been previously configured during Storage Center configuration.
Grouping Fibre Channel I/O Ports Using Fault Domains Front-end ports are categorized into fault domains that identify allowed port movement when a controller reboots or a port fails. Ports that belong to the same fault domain can fail over to each other because they have connectivity to the same resources. NOTE: Fault domains cannot be added or modified on SCv2000 or SCv3000 series storage systems. Storage Center creates and manages fault domains on these systems.
Remove Ports from a Fibre Channel Fault Domain To repurpose front-end Fibre Channel ports, remove the ports from the fault domain. About this task ● If the front-end ports are configured for virtual port mode, the storage system must be running Storage Center 7.5.1 or later to remove all ports from a fault domain. ● If the front-end ports are configured for virtual port mode, but the storage system is running a version of Storage Center earlier than Storage Center 7.5.
Grouping iSCSI I/O Ports Using Fault Domains Front-end ports are categorized into fault domains that identify allowed port movement when a controller reboots or a port fails. Ports that belong to the same fault domain can fail over to each other because they have connectivity to the same resources. NOTE: Fault domains cannot be added or modified on SCv2000 or SCv3000 series storage systems. Storage Center creates and manages fault domains on these systems.
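The rule above, that ports may fail over to each other only within a fault domain, can be expressed as a small membership check. This is an illustrative sketch; the port names and the `domain_of` mapping are assumptions, not Storage Center objects.

```python
def can_fail_over(port_a, port_b, domain_of):
    """Two front-end ports can fail over to each other only when they
    belong to the same fault domain, i.e. they have connectivity to the
    same resources. domain_of maps port name -> fault domain name."""
    return (port_a != port_b
            and domain_of.get(port_a) is not None
            and domain_of.get(port_a) == domain_of.get(port_b))
```

For example, two ports on different controllers assigned to "Domain 1" are valid failover partners, while ports in different domains are not.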
Creating iSCSI Fault Domains Create an iSCSI fault domain to group ports that can fail over to each other because they have connectivity to the same resources. NOTE: For user interface reference information, click Help. Create a Physical iSCSI Fault Domain Create a physical iSCSI fault domain to group physical ports for failover purposes. Prerequisites ● In virtual port mode, all iSCSI ports that are connected to the same iSCSI network should be added to the same fault domain.
Next steps (Optional) Configure VLANs for the iSCSI ports in the fault domain by creating a virtual fault domain for each VLAN. Base the virtual fault domains on the physical fault domain.
Modifying iSCSI Fault Domains Modify an iSCSI fault domain to change its name, modify network settings for iSCSI ports in the domain, add or remove iSCSI ports, or delete the fault domain. NOTE: For user interface reference information, click Help. Rename an iSCSI Fault Domain The fault domain name allows administrators to identify the fault domain. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3.
The Edit Fault Domain Settings dialog box opens. 5. Select the VLAN Tagged checkbox. 6. In the VLAN ID field, type a VLAN ID for the fault domain. Allowed values are 1–4096. 7. (Optional) To assign a priority level to the VLAN, type a value from 0–7 in the Class of Service Priority field. 0 is best effort, 1 is the lowest priority, and 7 is the highest priority. 8. Click OK.
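The allowed ranges above can be captured in a small validation helper; the function name and error messages are illustrative, not part of the Storage Manager interface.

```python
def validate_vlan_settings(vlan_id, cos_priority=None):
    """Check a VLAN ID and optional Class of Service priority against
    the ranges stated above (VLAN ID 1-4096; CoS priority 0-7, where
    0 is best effort and 7 is the highest priority)."""
    if not 1 <= vlan_id <= 4096:
        raise ValueError(f"VLAN ID {vlan_id} outside allowed range 1-4096")
    if cos_priority is not None and not 0 <= cos_priority <= 7:
        raise ValueError(f"CoS priority {cos_priority} outside allowed range 0-7")
    return True
```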
6. In the Window Size field, type a value for the window size. ● Allowed values are 16 KB to 32 MB. ● The window size must be divisible by 16 KB. 7. Click OK to close the Edit Advanced Port Settings dialog box. 8. Click OK. Modify Digest Settings for an iSCSI Fault Domain The iSCSI digest settings determine whether iSCSI error detection processing is performed. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab.
2. Click the Storage tab. 3. In the Storage tab navigation pane, expand Fault Domains, then expand iSCSI and click the fault domain. 4. In the right pane, click Edit Settings. The Edit Fault Domain Settings dialog box opens. 5. In the Ports table, click Add Ports to Fault Domain. The Add Ports to Fault Domain dialog box opens. 6. In the Select the ports to add table, select the iSCSI ports to add to the fault domain. All iSCSI ports in the fault domain should be connected to the same Ethernet network. 7.
Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the Storage tab navigation pane, expand Fault Domains, expand iSCSI, and select a fault domain. 4. If you are using an SCv2000 Series or SCv3000 Series storage system, click Edit Settings, clear the check boxes of the ports to remove from the fault domain, and click OK. Otherwise, perform the following steps to remove ports from a fault domain: a. b. c. d.
iSCSI NAT Port Forwarding Example Configuration In this example, a router separates the Storage Center on a private network (192.168.1.0/24) from a server (iSCSI initiator) on the public network (1.1.1.60). To communicate with Storage Center iSCSI target ports on the private network, the server connects to a public IP address owned by the router (1.1.1.1) on ports 9000 and 9001. The router forwards these connections to the appropriate private IP addresses (192.168.1.50 and 192.168.1.51) on TCP port 3260.
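The translation in this example configuration can be modeled as a simple forwarding table, using the addresses given above. This sketch illustrates what the router performing NAT does; it is not a Storage Center API.

```python
# Forwarding table from the example: the router owns public IP 1.1.1.1
# and forwards ports 9000/9001 to the Storage Center iSCSI target ports
# on the private network, which listen on TCP port 3260.
FORWARDS = {
    ("1.1.1.1", 9000): ("192.168.1.50", 3260),
    ("1.1.1.1", 9001): ("192.168.1.51", 3260),
}

def translate(public_ip, public_port, forwards=FORWARDS):
    """Return the private (ip, port) a NAT router would forward a
    connection to, or None if no forwarding rule matches."""
    return forwards.get((public_ip, public_port))
```

A server (iSCSI initiator) on the public network thus connects to 1.1.1.1:9000, and the router delivers the connection to 192.168.1.50:3260 on the private network.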
6. Repeat the preceding steps for each additional iSCSI control port and physical port in the fault domain. 7. In the Public Networks/Initiators area, define an iSCSI initiator IP address or subnet that requires port forwarding to reach the Storage Center because it is separated from the Storage Center by a router performing NAT. a. Click Add. The Create iSCSI NAT Public Network/Initiator dialog box opens. b.
the correct shared secret to access Storage Center (target) volumes. To enable bidirectional CHAP authentication, unique shared secrets (passwords) must be configured for the remote initiator and the target Storage Center. About this task NOTE: Changing CHAP settings will cause existing iSCSI connections between SAN systems using the selected fault domain to be lost. You will need to use the Configure iSCSI Connection wizard to reestablish the lost connections after changing CHAP settings. Steps 1.
Remove CHAP Settings for a Server in an iSCSI Fault Domain Remove CHAP settings for a server to prevent it from targeting the Storage Center while CHAP is enabled for the fault domain. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the Storage tab navigation pane, expand Fault Domains, then expand iSCSI and click the fault domain. 4. In the right pane, click Configure CHAP.
5. In the Ports table, select the SAS ports to add to the fault domain. When pairing SAS ports into the fault domain: ● Use one port from each controller. ● Make sure the paired ports have the same port number and are connected to the same server. 6. Click OK. Delete a SAS Fault Domain Delete a SAS fault domain if it is no longer needed. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3.
The following disk management options are not available for SCv2000 series storage systems: ● Creating disk folders ● Adding disks to disk folders ● Managing disk spares Related tasks Restore a Disk on page 276 Scan for New Disks Scanning for disks recognizes new disks and allows them to be assigned to a disk folder. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3.
Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the Storage tab navigation pane, expand Disks, then select a disk folder. The Disk Folder view is displayed. 4. Click Delete. The Delete dialog box opens. 5. Click OK. Modify a Disk Folder The disk folder Edit Settings dialog box allows you to change the name of the folder, add notes, or change the Storage Alert Threshold. Steps 1.
Enable or Disable the Disk Indicator Light The drive bay indicator light identifies a drive bay so it can be easily located in an enclosure. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Hardware tab. 3. In the Hardware tab navigation pane, expand Enclosures, then select and expand an enclosure. The Enclosure view is displayed. 4. Under the selected enclosure, click Disks. The Disks view is displayed. 5.
Delete a Disk Deleting a disk removes that disk object from Unisphere. Before deleting the disk object, you must release the disk, moving the data off the disk. Prerequisites ● The disk has failed and does not have any allocated blocks. ● The disk was removed from the enclosure. ● If the disk was in an enclosure that has been removed, that enclosure object must be deleted first. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2.
4. In the right pane, select the failed disk to replace and click Replace Disk. The Replace Failed Disk wizard opens. 5. Locate the failed disk in the enclosure and click Next. 6. Follow the instructions to physically remove the failed disk from the enclosure and click Next. 7. Follow the instructions to insert the replacement disk into the enclosure and click Next. Storage Center attempts to recognize the replacement disk.
Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the Secure Data tab. 4. In the Hostname field, type the host name or IP address of the key management server. 5. In the Port field, type the number of a port with open communication with the key management server. 6.
Rekey a Disk Folder Perform an on-demand rekey of a Secure Disk folder. Prerequisites The disk or disk folder must be enabled as Secure Disk. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. Click the Disks node. The Disks view is displayed. 4. Right-click the name of a Secure Disk folder and select Rekey Disk Folder. The Rekey Disk Folder dialog box opens. 5. Click OK.
Create Secure Data Disk Folder A Secure Data folder can contain only SEDs that are FIPS certified. If the Storage Center is licensed for Self-Encrypting Drives and unmanaged SEDs are found, the Create Disk folder dialog box shows the Secure Data folder option. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. Click the Disks node. The Disks view is displayed. 4. Click Create Disk Folder.
Table 16. SSD Redundancy Recommendations and Requirements (continued) Disk Size Level of Redundancy Recommended or Enforced NOTE: Non-redundant storage is not an option for SCv2000 Series storage systems. 18 TB and higher Dual redundant is required and enforced Managing RAID Modifying tier redundancy, or adding or removing disks can cause data to be unevenly distributed across disks. A RAID rebalance redistributes data over disks in a disk folder.
4. Click Rebalance RAID. The RAID Rebalance dialog box opens. If a RAID rebalance is needed, the dialog box shows RAID rebalance options. 5. Select Schedule RAID rebalance. 6. Select a date and time. 7. Click OK. Check the Status of a RAID Rebalance The RAID Rebalance dialog box displays the status of an in-progress RAID rebalance and indicates whether a rebalance is needed. Steps 1.
○ RAID 10 (each drive is mirrored) ○ RAID 5-5 (striped across 5 drives) ○ RAID 5-9 (striped across 9 drives) ● Dual redundant: Dual redundant is the recommended redundancy level for all tiers. It is enforced for 3 TB HDDs and higher and for 18 TB SSDs and higher.
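The redundancy rules above can be summarized in a small helper: dual redundancy is recommended for all tiers and enforced for 3 TB and larger HDDs and 18 TB and larger SSDs. The function name and return shape are illustrative only.

```python
def redundancy_for(disk_type, size_tb):
    """Return (recommended_level, enforced) for a tier, per the rules
    above: dual redundant is the recommended level everywhere, and is
    enforced for HDDs of 3 TB and higher and SSDs of 18 TB and higher."""
    enforced = ((disk_type == "HDD" and size_tb >= 3)
                or (disk_type == "SSD" and size_tb >= 18))
    return ("dual-redundant", enforced)
```

For example, a tier of 2 TB HDDs merely has dual redundancy recommended, while a tier of 18 TB SSDs has it enforced.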
Managing Disk Enclosures Use the Hardware view to rename an enclosure, set an asset tag, clear the swap status for replaceable hardware modules in a disk enclosure, mute alarms, reset the temperature sensors, and delete an enclosure from a Storage Center. Add an Enclosure This step-by-step wizard guides you through adding a new enclosure to the system. Prerequisites This wizard is available only for SCv2000 series and SCv3000 series arrays. This procedure can be performed without a controller outage.
3. In the Hardware tab navigation pane, expand Enclosure. The Enclosure view is displayed. 4. Select the enclosure you want to remove and click Remove Enclosure. The Remove Enclosure wizard opens. 5. Confirm the details of your current install, and click Next. 6. Locate the enclosure in the Storage Center and click Next. 7. Follow the directions to disconnect the A side chain cables connecting the enclosure to the Storage Center. Click Next. 8.
Rename a Disk Enclosure Change the display name of a disk enclosure to differentiate it from other disk enclosures. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Hardware tab. 3. In the Hardware tab navigation pane, expand Enclosure, then select an enclosure. The Enclosure view is displayed. 4. In the right pane, click Edit Settings. The Edit Settings dialog box opens. 5.
Mute an Enclosure Alarm Mute an enclosure alarm to prevent it from sounding. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Hardware tab. 3. In the Hardware tab navigation pane, expand Enclosure and select an enclosure. The Enclosure view is displayed. 4. Under Enclosure, select Audible Alarms. 5. In the right pane, right-click the audible alarm, then select Request Mute.
Clear the Swap Status for an Enclosure Power Supply Clear the swap status for an enclosure power supply to acknowledge that it has been replaced. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Hardware tab. 3. In the Hardware tab navigation pane, expand Enclosures, then select and expand an enclosure. The Enclosure view is displayed. 4. Under the selected enclosure, click Power Supplies.
Clear the Swap Status for a Temperature Sensor The swap status for a temperature sensor is set when the component that contains the sensor is replaced. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Hardware tab. 3. In the Hardware tab navigation pane, expand Enclosures, then select and expand an enclosure. The Enclosure view is displayed. 4. Under the selected enclosure, click Temperature Sensor.
Enable or Disable the Disk Indicator Light The drive bay indicator light identifies a drive bay so it can be easily located in an enclosure. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Hardware tab. 3. In the Hardware tab navigation pane, expand Enclosures, then select and expand an enclosure. The Enclosure view is displayed. 4. Under the selected enclosure, click Disks. The Disks view is displayed. 5.
The Add New Controller wizard opens. 5. Confirm the details of your current install, and click Next. 6. Insert the controller into the existing enclosure. Click Next to validate the install. 7. Click Finish to exit the wizard. Replace a Failed Controller This step-by-step wizard guides you through replacing a failed controller in the Storage Center without an additional controller outage. Prerequisites This wizard is only available for the SCv2000 series controllers. Steps 1.
Replace a Failed Cooling Fan Sensor This step-by-step wizard guides you through replacing a failed cooling fan sensor in the Storage Center without a controller outage. Prerequisites This wizard is only available for the SCv2000 series and SCv3000 series Storage Centers. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Hardware tab. 3.
Plan a Hardware Change Upon boot, the Storage Center searches back-end targets for the configuration. Because a controller cannot boot without configuration information, back-end access must be maintained during the controller replacement procedure. This can be done in two ways: About this task ● Keep at least one common back-end slot/port defined and connected in the same manner on the new hardware configuration as it was on the old hardware configuration.
Add a UPS to a Storage Center An uninterruptible power supply (UPS) provides power redundancy to a Storage Center. When a UPS is added to a Storage Center, the status of the UPS is displayed in Storage Manager. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the right pane, select Actions > UPS > Create UPS. The Create UPS dialog box opens. 3. In the IPv4 Address field, type the IP address of the UPS. 4.
The Update Storage Center dialog opens. This dialog displays details of the installation process and updates those details every 30 seconds. This is also displayed as a blue message bar in the Summary tab, and in the update status column of the Storage Center details. In case of an update failure, click Retry to restart the interrupted process. 7. Click OK. If the update is service affecting, the connection to the Storage Center will be lost.
Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the right pane, click Actions→ System→ Shutdown/Restart. The Shut Down/Restart dialog box opens. 3. From the first drop-down menu, select Shut Down. 4. Click OK. 5. After the controllers have shut down, shut down the disk enclosures by physically turning off the power supplies.
2. Click the Hardware tab. 3. In the Hardware tab navigation pane, expand Controllers and select the controller. The Controller view is displayed. 4. In the right pane, click Shut Down/Restart Controller. The Shut Down/Restart Controller dialog box opens. 5. From the drop-down menu, select Restart. 6. Click OK. Reset a Controller to Factory Default Reset a controller to apply the factory default settings, erase all data stored on the controller, and erase all data on the drives.
2. Click the Alerts tab. 3. Select a FRU ticket. 4. Click View FRU Ticket. The FRU Ticket Information dialog opens. 5. Click OK. Close a FRU Ticket Close a FRU ticket if the FRU ticket is not needed. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Alerts tab. 3. Select a FRU ticket. 4. Click Close FRU Ticket. The Close FRU Ticket dialog opens. 5. Click OK.
10 Viewing Storage Center Information Storage Manager provides access to summary information about managed Storage Centers, including I/O performance and hardware status. Use this information to monitor the health and status of your Storage Centers.
Figure 25. Summary Tab View Summary Plugins for a Storage Center Use the Summary tab to view the summary plugins that are currently enabled. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Summary tab.
● To move a plugin up one level, press Move Up once.
● To move a plugin down one level, press Move Down once.
● To move a plugin to the top, press Move to Top once.
● To move a plugin to the bottom, press Move to Bottom once.
4. Click OK to save changes to the plugins of the Summary tab. Viewing Summary Information for Multiple Storage Centers Storage Manager provides two ways to view summary information for multiple Storage Centers.
4. From the drop-down menu in the top right corner, select the summary plugin that you want to use to compare the Storage Centers.
Status Information The top portion of the Status plugin displays information about disk space usage. NOTE: The information displayed varies with the Storage Center version. Disk Space ● Disk—Total amount of raw disk space on all drives in the Storage Center. ● Spare—Amount of space distributed across all drives that is reserved for balancing drive usage and recovering from a drive failure. For Storage Center version 7.
Alert Type Description ● Connectivity Alerts The Current Alerts status icon indicates the highest unacknowledged alert level for the categories under Current Alerts. Threshold Alerts Displays the total number of Storage Manager threshold alerts and the number of alerts for each of the following categories: ● IO Alerts ● Storage Alerts ● Replication Alerts The Threshold Alerts status icon indicates the highest active alert level for the categories under Threshold Alerts.
Field/Option Description Available Space Amount of space allocated for volumes and the free space that can be used by the Storage Center. Calculated as: ● Storage Center version 7.4.10 and later —Used space + Free space ● Storage Center versions 7.3-7.4.2—Allocated + Free space Free Space Amount of disk space available for use by a Storage Center. Calculated as: ● Storage Center version 7.4.10 and later—Storage Type Free space + Disk Free space ● Storage Center versions 7.3-7.4.
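The version-dependent space calculations in the table above are simple sums. The following sketch illustrates the formulas for Storage Center version 7.4.10 and later; the figures and function names are hypothetical, for illustration only.

```python
def available_space(used_gb, free_gb):
    """Available space for Storage Center 7.4.10 and later: Used space + Free space."""
    return used_gb + free_gb

def free_space(storage_type_free_gb, disk_free_gb):
    """Free space for Storage Center 7.4.10 and later:
    Storage Type Free space + Disk Free space."""
    return storage_type_free_gb + disk_free_gb

# Hypothetical figures, in GB.
print(available_space(10_240, 5_120))  # 15360
print(free_space(3_072, 2_048))        # 5120
```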
2. Select a location to save the image and enter a name for the image in the File name field. 3. Click Save to save the image. Print the Bar Chart Print the chart if you want a paper copy. Steps 1. Right-click the bar chart and select Print. The Page Setup dialog box appears. 2. Select the paper size to print to from the Size drop-down menu. 3. Select the Landscape radio button to allow the entire bar chart to print. 4. Click OK. The Print dialog box appears. 5.
Print the Graph Print the graph if you want a paper copy. Steps 1. Right-click the graph and select Print. The Page Setup dialog box appears. 2. Select the paper size to print to from the Size drop-down menu. 3. Select the Landscape radio button to allow the entire graph to print. 4. Click OK. The Print dialog box appears. 5. Select the printer to use from the Name drop-down menu. 6. Click OK. The graph is printed to the selected printer.
Using the Top 10 Fastest Growing Volumes Plugin The Top 10 Fastest Growing Volumes plugin displays a table that lists the volumes on a Storage Center that are growing at the fastest rate. Use this plugin to monitor the growth of the ten fastest growing volumes on a Storage Center. Figure 28.
Update the List of Threshold Alerts Refresh the list of threshold alerts to see an updated list of alerts. About this task Click Refresh to update the list of alerts. Viewing Detailed Storage Usage Information Detailed storage usage information is available for each Storage Type that is configured for a Storage Center. View Storage Usage by Tier and RAID Type Storage usage by tier and RAID type is displayed for each Storage Type. Steps 1.
2. Click the Storage tab. 3. In the Storage tab navigation pane, expand Storage Type, then select the individual storage type you want to examine. 4. Click the Volumes subtab to view storage usage by volume. Figure 31. Storage Type Volumes Subtab View Historical Storage Usage Allocated space and used space over time is displayed for each Storage Type. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3.
View a Data Progression Pressure Report For each storage type, the data progression pressure report displays how space is allocated, consumed, and scheduled to move across different RAID types and storage tiers. Use the data progression pressure report to make decisions about the types of disks to add to a Storage Center. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3.
Pressure Report Column Description Pressure Down Amount of data scheduled to move to a slower storage tier during the next Data Progression cycle. Indicated by an orange bar and a down arrow in the bar chart. Volume Allocated Amount of usable space for volumes after RAID is applied. Volume Used Amount of volume allocated space that is consumed. Saved vs RAID 10 Amount of space saved by moving less-accessed data to RAID 5 or RAID 6 instead of using RAID 10 for all data.
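The Saved vs RAID 10 figure follows from the RAID overheads described earlier in this guide: RAID 10 mirrors every drive (two full copies), while RAID 5-9 stripes across nine drives (eight data plus one parity). A sketch of that arithmetic, assuming those two levels for simplicity:

```python
# Raw-space cost of storing a given amount of usable data under each RAID level.

def raw_needed_raid10(usable_gb):
    return usable_gb * 2            # RAID 10: every drive is mirrored

def raw_needed_raid59(usable_gb):
    return usable_gb * 9 / 8        # RAID 5-9: one parity drive per eight data drives

def saved_vs_raid10(usable_gb):
    """Raw space saved by keeping less-accessed data on RAID 5-9
    instead of using RAID 10 for all data."""
    return raw_needed_raid10(usable_gb) - raw_needed_raid59(usable_gb)

print(saved_vs_raid10(1000))  # 875.0 GB of raw space saved per 1000 GB usable
```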
● Most Active Report: Displays a table that shows the minimum, maximum, average, and standard deviation values of the historical IO usage data. The Most Active Report tab is displayed only if the selected storage object is one of the following container objects:
○ Volumes or a volume folder
○ Servers or a server folder
○ Remote Storage Centers
○ Disks or disk speed folder
5. To refresh the displayed IO usage data, click Refresh on the IO Usage navigation pane.
6. Select the check boxes of the storage objects to compare from the IO Usage navigation pane. NOTE: The Comparison View cannot compare more than 10 objects at one time. 7. Click Update. The Total IO/Sec and Total MB/Sec charts appear by default and display the total IO usage for writes and reads, in IO/sec and MB/Sec, for the selected storage objects. 8. Select the check boxes of additional charts to display: NOTE: The charts that can be displayed depend on the storage objects that were selected in Step 6.
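The summary statistics that the Most Active Report displays (minimum, maximum, average, and standard deviation of historical IO usage) can be reproduced from exported samples with the standard library. The sample values below are hypothetical:

```python
import statistics

def most_active_summary(samples):
    """Summarize historical IO usage samples (e.g. IO/sec) the way the
    Most Active Report does: min, max, average, standard deviation."""
    return {
        "min": min(samples),
        "max": max(samples),
        "avg": statistics.mean(samples),
        "std dev": statistics.stdev(samples),  # sample standard deviation
    }

# Hypothetical IO/sec samples for a single volume.
print(most_active_summary([120, 150, 130, 160, 140]))
```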
View Current IO Usage Data for a Storage Object Select a specific object in the Charting tab navigation pane to view current IO usage data for the object. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Charting tab. 3. Select a storage object from the Charting navigation pane. 4.
Display the Comparison View on the Charting tab Use the Comparison View to compare current IO usage for storage objects. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Charting tab. 3. Click Select View on the Charting navigation pane. 4. Select Comparison View from the drop-down menu. The options on the Charting navigation pane are replaced with Comparison View options. 5.
Display Alerts on Charts You can configure charts to display the relationships between the reported data and the configured threshold alerts and Storage Center alerts. Steps 1. In the top pane of the Storage Manager Client, click Edit User Settings. The Edit User Settings dialog box appears. 2.
Combine Usage Data into One Chart You can combine IO usage data into a single chart with multiple Y axes. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the IO Usage or Charting tab. 3. Select the Combine Charts check box to combine the IO usage data into a single chart with multiple Y axes. Scale Usage Data in a Chart You can change the scale for MB/Sec, IO/Sec, and Latency. Steps 1.
● To change how often replication usage data is collected, select a different period of time from the Replication Usage drop-down menu. ● To change how often storage usage data is collected, select a different period of time from the Storage Usage drop-down menu. If Daily is selected from the Storage Usage drop-down menu, the time of day that storage usage data is collected can be selected from the Storage Usage Time drop-down menu. 5. Click OK.
8. Click OK. Export I/O Usage Data You can export I/O usage data for the most active volumes, servers, disks, remote Storage Centers, controllers, and fault domains. You can also export Chart I/O usage data for Storage Centers, volumes, servers, disks, controllers, storage profiles, and fault domains. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the IO Usage or Charting tab. 3.
Monitoring Storage Center Hardware Use the Hardware tab of the Storage view to monitor Storage Center hardware. Figure 36.
View Summary Information for a Controller The controller node on the Hardware tab displays summary information for the controller, including name, version, status, and network settings. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Hardware tab. 3. In the Hardware tab navigation pane, select the controller. The right pane displays controller summary information.
4. Select an IO port from the Fibre Channel, iSCSI, or SAS nodes. The Port View tab in the right pane highlights the selected port in the controller diagram. View Fan Status for a Controller The Fan Sensors node on the Hardware tab displays summary and status information for fans in the controller. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Hardware tab. 3.
3. In the Hardware tab navigation pane, expand the Controllers node, expand the node for a specific controller, then click Cache Card. The right pane displays summary and status information for cache card in the controller. Monitoring a Storage Center Disk Enclosure The Hardware tab displays status information for the disk enclosure(s) in a Storage Center. NOTE: For user interface reference information, click Help.
View Alarm Status for an Enclosure The Audible Alarms node on the Hardware tab displays alarm status for the enclosure. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Hardware tab. 3. In the Hardware tab navigation pane, select Audible Alarms. The right pane displays summary information. View Disk Status for an Enclosure The Disks node in the Hardware tab displays the statuses of all disks in the enclosure. Steps 1.
4. Click the Cooling Fan Sensors node. The right pane lists the cooling fan sensors in that enclosure. 5. Select a cooling fan sensor from the Cooling Fans tab. The Fan View tab highlights the selected fan in the enclosure diagram. View IO Module Status for an Enclosure The I/O Modules node on the Hardware tab displays IO module status for the enclosure. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Hardware tab. 3.
View Temperatures for an Enclosure The Temperature Sensor node on the Hardware tab displays temperatures for the enclosure. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Hardware tab. 3. In the Hardware tab navigation pane, expand the Enclosures node, then the node for a specific enclosure. 4. Click Temperature Sensor. The right pane displays temperature sensor information.
Figure 37. SSD Endurance The Endurance Chart column shows a wear gauge that indicates the amount of wear life remaining and when an alert will be sent. The gauge indicators are: ● Red: Fail zone calculated from disk data that estimates when 120 days remain in the life of the disk. An alert is sent when the wear life moves from the green zone to the red zone. ● Green: Safe operating zone.
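The wear-gauge behavior described above amounts to a threshold check: the red zone begins when an estimated 120 days of disk life remain, and an alert fires on the green-to-red transition. A sketch of that logic (the 120-day threshold is from this guide; the functions themselves are illustrative):

```python
RED_ZONE_DAYS = 120  # estimated remaining life at which the red zone begins

def wear_zone(days_remaining):
    """Classify SSD wear life: red at or below 120 estimated days remaining,
    green (safe operating zone) otherwise."""
    return "red" if days_remaining <= RED_ZONE_DAYS else "green"

def crossed_into_red(previous_days, current_days):
    """An alert is sent when wear life moves from the green zone to the red zone."""
    return wear_zone(previous_days) == "green" and wear_zone(current_days) == "red"

print(wear_zone(400))              # green
print(crossed_into_red(130, 115))  # True
```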
3. In the Hardware tab navigation pane, select UPS. The right pane displays summary information. View Summary Information for a UPS Unit that Serves the Storage Center The Hardware tab displays summary information for the UPS units that provide backup power for the Storage Center. Prerequisites A UPS must have been configured for the Storage Center. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Hardware tab. 3.
11 SMI-S The Storage Management Initiative Specification (SMI-S) is a standard interface specification developed by the Storage Networking Industry Association (SNIA). Based on the Common Information Model (CIM) and Web-Based Enterprise Management (WBEM) standards, SMI-S defines common protocols and data models that enable interoperability between storage vendor software and hardware.
● Software ● Thin Provisioning SMI-S Namespace Use the following namespace parameters to access SMI-S. ● Interop namespace - /interop ● Array namespace - /root/compellent Setting Up SMI-S To set up SMI-S, enable SMI-S for the Data Collector, then add the required SMI-S user. HTTPS is the default protocol for the SMI-S provider. Steps 1. SMI-S Prerequisites on page 331 2.
https://data_collector_host_name_or_IP_address:3033/ c. Press Enter. The Unisphere Central login page is displayed. d. Type the user name and password of a Data Collector user with Administrator privileges in the User Name and Password field. e. Click Log In. 2. Click Data Collector. The Data Collector view is displayed. 3. Click the General tab, and then select the Ports subtab. 4. Click Edit. The Edit Port dialog box opens. 5. Select SMI-S Service from the list of ports. 6. Select the Enabled checkbox. 7.
Limitations for SCVMM 2012 Review the following limitations before you use Microsoft SCVMM 2012 to discover the Dell SMI-S provider and Storage Centers. Thin Provisioning The SCVMM 2012 console limits the maximum volume size at creation time to the available capacity of the storage pool. Storage Center thin provisioning does not have this restriction.
Windows Server 2012: Select HKEY_LOCAL_MACHINE→ Software→ Microsoft→ Windows→ CurrentVersion→ Storage Management b. If the DisableHttpsCommonNameCheck entry does not exist, select Edit→ New→ DWORD (32-bit) Value, and then type DisableHttpsCommonNameCheck to create it. c. Double-click DisableHttpsCommonNameCheck. d. In the Value data box, type 1, then click OK. 4. If the server that hosts SCVMM is running Windows Server 2012, disable client certificate checking. a.
b. In the TCP/IP port field, type the connection port of the SMI-S Provider. The default port is 5989. c. Select the Use Secure Socket Layers (SSL) connection check box. d. Click Browse. The Select a Run As Account dialog box opens. e. Select the SMI-S user account that you added to the SMI-S Provider and click OK. By default, Run As account users that are assigned to the Storage Device category are listed.
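Pulling together the SMI-S connection details quoted in this chapter — HTTPS on port 5989 by default, with the /interop and /root/compellent namespaces — the endpoint a WBEM client (for example, pywbem) would be pointed at looks like the following. This is a stdlib-only sketch; no connection is made, and the host name is hypothetical.

```python
# Namespace paths as this guide lists them.
INTEROP_NAMESPACE = "/interop"
ARRAY_NAMESPACE = "/root/compellent"

def smis_endpoint(host, port=5989, use_ssl=True):
    """Build the SMI-S provider URL; HTTPS and port 5989 are the defaults
    described in this chapter."""
    scheme = "https" if use_ssl else "http"
    return f"{scheme}://{host}:{port}"

# Hypothetical Data Collector host.
print(smis_endpoint("dc.example.local"))  # https://dc.example.local:5989
print(ARRAY_NAMESPACE)                    # /root/compellent
```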
12 FluidFS Administration This chapter describes how to use Storage Manager to manage FluidFS clusters for file storage.
FS8600 Scale-Out NAS Terminology The following table defines terminology related to FS8600 scale-out NAS. Term Description Fluid File System (FluidFS) High-performance, scalable file system software installed on NAS controllers. Appliance (NAS appliance) A rack-mounted 2U chassis that contains two hot-swappable NAS controllers in an active-active configuration in a FluidFS cluster. Cache data is mirrored between the paired NAS controllers within the NAS appliance.
Key Features of the Scale-Out NAS The following table summarizes key features of scale-out NAS. Feature Description Shared back-end infrastructure The Storage Center SAN and scale-out NAS leverage the same virtualized disk pool. File management Storage Center SAN and scale-out NAS management and reporting using Storage Manager. High-performance, scale-out NAS Support for a single namespace spanning up to four NAS appliances (eight NAS controllers).
Feature Description Antivirus scanning SMB antivirus scanning offloading using certified third-party, Internet Content Adaptation Protocol (ICAP)-enabled antivirus solutions. Monitoring Built-in performance monitoring and capacity planning. Overview of the FS8600 Hardware Scale-out NAS consists of one to six FS8600 appliances configured as a FluidFS cluster. Each NAS appliance is a rack-mounted 2U chassis that contains two hot-swappable NAS controllers in an active-active configuration.
○ Internal network ○ LAN/client network The following figure shows an overview of the scale-out FS8600 architecture. Figure 39. FS8600 Architecture Storage Center The Storage Center provides the FS8600 scale-out NAS storage capacity; the FS8600 cannot be used as a standalone NAS appliance. Storage Centers eliminate the need to have separate storage capacity for block and file storage.
(client VIPs) on the client network that allow clients to access the FluidFS cluster as a single entity. The client VIP also enables load balancing between NAS controllers, and ensures failover in the event of a NAS controller failure. If client access to the FluidFS cluster is not through a router (in other words, a flat network), define one client VIP per NAS controller. If clients access the FluidFS cluster through a router, define a client VIP for each client interface port per NAS controller.
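The client VIP sizing rule above reduces to a small calculation: one VIP per NAS controller on a flat network, or one VIP per client interface port per NAS controller when clients connect through a router. A sketch, with hypothetical cluster sizes:

```python
def client_vips_needed(nas_controllers, client_ports_per_controller, routed):
    """How many client VIPs to define, per the sizing rule above:
    flat network  -> one VIP per NAS controller;
    routed network -> one VIP per client interface port per NAS controller."""
    if routed:
        return nas_controllers * client_ports_per_controller
    return nas_controllers

# Hypothetical: two NAS appliances (four controllers), two client ports each.
print(client_vips_needed(4, 2, routed=False))  # 4
print(client_vips_needed(4, 2, routed=True))   # 8
```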
Scenario System Status Data Integrity Comments Simultaneous dual-NAS Unavailable controller failure in single NAS appliance cluster Lose data in cache Data that has not been written to disk is lost Sequential dual‑NAS controller failure in multiple NAS appliance cluster, same NAS appliance Unavailable Unaffected Sequential failure assumes enough time is available between NAS controller failures to write all data from the cache to disk (Storage Center or nonvolatile internal storage) Simultaneous
Using the Storage Manager Client or CLI to Connect to the FluidFS Cluster As a storage administrator, you can use either the Storage Manager Client or command-line interface (CLI) to connect to and manage the FluidFS cluster. By default, the FluidFS cluster is accessed through the client network. Connect to the FluidFS Cluster Using the Storage Manager Client Log in to the Storage Manager Client to manage the FluidFS cluster.
Connect to the FluidFS Cluster CLI Through SSH Using a Password Log in to the CLI through SSH to manage the FluidFS cluster. Steps 1. Use either of the following options: ● From Windows using an SSH client, connect to a client VIP. From the command line, enter the following command at the login as prompt: cli ● From a UNIX/Linux system, enter the following command from a prompt: ssh cli@client_vip_or_name 2. Type the FluidFS cluster administrator user name at the login as prompt.
In FluidFS, the management ports listed in the following table do not participate in SMB/NFS communication, but are exposed on the client network by default. When you enable secured management, you can expose the management ports on a management subnet only. Service Port Web Services 80 Secure Web Services 443 FTP 44421 FTP (Passive) 44430–44439 SSH 22 Storage Manager communication 35451 Secured management can be enabled only after the system is deployed.
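When writing firewall rules for a secured management subnet, the port table above can be expressed as a simple lookup. The port numbers are taken from this guide; the helper function is only a sketch:

```python
# FluidFS management ports, per the table above.
MANAGEMENT_PORTS = {
    "Web Services": [80],
    "Secure Web Services": [443],
    "FTP": [44421],
    "FTP (Passive)": list(range(44430, 44440)),  # 44430-44439 inclusive
    "SSH": [22],
    "Storage Manager communication": [35451],
}

def services_using_port(port):
    """Return the management services that listen on a given port."""
    return [name for name, ports in MANAGEMENT_PORTS.items() if port in ports]

print(services_using_port(44435))  # ['FTP (Passive)']
```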
Change the Secured Management Subnet Interface Change the interface on which the secured management subnet is located. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Cluster Connectivity, and then click the Management Network tab. 4. In the Management Network panel, click Edit Settings. The Modify Administrative Network dialog box opens. 5.
NOTE: A secured management subnet has a single management VIP. 6. Click OK. Change the NAS Controller IP Addresses for the Secured Management Subnet To change the NAS controller IP addresses for the secured management subnet when, for example, you go from an unsecured to a secured environment or you physically relocate your equipment: Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3.
Managing the FluidFS Cluster Name The FluidFS cluster name is a unique name used to identify the FluidFS cluster in Storage Manager and the name that clients use to access the FluidFS cluster. This name is also the FluidFS cluster NetBIOS name. If clients access the FluidFS cluster by name (instead of IP address), you must add an entry in the DNS server that associates the FluidFS cluster name to the FluidFS cluster client VIPs.
9. Click OK. Managing the System Time Setting the system time accurately is critical for the proper functioning of the system.
Example: ftp://Administrator@172.22.69.32:44421/ You will be prompted for the FluidFS cluster administrator password. Enable or Disable the FTP Server You can enable or disable the FTP server. The FTP server must be enabled if you want to manually upload service packs without using Storage Manager. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Cluster Maintenance. 4. Click the Support tab. 5.
Enable or Disable SNMP Traps Enable or disable SNMP traps by category (NAS Volumes, Access Control, Performance & Connectivity, Hardware, System, or Auditing). For enabled SNMP traps, specify the severity of events for which to send SNMP traps. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Cluster Maintenance. 4. Click the SNMP tab. 5. In the Events to Send SNMP Traps panel, click Edit Settings.
4. Click the SNMP tab. 5. In the SNMP Trap panel, click Modify SNMP Trap. The Modify SNMP Trap Settings dialog box opens. 6. Change the SNMP trap system location or contact: ● To specify a description for the location of the FluidFS cluster, type a location in the System Location field. ● To specify the name of the SNMP contact person, type a contact name in the System Contact field. 7. Click OK. Add or Remove SNMP Trap Recipients Add or remove hosts that receive the FluidFS cluster-generated SNMP traps.
● Write-Through – System is serving clients using SMB and NFS protocols, but is forced to operate in journaling mode. This mode of operation might have an impact on write performance. It is recommended when, for example, you have repeated electric power failures. ● No Service – System is not serving clients using SMB or NFS protocols and allows limited management capabilities. This mode must be selected before replacing a NAS appliance.
Assign or Unassign a Client to a NAS Controller You can permanently assign one or more clients to a particular NAS controller. For effective load balancing, do not manually assign clients to NAS controllers, unless specifically directed to do so by Dell Technical Support. Assigning a client to a NAS controller disconnects the client’s connection. Clients will then automatically reconnect to the assigned NAS controller. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3.
● When a NAS controller that was down becomes available Rebalancing client connections disconnects all client connections. Clients will then automatically reconnect to the FluidFS cluster. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Cluster Connectivity. 4. In the Filters panel, click Rebalance. The Rebalance Clients dialog box opens. 5. Click OK.
Reboot a NAS Controller Only one NAS controller can be rebooted in a NAS appliance at a time. Rebooting a NAS controller disconnects client connections while clients are being transferred to other NAS controllers. Clients will then automatically reconnect to the FluidFS cluster. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the Hardware tab. 3. In the Appliances panel, select a controller. 4. Click Reboot. The Reboot dialog box opens. 5. Click OK.
Validate Storage Connections Validating storage connections gathers the latest server definitions on the FluidFS cluster and makes sure that matching server objects are defined on the Storage Centers providing the storage for the FluidFS cluster. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the Hardware tab. 3. In the toolbar, click Actions→ Storage Centers→ Validate Storage Connections. The Validate Storage Connections dialog box opens. 4. Click OK.
cluster but within the site) to be used for name resolution. A DNS suffix specifies a DNS domain name without the host part of the name (for example, west.example.com rather than computer1.west.example.com). If clients access the FluidFS cluster by name, you must add an entry in the DNS server that associates the FluidFS cluster name to the FluidFS cluster client VIPs.
Managing Static Routes To minimize hops between routers, static routes are recommended in routed networks when the FluidFS cluster has multiple direct paths to various routers. Static routes allow you to configure the exact paths through which the system communicates with various clients on a routed network. Consider the network shown in the following figure. The system can have only one default gateway. Assume that router X is designated as the default gateway.
8. In the Gateway IP Address field, type the gateway IP address through which to access the subnet (for example, 192.0.2.30). 9. Click OK. Change the Gateway for a Static Route Change the gateway through which to access the subnet for a static route. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Cluster Connectivity. 4. Click the Client Network tab. 5. In the Static Route panel, click Configure Default Gateway.
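The routing decision behind static routes, as described above, can be sketched as a longest-prefix lookup: destinations matching a configured static route use that route's gateway, and everything else falls back to the default gateway (router X in the example). The subnets and gateway addresses below are illustrative only:

```python
import ipaddress

# Hypothetical static route table: destination subnet -> gateway.
STATIC_ROUTES = {
    ipaddress.ip_network("192.0.2.0/24"): ipaddress.ip_address("192.0.2.30"),
}
DEFAULT_GATEWAY = ipaddress.ip_address("198.51.100.1")

def next_hop(destination):
    """Pick the gateway for a destination: longest-prefix match over the
    static routes, falling back to the single default gateway."""
    dest = ipaddress.ip_address(destination)
    matches = [net for net in STATIC_ROUTES if dest in net]
    if matches:
        return STATIC_ROUTES[max(matches, key=lambda n: n.prefixlen)]
    return DEFAULT_GATEWAY

print(next_hop("192.0.2.77"))   # 192.0.2.30 (via the static route)
print(next_hop("203.0.113.9"))  # 198.51.100.1 (via the default gateway)
```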
Create a Client Network Create a client network on which clients will access SMB shares and NFS exports. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Cluster Connectivity. 4. Click the Client Network tab. 5. In the Client Network panel, click Create Client Network. The Create Client Network dialog box opens. 6. In the Netmask or Prefix Length field, type a netmask or prefix for the client network. 7.
Change the Client VIPs for a Client Network Change the client VIPs through which clients will access SMB shares and NFS exports. Steps 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. In the File System tab navigation pane, expand Tenants and select a tenant. 4. Select Client Accessibility. 5. In the right pane, select the DNS and Public IPs tab. In the Public IPs pane, click Edit Settings. The Edit Public IPs Settings dialog box appears. 6. To add a client VIP: a.
View the Client Network MTU View the current maximum transmission unit (MTU) of the client network. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Cluster Connectivity. 4. Click the Network Interfaces tab. The Client Interface panel displays the MTU. Change the Client Network MTU Change the maximum transmission unit (MTU) of the client network to match your environment. Steps 1. In the Storage view, select a FluidFS cluster. 2.
About Multichannel Multichannel is a feature of the SMB 3.0 protocol that allows the client to bind a single session to multiple connections. Multichannel provides the following benefits: Increased Throughput – The file server can transmit more data simultaneously by using multiple connections over high-speed network adapters or multiple network adapters.
Managing iSCSI SAN Connectivity iSCSI SAN subnets (Storage Center fault domains), or "fabrics," are the network connections between the FluidFS cluster and the Storage Center. The SAN network consists of two subnets, named SAN and SANb. The FluidFS cluster iSCSI SAN configuration can be changed after deployment if your network changes. Add or Remove an iSCSI Port Add a Storage Center iSCSI control port for each connected subnet (Storage Center fault domain). At least one iSCSI port must remain configured.
Change the VLAN Tag for an iSCSI Fabric Change the VLAN tag for an iSCSI fabric. When a VLAN spans multiple switches, the VLAN tag specifies which ports and interfaces to send broadcast packets to. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the NAS Pool tab. 3. Click the Network tab. 4. In the iSCSI Fabrics panel, select an appliance and then click Edit Settings. The Modify Settings for Fabric SAN dialog box opens. 5. In the VLAN Tag field, type the new VLAN tag for the iSCSI fabric.
NOTE: ● Local and external users can be used simultaneously. ● If you configure Active Directory and either NIS or LDAP, you can set up mappings between the Windows users in Active Directory and the UNIX and Linux users in LDAP or NIS to allow one set of credentials to be used for both types of data access. Default Administrative Accounts The FluidFS cluster has the following built-in administrative accounts, each of which serves a particular purpose.
2. Click the File System tab. 3. In the File System view, select Cluster Maintenance. 4. Click the Support tab. 5. In the Local Support Access panel, click Modify Local Support Access Settings. The Modify Local Support Access Settings dialog box opens. 6. Enable or disable the support account: ● To enable the support account, select the Support Account (“support”) checkbox. ● To disable the support account, clear the Support Account (“support”) checkbox. 7. Click OK.
CLI Account The cli account is used with an administrator account to access the command-line interface of the FluidFS cluster. Default Local User and Local Group Accounts The FluidFS cluster has the following built-in local user and local group accounts, each of which serves a particular purpose.
View Administrators View the current list of administrator accounts. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Cluster Maintenance. 4. Click the Mail & Administrators tab. The Administrators panel displays the current list of administrators. Add an Administrator Add an administrator account to manage the FluidFS cluster using the Storage Manager Client and CLI.
6. Select a volume administrator from the list and click Add. 7. In a system with multitenancy enabled, if the tenant administrators should not be allowed to access the NAS volume, clear the Tenant Administrators Access Enabled checkbox. 8. Click OK. Change the Permission Level of a Cluster Administrator NAS cluster administrators can manage any aspect of the FluidFS cluster. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3.
Delete an Administrator Delete an administrator account when it is no longer used for FluidFS cluster management. The built-in Administrator account cannot be deleted. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Client Accessibility. 4. Click the Local Users and Groups tab. 5. Select an administrator and click Delete. The Delete dialog box opens. 6. Click OK.
When prompted to authenticate to access an SMB share, local users must use the following format for the user name: client_vip_or_name\local_user_name. Add a Local User Add a local user account. Prerequisites The local group to which the local user will be assigned must have been created already. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Client Accessibility. 4. Click the Local Users and Groups tab. 5.
6. To add a secondary local group to assign the local user to: a. In the Additional Groups area, click Add. The Select Group dialog box opens. b. From the Domain drop-down list, select the domain to assign the local group to. c. In the Group field, type either the full name of the local group or the beginning of the local group name. d. (Optional) Configure the remaining local group search options as needed. These options are described in the online help.
Change a Local User Password Change the password for a local user account. Steps 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab, expand Environment and select Authentication. 3. Select a local user and click Change Password. The Change Password dialog box appears. 4. In the Password field, type a new password for the local user.
Add a Local Group Add a local group containing local users, remote users, or remote user groups. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Client Accessibility. 4. Click the Local Users and Groups tab. 5. In the Local Group area, click Create. The Create Local Group dialog box opens. 6. In the Local Group field, type a name for the local group. 7.
g. Click OK. Change the Users Assigned to a Local Group Modify which local users, remote users, or remote user groups are assigned to a local group. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Client Accessibility. 4. Click the Local Users and Groups tab. 5. Select a group and click Edit Settings. The Edit Local User Group Settings dialog box opens. 6. To assign local users to the local group: a. b. c. d.
To change the maximum number of search results to return, select the maximum number of search results from the Max Results drop-down list. 11. Click OK. Delete a Local Group Delete a local group if it is no longer used. Prerequisites Before a local group can be deleted, you must remove its members. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Client Accessibility. 4. Click the Local Users and Groups tab. 5.
Directory domain users that are in nested groups or OUs encounter Access Denied errors, and users that are not in nested OUs or groups are permitted access. ● The Active Directory server and the FluidFS cluster must use a common source of time. ● You must configure the FluidFS cluster to use DNS. The DNS servers you specify must be the same DNS servers that your Active Directory domain controllers use. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3.
Disable Active Directory Authentication Remove the FluidFS cluster from an Active Directory domain if you no longer need the FluidFS cluster to communicate with the directory service. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Client Accessibility. 4. Click the Directory Services tab. 5. Click Leave . The Leave Active Directory Domain dialog box opens. 6. Click OK. View Open Files You can view up to 1,000 open files. Steps 1.
Reduce the Number of Subtrees for Searches FluidFS allows you to narrow the number of subtrees in an LDAP tree used for searching. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Client Accessibility. 4. Click the Directory Services tab. 5. In the NFS User Repository (NIS or LDAP) area, click Edit Settings. The Edit External User Database dialog box opens. 6. Select the LDAP radio button. 7.
Change the LDAP Base DN The LDAP base distinguished name represents where in the directory to begin searching for users. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Client Accessibility. 4. Click the Directory Services tab. 5. Click Edit Settings in the NFS User Repository section. The Edit External User Database dialog box opens. 6. In the Base DN field, type an LDAP base distinguished name.
Enable or Disable Authentication for the LDAP Connection Enable authentication for the connection from the FluidFS cluster to the LDAP server if the LDAP server requires authentication. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Client Accessibility. 4. Click the Directory Services tab. 5. Click Edit Settings in the NFS User Repository section. The Edit External User Database dialog box opens. 6.
Managing NIS In environments that use Network Information Service (NIS), you can configure the FluidFS cluster to authenticate clients using NIS for access to NFS exports. Enable or Disable NIS Authentication Configure the FluidFS cluster to communicate with the NIS directory service. Adding multiple NIS servers ensures continued authentication of users in the event of a NIS server failure.
● To add a NIS server, type the host name or IP address of a NIS server in the NIS Servers text field and click Add. ● To remove a NIS server, select a NIS server and click Remove. 7. Click OK. Change the Order of Preference for NIS Servers If the FluidFS cluster cannot establish contact with the preferred server, it attempts to connect to the remaining servers in order. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3.
Managing the User Mapping Policy Configure the FluidFS cluster mapping policy to automatically map all users or to allow mappings between specific users only. Automatically Map Windows and UNIX/Linux Users Automatically map all Windows users in Active Directory to the identical UNIX/Linux users in LDAP or NIS, and map all UNIX/Linux users to the identical Windows users. Mapping rules will override automatic mapping. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3.
To change the maximum number of search results to return, select the maximum number of search results from the Max Results drop-down list. d. Click Search. e. Select a user from the search results. f. Click OK. 8. In the NFS User area, click Select User. The Select User dialog box opens. 9. Select a UNIX/Linux user: a. From the Domain drop-down list, select the domain to which the user is assigned. b. In the User field, type either the full name of the user or the beginning of the user name. c.
FluidFS NAS Volumes, Shares, and Exports This section contains information about managing the FluidFS cluster from the client perspective. These tasks are performed using the Storage Manager Client. Managing the NAS Pool When configuring a FluidFS cluster, you specify the amount of raw Storage Center space to allocate to the FluidFS cluster (NAS pool). The maximum size of the NAS pool is: ● 2 PB with one Storage Center.
3. In the right pane, click Actions → Storage Centers → Expand NAS Pool. The Expand NAS Pool dialog box opens. 4. In the NAS Pool Size field, type a new size for the NAS pool in gigabytes (GB) or terabytes (TB). NOTE: The new size is bound by the size displayed in the Minimum New Size field and the Maximum New Size field. 5. Click OK. If the container has more than one storage type, a drop-down list will appear. 6.
Enable or Disable the NAS Pool Unused Space Alert You can enable or disable an alert that is triggered when the remaining unused NAS pool space is below a specified size. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the Summary tab. 3. In the Summary panel, click Edit NAS Pool Settings. The Set NAS Pool Space Settings dialog box opens. 4. Enable or disable the NAS pool unused space alert: ● To enable the NAS pool unused space alert, select the Unused Space Alert checkbox.
Replication and Disaster Recovery – The cluster administrator has the ability to create a partner relationship between the tenants on the source system and the tenants on the remote system. Enable Multitenancy System administrators can enable multitenancy using Dell Storage Manager or the CLI. When multitenancy is enabled, the system administrator can no longer see or control tenants’ contents. A tenant’s content can be managed only by the tenant administrator. Steps 1.
Multitenancy – Tenant Administration Access A tenant administrator manages the content of his or her tenants. A tenant can be managed by multiple tenant administrators, and a tenant administrator can manage multiple tenants. A tenant administrator can create or delete tenants, delegate administration per tenant, and view space consumption of all tenants. About this task This procedure grants tenant administrator access to a user. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab.
NOTE: Users must be added to the administrators list before they can be made a tenant administrator or a volume administrator. Only the following users can be administrators: ● Users in the Active Directory domain or UNIX domain of the default tenant ● Local users of the default tenant or any other tenant Create a New Tenant Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Tenants. 4. Click Create Tenant.
Create Tenant – Step 4 Steps 1. In the Create Tenant window, click Limits. NOTE: Setting any of these limits is optional. 2. Select the Restrict Tenant Capacity Enabled checkbox. 3. Type a tenant capacity limit in gigabytes (GB). 4. Select the Restrict Number of NAS Volumes in Tenant Enabled checkbox. 5. Type the maximum number of NAS volumes for this tenant. 6. Select the Restrict Number of NFS Exports in Tenant Enabled checkbox. 7. Type the maximum number of NFS exports for this tenant. 8.
Managing NAS Volumes A NAS volume is a subset of the NAS pool in which you create SMB shares and/or NFS exports to make storage space available to clients. NAS volumes have specific management policies controlling their space allocation, data protection, security style, and so on. You can either create one large NAS volume consuming the entire NAS pool or divide the NAS pool into multiple NAS volumes. In either case you can create, resize, or delete these NAS volumes.
Thick provisioning allows you to allocate storage space on the Storage Centers statically to a NAS volume (no other volumes can take the space). Thick provisioning is appropriate if your environment requires guaranteed space for a NAS volume. Managing NAS Volume Space FluidFS maintains file metadata in i-node objects. FluidFS i-nodes are 4 KB in size (before metadata replication) and can contain up to 3.5 KB of file data. When a new virtual volume is created, a portion of it is allocated as i-node area.
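The i-node figures above have a practical consequence worth spelling out: a file whose data fits within the inline limit is stored entirely inside its i-node. The sketch below uses the sizes quoted in the text (4 KB i-nodes, up to 3.5 KB of inline file data); the accounting is simplified for illustration and ignores metadata replication.

```python
# Back-of-the-envelope sketch using the figures in the text: a FluidFS
# i-node is 4 KB (before metadata replication) and can hold up to
# 3.5 KB of file data inline, so sufficiently small files consume no
# separate data blocks. Simplified for illustration only.
INODE_SIZE_KB = 4.0
INLINE_DATA_LIMIT_KB = 3.5

def stored_in_inode(file_size_kb: float) -> bool:
    """True if the file's data fits inside its i-node."""
    return file_size_kb <= INLINE_DATA_LIMIT_KB

print(stored_in_inode(2.0))   # small file: data lives in the i-node
print(stored_in_inode(10.0))  # larger file: data blocks allocated separately
```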
Department | Security Style | Snapshots | Replication | NDMP Backup | Number of SMB/NFS Clients | Read/Write Mix | Hourly Change % of Existing Data
Administration and Finance | NTFS | No | No | Weekly | 10 | 50/50 | None
Broadcast | Mixed | No | No | Weekly | 10 | 90/10 | None
Press | NTFS | Daily | No | No | 5 | 10/90 | 5%
Marketing | NTFS | Daily | Yes | No | 5 | 50/50 | None
An average read/write mix is 20/80. An average hourly change rate for existing data is less than 1 percent. Example 1 Create NAS volumes based on departments.
Term | Description
Unused space | Storage space that is currently physically available to the NAS volume. To calculate the amount of available space for a NAS volume, use: (unused NAS volume reserved space) + (NAS volume unreserved space).
Overcommitted space | Storage space allotted to a thin-provisioned volume over and above the actually available physical capacity of the NAS pool.
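The unused-space definition above is a simple sum, shown here as a worked example. All figures are made up for illustration; they do not come from the guide.

```python
# Worked example of the unused-space calculation for a NAS volume:
#   unused space = (unused NAS volume reserved space)
#                + (NAS volume unreserved space)
# The numbers below are hypothetical.
reserved_gb = 500          # space reserved for the NAS volume
reserved_used_gb = 120     # portion of the reservation already consumed
unreserved_gb = 250        # unreserved space the volume can still draw on

unused_reserved_gb = reserved_gb - reserved_used_gb   # 380
unused_gb = unused_reserved_gb + unreserved_gb
print(unused_gb)   # 630
```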
Configuring NAS Volumes Configure NAS volumes to manage the volumes and volume alerts. Optimize NAS Volumes for Use as VMware vSphere Datastores When you configure a NAS volume to use VM-consistent (virtual machine consistent) snapshots, each snapshot creation (scheduled, manual, replication, NDMP, and so on) automatically creates an additional snapshot on the VMware server.
View NAS Volumes View the current NAS volumes. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, expand NAS Volumes and then select a NAS volume. The NAS Volumes panel displays all the current NAS volumes. Create a NAS Volume Create a NAS volume to allocate storage that can be shared on the network. When a NAS volume is created, default values are applied. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3.
2. Click the File System tab. 3. In the File System view, expand NAS Volumes and then select a NAS volume. 4. In the NAS Volumes panel, click Edit Settings. The Edit NAS Volume Settings dialog box opens. 5. Click Advanced Settings. 6. In the Update File Access Time area, select the interval at which file-access timestamps are updated: Always, Every Five Minutes, Once an Hour, or Once a Day. 7. Click OK.
4. To enable SCSI Unmap, select the Enable SCSI Unmap (TRIM) checkbox. 5. Click OK. Enable or Disable a NAS Volume Used Space Alert You can enable an alert that is triggered when a specified percentage of the NAS volume space has been used. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, expand NAS Volumes and then select a NAS volume. 4. In the NAS Volumes panel, click Edit Settings. The Edit NAS Volume Settings dialog box opens. 5.
● To enable a NAS volume snapshot space consumption threshold alert, select the Snapshot Space Alert checkbox. ● To disable a NAS volume snapshot space consumption threshold alert, clear the Snapshot Space Alert checkbox. 7. If a NAS volume snapshot space consumption threshold alert is enabled, in the Snapshot Space Threshold field, type a number (from 0 to 100) to specify the percentage of used NAS volume snapshot space that triggers an alert. 8. Click OK.
Rename a NAS Volume Folder Rename a NAS volume folder. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, expand NAS Volumes and then select a NAS volume. 4. Click Edit Settings. The Edit NAS Volume Folder Settings dialog box opens. 5. In the Name field, type a new name for the folder. 6. Click OK. Change the Parent Folder for a NAS Volume Folder Change the parent folder for a NAS volume folder. Steps 1.
Cloning a NAS Volume Cloning a NAS volume creates a writable copy of the NAS volume. This copy is useful to test against non-production data sets in a test environment without impacting the production file system environment. Most operations that can be performed on NAS volumes can also be performed on clone NAS volumes, such as resizing, deleting, and configuring SMB shares, NFS exports, snapshots, replication, NDMP, and so on.
Create a NAS Volume Clone Cloning a NAS volume creates a writable copy of the NAS volume. Prerequisites ● The snapshot from which the clone NAS volume will be created must already exist. ● Data reduction must be disabled on the base volume. ● The snapshot space consumption threshold alert must be disabled on the base volume. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, expand NAS Volumes and then select a NAS volume. 4.
Configuring SMB Shares View, add, modify, and delete SMB shares. View All SMB Shares on the FluidFS Cluster View all current SMB shares for the FluidFS cluster. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select SMB Shares. The SMB Shares panel displays the current shares. View SMB Shares on a NAS Volume View the current SMB shares for a NAS volume. Steps 1. Click the Storage view and select a FluidFS cluster. 2.
Click Select Folder. The Select Folder dialog box opens and displays the top-level folders for the NAS volume. Navigate to the folder in which to create the new folder and click Create Folder. The Create Folder dialog box opens. In the Folder Name field, type a name for the folder, then click OK to close the Create Folder dialog box. Select the new folder and click OK. ○ To drill down to a particular folder and view the subfolders, double-click the folder name.
2. Click the File System tab. 3. In the File System view, select SMB Shares. 4. In the SMB Shares panel, select an SMB share and click Edit Settings. The Edit Settings dialog box opens. 5. Click Content. 6. Enable or disable access-based share enumeration: ● To enable access-based share enumeration, select the Access Based Enumeration checkbox. ● To disable access-based share enumeration, clear the Access Based Enumeration checkbox. 7. Click OK.
Enable or Disable SMB Message Encryption SMBv3 adds the capability to make data transfers secure by encrypting data in flight. This encryption protects against tampering and eavesdropping attacks. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Client Accessibility. 4. Click the Protocols tab. 5. In the SMB Protocol panel, click Edit Settings. The Edit Settings dialog box opens. 6.
Using SMB Home Shares The FluidFS cluster enables you to create a share for a user that is limited to that user. For example, when a user "jsmith" connects to the FluidFS cluster, jsmith will be presented with any available general shares, as well as a share labeled "jsmith" that is visible only to jsmith. Automatic Creation of Home Share Folders Automatic creation of home share folders automatically creates folders for users when they log in for the first time.
NOTE: A folder name must be less than 100 characters long and cannot contain the following characters: >, ", \, |, ?, and * ● To specify an existing folder, type the path to the folder in the Initial path field. ● To browse for an existing folder: Click Select Folder. The Select Folder dialog box opens and displays the top-level folders for the NAS volume. Locate and select the folder, and then click OK. ○ To drill down to a particular folder and view the subfolders, double-click the folder name.
Change the Owner of an SMB Share Using an Active Directory Domain Account The Active Directory domain account must have its primary group set as the Domain Admins group to change the owner of an SMB share. These steps might vary slightly depending on which version of Windows you are using. Steps 1. Open Windows Explorer and in the address bar type: \\client_vip_or_name. A list of all SMB shares is displayed. 2. Right-click the required SMB share (folder) and select Properties.
NOTE: Do not attempt to create an SMB share using MMC. Use MMC only to set SLPs. Automatic ACL to UNIX Word 777 Mapping When files with Windows ACLs are displayed from NFS clients, the FluidFS mapping algorithm shows a translated UNIX access mode. Perfect translation is not possible, so a heuristic is used to translate from the rich Windows ACL to the 9 bits of the UNIX mode word. However, when certain special SIDs are used inside an ACL (for example, a creator-owner ACE), the mapping can be inaccurate.
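The kind of heuristic described above can be sketched as follows. The actual FluidFS translation algorithm is not published; this simplified version, with invented permission classes, only illustrates how broad Windows rights might collapse onto the 9-bit UNIX mode word for owner, group, and other.

```python
# Illustrative ACL-to-mode sketch (NOT the FluidFS algorithm): map a
# simplified set of Windows-style rights onto rwx bits per identity
# class, producing the familiar octal mode word.
READ, WRITE, EXECUTE = 0b100, 0b010, 0b001

def ace_to_bits(allowed: set[str]) -> int:
    """Map a simplified ACE's rights to rwx bits."""
    bits = 0
    if "read" in allowed:
        bits |= READ
    if "write" in allowed:
        bits |= WRITE
    if "execute" in allowed:
        bits |= EXECUTE
    return bits

def acl_to_mode(owner: set[str], group: set[str], other: set[str]) -> str:
    """Render the translated mode word in octal, e.g. '777' for full control."""
    return f"{ace_to_bits(owner)}{ace_to_bits(group)}{ace_to_bits(other)}"

full = {"read", "write", "execute"}
print(acl_to_mode(full, full, full))       # everyone has full control -> 777
print(acl_to_mode(full, {"read"}, set()))  # owner rwx, group r, other none -> 740
```

As the text notes, real ACLs contain special SIDs and inheritance flags that such a per-class mapping cannot represent, which is why the translated mode can be inaccurate.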
Displaying Security Audit Events Storage Manager displays a centralized view of the security audit events generated in volumes where SACL events are configured. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab and select Client Activity. 3. Click the SACL Auditing Events tab. 4. In the Events panel, select which security audit events that you want to display.
Accessing an SMB Share Using Windows Microsoft Windows offers several methods for connecting to SMB shares. To access an SMB share, the client must be a valid user (local or remote) and provide a valid password. Option 1 - net use Command About this task Run the net use command from a command prompt: net use drive_letter: \\client_vip_or_name\smb_share_name Option 2 - UNC path Use the UNC path. Steps 1. From the Start menu, select Run. The Run window opens. 2.
Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select SMB Shares. 4. In the SMB Shares panel, select an SMB share and click Edit Settings. The Edit SMB Share Settings dialog box opens. 5. Click Content. 6. Enable or disable showing files with names starting with a dot: ● To enable showing files with names starting with a dot, select the Show files with name starting with a dot checkbox.
Accessing an SMB Share Using UNIX or Linux Mount the SMB share from a UNIX or Linux operating system using one of the following commands: # mount -t smbfs -o username=user_name,password=password //client_vip_or_name/smb_share_name /local_folder # smbmount //client_vip_or_name/smb_share_name /local_folder -o username=user_name Managing NFS Exports Network File System (NFS) exports provide an effective way of sharing files across a UNIX or Linux network with authorized clients.
Configuring NFS Exports View, add, modify, and delete NFS exports, and control the maximum NFS protocol level that the cluster will support. View All NFS Exports on a FluidFS Cluster View all current NFS exports for a FluidFS cluster. Steps 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. In the File System tab navigation pane, select NFS Exports. The NFS exports are displayed in the right pane.
○ To view the parent folders of a particular folder, click Up. 7. (Optional) Configure the remaining NFS export attributes as needed. These options are described in the online help. ● Type descriptive text for the benefit of administrators in the Notes field. This text is not displayed to NFS clients. ● To change the client access settings for the NFS export, use the Add, Remove, and Edit buttons. 8. Click OK.
Change the Client Access Permissions for an NFS Export Change the permissions for clients accessing an NFS export. Steps 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. In the File System tab navigation pane, select NFS Exports. 4. In the right pane, select an NFS export and click Edit Settings. The Edit NFS Exports Settings dialog box appears. 5. To add access permissions for clients accessing the NFS export: a. Click Add.
Delete an NFS Export If you delete an NFS export, the data in the shared directory is no longer shared but it is not removed. Steps 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. In the File System tab navigation pane, select NFS Exports. 4. In the right pane, select an NFS export and click Delete. The Delete dialog box appears. 5. Click OK. View or Select the Latest NFS Version Supported NFS v4 is enabled or disabled on a systemwide basis.
● NAS virtual volume backup, restore, replication, and snapshot operations are not supported on the remote target data. These operations are supported only on the redirection folders (including the redirection data information) that reside inside the local volume data. ● After the NFSv4 or SMB client is redirected to the remote server and establishes the remote connection, the client continues further communication with the remote server.
Using Symbolic Links A symbolic link is a special type of file that contains a reference to another file or directory in the form of an absolute or relative path and that affects path name resolution. Symbolic links operate transparently for most operations: programs that read or write to files named by a symbolic link behave as if operating directly on the target file.
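The transparency described above is easy to demonstrate: reading through a symbolic link returns the target file's contents. The sketch below runs against a temporary local directory, not a FluidFS share, and requires an operating system that supports symlinks.

```python
# Demonstration of symbolic-link transparency: a program that reads
# through a symlink behaves as if it were operating on the target file.
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "target.txt")
    link = os.path.join(d, "link.txt")
    with open(target, "w") as f:
        f.write("hello")
    os.symlink(target, link)          # create the symbolic link

    with open(link) as f:             # read transparently through the link
        print(f.read())               # prints "hello"
    print(os.path.islink(link))       # True: the link itself is a special file
```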
● Data reduction does not support base clone and cloned volumes. Table 17. Data Reduction Enhancements in FluidFS v6.0 or later
FluidFS v6.0 or later | FluidFS v5.0 or earlier
Data reduction is enabled on a per-NAS-cluster basis. | Data reduction is enabled on a per-NAS-volume basis.
Data reduction supports deduplication of files that are created or reside on different domains. | Data reduction is applied per NAS controller, that is, the
● To enable data reduction on the FluidFS cluster, select the Enable Data Reduction Optimization checkbox. ● To disable data reduction on the FluidFS cluster, clear the Enable Data Reduction Optimization checkbox. 5. Enter the Data Reduction Optimization Start Time. 6. Enter the number of hours to run data reduction in the Data Reduction Optimization Runtime field. 7. Click OK. Enable Data Reduction on a NAS Volume Data reduction is enabled on a per NAS volume basis.
Change the Candidates for Data Reduction for a NAS Volume Change the number of days after which data reduction is applied to files that have not been accessed or modified for a NAS volume. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, expand NAS Volumes and select a NAS volume. 4. In the NAS Volume panel, click Edit Settings. The Edit NAS Volume Settings dialog box opens. 5.
2. Click the File System tab. 3. In the File System view, expand NAS Volumes and select a NAS volume. The NAS Volume Status panel displays the data reduction savings. FluidFS Data Protection This section contains information about protecting FluidFS cluster data. Data protection is an important and integral part of any storage infrastructure. These tasks are performed using the Storage Manager Client.
Managing Snapshots Snapshots are read-only, point-in-time copies of NAS volume data. Storage administrators can restore a NAS volume from a snapshot if needed. In addition, clients can easily retrieve files in a snapshot, without storage administrator intervention. Snapshots use a redirect-on-write method to track NAS volume changes. That is, snapshots are based on a change set.
Managing Scheduled Snapshots You can create a schedule to generate snapshots regularly. To minimize the impact of snapshot processing on system performance, schedule snapshots during off-peak times. Snapshots created by a snapshot schedule are named using this format _YYYY_MM_DD__HH_MM Create a Snapshot Schedule for a NAS Volume Create a NAS volume snapshot schedule to take a scheduled point-in-time copy of the data. Steps 1. In the Storage view, select a FluidFS cluster. 2.
2. Click the File System tab. 3. In the File System view, expand NAS Volumes and select a NAS volume. 4. In the NAS Volume Status panel, click the Snapshots & Clones tab. 5. Select a snapshot schedule and click Edit Settings. The Edit Settings dialog box opens. 6. Specify the retention policy. NOTE: Replication using current snapshot – This option of the “archive” retention policy affects setting up a new replication of a volume.
5. Select a snapshot and click Edit Settings. The Edit Snapshot Settings dialog box opens. 6. Specify the retention policy: ● To retain the snapshot indefinitely, clear the Snapshot Expiration Enable checkbox. ● To expire the snapshot in the future, select the Snapshot Expiration Enable checkbox and specify a day and time on which to expire the snapshot. 7. Click OK. Delete a Snapshot Delete a snapshot if you no longer need the point-in-time copy of the data. Steps 1.
● The FluidFS cluster deletes any snapshots that were created after the snapshot from which you restored the NAS volume. Snapshots created before the snapshot from which you restored the NAS volume are not affected. ● Current SMB clients of the NAS volume are automatically disconnected. ● Current NFS clients of the NAS volume receive stale NFS file handle error messages. You must unmount and then remount the NFS exports. CAUTION: The restore operation cannot be undone.
Managing NDMP The FluidFS cluster supports Network Data Management Protocol (NDMP), which is an open standard protocol that facilitates backup operations for network attached storage, including FluidFS cluster NAS volumes. NDMP should be used for longer-term data protection, such as weekly backups with long retention periods.
Table 20. NDMP Agent Characteristics (continued)
Functionality – Supported Range
● Concurrent NDMP sessions – Up to 10
● DMA user-name length – 1–63 bytes (accepts Unicode)
● DMA password length – 1–32 characters
● Maximum number of include paths for an NDMP job – 32
● Maximum number of exclude paths for an NDMP job – 32
NOTE: Your environment should allow ICMP (ping) traffic between the FluidFS controllers’ private IP addresses (not the access VIPs) and the backup server.
Table 21. Supported NDMP Environment Variables (continued) Variable Name Description Default is added to the backup stream during incremental backup so that the recovery operation can handle files and directories deleted between the incremental backups. During backup, if this variable is set, an additional directory listing is added to the backup data stream. Because of the additional process required, this addition could affect the backup data stream size and performance.
than just backing up the first instance of the hard link files. In this case, a selective restore will always have the file data. The disadvantage of this option is that backups might take longer and more space is required to back up a data set with hard link files. Backing Up NAS Volume Data Using NDMP The FluidFS cluster does not use a dedicated IP address for backup operations; any configured client network address can be used. Data is sent over Ethernet.
Environment Variable – Description – Used In – Default Value
● TYPE – Specifies the type of backup and restore application. The valid values are: dump (NDMP server generates inode-based file history) and tar (NDMP server generates file-based file history). Used in: Backup and Restore. Default: dump.
● FILESYSTEM – Specifies the path to be used for the backup. The path must be a directory. Used in: Backup. Default: None.
● LEVEL – Specifies the dump level for the backup operation. The valid values are 0 to 9.
● BASE_DATE – Specifies whether a token-based incremental backup is used, as opposed to backups using the LEVEL environment variable. The valid values are:
○ -1 – Specifies that token-based backup is disabled
○ 0 – Specifies that a token-based backup is performed. After the backup completes, a token can be retrieved by using the DUMP_DATE environment variable. This token can then be passed in a subsequent backup as the value of BASE_DATE.
Used in: Backup. Default: N.
2. Click the File System tab. 3. In the File System view, click Cluster Connectivity. 4. Click the Backup tab. 5. In the NDMP pane, click Change Backup User Password. The Change Backup User Password dialog box opens. 6. In the Password field, type an NDMP password. The password must be at least seven characters long and contain three of the following elements: a lowercase character, an uppercase character, a digit, or a special character (such as +, ?, or *). 7.
(Optional) In addition, some DMA servers require more information, such as the host name of the FluidFS cluster, OS type, product name, and vendor name.
● Host name of the FluidFS cluster, which uses the following format: controller_number.FluidFS_cluster_name
● OS type – Dell Fluid File System
● Product – Compellent FS8600
● Vendor – Dell
Most backup applications automatically list the available NAS volumes to back up. Otherwise, you can manually type in the NAS volume path.
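The backup user password rule in step 6 above (at least seven characters, containing three of the four character classes) can be pre-checked before it is typed into the dialog box. The following function is only a sketch of that rule, not a Dell-provided tool:

```shell
# Sketch: check a candidate NDMP backup-user password against the
# stated rules (>= 7 characters, and at least three of: lowercase,
# uppercase, digit, special character). Illustrative only.
check_ndmp_password() {
  local pw="$1" classes=0
  [ "${#pw}" -ge 7 ] || { echo invalid; return 1; }
  case "$pw" in *[a-z]*) classes=$((classes+1));; esac
  case "$pw" in *[A-Z]*) classes=$((classes+1));; esac
  case "$pw" in *[0-9]*) classes=$((classes+1));; esac
  case "$pw" in *[!a-zA-Z0-9]*) classes=$((classes+1));; esac
  if [ "$classes" -ge 3 ]; then echo valid; else echo invalid; return 1; fi
}
check_ndmp_password 'backup1' || true   # prints "invalid": only two classes
check_ndmp_password 'Backup+1'          # prints "valid": four classes, length 8
```

Checking the candidate locally avoids repeated round trips through the Change Backup User Password dialog when the password is rejected.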
Viewing NDMP Jobs and Events All NDMP jobs and events can be viewed using Storage Manager. View Active NDMP Jobs View all NDMP backup and restore operations being processed by the FluidFS cluster. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, click Cluster Connectivity. 4. Select Backup. The NDMP Sessions area displays the NDMP jobs.
Replication Scenario – Description
● Fast backup and restore – Maintains full copies of data for protection against data loss, corruption, or user mistakes
● Remote data access – Applications can access mirrored data in read-only mode, or in read-write mode if NAS volumes are promoted or cloned
● Online data migration – Minimizes downtime associated with data migration
● Disaster recovery – Mirrors data to remote locations for failover during a disaster
Configuring replication is a three-step process: ● Add a repl
Figure 45.
After a partner relationship is established, replication between the partners can be bidirectional. One system could hold target NAS volumes for the other system as well as source NAS volumes to replicate to that other system. A replication policy can be set up to run according to a set schedule or on demand. Replication management flows through a secure SSH tunnel from system to system over the client network.
Change the Local or Remote Networks for a Replication Partnership Change the local or remote replication network or IP address for a replication partnership. NAS volumes can be replicated only between tenants that are mapped on the local and remote FluidFS clusters. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, click Replications. 4. Click the Remote Clusters tab, select a remote cluster, and then click Edit Settings.
● The maximum number of active outgoing replications is 10. If more than 10 replications are active, they are queued.
● The maximum number of active incoming replications is 100. If more than 100 replications are active, they are queued.
● The maximum number of replication partners is 100.
● The maximum number of replicated NAS volumes or containers (source and target) on a cluster is 1024.
● The maximum number of replication schedules per system is 1024.
7. Select the Enable QoS checkbox and then select a predefined QoS node from the drop-down list. 8. Click OK. Change Replication Throttling To disable replication throttling on a QoS node: Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, click Replications. 4. Click the Replication NAS Volumes tab, select a replication, and then right-click. 5. Select Replication Actions. 6. From the drop-down list, select Edit Replication QoS. 7.
When using cascaded replication for replications that are not alike, a replication can be limited when the different replication is not a cascaded replication.
Prerequisites ● The target NAS volume must be promoted to a standalone NAS volume. ● You must remove replication schedules for the replication. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, expand NAS Volumes and select a NAS volume. 4. Click the Replications tab. 5. In the Replication Status area, click Delete. The Delete dialog box opens. 6. Click OK.
2. Click the File System tab. 3. In the File System view, expand NAS Volumes and select a NAS volume. 4. Click the Replication tab. 5. Select a replication schedule and click Edit Settings. The Edit Replication Schedule Settings dialog box opens. 6. Specify when to run replication: ● To run replication based on a period of time, select the Replicate every checkbox and type the frequency in minutes, hours, days, or weeks.
6. Click OK. Monitoring Replication Progress and Viewing Replication Events The progress of replication operations and events related to replication can be viewed using Storage Manager. Monitor Replication Progress Monitor the progress of all replication operations being processed for the FluidFS cluster. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Replications. 4. Click the Replications tab.
Demote a Target NAS Volume Demote the target NAS volume to resume the original replication operations. When you demote a target NAS volume, all data written to the recovery NAS volume while it was temporarily promoted will be lost. You can demote a target NAS volume only from the source FluidFS cluster. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, expand NAS Volumes and select a NAS volume. 4. Click the Replications tab. 5.
Setting Up and Performing Disaster Recovery This section contains a high-level overview of setting up and performing disaster recovery. In these instructions, Cluster A is the source FluidFS cluster containing the data that must be backed up and Cluster B is the target FluidFS cluster, which backs up the data from source cluster A. Prerequisites ● Cluster B is installed, but has no NAS volumes configured. ● Cluster A and Cluster B are at the same FluidFS version.
Ensure that the DNS server on Cluster B is the same DNS server as, or in the same DNS farm as, the DNS server of Cluster A. Existing client connections might break and might need to be re-established. You must unmount and then remount the NFS exports on the clients. b. (Single NAS volume failovers) Manually update the DNS entry for the NAS volume that was failed over.
Ensure that the DNS server on Cluster A is the same DNS server as, or in the same DNS farm as, the DNS server of Cluster B. Existing client connections might break and might need to be re-established. You must unmount and then remount the NFS exports on the clients. b. (Single NAS volume failovers) Manually update the DNS entry for the NAS volume that was failed over.
● Backup power supplies
● Fans
● Power supplies
● Temperature of the components
View the Status of the Interfaces View the status of the interfaces in a NAS controller. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the Hardware tab. 3. In the Hardware view, expand Appliances to select an appliance ID and a controller ID. 4. Select Interfaces. The status of each interface is displayed.
View the Status of the Power Supplies View the status of the power supplies in a NAS appliance. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the Hardware tab. 3. In the Hardware view, expand Appliances and select an appliance ID. 4. Select Power Supply. The status of each power supply is displayed. Viewing the Status of FluidFS Cluster Services Storage Manager displays the status of services configured on a FluidFS cluster (such as Active Directory, LDAP, DNS, and NTP). Steps 1.
Viewing FluidFS Cluster Storage Usage Storage Manager displays a line chart that shows storage usage over time for a FluidFS cluster, including total capacity, unused reserved space, unused unreserved space, and used space. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the Summary tab. The Summary view displays the FluidFS cluster storage usage.
FluidFS Maintenance This section contains information about performing FluidFS cluster maintenance operations. These tasks are performed using the Storage Manager Client. Connecting Multiple Data Collectors to the Same Cluster You can have multiple data collectors connected to the same FluidFS cluster. About this task To designate the Primary data collector and/or whether it receives events: Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the Summary tab. 3.
Remove a FluidFS Cluster From Storage Manager Remove a FluidFS cluster if you no longer want to manage it using Storage Manager. For example, you might want to move the FluidFS cluster to another Storage Manager Data Collector. Steps 1. Click the Storage view and select a FluidFS cluster. 2. Click the Summary tab. 3. In the right pane, click Delete. The Delete dialog box appears. 4. Click OK.
Move a FluidFS Cluster into a FluidFS Cluster Folder Move a FluidFS cluster into a folder to group it with other FluidFS clusters. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the Summary tab. 3. In the right pane, click Move. The Select Folder dialog box appears. 4. Select a parent folder. 5. Click OK. Delete a FluidFS Cluster Folder Delete a FluidFS cluster folder if it is not being used. Prerequisites The folder must be empty. Steps 1.
b. In the IP Address field, type an IP address for the NAS controller.
c. Click OK.
d. Repeat the preceding steps for each NAS controller.
e. To specify a VLAN tag, type a VLAN tag in the VLAN Tag field. When a VLAN spans multiple switches, the VLAN tag is used to specify to which ports and interfaces to send broadcast packets.
f. Click Next.
7. (iSCSI only) To configure the IP addresses for SANb / eth31, use the Configure IP Addresses for NAS Controller iSCSI HBAs page.
○ If client access to the FluidFS cluster is not through a router (in other words, a flat network), define one client VIP per FluidFS cluster. ○ If clients access the FluidFS cluster through a router, define a client VIP for each client interface port per NAS controller. ● New NAS controller IP addresses are available to be added to the new NAS appliance. Verify that there are two additional IP addresses available per NAS appliance.
c. Click OK. 11. (Optional) Configure the remaining client network attributes as needed. ● To change the netmask of the client network, type a new netmask in the Netmask field. ● To specify a VLAN tag, type a VLAN tag in the VLAN Tag field. 12. Click Next. After you are finished configuring each client network, the Connectivity Report page displays. NOTE: Adding the appliance to the cluster can take approximately 15 minutes. 13.
2. Click the Hardware tab. 3. In the Hardware view, expand Appliances to select an appliance ID and a NAS controller ID. 4. Click Detach. The Detach dialog box opens. 5. Click OK. The Detach dialog box displays the progress of the detach process. If you close the dialog box, the process will continue to run in the background. The NAS controller is detached when the state of the NAS controller changes to Detached. (Click the Hardware tab→ Appliances→ Controller to display the state of the controller.)
b. Push the controller handle down until the controller disengages from the appliance. c. Use the controller handle to pull the controller out of the appliance. 5. Insert the new NAS controller in the NAS appliance chassis.
a. Ensure that the controller cover is closed.
b. Align the controller with the appropriate slot in the appliance.
c. Push the controller into the appliance until the controller seats into place.
d. Push the handle toward the front of the appliance until it locks.
6.
3. Configure email notifications to receive email notifications for available FluidFS service pack upgrades.
a. In the top pane of the Storage Manager Client, click Edit User Settings. The Edit User Settings dialog box appears.
b. Click the Manage Events tab.
c. Select the checkbox for the event.
d. Click OK.
Install a Service Pack to Update the FluidFS Software Use the Upgrade FluidFS Cluster wizard to update the FluidFS software.
Step – Description
● Verify Package Integrity – The checksum of the downloaded FluidFS service pack is re-computed to verify the integrity of the service pack.
● Upload Package to FluidFS – The FluidFS service pack is uploaded to a NAS controller in the FluidFS cluster.
● Register Package – Storage Manager waits for FluidFS to register that the package has arrived and make the service pack available for installation.
8. Click Finish when you are ready to install the service pack.
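The Verify Package Integrity step above boils down to recomputing a checksum and comparing it against the value recorded for the package. Storage Manager performs this check internally; the sketch below only illustrates the idea, and the use of SHA-256 and a temporary file are our assumptions:

```shell
# Sketch of the "Verify Package Integrity" idea: recompute a file's
# checksum and compare it with the value recorded earlier. SHA-256 and
# the temporary file are illustrative assumptions.
pkg=$(mktemp)
printf 'example service pack payload' > "$pkg"
expected=$(sha256sum "$pkg" | awk '{print $1}')   # checksum recorded at download time

# An intact file recomputes to the same value.
actual=$(sha256sum "$pkg" | awk '{print $1}')
[ "$actual" = "$expected" ] && echo "package integrity verified"

# A corrupted or truncated package no longer matches.
printf 'X' >> "$pkg"
corrupted=$(sha256sum "$pkg" | awk '{print $1}')
[ "$corrupted" = "$expected" ] || echo "checksum mismatch: do not install"
rm -f "$pkg"
```

The same comparison is useful whenever a service pack is staged manually before an upgrade, for example after copying it between management stations.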
NAS Volume Configuration Backups Whenever a change in the NAS volume's configuration is made, it is automatically saved in a format that allows you to restore it later. The configuration is stored and encrypted in the .clusterConfig folder, which is located in the NAS volume's root folder. This folder can be backed up, either individually, or with the NAS volume's user data, and later restored. The configuration of a NAS volume can be restored on another NAS volume on the same system or on another system.
● The storage administrator can copy the .clusterConfig folder to a NAS volume in the system from its backup or from another system using an NDMP restore. When using a backup from another system, the restore operation works only if the saved configuration was taken from a system using the same FluidFS version. ● The .clusterConfig folder is automatically copied to target NAS volumes during replication.
Restore Local Groups Local groups can be restored by restoring the configuration stored on the most current NAS volume in the FluidFS cluster and restoring it on the same system or on another system. About this task When you restore the local groups configuration, it overwrites and replaces the existing configuration. Clients that are currently connected to the FluidFS cluster are disconnected. Clients will then automatically reconnect. Steps 1. Ensure the.
FS Series VAAI Plugin The VAAI plugin allows ESXi hosts to offload some specific storage-related tasks to the underlying FluidFS appliances.
Plugin Verification To check if the VAAI plugin is installed in an ESXi host, type the following command in the ESXi console:
# esxcli software vib list | grep Dell_FluidFSNASVAAI
When running versions earlier than FluidFS v5.0.300109, a positive reply should return:
Dell_FluidFSNASVAAI 1.1.0-301 DELL VMwareAccepted 2015-05-17
When running versions 5.0.300109 or later, a positive reply should return:
Dell_FluidFSNASVAAI 1.1.
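When checking many hosts, the reply can be parsed rather than read by eye. This sketch extracts the VIB version from a canned sample line (the one quoted above); a live check would pipe the `esxcli` output instead:

```shell
# Sketch: extract the Dell_FluidFSNASVAAI VIB version from
# "esxcli software vib list" output. Uses a canned sample line;
# on a real ESXi host you would pipe the esxcli command instead.
sample_vib_list() {
  echo 'Dell_FluidFSNASVAAI  1.1.0-301  DELL  VMwareAccepted  2015-05-17'
}
vaai_version=$(sample_vib_list | awk '/^Dell_FluidFSNASVAAI/ {print $2}')
if [ -n "$vaai_version" ]; then
  echo "VAAI plugin installed, version $vaai_version"
else
  echo "VAAI plugin not installed"
fi
```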
FluidFS Troubleshooting This section contains information about troubleshooting problems with the FluidFS cluster. These tasks are performed using the Storage Manager Client. Viewing the Event Log A FluidFS cluster generates events when normal operations occur and also when problems occur. Events allow you to monitor the FluidFS cluster and to detect and solve problems. Events are logged to the Event Log. View the Event Log View events contained in the Event Log. Steps 1.
3. In the Search field, type the text to search for. 4. Select search parameters as needed: ● To make the search case-sensitive, select the Match Case check box. ● To prevent the search from wrapping, clear the Wrap check box. NOTE: By default, when a search reaches the bottom of the list and Find Next is clicked, the search wraps around to the first match in the list. When a search reaches the top of the list and Find Previous is clicked, the search wraps around to the last match in the list.
Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System tab navigation pane, select Cluster Maintenance. 4. In the right pane, click the Support tab. 5. In the Diagnostic Tools area, click Run Diagnostic. The Run Diagnostic wizard opens. 6. Select the type of diagnostic to run. 7. Select the secondary type (authentication or file access). 8. Click Next. The Run Diagnostics dialog box opens. 9. Select a Tenant from the drop-down list. 10.
2. Press and release the recessed power button at the back of the NAS controller to turn on the NAS controller. 3. When you see the F10 = Launch Dell Embedded Diagnostics Module prompt, press F10. The ePSA Pre-boot System Assessment window is displayed, listing all devices detected in the system. The diagnostics starts executing the tests on all the detected devices. 4. After you are finished running the embedded system diagnostics, select Exit to exit the diagnostics and reboot the NAS controller.
d. In the Password field, type the iBMC password. e. Click OK. The iBMC Properties page appears. 3. Launch the iBMC virtual KVM. a. In the navigation pane, expand vKVM & vMedia and click Launch. b. In the right pane, click Launch Java KVM Client. The Video Viewer appears and displays the FluidFS cluster console. Troubleshooting Common Issues This section contains probable causes of and solutions to common problems encountered when using a FluidFS cluster.
● DNS might not be configured. ● NTP might not be configured. Workaround When configuring the FluidFS cluster to connect to an Active Directory domain: 1. Ensure that you use a FQDN and not the NetBIOS name of the domain or IP address of the domain controller. 2. Ensure that the user has permissions to add systems to the domain. 3. Use the correct password. 4. Configure DNS. 5. Ensure that the FluidFS cluster and Active Directory server use a common source of time.
2. Use the password configured in Storage Manager for the NDMP client while setting up the NDMP backup/restore in your backup application. If the backup application can log into the FluidFS cluster, but no NAS volumes are available for backup, verify that the FluidFS cluster has NAS volumes created on it. Troubleshoot SMB Issues This section contains probable causes of and solutions to common SMB problems.
Workaround 1 Update to FluidFS MR640 or later. Workaround 2 Disable SMB v3 and use SMB v2: 1. Start Dell Storage Manager. 2. In the Storage view, select a FluidFS cluster. 3. Click the File System tab. 4. In the File System view, select Client Accessibility. 5. Click the Protocols tab. 6. Select Edit SMB Protocol Settings. 7. Clear the check box next to SMBv3 Protocol. 8. Click OK to save settings. SMB ACL Corruption Description SMB ACLs are corrupt.
SMB Delete On Close Denial Description Files are deleted while they are in use. Cause If multiple users are working on the same file and one user deletes the opened file, it is marked for deletion, and is deleted after it is closed. Until then, the file appears in its original location but the system denies any attempt to open it. Workaround Notify the client who tried to open the file that the file has been deleted.
Verify that you can access the problematic SMB share using a Windows client:
1. Click Run.
2. Enter the client access VIP and share name: \\<client access VIP>\<share name>
SMB Share Name Truncated In Event After Mapping SMB Share Description After a client maps an SMB share, the following event is generated and the SMB share name is truncated in the event. In this example, the SMB share name is share1_av. SMB client connection failure. Un-available share \\172.22.151.106\share1_a Cause This is a
Troubleshoot NFS Issues This section contains probable causes of and solutions to common NFS problems. Cannot Mount NFS Export Description When attempting to mount an NFS export, the mount command fails due to various reasons such as: ● Permission denied. ● FluidFS cluster is not responding due to port mapper failure - RPC timed out or input/output error. ● FluidFS cluster is not responding due to program not registered. ● Access denied. ● Not a directory.
% showmount -e
Export list for :
/abc 10.10.10.0
/xyz 10.10.10.0
If the NFS export is available, review the NFS export name spelling in the relevant mount command on the client. It is recommended to copy and paste the NFS export name from the showmount output to the mount command. NFS File Access Denied Description This event is issued when an NFS client does not have enough permissions for the file on a NAS volume.
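The showmount check described above lends itself to scripting: filter the export list for the exact path before attempting the mount. The export list in the function below is a canned sample mirroring the example above; a live check would run `showmount -e <client VIP>` instead:

```shell
# Sketch: confirm that a given NFS export path appears in showmount
# output before mounting it. The export list here is a canned sample;
# replace sample_exports with a real "showmount -e <client VIP>" call.
sample_exports() {
  cat <<'EOF'
Export list for cluster:
/abc 10.10.10.0
/xyz 10.10.10.0
EOF
}
export_exists() {
  # $1 is the exact export path to look for
  sample_exports | awk -v p="$1" '$1 == p { found = 1 } END { exit !found }'
}
export_exists /abc && echo "/abc is exported"
export_exists /typo || echo "/typo not found: check the export name spelling"
```

Matching on the whole first field (rather than a substring grep) avoids false positives such as /abc matching /abcd.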
Cause This error is usually the outcome of a communication error between the FluidFS cluster and the NIS/LDAP server. It can be a result of a network issue, directory server overload, or a software malfunction. Workaround Repeat the following process for each configured NIS/LDAP server, each time leaving just a single NIS/LDAP server in use, starting with the problematic server. 1. Inspect the NIS/LDAP server logs and see whether the reason for the error is reported in the logs. 2.
NFS Write To Read-Only NAS Volume Description A client tries to modify a file on a read-only NAS volume. Cause A NAS volume is set to read-only when it is the target of a replication. The most frequent reason for this event is either: ● The client meant to access the target system for read purposes, but also tries to modify a file by mistake. ● The client accesses the wrong system due to similarity in name/IP address.
Cause It is impossible to change only the file owner ID to UID if the original file ownership is SID/GSID. Workaround To change the file ownership to UNIX style ownership, set UID and GID at the same time. Problematic SMB Access From a UNIX/Linux Client Description A UNIX/Linux client is trying to mount a FluidFS cluster SMB share using SMB (using /etc/fstab or directly using smbmount).
RX and TX Pause Warning Messages Description The following warning messages might be displayed when Storage Manager reports connectivity in a Not Optimal state: Rx_pause for eth(x) on node1 is off. Tx_pause for eth(x) on node 1 is off. Cause Flow control is not enabled on the switch(es) connected to a FluidFS cluster controller. Workaround See the switch vendor's documentation to enable flow control on the switch(es).
Replication Target is Not Optimal Description Replication between the source NAS volume and the target NAS volume fails because the target NAS volume is not optimal. Cause Replication fails because the file system of the target NAS volume is not optimal. Workaround Check the system status of the target system to understand why the file system is not optimal. The replication continues automatically after the file system recovers.
Workaround Contact technical support to resolve this issue. Replication Target Does Not Have Enough Space Description Replication between the source NAS volume and target NAS volume fails because there is not enough space in the target NAS volume. Cause Replication fails because there is not enough space in the target NAS volume. Workaround Increase the space of the target NAS volume.
Troubleshoot System Issues This section contains probable causes of and solutions to common system problems. NAS System Time Is Wrong Description Scheduled tasks are running at the wrong times. The date and time of Event Log messages is wrong. Cause ● The time on the FluidFS cluster is incorrect. ● No NTP server is defined for the FluidFS cluster. ● The NTP server servicing the FluidFS cluster is either down or has stopped providing NTP services.
Workaround If a user frequently needs to perform a cross-protocol security related activity, split the data into separate NAS volumes based on the main access protocol. Attach Operation Fails Description The operation to attach the NAS controller to the FluidFS cluster fails. Workaround ● Connect a keyboard and monitor to the NAS controller that failed the attach operation, and view the error message to determine why the attach operation failed.
13 Remote Storage Centers and Replication QoS A remote Storage Center is a Storage Center that is configured to communicate with the local Storage Center over the Fibre Channel and/or iSCSI transport protocols. Replication Quality of Service (QoS) definitions control how bandwidth is used to send replication and Live Volume data between local and remote Storage Centers.
● If you intend to use Challenge Handshake Authentication Protocol (CHAP) authentication for iSCSI replication traffic, the iSCSI fault domains that are used for replication on each Storage Center have CHAP enabled. About this task NOTE: PS Groups do not support Live Volume. Steps 1. Click the Storage view. 2. In the Storage pane, select a Storage Center or PS Group. 3. Open the Configure iSCSI Connection wizard. ● From a Storage Center: a. Click the Storage tab. b.
Related tasks Enable Bidirectional CHAP for iSCSI Replication in a Fault Domain on page 271 Remove an iSCSI Connection to a Remote Storage Center If no replications or Live Volumes are defined for a remote storage system, the iSCSI connection to the remote storage system can be removed. Prerequisites The storage system(s) for which you want to configure iSCSI connections must be added to Storage Manager. Steps 1. Click the Storage view. 2. In the Storage pane, select a Storage Center. 3.
7. When you are finished, click OK. Rename a QoS Definition Use the Edit Settings dialog box to rename a QoS Definition. Steps 1. Click the Replications & Live Volumes view. 2. Click the QoS Nodes tab, then select the QoS definition. 3. In the right pane, click Edit Settings. The Edit Replication QoS dialog box appears. 4. In the Name field, type a name for the QoS definition. 5. Click OK.
NOTE: If you select Blocked for a time range, no data is transferred during that period for all replications, Live Volumes, and Live Migrations that are associated with the QoS node. This can cause synchronous replications to become unsynchronized. Live Migrations that use only blocked QoS nodes cannot be completed. b. Limit bandwidth for other time ranges as needed. 6. When you are finished, click OK.
14 Storage Center Replications and Live Volumes A replication copies volume data from one Storage Center to another Storage Center to safeguard data against local or regional data threats. A Live Volume is a replicating volume that can be mapped and active on a source and destination Storage Center at the same time. To perform replications, a Remote Instant Replay (Replication) license must be applied to the source and destination Storage Centers.
Replication Types There are two replication types: asynchronous and synchronous. Asynchronous replication periodically copies snapshot data to the destination volume after a snapshot is frozen. Synchronous replication writes data to both the source and destination volumes simultaneously to make sure they are synchronized at all times. Asynchronous Replication Asynchronous replication copies snapshots from the source volume to the destination volume after they are frozen.
NOTE: When you enable replication deduplication, the Storage Center creates a secondary 'Delta' volume. This secondary volume adds to the overall volume memory usage and therefore will reduce the amount of configurable volume space that can be deployed. The additional volume memory usage affects the overall System Scalability Guidelines that are documented in the Storage Center Release Notes.
○ Replication 1: Storage Center A → Storage Center B ○ Replication 2: Storage Center A → Storage Center C ● Cascade mode: A source volume is replicated in series to multiple Storage Centers. Example: Two replications are created in series: ○ Replication 1: Storage Center A → Storage Center B ○ Replication 2: Storage Center B → Storage Center C Topology Limitations for Volumes Associated with Multiple Replications The following limitations apply to volumes that are associated with multiple replications.
Simulate a Replication Run a synchronous replication simulation to verify bandwidth requirements and optimal data movement. Steps 1. Click the Storage view. 2. In the Storage pane, select the Storage Center that hosts the volume for which you want to simulate replication. 3. In the Summary tab, click Actions, then select Replication→ Simulate Replicate Volumes. ● If one or more QoS definitions exist, the Create Simulation Replication wizard appears.
Replicating Volumes Create a replication to copy a volume from one Storage Center to another Storage Center to safeguard data against local or regional data threats. Create a Single Replication Create a single replication to copy one volume from a Storage Center to another Storage Center. Prerequisites The Replication Requirements on page 502 must be met. Steps 1. Click the Storage view. 2. In the Storage pane, select the Storage Center that hosts the volume you want to replicate. 3. Click the Storage tab.
NOTE: If the volume is a replication destination, Replication QoS settings are enforced. If the volume is a Live Volume secondary, the Replication QoS settings are not enforced. 3. Select the Storage Center that hosts the volumes you want to replicate, then click Next. The wizard advances to the next page. 4. Select the remote Storage Center to which you want to replicate the volumes, then click Next. ● The wizard advances to the next page.
a. From the navigation pane, select the view volume. b. Click Replicate One Time Copy of Volume. The Create Replication wizard appears. c. Select a destination Storage Center. d. Click Next. e. Modify the replication options as needed. For more information on creating a replication, see Create a Single Replication. f. Click Finish. 4. Shut down the servers mapped to the source volume. 5. Unmap servers mapped to the source volume. 6.
Change the Synchronization Mode for a Synchronous Replication The synchronization mode for a synchronous replication can be changed with no service interruption. The replication temporarily becomes unsynchronized when the synchronization mode is changed. Prerequisites The source and destination Storage Centers must be running version 6.5 or later. Steps 1. Click the Replications & Live Volumes view. 2. On the Replications tab, select the replication, then click Edit Settings.
Configure a Replication to Write Data to the Lowest Tier at the Destination The Replicate Storage To Lowest Tier option forces all data written to the destination volume to the lowest storage tier configured for the volume. By default, this option is enabled for asynchronous replications. Prerequisites The replication must be asynchronous. The Replicate Storage To Lowest Tier option is not available for synchronous replications. Steps 1. Click the Replications & Live Volumes view. 2.
Convert a Replication to a Live Volume If servers at both the local and remote site need to write to a volume that is currently being replicated, you can convert a replication to a Live Volume. Prerequisites ● The Live Volume requirements must be met. ● If the replication is synchronous, the source and destination Storage Centers must be running version 6.5 or later. Steps 1. Click the Replications & Live Volumes view. 2. On the Replications tab, select the replication, then click Convert to Live Volume.
3. (Optional) When you are finished, you can revert to the default view by clicking Select All in the Source Storage Centers pane. Filter Replications by Destination Storage Center To reduce the number of replications that are displayed on the Replications & Live Volumes view, you can filter the replications by destination Storage Center. Steps 1. Click the Replications & Live Volumes view. 2.
View IO/sec and MB/sec Charts for a Replication When a replication is selected, the IO Reports subtab displays the Replication IO/Sec and Replication MB/Sec charts. About this task The charts contain performance data for the replication of a volume from the primary Storage Center to the secondary Storage Center. Steps 1. Click the Replications & Live Volumes view. 2. On the Replications tab, select a replication. 3. In the bottom pane, click the IO Reports tab.
Managing Replications Between PS Series Groups and Storage Centers This section includes information for managing replications between PS Series groups and Storage Centers. Create a Replication From a PS Group to a Storage Center Create a replication from a PS Group to a Storage Center to set up a replication relationship. After setting up the replication, replicate a volume from a PS Group to a Storage Center using a replication schedule or Replicate Now.
Edit a Cross-Platform Replication Edit a cross-platform replication to change the settings of the replication. Settings vary based on which platform hosts the source volume. Steps 1. Click the Replications & Live Volumes view. 2. In the Replications tab, select a replication. 3. Click Edit Settings. The Edit Replication Settings dialog box appears. 4. Modify the settings. NOTE: For more information on the options in the dialog box, click Help. 5. Click OK.
Steps 1. Click the Storage view. 2. In the Storage pane, select the Storage Center that hosts the volume you want to replicate. 3. Click the Storage tab. 4. In the Storage tab navigation tree, select the volume you want to replicate. 5. In the right pane, click Replicate Volume. ● If one or more QoS definitions exist, the Create Replication wizard appears. ● If a QoS definition has not been created, the Create Replication QoS wizard appears.
Create a Daily Replication Schedule A daily replication schedule determines how often a PS Series group replicates data to the destination volume at a set time or interval on specified days. Steps 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Storage tab. 4. From the Storage tab navigation pane, select a volume. The volume must be the source of a replication relationship. 5. Click Create Schedule. The Create Schedule dialog box opens. 6.
2. In the Storage pane, select a PS Group. 3. Click the Storage tab. 4. From the Storage tab navigation pane, select a volume. The volume must be the source of a replication relationship. 5. From the Schedules tab, select the replication schedule to edit. 6. Click Edit. The Edit Schedule dialog box appears. 7. Modify the schedule settings as needed. NOTE: For more information on the schedule settings, click Help. 8. Click OK.
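A daily replication schedule like the ones created and edited above boils down to computing the next run times from a time of day and a set of weekdays. The sketch below is illustrative only; the function and its parameters are assumptions, not the PS Series scheduling API.

```python
from datetime import datetime, timedelta

# Sketch: upcoming run times for a daily schedule that fires at a
# set time on specified weekdays (0 = Monday), as described above.
def next_daily_runs(after, hour, minute, weekdays, count=2):
    runs = []
    candidate = after.replace(hour=hour, minute=minute,
                              second=0, microsecond=0)
    while len(runs) < count:
        if candidate > after and candidate.weekday() in weekdays:
            runs.append(candidate)
        candidate += timedelta(days=1)
    return runs

# Monday 2021-05-03 12:00; schedule: 01:00 on Mondays and Wednesdays.
runs = next_daily_runs(datetime(2021, 5, 3, 12, 0), 1, 0, {0, 2})
assert runs == [datetime(2021, 5, 5, 1, 0), datetime(2021, 5, 10, 1, 0)]
```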
Storage Center Live Volumes A Live Volume is a replicating volume that can be mapped and active on a source and destination Storage Center at the same time. While both Storage Centers can accept writes, when a server writes to the destination volume, the writes are redirected to the source volume before being replicated back to the destination.
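The write-redirection behavior described above can be modeled in a few lines. This is a toy sketch with invented names, not Storage Center internals: it only shows that a write arriving at either side always lands on the primary first, with replication bringing the secondary up to date.

```python
# Toy model of Live Volume write handling: writes to the secondary
# are forwarded to the primary, then replicated back to the secondary.
class LiveVolumeModel:
    def __init__(self):
        self.primary = []
        self.secondary = []

    def write(self, data, side="primary"):
        # Writes arriving at either side always land on the primary first.
        self.primary.append(data)
        # Replication then brings the secondary up to date.
        self.secondary = list(self.primary)

lv = LiveVolumeModel()
lv.write("block-1", side="primary")
lv.write("block-2", side="secondary")  # redirected to the primary
assert lv.primary == lv.secondary == ["block-1", "block-2"]
```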
Synchronous Replication on page 501 Live Volume Icon The Live Volume icon appears on the Storage tab of the Storage view to differentiate Live Volumes from regular volumes and replicated volumes. NOTE: To determine whether a Live Volume is primary or secondary from the Storage tab, select the Live Volume, then review the Live Volume Attributes section under the Summary subtab. Live Volume Roles There are two roles for Live Volumes: primary and secondary.
Live Volume After Swap Role In the following diagram, a role swap has occurred so the secondary Storage Center is on the left and the primary Storage Center is on the right. Figure 48. Example Live Volume Configuration After Swap Role 1. Server 2. Server IO request to secondary volume (forwarded to primary Storage Center by secondary Storage Center) 3. Secondary volume 4. Live Volume replication over Fibre Channel or iSCSI 5. Primary volume 6.
● Min Secondary Percent Before Swap (%) Automatic Failover for Live Volumes With Automatic Failover applied, the secondary Live Volume will automatically be promoted to primary in the event of a failure. After the primary Live Volume comes back online, Automatic Restore optionally restores the Live Volume relationship. Live Volume Automatic Failover Requirements The following requirement must be met to enable Automatic Failover on a Live Volume.
Figure 49. Step One 2. The secondary Storage Center cannot communicate with the primary Storage Center. 3. The secondary Storage Center communicates with the tiebreaker and receives permission to activate the secondary Live Volume. 4. The secondary Storage Center activates the secondary Live Volume. Figure 50. Step Four NOTE: When the primary Storage Center recovers, Storage Center prevents the Live Volume from coming online.
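The failover decision in the steps above reduces to a simple quorum rule. The sketch below is an illustration with invented names, not Storage Center logic: the secondary activates only when it has lost contact with the primary and the tiebreaker grants permission, which prevents split-brain.

```python
# Sketch of the Automatic Failover decision described above.
def secondary_should_activate(primary_reachable, tiebreaker_grants):
    return (not primary_reachable) and tiebreaker_grants

assert secondary_should_activate(False, True) is True
assert secondary_should_activate(True, True) is False    # primary still up
assert secondary_should_activate(False, False) is False  # no tiebreaker consent
```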
1. The primary Storage Center recovers from the failure. Figure 51. Step One 2. The primary Storage Center recognizes that the secondary Live Volume is active as the primary Live Volume. 3. The Live Volume on the secondary Storage Center becomes the primary Live Volume. 4. The Live Volume on the primary Storage Center becomes the secondary Live Volume. Figure 52.
Managed Replications for Live Volumes A managed replication allows you to replicate a primary Live Volume to a third Storage Center, protecting against data loss in the event that the site where the primary and secondary Storage Centers are located goes down. When a Live Volume swap role occurs, the managed replication follows the primary volume to the other Storage Center.
Managed Replication After Live Volume Swap Role In the following diagram, a swap role has occurred so the secondary Storage Center is on the left and the primary Storage Center is located on the right. The managed replication has moved to follow the primary volume. Figure 54. Live Volume with Managed Replication Example Configuration After Swap Role 1. Server 2. Server IO request to secondary volume (forwarded to primary Storage Center by secondary Storage Center) 3. Secondary volume (Live Volume) 4.
Convert a Single Volume to a Live Volume To convert a single volume to a Live Volume, create the Live Volume from the Storage view. Prerequisites The Live Volume requirements must be met. See Live Volume Requirements. Steps 1. Click the Storage view. 2. In the Storage pane, select the Storage Center that hosts the volume you want to replicate. 3. Click the Storage tab. 4. In the Storage tab navigation tree, select the volume. 5. In the right pane, click Convert to Live Volume.
● If Fibre Channel or iSCSI connectivity is not configured between the local and remote Storage Centers, a dialog box opens. Click Yes to configure iSCSI connectivity between the Storage Centers. 5. Select the check box for each volume that you want to convert, then click Next. The wizard advances to the next page. 6. (Optional) Modify Live Volume default settings. ● In the Replication Attributes area, configure options that determine how replication behaves.
2. On the Live Volumes tab, select the Live Volume, then click Edit Settings. The Edit Live Volume dialog box appears. 3. In the Type area, select Asynchronous or Synchronous. 4. Click OK. Related concepts Live Volume Types on page 518 Change the Synchronization Mode for a Synchronous Live Volume The synchronization mode for a synchronous Live Volume can be changed with no service interruption. Prerequisites The source and destination Storage Centers must be running version 6.5 or later. Steps 1.
Managed Replication Requirements on page 525 Include Active Snapshot Data for an Asynchronous Live Volume The Active Snapshot represents the current, unfrozen volume data. Steps 1. Click the Replications & Live Volumes view. 2. On the Live Volumes tab, select the Live Volume, then click Edit Settings. The Edit Live Volume dialog box appears. 3. Select or clear the Replicate Active Snapshot check box, then click OK.
Allow Replicate Storage to Lowest Tier Selection To replicate data to the lowest storage tier, the option must be set in the Data Collector. Steps 1. In the top pane of the Storage Manager Client, click Edit Data Collector Settings. The Edit Data Collector Settings dialog box appears. 2. Click the Replication Settings tab. 3. Select the Allow Select to Lowest Tier on Live Volume Create check box. 4. Click OK.
Resume a Paused Live Volume Resume a Live Volume to allow volume data to be copied to the secondary Storage Center. Steps 1. Click the Replications & Live Volumes view. 2. On the Replications tab, select the paused replication, then click Resume. The Resuming Replication dialog box opens. 3. Click OK.
Force Delete a Live Volume Force Delete is an option for Live Volumes that are in a fractured state, or for situations in which Storage Manager can view only one side of the Live Volume because the other side is down. A Live Volume is fractured if both the secondary and primary Live Volumes are designated as primary, or if Storage Manager can communicate with only the primary Live Volume. Prerequisites Both Live Volumes are inactive or Storage Manager is managing only one of the Storage Centers.
4. Select a Live Volume. 5. Click Next. 6. Select the Storage Center where the Live Volume will be activated. 7. Click Next. NOTE: A warning page appears if Storage Manager is managing only one of the Storage Centers. 8. Click Finish. Modifying Live Volumes with Automatic Failover The following tasks apply to Live Volumes with Automatic Failover. Update to the Local Tiebreaker Updating to the local tiebreaker configures the Data Collector that Storage Manager is connected to as the tiebreaker.
Live Volume ALUA Optimization Considerations Live Volume ALUA is used to control the priority of paths for the Primary and Secondary Live Volume components. By default, volume mapping is Active/Optimized on the primary volume path and Active/Non-optimized on the secondary volume path. This section provides information about the design features of Live Volume ALUA. ● ALUA is automatically enabled: Live Volume ALUA is automatically applied when creating Live Volumes in any of the following circumstances.
NOTE: Because enabling Live Volume ALUA is a disruptive process, it should be performed during a maintenance window. 5. Click Next. A dialog box displays a warning message and an option to enable or disable reporting non-optimized paths. ● By default, the Report Non-optimized Paths check box is selected. This setting causes Live Volumes to report non-optimized ALUA paths from the secondary system.
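The default ALUA path mapping described above can be sketched as a small lookup. The path names below are hypothetical placeholders, and the flag mirrors the Report Non-optimized Paths check box; this is an illustration, not the Storage Center implementation.

```python
# Sketch: default Live Volume ALUA states, with reporting of secondary
# (non-optimized) paths controlled by a flag like the Report
# Non-optimized Paths check box described above.
def alua_path_states(primary_paths, secondary_paths,
                     report_non_optimized=True):
    states = {p: "Active/Optimized" for p in primary_paths}
    if report_non_optimized:
        states.update({p: "Active/Non-optimized" for p in secondary_paths})
    return states

states = alua_path_states(["pri-0", "pri-1"], ["sec-0"])
assert states["pri-0"] == "Active/Optimized"
assert states["sec-0"] == "Active/Non-optimized"
# With reporting disabled, secondary paths are not exposed at all.
assert "sec-0" not in alua_path_states(["pri-0"], ["sec-0"],
                                       report_non_optimized=False)
```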
Filter Live Volumes By Secondary Storage Center To reduce the number of Live Volumes that are displayed on the Replications & Live Volumes view, filter the Live Volumes by secondary Storage Center. Steps 1. Click the Replications & Live Volumes view. 2. Click the Live Volumes tab. 3. In the DR Storage Centers pane, hide Live Volumes that are destined to one or more Storage Centers by clearing the corresponding check boxes. 4.
Steps 1. Click the Replications & Live Volumes view. 2. On the Live Volumes tab, select a Live Volume. 3. In the bottom pane, click the IO Reports tab.
15 Storage Center DR Preparation and Activation Activate disaster recovery to restore access to your data in the event of an unplanned disruption.
Step 2: The Source Site Goes Down When the source site goes down, the data on the source volume can no longer be accessed directly. However, the data has been replicated to the destination volume. Figure 56. Replication When the Source Site Goes Down 1. Source volume (down) 2. Replication over Fibre Channel or iSCSI (down) 3. Destination volume 4. Server mapping to source volume (down) 5.
Step 4: Connectivity is Restored to the Source Site When the outage at the source site is corrected, Storage Manager Data Collector regains connectivity to the source Storage Center. The replication cannot be restarted at this time because the destination volume contains newer data than the original source volume. Figure 58. Replication After the Source Site Comes Back Online 1. Source volume 2. Replication over Fibre Channel or iSCSI (down) 3. Destination volume (activated) 4.
Step 5B: The Activated DR Volume is Deactivated After the replication from the activated DR volume to the original source volume is synchronized, Storage Manager prompts the administrator to halt IO to the secondary volume. NOTE: IO must be halted before the destination volume is deactivated because the deactivation process unmaps the volume from the server. Figure 60. DR-Activated Volume is Deactivated 1. Source volume being recovered 2. Replication over Fibre Channel or iSCSI 3.
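The DR sequence walked through in the steps above can be summarized as a small state machine. The state and event names below are simplified labels invented for this sketch, not Storage Manager values.

```python
# Toy state machine for the DR sequence above: source goes down,
# DR is activated, the source is restored and resynchronized, and
# normal replication resumes.
DR_TRANSITIONS = {
    ("replicating", "source_down"):      "dr_activated",
    ("dr_activated", "source_restored"): "restoring",
    ("restoring", "sync_complete"):      "replicating",
}

def next_state(state, event):
    # Unknown events leave the state unchanged.
    return DR_TRANSITIONS.get((state, event), state)

state = "replicating"
for event in ("source_down", "source_restored", "sync_complete"):
    state = next_state(state, event)
assert state == "replicating"  # back to normal replication after restore
```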
Disaster Recovery Administration Options Use Storage Manager to prepare for DR, activate DR, and restore failed volumes. To make sure that a site outage does not prevent you from accessing Storage Manager to perform DR operations, you can optionally install a remote Data Collector at a DR site. A remote Data Collector provides access to Storage Manager DR options when the primary Data Collector is unavailable.
4. From the Frequency drop-down menu, select how often you want restore points to be automatically saved and validated. 5. (Conditional) If you selected Daily in the previous step, select the time of day to save and validate restore points from the Time drop-down menu. 6. Click OK. Validate Replication Restore Points Validate replication restore points before testing or activating DR to make sure they can be used for DR. Steps 1. Click the Replications & Live Volumes view. 2.
a. Select the restore point that you want to modify, then click Edit Settings. The Predefine Disaster Recovery dialog box appears. b. Modify the recovery volume settings as needed, then click OK. These attributes are described in the online help. 5. When you are done, click Finish. Predefine Disaster Recovery Settings for a Single Restore Point If you need to make sure a recovery site has access to a replicated volume when DR is activated, predefine DR settings for the corresponding restore point. Steps 1.
● The Sync Data Status field displays the synchronization status for the replication at the time the restore point was validated. ● A recommendation about whether the destination volume is currently synchronized with the source volume is displayed below the Sync Data Status field in green or yellow text. NOTE: For high consistency mode synchronous replications that are current, the Use Active Snapshot check box is automatically selected. Figure 62. Test Activate Disaster Recovery Dialog Box b.
Figure 63. Test Activate Disaster Recovery Dialog Box 4. In the Name field, type the name for the activated view volume. 5. Select the server to which the activated view volume will be mapped. a. Next to the Server label, click Change. The Select Server dialog box appears. b. Select the server, then click OK. 6. Modify the remaining activation settings as needed. These attributes are described in the online help.
Disaster Recovery Activation Limitations Activating DR for a replication removes any replications that use the activated volume (original destination/secondary volume) as the source volume. Related concepts Replicating a Single Volume to Multiple Destinations on page 502 Planned vs Unplanned Disaster Recovery Activation During disaster recovery activation, you may choose whether you want to allow planned DR activation.
If the restore point corresponds to a synchronous replication, the dialog box displays additional information about the state of the replication: ● The Sync Data Status field displays the synchronization status for the replication at the time the restore point was validated. ● A recommendation about whether the destination volume is currently synchronized with the source volume is displayed below the Sync Data Status field in green or yellow text.
Activate Disaster Recovery for a Single Restore Point To activate DR for a replication or Live Volume, use the corresponding restore point. Steps 1. Click the Replications & Live Volumes view. 2. Click the Restore Points tab. 3. Right-click the restore point, then select Activate Disaster Recovery. The Activate Disaster Recovery dialog box appears.
8. (Optional) If Preserve Live Volume is not available or not selected, click Change next to Snapshot Profile List to specify which snapshot profiles will be associated with the activated volume. 9. Click OK. ● Storage Manager activates the recovery volume.
5. Select the replication from the table. 6. Click Next. 7. Modify the Volume settings for the destination volume as needed. 8. Click OK. ● Storage Manager activates the recovery volume. ● Use the Recovery Progress tab to monitor DR activation. Restarting Failed Replications If a source volume is current and functional, and the destination system is available, but a replication failed or was deleted, you can restart the replication. To see whether a replication can be restarted, validate the restore points.
3. Right-click the restore point that corresponds to the replication, then select Restore/Restart DR Volumes. The Restore/ Restart DR Volumes dialog box appears. 4. Enable or disable the replication options as needed, then click OK. These options are described in the online help. Restoring Replications and Live Volumes A replication source volume or Live Volume primary volume can be restored from a replication destination volume or Live Volume secondary volume.
Volume Restore Procedures If DR was activated for multiple replications and/or Live Volumes hosted by a Storage Center pair, the affected volumes can be restored in a single operation. If DR was activated for a single volume, use the corresponding restore point to restore it. Restore Failed Volumes for Multiple Restore Points If multiple volumes hosted by a Storage Center pair failed, you can restore them simultaneously. Steps 1. Click the Replications & Live Volumes view. 2.
Restore a Failed Volume for a Single Restore Point If a single volume failed, you can use the corresponding restore point to restore the volume. Steps 1. Click the Replications & Live Volumes view. 2. Click the Restore Points tab. 3. Right-click the restore point that corresponds to the failed volume, then select Restore/Restart DR Volumes. The Restore/Restart DR Volumes dialog box appears. 4. (Storage Center 6.5 and later, Live Volume only) Choose a recovery method.
16 Remote Data Collector A remote Data Collector provides access to Storage Manager disaster recovery options when the primary Data Collector is unavailable.
Requirement: DNS configuration
Description: All managed Storage Centers must be defined in DNS at the local and remote sites. The primary Data Collector host and the remote Data Collector host must be defined in DNS to allow the Data Collectors to communicate.

Software Requirements
The software requirements that apply to the primary Data Collector also apply to the remote Data Collector. However, a remote Data Collector uses the file system to store data, so there is no database requirement.
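The DNS requirement above can be spot-checked from any host with a short resolution test. The sketch below uses the standard library only; the host names you would pass in are your own Storage Center and Data Collector hosts, and are placeholders here.

```python
import socket

# Sketch: confirm that each required host (Storage Centers, primary and
# remote Data Collector hosts) resolves in DNS before deployment.
def unresolved_hosts(hostnames):
    missing = []
    for name in hostnames:
        try:
            socket.gethostbyname(name)
        except socket.gaierror:
            missing.append(name)
    return missing

# 'localhost' resolves everywhere, so the check passes for it.
assert unresolved_hosts(["localhost"]) == []
```

In practice you would call unresolved_hosts with the fully qualified names of every managed Storage Center and both Data Collector hosts, at both sites.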
Steps 1. Download the Storage Manager Data Collector software. The Storage Manager Data Collector is available for download from the Drivers & Downloads tab of the storage system support page located at Dell.com/support. 2. Unzip the software, and double-click the Storage Manager Data Collector Setup file. The Storage Manager Data Collector - InstallShield Wizard appears. 3. Select a language from the drop-down menu, and click OK. 4. Click Next. The License Agreement page appears. 5.
The name of the zip file is DellEMCStorageManagerVA-x.x.x.x.zip, where x.x.x.x is the version number. 2. Extract the Storage Manager Virtual Appliance OVA file from the DellEMCStorageManagerVA-x.x.x.x.zip file. The filename of the OVA file is Storage Manager VA x.x.x.x.ova, where x.x.x.x is the version number. 3. Log on to the VMware vCenter server with the vSphere Web Client. 4. In the right pane, click Hosts and Clusters. 5. Right-click an ESXi host and select Deploy OVF Template.
26. Power on the Storage Manager Virtual Appliance after it is deployed. Results After a Storage Manager Virtual Appliance is deployed using a static IP address, a different IP address might be displayed in the web console. If this issue occurs, reset the Virtual Appliance to force the correct IP address to be displayed in the web console.
Disconnecting and Reconnecting a Remote Data Collector Perform these tasks to disconnect or reconnect a remote Data Collector. NOTE: For user interface reference information, click Help. Temporarily Disconnect a Remote Data Collector Stop the Data Collector service on the remote Data Collector to temporarily disconnect it from the primary Data Collector. Steps 1. On the remote Data Collector server: a. Open the Data Collector. b.
2. If a Storage Center is selected from the drop-down list in Unisphere Central, click (Home). The Unisphere Central Home page is displayed. 3. Click Data Collector. The Data Collector view is displayed. 4. Click Restart Data Collector. 5. While the Remote Data Collector is restarting, connect to the Primary Data Collector. 6. Connect to the Data Collector. a. Open a web browser. b.
Figure 66. Storage Manager Client Welcome Screen The login screen appears. 3. Complete the following fields: ● User Name – Type the name of a Storage Manager user. ● Password – Type the password for the user. ● Host/IP – Type the host name or IP address of the server that is hosting the remote Data Collector. ● Web Server Port – If you changed the Web Server Port during installation, type the updated port number. 4. Click Log In.
Create a User Create a user account to allow a person access to Storage Manager. Steps 1. Connect to the Data Collector. a. Open a web browser. b. Type the address of the Data Collector in the web browser using the following format: https://data_collector_host_name_or_IP_address:3033/ c. Press Enter. The Unisphere Central login page is displayed. d. Type the user name and password of a Data Collector user with Administrator privileges in the User Name and Password fields. e. Click Log In. 2.
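The address format in step 1b above can be assembled programmatically when scripting against several Data Collectors. The host name below is a placeholder; 3033 is the default web server port documented in the steps.

```python
# Sketch: build the Unisphere Central login URL from the format shown
# in step 1b above. The host name is a placeholder for your server.
def data_collector_url(host, port=3033):
    return f"https://{host}:{port}/"

assert data_collector_url("dc.example.local") == "https://dc.example.local:3033/"
```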
Use a Remote Data Collector to Test Activate Disaster Recovery Testing disaster recovery functions the same way for primary and remote Data Collectors. Steps 1. Use the Storage Manager Client to connect to the remote Data Collector. 2. Click the Restore Points tab. 3. Click Test Activate Disaster Recovery.
17 Storage Replication Adapter for VMware SRM VMware vCenter Site Recovery Manager (SRM) supports storage vendors using Storage Replication Adapters. The Dell Storage Replication Adapter (SRA) allows sites to use the VMware vCenter SRM on Storage Centers managed by the Storage Manager software.
Component: VMware vCenter for Photon-based SRM
Version Requirements: Versions 8.2 and 8.3

Component: Microsoft .NET Framework
Version Requirements: Version 4.5 installed on the SRM server

VMware SRM and Storage Manager Prerequisites
To use the Dell SRA with VMware vCenter Site Recovery Manager, the following configuration requirements must be met.

Requirement: Data Collector Deployment
Description: A Storage Manager Data Collector must be visible to all Storage Centers within the SRM configuration.
Dell SRA with Stretched Storage and vMotion Supported Dell SRAs include support for Stretched Storage with VMware Site Recovery Manager (SRM). Stretched Storage allows SRM to manage Storage Center Live Volume replications. vMotion, when used with Stretched Storage, allows virtual machines to migrate to another host without downtime.
Figure 69. SRA Configuration with a Primary and Remote Data Collector 1. Protected site 2. Recovery site 3. VMware SRM server at protected site 4. VMware SRM server at recovery site 5. Primary Data Collector at protected site 6. Remote Data Collector at recovery site 7. Storage Center at protected site 8. Storage Center at recovery site In a configuration with a Storage Manager Remote Data Collector, locate the Remote Data Collector on the Recovery Site.
18 Threshold Alerts Threshold alerts are automatically generated when user-defined threshold definitions for storage object usage are crossed. Threshold queries allow you to query historical data based on threshold criteria.
About this task Storage Manager generates threshold alerts after Storage Usage checks usage metrics and notices a threshold definition has been exceeded. Storage Usage runs daily at 12 AM by default. Steps 1. Click the Threshold Alerts view. 2. Click the Definitions tab. 3. Click Create Threshold Definition. The Create Threshold Definition dialog box opens. 4. Enter a name for the threshold definition in the Name field. 5. Select the type of threshold definition to create from the Type drop-down menu.
View an Existing Threshold Definition Select a threshold definition on the Definitions tab to view assigned objects, current threshold alerts, and historical threshold alerts. Steps 1. Click Threshold Alerts in the view pane to display the Threshold Alerts window. 2. Click the Definitions tab. 3. Select the threshold definition to view. The threshold definition is displayed in the bottom pane of the Definitions tab.
Delete Multiple Threshold Definitions If you no longer need multiple threshold definitions, you can delete them. Steps 1. Click the Threshold Alerts view. 2. Click the Definitions tab. 3. Use Shift+click or Control+click to select the threshold definitions to remove. 4. Click Delete in the bottom pane. The Delete Threshold Alert Definitions dialog box opens. 5. Click OK. Assigning Storage Objects to Threshold Definitions You can add or remove the storage objects that are monitored by threshold definitions.
Assigning Threshold Definitions to Storage Objects You can assign threshold definitions to storage objects that are accessible from Storage view. View the Threshold Definitions Assigned to a Storage Object or Storage Center View the threshold definitions assigned to a storage object or Storage Center in the Threshold Alerts tab. Steps 1. Click the Storage view. 2. Select a Storage Center in the Storage pane. 3. Click the Storage tab. 4.
Assign a Threshold Definition to a Controller or a Storage Center Select a controller or a Storage Center, then click Set Threshold Alert Definitions to assign a threshold definition. Steps 1. Click the Storage view. 2. Select a Storage Center in the Storage pane. 3. Click the Hardware tab. 4. To display the threshold definitions assigned to the Storage Center, skip to the next step.
4. Click the Historical Threshold Alerts tab, in the bottom pane, to display past threshold alerts for the selected threshold definition. Viewing and Deleting Threshold Alerts The current and historical threshold alerts for the managed Storage Centers are displayed on the Alerts tab. The alerts are updated when the Storage Report report-gathering tasks are run. By default, IO Usage and Replication report gathering is performed every 15 minutes and Storage report gathering is performed daily at midnight.
2. Click the Alerts tab. 3. Use the Filter pane to filter threshold alerts by threshold definition properties. ● To filter the displayed threshold alerts by type (IO Usage, Storage, or Replication) select the Filter Type check box, and then select the type from the drop-down menu. ● If the Filter Type check box is selected, the Filter Alert Object Type check box can be selected to filter threshold alerts by the type of storage object selected from the drop-down menu.
Supported Threshold Definitions

Type: IO Usage
Alert Object Type: Volume
Alert Definition: Latency
Threshold Alert Recommendation: When latency for a volume exceeds the configured error threshold, the alert recommends moving the volume to a specific Storage Center, and gives you the option to act on the recommendation by creating a Live Volume.
Recommendations Based on Volume Latency If the recommendation was triggered by a threshold definition that monitors volume latency, the Recommend Storage Center dialog box displays a recommendation to move a specific volume to a specific Storage Center. Figure 70. Recommended Storage Center Dialog Box If Storage Manager identified a possible reason for the increased volume latency, the reason is displayed in the Recommend Reason field.
Creating Threshold Definitions to Recommend Volume Movement Create a threshold definition to recommend volume movement based on the rate of Storage Center front-end IO, volume latency, Storage Center controller CPU usage, or percentage of storage used for a Storage Center. Create a Threshold Definition to Monitor Front-End IO for a Storage Center When Storage Center front-end IO exceeds the value set for the error threshold, Storage Manager triggers a threshold alert with a volume movement recommendation.
b. Next to the Error Setting field, in the Iterations before email field, type the number of times the threshold must be exceeded to trigger the alert. 8. Select the Recommend Storage Center check box. 9. Configure the other options as needed. These options are described in the online help. 10. When you are finished, click OK. ● If you selected the All Objects check box, the threshold definition is created and the Create Threshold Definition dialog box closes.
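The Iterations before email field in step 7b above means an alert fires only after the metric stays over the error threshold for the configured number of consecutive checks. The sketch below models that behavior; the class and method names are assumptions for illustration, not Storage Manager internals.

```python
# Sketch of the 'Iterations before email' behavior: an alert fires only
# after the error threshold is exceeded on the configured number of
# consecutive checks, and any in-range reading resets the count.
class ThresholdCheck:
    def __init__(self, error_threshold, iterations_before_alert):
        self.error_threshold = error_threshold
        self.iterations = iterations_before_alert
        self._streak = 0

    def observe(self, value):
        self._streak = self._streak + 1 if value > self.error_threshold else 0
        return self._streak >= self.iterations

check = ThresholdCheck(error_threshold=90, iterations_before_alert=3)
results = [check.observe(v) for v in (95, 96, 80, 95, 96, 97)]
# The in-range reading (80) resets the streak, so only the sixth
# observation completes three consecutive exceedances.
assert results == [False, False, False, False, False, True]
```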
c. Select the check box for each Storage Center controller that you want to monitor with the threshold definition, then click Finish. The threshold definition is added and the Create Threshold Definition dialog box closes. Create a Threshold Definition to Monitor the Percentage of Used Storage for a Storage Center When the Storage Center storage usage percentage exceeds the value set for the error threshold, Storage Manager triggers a threshold alert with a volume movement recommendation. Steps 1.
To act on the recommendation, record the Storage Center names displayed in the Current Storage Center and Recommended Storage Center fields. Automatically Create a Live Volume and Move the Volume Based on a Recommendation Use the Recommend Storage Center dialog box to automatically move a volume based on a recommendation. About this task NOTE: The option to create a Live Volume appears only for Storage Centers running version 7.0 or earlier. Steps 1.
Manually Move a Volume Based on a Recommendation If a threshold alert recommends moving volumes to a different Storage Center but does not recommend moving a specific volume, decide which volumes to move and manually create Live Volumes to move them. About this task NOTE: This method is the only way to move a volume for Storage Centers running version 7.0 or earlier. For Storage Centers running version 7.1 or later, create a Live Migration to move the volume instead.
NOTE: Storage Manager can send only one threshold alert email in each 24-hour period, and this limit cannot be configured. Because the default Storage Usage collection interval is four hours, the combination of the two settings might result in a day on which no threshold alert email is sent.
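The note above describes a rate limit rather than a schedule. As a rough sketch only (illustrative logic, not Storage Manager's published implementation), the suppression behaves like a sliding 24-hour window:

```python
from datetime import datetime, timedelta

# At most one alert email per 24-hour window (an assumption that
# matches the behavior described in the note above).
EMAIL_WINDOW = timedelta(hours=24)

def should_send_email(alert_time, last_email_time):
    """Return True if a threshold alert email may be sent at alert_time."""
    if last_email_time is None:
        return True
    return alert_time - last_email_time >= EMAIL_WINDOW

first = datetime(2021, 5, 1, 8, 0)    # first alert of the day
second = datetime(2021, 5, 1, 20, 0)  # 12 hours later, inside the window
third = datetime(2021, 5, 2, 9, 0)    # 25 hours after the first alert
```

With a four-hour collection interval, an alert detected inside the window (like `second`) produces no email, which is why a day can pass without a threshold alert email.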
Configure an Email Address for Your User Account To receive email notifications, you must specify an email address for your user account. Prerequisites The SMTP server settings must be configured for the Data Collector. If these settings are not configured, the Data Collector is not able to send emails. Steps 1. In the top pane of the Storage Manager Client, click Edit User Settings. The Edit User Settings dialog box opens. 2. Type an email address for the user account in the Email Address field. 3.
View Saved Queries Saved threshold queries are displayed in the Saved Queries pane. About this task Public queries are accessible to all Storage Manager users. Personal queries are accessible only to the Storage Manager user that created the query. Steps 1. Click the Threshold Alerts view. 2. Click the Queries tab. The public and personal queries are displayed in the Saved Queries pane. 3. In the Saved Queries pane, double-click the query to view.
3. In the Saved Queries pane, double-click the query to run. 4. Click Run. The results of the query are displayed in the Query Results pane. Export the Results of a Threshold Query The results of a threshold query can be exported to CSV, text, Excel, HTML, XML, or PDF file formats. Steps 1. Click the Threshold Alerts view. 2. Click the Queries tab. The public and personal queries are displayed in the Saved Queries pane. 3. Select a query from the Saved Queries pane. 4. Click Run.
19 Storage Center Reports The Reports view allows users to view real-time reports generated by Storage Center and historical reports generated by Storage Manager.
Table 25. Types of Reports (continued) Report Name Description Monthly Generated at the end of each month and displays the following information: ● Storage Center Summary - Displays information about storage space and the number of storage objects on the Storage Center. ● Volume Storage - Displays volume storage statistics. ● Disk Class - Displays information about storage space on each disk class. ● Replications - Displays information about replications.
Figure 72. Chargeback Reports 3. Select the report to view in the Reports pane or double-click on the report to view in the Automated Reports tab. Related concepts Chargeback Reports on page 588 Configuring Automated Report Generation on page 591 Viewing Chargeback Runs on page 605 Working with Reports You can update the list of reports and use the report options to navigate, print, save, and delete reports.
Print a Report Perform the following steps to print a report: Steps 1. Click the Reports view. 2. Select the report to print from the Reports pane. 3. Click (Print). The Print dialog box opens. 4. Select the printer to use from the Name drop-down menu. NOTE: For best results, print reports using the Landscape orientation. 5. Click OK. Save a Report Perform the following steps to save a report: Steps 1. Click the Reports view. 2. Select the report to save from the Reports pane. 3. Click (Save).
Set Up Automated Reports for All Storage Centers Configure automated report settings on the Data Collector if you want to use the same report settings for all managed Storage Centers. Configure the global settings first, and then customize report settings for individual Storage Centers as needed. Steps 1. In the top pane of Storage Manager, click Edit Data Collector Settings. The Edit Data Collector Settings page is displayed. 2. Click the Automated Reports tab. 3.
NOTE: Automated table reports can be saved in a public directory or attached to automated emails but they do not appear in the Reports view. 7. Set the Automated Report Options a. To export the reports to a public directory, select the Store report in public directory checkbox and enter the full path to the directory in the Directory field. NOTE: The directory must be located on the same server as the Data Collector.
Configure Storage Manager to Email Reports Storage Manager can be configured to send automated reports by email. About this task To send automated reports by email: Steps 1. Configure the SMTP server settings for the Data Collector. 2. Add an email address to your user account. 3. Configure email notification settings for your user account. Configure SMTP Server Settings The SMTP server settings must be configured to allow Storage Manager to send notification emails. Steps 1. Connect to the Data Collector.
2. Type an email address for the user account in the Email Address field. 3. Select the format for emails from the Email Format drop-down menu. 4. To send a test message to the email address, click Test Email and click OK. Verify that the test message is sent to the specified email address. 5. Click OK.
20 Storage Center Chargeback Chargeback monitors storage consumption and calculates data storage operating costs per department. Chargeback can be configured to charge for storage based on the amount of allocated space or the amount of configured space. When cost is based on allocated space, Chargeback can be configured to charge based on storage usage (the amount of space used), or storage consumption (the difference in the amount of space used since the last automated Chargeback run).
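For illustration, the two allocated-space charging bases can be modeled as follows (hypothetical rates and function names; the product's actual billing formula is configured in the Chargeback wizard, not exposed as code):

```python
def charge_for_usage(used_gb, cost_per_gb):
    # Storage usage basis: charge for the space currently used.
    return used_gb * cost_per_gb

def charge_for_consumption(used_gb, used_gb_last_run, cost_per_gb):
    # Storage consumption basis: charge only for growth since the last
    # automated Chargeback run; no charge when the volume shrank
    # (an assumption of this sketch).
    return max(used_gb - used_gb_last_run, 0) * cost_per_gb
```

For example, a volume using 500 GB at $0.25/GB is billed $125 on the usage basis, but only $12.50 on the consumption basis if it used 450 GB at the previous run.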
4. If the Charge on Allocated Space check box was selected in the previous step, select the Charge on Difference check box if you want to configure Chargeback to charge based on the difference between the amount of space a volume currently uses and the amount of space that a volume used during the last automated Chargeback run. 5.
Figure 74. Storage Costs Per Disk Class 3. Click Finish to save the Chargeback settings. Assign Storage Costs for Storage Center Disk Tiers If the Edit Chargeback Settings wizard displays this page, assign storage cost for each Storage Center disk tier. Steps 1. For each storage tier, select the unit of storage on which to base the storage cost from the per drop-down menu. 2. For each storage tier, enter an amount to charge per unit of storage in the Cost field. Figure 75.
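Once a unit and cost are set for each tier, the total charge is the per-tier rate multiplied by per-tier usage, summed across tiers. A sketch with hypothetical per-GB rates (the real values come from the per drop-down menu and the Cost field in the wizard):

```python
# Hypothetical per-GB rates for three disk tiers.
TIER_COST_PER_GB = {"tier1": 0.25, "tier2": 0.125, "tier3": 0.0625}

def tiered_storage_cost(usage_gb_by_tier):
    """Sum the charge for the space used on each configured tier."""
    return sum(
        usage_gb_by_tier.get(tier, 0) * rate
        for tier, rate in TIER_COST_PER_GB.items()
    )
```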
Configuring Chargeback Departments Chargeback uses departments to assign base billing prices, and department line items to account for individual IT-related expenses. Volumes and volume folders are assigned to departments for the purpose of charging departments for storage consumption. Setting Up Departments You can add, modify, and delete Chargeback departments as needed. Add a Department Add a chargeback department for each organization that you want to bill for storage usage. Steps 1.
Edit a Department You can modify the base storage price charged to a department, change the department attributes, and change the department contact information. Steps 1. Click the Chargeback view. 2. Click the Departments tab. 3. Select the department that you want to edit from the list of departments on the Chargeback pane. 4. Click Edit Settings or right-click on the department and select Edit Settings. The Edit Settings dialog box appears. 5. Modify the department options as needed.
8. Click OK to add the line item to the department. Edit a Department Line Item You can modify the name, description, and cost for a line item. Steps 1. Click the Chargeback view. 2. Click the Departments tab. 3. Select the department that contains the line item that you want to edit from the list of departments on the Chargeback pane. 4. Select the line item you want to edit from the Department Line Items pane. 5. Click Edit Settings or right-click on the line item and select Edit Settings.
Assign Volumes to a Department in the Chargeback View Use the Chargeback view to assign multiple volumes to a department simultaneously. Steps 1. Click the Chargeback view. 2. Click the Departments tab. 3. Select the department to which you want to assign the volume from the list of departments on the Chargeback pane. Information about the selected department appears on the Department tab. 4. Click Add Volumes. The Add Volumes dialog box appears. Figure 79. Add Volume Dialog Box 5.
Figure 80. Add Volume Folders Dialog Box 5. Select the volume folders to assign to the department. 6. Click Add Volume Folders to add the selected volume folders to the list of volume folders to assign to the department. 7. Click OK to assign the volume folders to the department. Remove Volumes/Volume Folders from a Department in the Chargeback View Use the Chargeback view to remove multiple volumes from a department simultaneously. Steps 1. Click the Chargeback view. 2. Click the Departments tab. 3.
7. Select the appropriate Chargeback department, then click OK. 8. Click OK to close the dialog box. Remove a Volume/Volume Folder from a Department in the Storage View Use the Storage view to remove volumes and volume folders from a department one at a time. Steps 1. Click the Storage view. 2. In the Storage pane, select a Storage Center. 3. Click the Storage tab. 4. In the Storage tab navigation pane, select the volume or volume folder. 5. In the right pane, click Edit Settings. A dialog box appears. 6.
Viewing Chargeback Runs Use the Chargeback Runs tab in the Chargeback view to view scheduled and manual Chargeback runs. Each Chargeback run is displayed in the Chargeback pane. The Chargeback run names indicate the type of Chargeback run (Manual Run, Day Ending, Week Ending, Month Ending, or Quarter 1–4 Ending) and the date of the run. View a Chart of Department Costs for a Chargeback Run The Chart subtab displays a bar chart that shows the sum of all charges to each department for the Chargeback run.
View Cost and Storage Savings Realized by Using Data Instant Snapshots for a Chargeback Run The Data Instant Snapshot Savings subtab shows the estimated cost and storage space savings realized by using a Storage Center with Data Instant Snapshots as compared to legacy SAN point-in-time copies. These savings are achieved because Data Instant Snapshots allocate space for a snapshot only when data is written and save only the delta between snapshots; a legacy SAN allocates space for every point-in-time copy.
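The savings comparison reduces to simple arithmetic. In the sketch below (illustrative numbers and function names, not the subtab's actual computation), a legacy SAN stores a full volume copy per point-in-time copy, while delta-based snapshots store only the changed data:

```python
def legacy_copy_space_gb(volume_gb, num_copies):
    # Legacy SAN: every point-in-time copy allocates a full volume copy.
    return volume_gb * num_copies

def snapshot_space_gb(delta_gb_per_snapshot):
    # Data Instant Snapshots: only the delta written between snapshots
    # consumes space.
    return sum(delta_gb_per_snapshot)

# Example: a 1000 GB volume with four point-in-time copies, versus four
# snapshots whose deltas total 40 GB.
savings_gb = legacy_copy_space_gb(1000, 4) - snapshot_space_gb([20, 5, 12, 3])
```

In this example the delta-based approach uses 40 GB instead of 4000 GB, a saving of 3960 GB.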
Save the Chart as a PNG Image Save the chart as an image if you want to use it elsewhere, such as in a document or an email. Steps 1. Right-click the chart and select Save As. The Save dialog box appears. 2. Select a location to save the image and enter a name for the image in the File name field. 3. Click Save to save the chart. Print the Chart Print the chart if you want a paper copy. Steps 1. Right-click the chart and select Print. The Page Setup dialog box appears. 2.
6. Select the type of file to output: CSV, Text, Excel, HTML, XML, or PDF. 7. Click Browse to specify the name of the file and the location to which to export the file, then click Save. 8. Click OK.
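An exported CSV file can then be post-processed with ordinary tools. The sketch below filters rows from a small in-memory sample; the column names are illustrative assumptions, since the actual export schema depends on the query:

```python
import csv
import io

# Sample data standing in for a file produced by the CSV export;
# these column names are assumptions, not the documented schema.
sample = io.StringIO(
    "Storage Center,Object,Definition,Value\n"
    "SC-01,Vol-A,Latency,12\n"
    "SC-01,Vol-B,Latency,35\n"
)

rows = list(csv.DictReader(sample))
# Keep only the objects whose reported value exceeds a threshold of 20.
over_threshold = [r["Object"] for r in rows if int(r["Value"]) > 20]
```

To process a real export, replace the `io.StringIO` sample with `open("results.csv", newline="")`.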
21 Storage Center Monitoring Storage Manager provides a centralized location to view Storage Center and PS Series group alerts, events, indications, and logs collected by the Storage Center. System events logged by the Storage Center can also be viewed.
Viewing Storage System Alerts Use the Alerts tab in the Storage view or the Storage Alerts tab in the Monitoring view to display and search storage system alerts. Alerts represent current issues present on the storage system, which clear themselves automatically if the situation that caused them is corrected. Figure 82. Alerts Tab Display Storage Alerts on the Monitoring View Alerts for managed storage systems can be displayed on the Storage Alerts tab. Steps 1. Click the Monitoring view. 2.
● To hide alerts for all of the PS Series groups, click Unselect All. ● To display alerts for all of the PS Series groups, click Select All. Select the Date Range of Storage Alerts to Display You can view storage alerts for the last day, last 3 days, last 5 days, last week, or specify a custom time period. Steps 1. Click the Monitoring view 2. Click the Storage Alerts tab. 3.
Acknowledge Storage Center Alerts Alerts can be acknowledged to indicate to the Storage Center that you have read the alert message and are aware of the problem. Steps 1. Click the Monitoring view 2. Click the Storage Alerts tab. 3. Select the Storage Center alerts to acknowledge, then click Acknowledge. The Acknowledge Alert dialog box opens. NOTE: The option to acknowledge an alert will not appear if an alert has already been acknowledged. 4.
Event Name Description
● Directory Services Communication: The Data Collector cannot communicate with the Directory Service.
● Exception Sending Email: Unable to send email to a configured user.
● Failed to Startup: The Data Collector service failed to start up.
● Port Conflicts: Required ports are not available.
● Remote Data Collector Down: The Data Collector is no longer communicating with the remote Data Collector.
● Replication Validation Errors: Automated replication validation found errors.
Viewing Data Collector Events Use the Events tab in the Monitoring view to display events collected by the Data Collector. About this task Figure 83. Storage Manager Events Tab Display Storage Manager Events View Storage Manager events on the Events tab. Steps 1. Click the Monitoring view 2. Click the Events tab. 3. Select the check boxes of the storage systems to display and clear the check boxes of the storage systems to hide.
● To display events for a PS Series group that is deselected, select the check box for the group. ● To hide events for all of the PS Series groups, click Unselect All. ● To display events for all of the PS Series groups, click Select All. Select the Date Range of Storage Manager Events to Display You can view Storage Manager events for the last day, last 3 days, last 5 days, last week, last month, or specify a custom time period. Steps 1. Click the Monitoring view. 2. Click the Events tab. 3.
NOTE: By default, when a search reaches the bottom of the list and Find Next is clicked, the search wraps around to the first match in the list. When a search reaches the top of the list and Find Previous is clicked, the search wraps around to the last match in the list. Configuring Email Alerts for Storage Manager Events Storage Manager can be configured to send email alerts when monitored events occur. About this task To configure Storage Manager to send email alerts: Steps 1.
Configure an Email Address for Your User Account To receive email notifications, you must specify an email address for your user account. Prerequisites The SMTP server settings must be configured for the Data Collector. If these settings are not configured, the Data Collector is not able to send emails. Steps 1. In the top pane of the Storage Manager Client, click Edit User Settings. The Edit User Settings dialog box opens. 2. Type an email address for the user account in the Email Address field. 3.
Storage Logs Storage logs are records of event activity on the managed storage systems. Use the Storage Logs tab to display and search for events in storage system logs. Viewing Storage Logs To display and search for events in the Storage Center logs, use the Logs tab in the Storage view or use the Storage Logs tab in the Monitoring view.
● To hide events for a single PS Series group, clear the check box for the group.
● To display events for a PS Series group that is deselected, select the check box for the group.
● To hide events for all of the PS Series groups, click Unselect All.
● To display events for all of the PS Series groups, click Select All.
Select the Date Range of Log Events to Display You can view log events for a specific time period. Steps 1. Click the Monitoring view. 2. Click the Storage Logs tab. 3.
If a match is not found, an Error dialog box appears and it displays the text that could not be found. NOTE: By default, when a search reaches the bottom of the list and Find Next is clicked, the search wraps around to the first match in the list. When a search reaches the top of the list and Find Previous is clicked, the search wraps around to the last match in the list. Send Storage Center Logs to a Syslog Server Modify the Storage Center settings to send logs directly to a syslog server. Steps 1.
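On the receiving end, the syslog server must already be listening before the Storage Center can deliver messages. As an example only, a minimal rsyslog drop-in on a Linux host might look like the following (the file name and source IP address are placeholders for your environment):

```
# /etc/rsyslog.d/storage-center.conf  (hypothetical file name)
module(load="imudp")             # accept syslog over UDP
input(type="imudp" port="514")   # standard syslog port

# Route messages from the Storage Center controller (placeholder IP)
# to a dedicated log file.
if $fromhost-ip == '192.0.2.10' then /var/log/storage-center.log
```

After editing, restart the syslog service (for example, systemctl restart rsyslog) and confirm that the port is open through any firewall between the Storage Center and the server.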
Audit Logs Audit logs are records of logged activity that are related to the user accounts on the PS Series group. Use the Audit Logs tab to display information specific to PS Series group user accounts. Viewing Audit Logs To display and search for PS Series group events in the audit logs, use the Audit Logs node in the Storage view or use the Audit Logs tab in the Monitoring view. Figure 85. Audit Logs Node Display Audit Logs Audit logs represent user account activity on the selected PS Series groups.
Select the Date Range of Audit Logs to Display You can view audit logs for the last day, last 3 days, last 5 days, last week, or specify a custom time period. Steps 1. Click the Monitoring view. 2. Click the Audit Logs tab. 3. Select the date range of the audit log data to display by clicking one of the following:
● Last Day: Displays the past 24 hours of audit log data.
● Last 3 Days: Displays the past 72 hours of audit log data.
● Last 5 Days: Displays the past 120 hours of audit log data.
● Last Week: Displays the past week of audit log data.
● Custom: Displays audit log data for a custom time period that you specify.
Export Monitoring Data Export Storage Center alerts, indications, logs, and Storage Manager events to a file using the Save Monitoring Data dialog box. Steps 1. Click the Monitoring view 2. Click Save Monitoring Data in the Monitoring pane. The Save Monitoring Data dialog box appears. Figure 86. Save Monitoring Data Dialog Box 3. Select the Storage Centers from which to export the monitoring data. ● To select all of the listed Storage Centers, click Select All.
d. Type the user name and password of a Data Collector user with Administrator privileges in the User Name and Password field. e. Click Log In. 2. If a Storage Center is selected from the drop-down list in Unisphere Central, click (Home). The Unisphere Central Home page is displayed. 3. Click Data Collector. The Data Collector view is displayed. 4. Click the Monitoring tab, and then click the Data Collection subtab. 5. Click Edit. The Data Collection dialog box opens. 6.
22 Data Collector Management The Storage Manager Data Collector is a service that collects reporting data and alerts from managed Storage Centers. When you access the Data Collector using a web browser, the Data Collector management program Unisphere Central for SC Series opens. Unisphere Central manages most functions of the Data Collector service.
Configuring Data Collector Settings Use Unisphere Central to configure and update Data Collector properties and settings. Configuring General Settings The Data Collector General settings include a configuration summary, security settings, port identification, and database selection. Restart the Data Collector Use Unisphere Central to stop and restart the Data Collector. Steps 1. Connect to the Data Collector. a. Open a web browser. b.
5. In the License Information section, click Submit License. The License information dialog box opens. 6. To enable the Chargeback feature using a license file: a. Select the License File (*.lic) radio button. b. Click Browse and navigate to the location of the license file. c. Select the license file and click Open. d. Click OK. 7. To enable the Chargeback feature using a product key: a. Select the Product Key radio button. b. Type the product key in the Product Key field. c. Click OK.
Set the Maximum Memory for a Data Collector on a Windows Server Use the Edit Advanced Settings dialog box to set the maximum amount of memory to allocate to a Data Collector on a Windows server. About this task NOTE: The Data Collector must be restarted to save changes to the maximum memory setting. Steps 1. Connect to the Data Collector. a. Open a web browser. b. Type the address of the Data Collector in the web browser using the following format: https://data_collector_host_name_or_IP_address:3033/ c.
NOTE: The Data Collector must be restarted to save network adapter changes. Steps 1. Connect to the Data Collector. a. Open a web browser. b. Type the address of the Data Collector in the web browser using the following format: https://data_collector_host_name_or_IP_address:3033/ c. Press Enter. The Unisphere Central login page is displayed. d. Type the user name and password of a Data Collector user with Administrator privileges in the User Name and Password field. e. Click Log In. 2.
2. If a Storage Center is selected from the drop-down list in Unisphere Central, click (Home). The Unisphere Central Home page is displayed. 3. Click Data Collector. The Data Collector view is displayed. 4. Click the General tab, and then click the Security subtab. 5. To generate SSL certificates: a. Click Generate Certificate. The Generate Certificate dialog box is displayed. b. Select the type of certificate to generate from the Certificate type drop-down menu.
The Data Collector view is displayed. 4. Click the General tab, and then click the Security subtab. 5. In the Login Message section, click Edit. The Login Message dialog box opens. 6. Type a message to display on the login screen in the Login Banner Message field. 7. Click OK. Configure Data Collector Ports Use the Ports tab to modify Data Collector ports to avoid port conflicts. About this task NOTE: The Data Collector must be restarted to apply port changes. Steps 1. Connect to the Data Collector. a.
Steps 1. Connect to the Data Collector. a. Open a web browser. b. Type the address of the Data Collector in the web browser using the following format: https://data_collector_host_name_or_IP_address:3033/ c. Press Enter. The Unisphere Central login page is displayed. d. Type the user name and password of a Data Collector user with Administrator privileges in the User Name and Password field. e. Click Log In. 2.
3. Click Data Collector. The Data Collector view is displayed. 4. Click the General tab, and then click the Database subtab. 5. Click Change Connection. The Change Data Connection dialog box opens. 6. Type the host name or IP address of the database server in the Database Server field. 7. Type the port number of the database server in the Database Port field. 8. Type the user name and password of a user account that has database administrator rights in the User Name and Password fields. 9. Click OK.
a. Open a web browser. b. Type the address of the Data Collector in the web browser using the following format: https://data_collector_host_name_or_IP_address:3033/ c. Press Enter. The Unisphere Central login page is displayed. d. Type the user name and password of a Data Collector user with Administrator privileges in the User Name and Password field. e. Click Log In. 2. If a Storage Center is selected from the drop-down list in Unisphere Central, click (Home). The Unisphere Central Home page is displayed.
b. Type the address of the Data Collector in the web browser using the following format: https://data_collector_host_name_or_IP_address:3033/ c. Press Enter. The Unisphere Central login page is displayed. d. Type the user name and password of a Data Collector user with Administrator privileges in the User Name and Password field. e. Click Log In. 2. If a Storage Center is selected from the drop-down list in Unisphere Central, click (Home). The Unisphere Central Home page is displayed. 3.
10. If the proxy server requires a user name and password, type a user name and password in the User Name and Password fields. 11. Click OK. The Change Values dialog box opens, which states that the Data Collector service must be stopped and restarted. 12. Click Yes. The Data Collector service stops and restarts. Storage Center Automated Reports The information that Storage Center displays in an automated report depends on the configured Automated Report settings.
6. Set the Automated Report Options a. To export the reports to a public directory, select the Store report in public directory checkbox and enter the full path to the directory in the Directory field. NOTE: The directory must be located on the same server as the Data Collector. NOTE: Automated reports cannot be saved to a public directory when using a Virtual Appliance. b. To email the reports selected in the Automated Reports Settings area, select the Attach Automated Reports to email checkbox. c.
Configure Data Collection Schedules Configure the interval at which the Data Collector collects monitoring data from Storage Centers. Steps 1. Connect to the Data Collector. a. Open a web browser. b. Type the address of the Data Collector in the web browser using the following format: https://data_collector_host_name_or_IP_address:3033/ c. Press Enter. The Unisphere Central login page is displayed. d.
5. Click Edit. The Edit Support dialog box opens. 6. Select the checkboxes of the debug logs to enable. 7. Click OK. Configure Log File Limits Configure the size limits for the log files. Steps 1. Connect to the Data Collector. a. Open a web browser. b. Type the address of the Data Collector in the web browser using the following format: https://data_collector_host_name_or_IP_address:3033/ c. Press Enter. The Unisphere Central login page is displayed. d.
6. Click Yes. Export Configuration and Log Data for Troubleshooting Export configuration and log data as a compressed file if it is requested by technical support. Steps 1. Connect to the Data Collector. a. Open a web browser. b. Type the address of the Data Collector in the web browser using the following format: https://data_collector_host_name_or_IP_address:3033/ c. Press Enter. The Unisphere Central login page is displayed. d.
3. Click Data Collector. The Data Collector view is displayed. 4. Click the Virtual Appliance tab, and then click the Network subtab. 5. Click Edit. The Network Configuration dialog box opens. 6. In the Hostname field, type the host name of the Virtual Appliance. 7. In the Domain field, type the domain name of the Virtual Appliance. 8. To enable the Secure Shell (SSH), select the Enable SSH checkbox. 9. Select the network configuration type from the Configuration drop-down menu.
Managing Available Storage Centers Use the Data Collector Users & System tab to manage available Storage Centers that have been mapped to one or more Data Collector user accounts. Delete an Available Storage Center Remove a Storage Center when you no longer want to manage it from the Data Collector. If a Storage Center is removed from all Data Collector user accounts, historical data for the Storage Center is also removed. Steps 1. Connect to the Data Collector. a. Open a web browser. b.
7. Click Yes. Remove a Storage Center from a Data Collector User Account To prevent the user from viewing and managing a Storage Center, remove the Storage Center from the Data Collector user account. Steps 1. Connect to the Data Collector. a. Open a web browser. b. Type the address of the Data Collector in the web browser using the following format: https://data_collector_host_name_or_IP_address:3033/ c. Press Enter. The Unisphere Central login page is displayed. d.
5. Select the PS Series group to delete. 6. Click Delete PS Group. 7. Click Yes. Remove a PS Series Group from a Data Collector User To prevent a user from managing a PS Series group, remove the PS Series group from the Data Collector user account. Steps 1. Connect to the Data Collector. a. Open a web browser. b. Type the address of the Data Collector in the web browser using the following format: https://data_collector_host_name_or_IP_address:3033/ c. Press Enter.
6. Click (Delete System). A confirmation dialog box is displayed. 7. Click Yes. Remove a FluidFS Cluster from a Data Collector User Account To prevent a user from viewing and managing the FluidFS cluster, remove the FluidFS cluster from the Data Collector user account. Steps 1. Connect to the Data Collector. a. Open a web browser. b. Type the address of the Data Collector in the web browser using the following format: https://data_collector_host_name_or_IP_address:3033/ c. Press Enter.
Configure Virtual Appliance Settings Use the Configuration menu in the Storage Manager Virtual Appliance CLI to change network and partition settings for the Storage Manager Virtual Appliance. Configure an NTP Server A network time protocol (NTP) server provides the time and date to the Storage Manager Virtual Appliance. Prerequisites The NTP server must be accessible from the Storage Manager Virtual Appliance. Steps 1.
6. To assign a new hostname, type a hostname, then press Enter. 7. To modify the domain name used by the Storage Manager Virtual Appliance, type a new domain name, and then press Enter. 8. To add a new DNS server, type the IP address of one or more DNS servers. If there are multiple IP addresses, separate them with a comma, and then press Enter. 9. Press 1 to confirm the changes and press Enter. 10. Press Enter to complete the configuration.
● For the database partition, select Hard disk 3. 4. Modify the size of the disk to one of the suggested sizes. ● For the Data Collector partition, change the disk size to 15 GB, 20 GB, or 40 GB. ● For the database partition, change the disk size to 20 GB, 40 GB, or 80 GB. 5. Click OK. The server expands the disk size. 6. Launch the console for the Storage Manager Virtual Appliance. 7. Log in to the Storage Manager Virtual Appliance. 8. Press 2 and Enter to display the Configuration menu. 9.
View Routing Information Use the Storage Manager Virtual Appliance CLI to view routing information for the Storage Manager Virtual Appliance. Steps 1. Using the VMware vSphere Client, launch the console for the Storage Manager Virtual Appliance. 2. Log in to the Storage Manager Virtual Appliance CLI. 3. Press 3 and Enter to display the Diagnostics menu. 4. Press 3 and Enter. The Storage Manager Virtual Appliance CLI displays a table of routing information. 5. Press Enter to return to the Diagnostics menu.
Uninstalling the Data Collector On the server that hosts the Data Collector, use the Windows Programs and Features control panel item to uninstall the Storage Manager Data Collector application. Deleting Old Data Collector Databases Delete the old Data Collector database if you have migrated the database to a different database server or if you have removed the Data Collector from your environment.
23 Storage Manager User Management Use the Data Collector to add new users and manage existing users. To change preferences for your user account, use the Storage Manager Client.
Authenticating Users with an External Directory Service The Data Collector can be configured to authenticate Storage Manager users with an Active Directory or OpenLDAP directory service. If Kerberos authentication is also configured, users can log in with the Client automatically using their Windows session credentials. Storage Manager access can be granted to directory service users and groups that belong to the domain to which the Data Collector is joined.
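For background, directory authentication ultimately resolves a login name to a directory entry with an LDAP search. The sketch below builds a typical Active Directory search filter with RFC 4515 escaping; it illustrates the mechanism only and is not code taken from Storage Manager:

```python
def user_search_filter(account_name, object_class="user"):
    """Build an LDAP search filter for an Active Directory account.

    Characters that are special in LDAP filters are escaped per
    RFC 4515. The attribute names used here (objectClass,
    sAMAccountName) are the usual Active Directory ones, not settings
    read from Storage Manager.
    """
    escaped = "".join(
        "\\%02x" % ord(c) if c in '\\*()\x00' else c
        for c in account_name
    )
    return f"(&(objectClass={object_class})(sAMAccountName={escaped}))"
```

Escaping matters because an unescaped login name containing `*` or parentheses would change the meaning of the filter.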
The Data Collector view is displayed.
4. Click the Environment tab and then select the Directory Service subtab.
5. Click Edit. The Service Settings dialog box opens.
6. Configure LDAP settings.
   a. Select the Enabled checkbox.
   b. In the Domain field, type the name of the domain to search.
      NOTE: If the server that hosts the Data Collector belongs to a domain, the Domain field is automatically populated.
   c.
Troubleshoot Directory Service Discovery
The Data Collector attempts to automatically discover the closest directory service based on the network environment configuration. Discovered settings are written to a text file for troubleshooting purposes. If discovery fails, confirm that the text file contains values that are correct for the network environment.
Steps
1. On the server that hosts the Data Collector, use a text editor to open the file C:\Program Files\Dell EMC\Storage Manager\msaservice\directory_s
Grant Access to Directory Service Users and Groups
To allow directory users to log in to Storage Manager, add directory service users and/or user groups to Storage Manager user groups.

Add Directory Groups to a Storage Manager User Group
Add a directory group to a Storage Manager user group to allow all users in the directory group to access Storage Manager.
a. Open a web browser.
b. Type the address of the Data Collector in the web browser using the following format:
   https://data_collector_host_name_or_IP_address:3033/
c. Press Enter. The Unisphere Central login page is displayed.
d. Type the user name and password of a Data Collector user with Administrator privileges in the User Name and Password fields.
e. Click Log In.
2. If a Storage Center is selected from the drop-down list in Unisphere Central, click (Home). The Unisphere Central Home page is displayed.
3. Click Data Collector. The Data Collector view is displayed.
4. Click the Users & System tab and then select the Users & User Groups subtab.
5. Click the User Groups tab.
6. Select the Storage Manager user group to which the directory group is added.
7. Click the Directory Groups subtab.
8.
a. Open a web browser.
b. Type the address of the Data Collector in the web browser using the following format:
   https://data_collector_host_name_or_IP_address:3033/
c. Press Enter. The Unisphere Central login page is displayed.
d. Type the user name and password of a Data Collector user with Administrator privileges in the User Name and Password fields.
e. Click Log In.
2. If a Storage Center is selected from the drop-down list in Unisphere Central, click (Home). The Unisphere Central Home page is displayed.
6. Enter information for the new user.
   a. Type the user name of the user in the User Name field.
   b. (Optional) Type the email address of the user in the Email Address field.
   c. Select the role to assign to the user from the Role drop-down menu.
   d. Select a language from the Preferred Language drop-down menu.
   e. Enter a password for the user in the Password and Confirm Password fields.
   f. To force the user to change the password after the first login, select the Requires Password Change checkbox.
7.
3. Click Data Collector. The Data Collector view is displayed.
4. Click the Users & System tab, then select the Users & User Groups subtab.
5. Select the user to modify and click (Edit Settings). The User Settings dialog box opens.
6. Select the role to assign to the user from the Role drop-down menu.
3. Click Data Collector. The Data Collector view is displayed.
4. Click the Users & System tab, then select the Users & User Groups subtab.
5. Select the user to modify and click (Edit Settings). The User Settings dialog box opens.
6. Select the Requires Password Change checkbox.
7. Click OK.

Change the Password for a User
You can change the password for any user account using Storage Manager.
Steps
1. Connect to the Data Collector.
   a. Open a web browser.
   b.
4. Click the Users & System tab, then select the Users subtab.
5. Select the Reporter user to modify.
6. In the lower pane on the Storage Centers tab, click (Select Storage Center Mappings). The Select Storage Center Mappings dialog box opens.
7. Select the checkbox of each Storage Center to map to the user. Clear the checkbox of each Storage Center to unmap from the user.
8. Click OK.

Delete a User
Delete a user account to prevent the user from viewing and managing the Storage Center.
Steps
1.
5. Select the user for which you want to delete a Storage Center mapping.
6. On the Storage Center pane, select the Storage Center to unmap from the user.
7. Click (Delete Storage Center Map). A confirmation dialog box opens.
8. Click Yes.

Unlock a Local User Account
After a user enters an incorrect password beyond the Account Lockout threshold, that user account is locked. Use Storage Manager to unlock the account.
Prerequisites
● Password Configuration is enabled.
● A user account is locked.
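The lockout behavior described above can be sketched in a few lines. This is an illustrative model only, with an assumed default threshold of 3; `LockoutTracker` is a hypothetical helper, not part of the Storage Manager API, and the real threshold comes from the Password Configuration settings.

```python
class LockoutTracker:
    """Illustrative model of Account Lockout: once failed logins exceed
    the threshold, the account locks until an administrator unlocks it.
    (Hypothetical helper -- not a Storage Manager API.)
    """

    def __init__(self, threshold=3):  # threshold value is an assumption
        self.threshold = threshold
        self.failures = 0
        self.locked = False

    def record_failure(self):
        """Count a failed login; lock once the count exceeds the threshold."""
        self.failures += 1
        if self.failures > self.threshold:
            self.locked = True

    def unlock(self):
        """What 'Unlock a Local User Account' does: clear the lock and counter."""
        self.failures = 0
        self.locked = False
```

Note the "beyond the threshold" wording: failures equal to the threshold do not lock the account; only exceeding it does.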
a. Open a web browser.
b. Type the address of the Data Collector in the web browser using the following format:
   https://data_collector_host_name_or_IP_address:3033/
c. Press Enter. The Unisphere Central login page is displayed.
d. Type the user name and password of a Data Collector user with Administrator privileges in the User Name and Password fields.
e. Click Log In.
2. If a Storage Center is selected from the drop-down list in Unisphere Central, click (Home). The Unisphere Central Home page is displayed.
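The address format used in the connection steps above can be expressed as a small helper, for example when scripting checks around the Data Collector. This is an illustrative sketch; `data_collector_url` and `looks_valid` are hypothetical names, and the only facts taken from the guide are the https scheme and port 3033.

```python
from urllib.parse import urlsplit

def data_collector_url(host, port=3033):
    """Build the address in the documented format:
    https://data_collector_host_name_or_IP_address:3033/
    """
    return f"https://{host}:{port}/"

def looks_valid(url):
    """Sanity-check that an address uses the expected scheme and port."""
    parts = urlsplit(url)
    return parts.scheme == "https" and parts.port == 3033

# Hypothetical address: data_collector_url("10.10.5.25")
```

A plain http:// address, or a different port, will not reach the Unisphere Central login page.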
Reset Password Aging Clock
The password aging clock determines when a password expires based on the minimum and maximum age requirements. Reset the password aging clock to restart it from the current date and time.
Prerequisites
Password Configuration must be enabled.
Steps
1. Connect to the Data Collector.
   a. Open a web browser.
   b. Type the address of the Data Collector in the web browser using the following format:
      https://data_collector_host_name_or_IP_address:3033/
   c. Press Enter.
4. Click the Users & System tab, then select the Password Configuration subtab.
5. Click Edit. The Password Configuration dialog box opens.
6. Select the Requires Password Change checkbox.
7. Click OK.

Related tasks
Configure Local Storage Manager User Password Requirements on page 663

Managing User Settings with the Storage Manager Client
Use the Storage Manager Client to change preferences for your user account.
Configure Charting Options
Threshold alert levels and Storage Center alerts can be configured to appear on charts for the current user, and chart colors can be changed for the current user, in the Charting Options section of the General tab.
Related concepts
Configuring User Settings for Charts on page 316

Configure Client Options
The default view, formatting of storage units, and warning/error threshold percentages can be configured for the current user in the Client Options section of the General tab.
Change the Warning Percentage Threshold
The warning percentage threshold specifies the utilization percentage at which storage objects indicate a warning.
Steps
1. In the top pane of the Storage Manager Client, click Edit User Settings. The Edit User Settings dialog box opens.
2. On the General tab, in the Warning Percentage Threshold field, enter the new utilization percentage at which storage objects indicate a warning.
3. Click OK to save changes and close the Edit User Settings dialog box.
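The threshold logic described above can be sketched as a simple classification. This is an illustrative model, not Storage Manager code; the default percentages shown are assumptions, since the actual warning and error thresholds are per-user settings.

```python
def utilization_state(used_bytes, total_bytes, warn_pct=80.0, error_pct=90.0):
    """Classify a storage object's utilization against the user's
    warning/error percentage thresholds (defaults here are assumed,
    illustrative values, not product defaults).
    """
    pct = 100.0 * used_bytes / total_bytes
    if pct >= error_pct:
        return "error"
    if pct >= warn_pct:
        return "warning"
    return "ok"
```

For example, an object at 85% utilization with an 80% warning threshold and a 90% error threshold would show a warning but not an error.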
24
SupportAssist Management
SupportAssist sends data to technical support for monitoring and troubleshooting purposes. You can configure SupportAssist to send diagnostic data automatically, or you can send diagnostic data manually when needed. SupportAssist settings can be configured for all managed Storage Centers or individually for each Storage Center.
Configure SupportAssist Settings for the Data Collector
Modify the SupportAssist settings for the Data Collector.
Steps
1. If a Storage Center is selected from the drop-down list in Unisphere Central, click (Home). The Unisphere Central Home page is displayed.
2. Click Data Collector. The Data Collector view is displayed.
3. Click the Monitoring tab, and then click the SupportAssist subtab.
4. Click Edit. The SupportAssist dialog box opens.
   a.
Manually Sending Diagnostic Data Using SupportAssist
You can send diagnostic data manually using SupportAssist for multiple Storage Centers or for a specific Storage Center. If a Storage Center does not have Internet connectivity or cannot communicate with the SupportAssist servers, you can export the data to a file and send it to technical support manually.

Manually Send Diagnostic Data for Multiple Storage Centers
You can send diagnostic data for multiple Storage Centers from the Data Collector.
Steps
1.
5. Click Send SupportAssist Information Now. The Send SupportAssist Information Now dialog box opens.
6. In the Reports area, select the checkboxes of the Storage Center reports to send.
7. In the Time Range area, specify the period of time for which you want to send data.
   a. In the Start Date fields, specify the start date and time.
   b. To specify an end date, clear the Use Current Time For End Date checkbox and specify a date and time in the End Date fields.
Saving SupportAssist Data to a USB Flash Drive
If the Storage Center is not configured to send SupportAssist data, or is unable to send it to the SupportAssist server, you can save the SupportAssist data to a USB flash drive and then send the data to technical support.

USB Flash Drive Requirements
The flash drive must meet the following requirements to be used to save SupportAssist data:
● USB 2.0
● Minimum size of 4 GB

Prepare the USB Flash Drive
When the USB flash drive contains a file named phonehome.
6. To accept the terms, select the By checking this box, you accept the above terms checkbox.
7. Click Next.
8. Select the Detailed Logs checkbox to save this information to the USB flash drive.
   NOTE: Storage Manager saves the Storage Center configuration data to the USB flash drive automatically.
9. Click Finish. The dialog box displays SupportAssist progress and closes when the process is complete.
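The 4 GB minimum size requirement above can be checked before starting the export. This is an illustrative sketch using the Python standard library; the drive letter in the comment is hypothetical, and the USB 2.0 requirement must be verified separately (capacity alone does not confirm the interface version).

```python
import shutil

def meets_capacity_requirement(mount_point, minimum_bytes=4 * 1024**3):
    """Return True if the drive at mount_point has at least the minimum
    total capacity (default: 4 GiB, approximating the guide's 4 GB
    requirement).
    """
    usage = shutil.disk_usage(mount_point)
    return usage.total >= minimum_bytes

# Hypothetical drive letter on Windows:
# meets_capacity_requirement("E:\\")
```

`shutil.disk_usage` reports total, used, and free space for the filesystem containing the given path, so pass the flash drive's own mount point, not a folder on another disk.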
e. Select the time zone for the onsite contact from the Time Zone drop-down menu.
8. Specify the site address in the Onsite Address area.
9. Click OK.

Configure SupportAssist to Automatically Download Updates
Configure SupportAssist to automatically download updates to the Storage Center.
Steps
1. Click the Storage view.
2. In the Storage view navigation pane, select a Storage Center.
3. In the right pane, click Edit Settings. The Edit Storage Center Settings dialog box opens.
4.
Controlling Data Sent to CloudIQ
When a Storage Center has been onboarded to CloudIQ and SupportAssist is enabled, the CloudIQ Enabled option appears in the SupportAssist settings tab and is selected by default. When the CloudIQ Enabled checkbox is selected, the Storage Center sends data to CloudIQ more frequently than, and independently of, the SupportAssist schedule. You can remain connected to CloudIQ but stop sending data by clearing the checkbox.
Steps
1. Click the Storage view.
2.