Dell Storage Manager 2016 R3 Administrator’s Guide
Notes, Cautions, and Warnings NOTE: A NOTE indicates important information that helps you make better use of your computer. CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem. WARNING: A WARNING indicates a potential for property damage, personal injury, or death. Copyright © 2017 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Contents
About This Guide
How to Find Information
Contacting Dell
Revision History
Prepare for Disaster Recovery
Part II: Storage Management
3 Storage Center Overview
How Storage Virtualization Works
Open the Discover and Configure Uninitialized Storage Centers Wizard From the Dell Storage Manager Client Welcome Screen
Open the Discover and Configure Uninitialized Storage Centers Wizard from the Dell Storage Manager Client
Discover and Select an Uninitialized Storage Center
Enter Key Management Server Settings
Create a Storage Type
Configure Ports
Managing Expiration Rules for Remote Snapshots
Managing Storage Profiles
Create a Storage Profile (Storage Center 7.2.1 and Earlier)
Download the Server Agent
Install and Register the Server Agent
Manage the Server Agent with Server Agent Manager
Uninstalling the Server Agent
Edit a Volume Folder
Delete a Volume Folder
Move a Volume to a Folder
View Outbound Replications
View Inbound Replications
View Replication History
View Alerts
Problems Configuring Server Clusters Defined on a Storage Center with Dell Fluid Cache for SAN
10 Storage Center Maintenance
Managing Storage Center Settings
Viewing and Modifying Storage Center Information
Disk Management on SCv2000 Series Controllers
Scan for New Disks
Create a Disk Folder
Delete Disk Folder
Enable or Disable a Controller Indicator Light
Replace a Failed Cooling Fan Sensor
Configure Back-End Ports
Managing IO Card Changes
Monitoring a Storage Center Controller
Monitoring a Storage Center Disk Enclosure
Monitoring SSD Endurance
Viewing UPS Status
Reconnect to the FluidFS Cluster
Connect to the FluidFS Cluster CLI Using a VGA Console
Connect to the FluidFS Cluster CLI Through SSH Using a Password
Connect to the FluidFS Cluster CLI Using SSH Key Authentication
Enable or Disable NAS Controller Blinking
Validate Storage Connections
15 FluidFS Networking
Managing the Default Gateway
Add an Administrator
Assign NAS Volumes to a Volume Administrator
Change the Permission Level of an Administrator
Managing User Mapping Rules
17 FluidFS NAS Volumes, Shares, and Exports
Managing the NAS Pool
View Internal Storage Reservations
Configuring Branch Cache
Accessing an SMB Share Using UNIX or Linux
Managing NFS Exports
Configuring NFS Exports
Target NAS Volumes
Managing Replication Partnerships
Replicating NAS Volumes
Managing Firmware Updates
Restoring the NAS Volume Configuration
NAS Volume Configuration Backups
Internal Backup Power Supply
Internal Storage
Internal Cache
Managing the Health Scan Throttling Mode
Change the Health Scan Settings
Managing the Operation Mode
Change the Client Network Bonding Mode
Viewing the Fibre Channel WWNs
26 FluidFS Account Management and Authentication
Account Management and Authentication
Enable or Disable TLS Encryption for the LDAP Connection
Disable LDAP Authentication
Managing NIS
Setting Permissions for an NFS Export
Accessing an NFS Export
Global Namespace
NDMP Include/Exclude Path
Viewing NDMP Jobs and Events
Managing Replication
Add NAS Appliances to a FluidFS Cluster
Delete a NAS Appliance from the FluidFS Cluster
Detaching, Attaching, and Replacing a NAS Controller
Detach a NAS Controller
Troubleshoot Networking Problems
Troubleshoot Replication Issues
Troubleshoot System Issues
Live Volume Types
Live Volume Icon
Live Volumes Roles
Install a Virtual Appliance as a Remote Data Collector
Disconnecting and Reconnecting a Remote Data Collector
Temporarily Disconnect a Remote Data Collector
Reconnect a Remote Data Collector to a Storage Center
Types of Volume Movement Recommendations
Creating Threshold Definitions to Recommend Volume Movement
Moving a Volume Based on a Recommendation
Export Threshold Alert Data to a File
View a Chart of Department Costs for a Chargeback Run
View the Results of the Chargeback Run in Table Format
View Cost and Storage Savings Realized by Dynamic Capacity for a Chargeback Run
View Cost and Storage Savings Realized by Using Data Instant Snapshots for a Chargeback Run
Configuring SMI-S Settings
Managing Available Storage Centers
Managing Available PS Series Groups
Managing User Settings with the Dell Storage Manager Client
Change User Password
Configure Email Settings
Change the Preferred Language
About This Guide
This guide describes how to use Storage Manager to manage and monitor your storage infrastructure. For information about installing and configuring required Storage Manager components, see the Storage Manager Installation Guide.
How to Find Information
To find a description of a field or option in the user interface: In Storage Manager, click Help.
To find tasks that can be performed from a particular area of the user interface: 1. Navigate to that area of the user interface. 2. In Storage Manager, click Help.
Audience Storage administrators make up the target audience for this document. The intended reader has a working knowledge of storage and networking concepts. Related Publications The following documentation is available for Dell storage components managed using Storage Manager. Storage Manager Documents • Storage Manager Installation Guide Contains installation and setup information. • Storage Manager Administrator’s Guide Contains in-depth feature configuration and usage information.
FluidFS Cluster Documents • Dell FluidFS Version 6.0 FS8600 Appliance Pre-Deployment Requirements Provides a checklist that assists in preparing to deploy an FS8600 appliance prior to a Dell installer or certified business partner arriving on site to perform an FS8600 appliance installation. The target audience for this document is Dell installers and certified business partners who perform FS8600 appliance installations. • Dell FluidFS Version 6.
Part I Introduction to Storage Manager This section provides an overview of Storage Manager and describes how to get started.
1 Storage Manager Overview Storage Manager allows you to monitor, manage, and analyze Storage Centers, FluidFS clusters, PS Series Groups, and Fluid Cache clusters from a centralized management console. The Storage Manager Data Collector stores data and alerts it gathers from Storage Centers and FluidFS clusters in an external database or an embedded database. Dell Storage Manager Client connects to the Data Collector to perform monitoring and administrative tasks.
Product Versions
Dell Storage Center: Storage Center versions 6.5–7.2
PS Series group firmware: 7.0–9.1
Dell FluidFS: 4.0–6.0
Microsoft System Center Virtual Machine Manager (SCVMM): 2012, 2012 SP1, 2012 R2, and 2016
VMware vCenter Site Recovery Manager (SRM): 5.5, 5.8, 6.0, 6.1.1, and 6.5
Dell Storage Replication Adapter (SRA): 16.3.10
CITV 4.
Component Requirements
NOTE: Other web browsers might work but are not officially supported.
External database — one of the following databases:
• Microsoft SQL Server 2008 R2
• Microsoft SQL Server 2008 R2 Express (limited to 10 GB)
• Microsoft SQL Server 2012
• Microsoft SQL Server 2012 Express (limited to 10 GB)
• Microsoft SQL Server 2014
• Microsoft SQL Server 2014 Express (limited to 10 GB)
• Microsoft SQL Server 2016
• MySQL 5.5
• MySQL 5.6
• MySQL 5.
Component Requirements
Web browser — any of the following web browsers:
• Internet Explorer 11
• Firefox
• Google Chrome
• Microsoft Edge
NOTE: Other web browsers might work but are not officially supported.
Server Agent Requirements
The following table lists the requirements for the Storage Manager Server Agent for Windows-based servers.
Port Protocol Name Purpose
• Alerts forwarded from Storage Center SANs
7342 TCP Legacy Client Listener Port:
• Communicating with the remote Data Collector
• Providing automatic update functionality for previous versions of the Dell Storage Manager Client
5989 TCP SMI-S over HTTPS: Receiving encrypted SMI-S communication
Outbound Data Collector Ports
The Data Collector initiates connections to the following ports.
Port Protocol Name Purpose
27355 TCP Server Agent Socket Listening Port: Receiving communication from the Data Collector
Outbound Server Agent Port
The Server Agent initiates connections to the following port.
Port Protocol Name Purpose
8080 TCP Legacy Web Services Port: Communicating with the Data Collector
IPv6 Support
The Storage Manager Data Collector can use IPv6 to accept connections from the Dell Storage Manager Client and to communicate with managed Storage Center SANs.
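When troubleshooting connectivity, it can help to confirm from a workstation that the listening ports in these tables actually accept TCP connections. The following is a minimal sketch; the host names are placeholders for your own Data Collector and Server Agent, not values from this guide.

```python
import socket

def check_port(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe the listening ports from the tables above.
# Host names below are placeholders, not values from this guide.
for host, port in [
    ("dc.example.local", 7342),      # Legacy Client Listener Port
    ("dc.example.local", 5989),      # SMI-S over HTTPS
    ("agent.example.local", 27355),  # Server Agent Socket Listening Port
]:
    status = "open" if check_port(host, port) else "unreachable"
    print(f"{host}:{port} {status}")
```

A result of "unreachable" can mean the service is stopped, a firewall is blocking the port, or the host name is wrong; this check cannot distinguish those cases.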
PS Group Management Storage Manager allows you to centrally manage your PS Groups. For each PS Group, you can configure volumes, snapshots, and replications between a PS Group and Storage Center. You can also configure access policies to grant volume access to hosts. FluidFS Cluster Management Storage Manager allows you to centrally manage your FluidFS clusters and monitor FluidFS cluster status and performance.
Replications and Live Volumes As part of an overall Disaster Recovery Plan, replication copies volume data from one managed storage system to another managed storage system to safeguard data against local or regional data threats. If the source storage system or source site becomes unavailable, you can activate the destination volume to regain access to your data. A Live Volume is a pair of replicating volumes that can be mapped and active at the same time.
space, Chargeback can be configured to charge based on storage usage, which is the amount of space used, or storage consumption, which is the difference in the amount of space used since the last Chargeback run. Related link Storage Center Chargeback Log Monitoring The Log Monitoring feature provides a centralized location to view Storage Center alerts, indications, and logs collected by the Storage Manager Data Collector and system events logged by Storage Manager.
Callout Client Elements Description
• About: When clicked, opens a dialog box that displays the software version of the Dell Storage Manager Client.
2 View pane: Displays options specific to the view that is currently selected. For example, when the Storage view is selected, the view pane displays the Storage Centers, PS Groups, and FluidFS clusters that have been added to Storage Manager.
3 Views: Displays the view buttons.
2 Getting Started Start the Dell Storage Manager Client and connect to the Data Collector. When you are finished, consider the suggested next steps. For instructions on setting up a new Storage Center, see Storage Center Deployment. Use the Client to Connect to the Data Collector Start the Dell Storage Manager Client and use it to connect to the Data Collector. By default, you can log on as a local Storage Manager user.
Figure 2. Dell Storage Manager Client Login
3. To change the language displayed in the Dell Storage Manager Client, select a language from the Display Language drop-down menu.
4. Specify your credentials.
• If you want to log on as a local Storage Manager user, Active Directory user, or OpenLDAP user, type the user name and password in the User Name and Password fields.
Figure 3. Dell Storage Manager Client Storage View Related link Authenticating Users with an External Directory Service Managing Local Users with the Data Collector Manager Next Steps This section describes some basic tasks that you may want to perform after your first log on to Storage Manager. These tasks are configuration dependent and not all tasks will be required at all sites.
Add Servers to your Storage Centers Use Storage Manager to add servers that use Storage Center volumes to your Storage Centers. To enable additional functionality, such as the ability to display operating system and connectivity information, and to manage the volumes or datastores mapped to the servers, register these servers to the Storage Manager Data Collector. Before you register Windows servers, you must first install the Storage Manager Server Agent.
Part II Storage Management This section describes how to use Storage Manager to administer, maintain, and monitor Storage Centers and PS Series groups.
3 Storage Center Overview Storage Center is a storage area network (SAN) that provides centralized, block-level storage that can be accessed by Fibre Channel, iSCSI, or Serial Attached SCSI (SAS). How Storage Virtualization Works Storage Center virtualizes storage by grouping disks into pools of storage called Storage Types, which hold small chunks (pages) of data. Block-level storage is allocated for use by defining volumes and mapping them to servers.
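The page-based allocation described above can be illustrated with a toy model: a thinly provisioned volume consumes pages from the shared pool only for regions that have actually been written. This sketch is illustrative only — the 2 MB figure reflects Storage Center's default data page size, and the real allocator is far more involved.

```python
class ThinVolume:
    """Toy model of page-based thin provisioning. Pages are drawn from a
    shared pool only when a region is first written. Illustrative only —
    not the actual Storage Center implementation."""

    PAGE_SIZE = 2 * 1024 * 1024  # Storage Center's default data page size (2 MB)

    def __init__(self, size_bytes: int):
        self.size_bytes = size_bytes      # logical (presented) size
        self.allocated_pages = set()      # pages actually backed by the pool

    def write(self, offset: int, length: int) -> None:
        """Allocate every page touched by a write to [offset, offset+length)."""
        first = offset // self.PAGE_SIZE
        last = (offset + length - 1) // self.PAGE_SIZE
        self.allocated_pages.update(range(first, last + 1))

    @property
    def consumed_bytes(self) -> int:
        """Physical space consumed, regardless of the logical volume size."""
        return len(self.allocated_pages) * self.PAGE_SIZE
```

In this model a 100 GB volume that has only ever received a single 4 KB write consumes just one 2 MB page of pool space, which is the essence of the virtualization described above.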
Disk Management Storage Center manages both physical disks and the data movement within the virtual disk pool. Disks are organized physically, logically, and virtually. • Physically: Disks are grouped by the enclosure in which they reside, as shown in the Enclosures folder. • Logically: Disks are grouped by class in disk folders. Storage Center enclosures may contain any combination of disk classes. • Virtually: All disk space is allocated into tiers.
Drive Spares Drive spares are drives that Storage Center reserves to replace a drive when one fails. When a drive fails, Storage Center restripes the data across the remaining drives using the spare drive as a replacement for the failed drive. Storage Center designates at least one drive spare for each disk class. For SCv2000 series, SC7020, SC7020F, SC5020, SC5020F, and SCv3000 storage systems, Storage Center groups drives into groups of no more than 21 drives.
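The rule above — at least one spare reserved per disk class — can be sketched as a simple mapping from populated disk classes to a minimum spare count. This is a deliberate simplification: Storage Center assigns spares automatically, and the actual number reserved can be higher than this floor.

```python
def minimum_spares(disk_classes: dict) -> dict:
    """Illustrative floor: at least one drive spare for each disk class
    that contains any drives. Storage Center manages spares automatically;
    this only models the 'at least one per class' rule stated above."""
    return {cls: 1 for cls, count in disk_classes.items() if count > 0}

# Hypothetical enclosure contents (class names are examples, not a spec):
print(minimum_spares({"7K SAS": 24, "SSD": 6, "15K SAS": 0}))
```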
Redundancy Redundancy levels provide fault tolerance for a drive failure. • Non-redundant: Uses RAID 0 in all classes, in all tiers. Data is striped but provides no redundancy. If one drive fails, all data is lost. Do not use non-redundant storage for a volume unless the data has been backed up elsewhere. • Single-redundant: Protects against the loss of any one drive. Single-redundant tiers can contain any of the following types of RAID storage.
Table 3. SSD Redundancy Recommendations and Requirements
Drive Size: Up to 1.7 TB for WI and RI — Redundancy Level: For most models of Storage Centers, single redundancy is the default when adding drives of this size to a new or existing page pool. NOTE: For drives of this size, dual-redundant is the default redundancy level for SCv3000 series, SC7020, SC7020F, SC5020, and SC5020F storage systems.
1.8 TB up to 3.
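The redundancy levels discussed in this section trade usable capacity for fault tolerance. The sketch below estimates usable capacity for RAID levels commonly used by Storage Center; the level names and stripe widths (for example, RAID 5-9 as 8 data plus 1 parity) are conventional assumptions supplied for illustration, not values stated in this guide.

```python
# Approximate usable fraction of raw capacity by RAID level.
# Stripe widths below are assumed conventions, not taken from this guide.
USABLE_FRACTION = {
    "RAID 0": 1.0,        # striped, no redundancy
    "RAID 10": 1 / 2,     # mirrored, two copies
    "RAID 10 DM": 1 / 3,  # dual mirror, three copies
    "RAID 5-5": 4 / 5,    # 4 data + 1 parity
    "RAID 5-9": 8 / 9,    # 8 data + 1 parity
    "RAID 6-6": 4 / 6,    # 4 data + 2 parity
    "RAID 6-10": 8 / 10,  # 8 data + 2 parity
}

def usable_tb(raw_tb: float, raid_level: str) -> float:
    """Estimate usable TB from raw TB for a given RAID level.
    Ignores metadata and spare overhead, so real figures are lower."""
    return raw_tb * USABLE_FRACTION[raid_level]

# Example: 100 TB raw under two dual-redundant layouts.
print(usable_tb(100, "RAID 10 DM"))  # mirrored, most overhead
print(usable_tb(100, "RAID 6-10"))   # parity-based, least overhead
```

The comparison makes the trade-off concrete: dual-redundant mirroring keeps only a third of raw capacity, while wide-stripe RAID 6 keeps about 80 percent at the cost of parity computation.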
Emergency Mode
Storage Center enters Emergency Mode when the system can no longer operate because it does not have enough free space. In Emergency Mode, Storage Center responds with the following actions:
• Generates an Emergency Mode alert.
• Expires snapshots at a faster rate than normal.
• Prevents new volume creation.
• Takes volumes offline. Data cannot be written to or read from volumes.
Storage Center Operation Modes
Storage Center operates in four modes: Installation, Pre-production, Normal, and Maintenance.
Name Description
Install: Storage Center is in Install mode before completing the setup wizard for the Storage Center. Once setup is complete, Storage Center switches to Pre-Production mode.
Pre-Production: During Pre-production mode, Storage Center suppresses alerts sent to support so that support is not alerted to expected test scenarios caused by testing.
blocks of data remain on high-performance drives, while less active blocks automatically move to lower-cost, high-capacity SAS drives. Because SSDs are automatically assigned to Storage Tier 1, profiles that include Storage Tier 1 allow volumes to use SSD storage. If you have volumes that contain data that is not accessed frequently, and do not require the performance of Tier 1 SSDs, use a Medium or Low Priority Profile or create and apply a new profile that does not include Storage Tier 1.
If Tier 1 fills to within 95% of capacity, Storage Center creates a space management snapshot and moves it immediately to Tier 2 to free up space on Tier 1. The space management snapshot is moved immediately and does not wait for a scheduled Data Progression. Space management snapshots are marked as Created On Demand and cannot be modified manually or used to create View Volumes. Space management snapshots coalesce into the next scheduled or manual snapshot.
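The 95% Tier 1 threshold described above reduces to a simple capacity check. The sketch below illustrates the documented trigger condition; it is not code from the product.

```python
def needs_space_management(tier_used: float, tier_capacity: float,
                           threshold: float = 0.95) -> bool:
    """Return True when a tier has crossed the fill threshold that triggers
    an on-demand space management snapshot move (95% for Tier 1, per the
    behavior described above). Illustrative only."""
    return tier_capacity > 0 and tier_used / tier_capacity >= threshold

# Example: a 10 TB Tier 1 with 9.6 TB consumed has crossed 95% full.
print(needs_space_management(9.6, 10.0))
```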
RAID Tiering for SCv2000 Series Controllers
RAID Tiering for SCv2000 series controllers moves data between RAID 10 and RAID 5/6. It does not move data between Storage Tiers. RAID Tiering runs every day at 7 PM. Data Progression runs until it completes or reaches the maximum run time.
Storage Profiles for SCv2000 Series Controllers
The following table summarizes the Storage Profiles available to SCv2000 series controllers.
Summary Tab The Summary tab displays a customizable dashboard that summarizes Storage Center information. The Summary tab is displayed by default when a Storage Center is selected from the Storage navigation tree. Figure 4. Summary Tab Related link Managing Storage Center Settings Viewing Summary Information Storage Tab The Storage tab of the Storage view allows you to view and manage storage on the Storage Center. This tab is made up of two elements: the navigation pane and the right pane. Figure 5.
Navigation Pane
The Storage tab navigation pane shows the following nodes:
• Storage Center: Shows a summary of current and historical storage usage on the selected Storage Center.
• Volumes: Allows you to create and manage volumes and volume folders on the selected Storage Center, as well as create a local recovery from a volume snapshot. You can also create storage containers, which are used with virtual volumes.
Hardware Tab The Hardware tab of the Storage view displays status information for the Storage Center hardware and allows you to perform hardware-related tasks. Figure 6. Hardware Tab Related link Monitoring Storage Center Hardware Managing Disk Enclosures Shutting Down and Restarting a Storage Center IO Usage Tab The IO Usage tab of the Storage view displays historical IO performance statistics for the selected Storage Center and associated storage objects. Figure 7.
Charting Tab The Charting tab of the Storage view displays real-time IO performance statistics for the selected storage object. Figure 8. Charting Tab Related link Viewing Current IO Performance Alerts Tab The Alerts tab displays alerts for the Storage Center. Figure 9.
Logs Tab The Logs tab displays logs from the Storage Center. Figure 10.
4 Storage Center Deployment Use the Discover and Configure Uninitialized Storage Centers or Configure Storage Center wizard to set up a Storage Center to make it ready for volume creation and storage management. After configuring a Storage Center, you can set up a localhost, or a VMware vSphere or vCenter host.
The Discover and Configure Uninitialized Storage Centers wizard appears. Discover and Select an Uninitialized Storage Center The first page of the Discover and Configure Uninitialized Storage Centers wizard provides a list of prerequisite actions and information required before setting up a Storage Center. Prerequisites • The host server, on which the Storage Manager software is installed, must be on the same subnet or VLAN as the Storage Center.
Set System Information The Set System Information page allows you to enter Storage Center and storage controller configuration information to use when connecting to the Storage Center using Storage Manager. 1. Type a descriptive name for the Storage Center in the Storage Center Name field. 2. Type the system management IPv4 address for the Storage Center in the Virtual Management IPv4 Address field.
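Before entering addresses on this page, a quick pre-check that each management address and the gateway share a subnet can save a failed configuration attempt. This is a generic sketch using the Python standard library; all addresses shown are placeholders.

```python
import ipaddress

def same_subnet(address: str, netmask: str, gateway: str) -> bool:
    """Return True when the address and gateway fall on the same IPv4
    subnet. A generic sanity check for management-network settings;
    it does not validate anything against an actual Storage Center."""
    network = ipaddress.IPv4Network(f"{address}/{netmask}", strict=False)
    return ipaddress.IPv4Address(gateway) in network

# Placeholder values, not addresses from this guide:
print(same_subnet("192.168.1.50", "255.255.255.0", "192.168.1.1"))
```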
Confirm the Storage Center Configuration Make sure that the configuration information shown on the Confirm Configuration page is correct before continuing. 1. Verify that the Storage Center settings are correct. 2. If the configuration information is correct, click Apply Configuration. If the configuration information is incorrect, click Back and provide the correct information.
c. (Optional) In the Backup SMTP Mail Server field, enter the IP address or fully qualified domain name of a backup SMTP mail server. Click Test Server to verify connectivity to the backup SMTP server. d. If the SMTP server requires emails to contain a MAIL FROM address, specify an email address in the Sender Email Address field. e. (Optional) In the Common Subject Line field, enter a subject line to use for all emails sent by the Storage Center. f.
a. Select Enabled. b. Enter the proxy settings. c. Click OK. The Storage Center attempts to contact the SupportAssist Update Server to check for updates. Configure the Storage Center Update Utility The Storage Center Update Utility is used to update Storage Centers that are not connected to the SupportAssist update server. Configure Storage Center to use the Storage Center Update Utility if SupportAssist is not enabled. Prerequisite SupportAssist must be disabled.
Steps 1. Configure the fault domain and ports (embedded fault domain 1 or Flex Port Domain 1). NOTE: The Flex Port feature allows both Storage Center system management traffic and iSCSI traffic to use the same physical network ports. However, for environments where the Storage Center system management ports are mixed with network traffic from other devices, separate the iSCSI traffic from management traffic using VLANs. a. Enter the target IPv4 address, subnet mask, and gateway for the fault domain. b.
Discover and Select an Uninitialized Storage Center
The first page of the Discover and Configure Uninitialized Storage Centers wizard provides a list of prerequisite actions and information required before setting up a Storage Center.
Prerequisites
• The host server on which the Storage Manager software is installed must be on the same subnet or VLAN as the Storage Center.
• Temporarily disable any firewall on the host server that is running Storage Manager.
Set Administrator Information The Set Administrator Information page allows you to set a new password and an email address for the Admin user. 1. Enter a new password for the default Storage Center administrator user in the New Admin Password and Confirm Password fields. 2. Enter the email address of the default Storage Center administrator user in the Admin Email Address field. 3. Click Next. • For a Fibre Channel or SAS storage system, the Confirm Configuration page appears.
Steps 1. Select the Storage Center whose settings you want to copy. 2. Place a check next to each setting that you want to inherit, or click Select All to inherit all settings. 3. Click Next. If you chose to inherit time and SMTP settings from another Storage Center, the Time Settings and SMTP Server Settings pages are skipped in the wizard. Configure Time Settings Configure an NTP server to set the time automatically, or set the time and date manually. 1.
Provide Contact Information Enter contact information for technical support to use when sending support-related communications from SupportAssist. 1. Specify the contact information. 2. To receive SupportAssist email messages, select Yes, I would like to receive emails from SupportAssist when issues arise, including hardware failure notifications. 3. Select the preferred contact method, language, and available times. 4. Type a shipping address where replacement Storage Center components can be sent.
Set Default Storage Profile (SCv2000 Series Controllers Only) The storage profile determines the RAID types used when creating a volume. 1. Select a profile from the Default Storage Profile drop-down menu. NOTE: It is recommended to use the Maximize Efficiency storage profile if you plan to import data to this Storage Center. 2. (Optional) To allow a different storage profile to be selected when creating a volume, place a check next to Allow Storage Profile selection when creating a volume. 3.
Steps
1. Click the Storage view.
2. In the Storage pane, click Storage Centers.
3. In the Summary tab, click Discover and Configure Uninitialized Storage Centers. The Discover and Configure Uninitialized Storage Centers wizard appears.
Discover and Select an Uninitialized Storage Center
The first page of the Discover and Configure Uninitialized Storage Centers wizard provides a list of prerequisite actions and information required before setting up a Storage Center.
d. Type Admin in the User Name field, type the password entered on the Set Administrator Information page in the Password field, and click Next. Set System Information The Set System Information page allows you to enter Storage Center and storage controller configuration information to use when connecting to the Storage Center using Storage Manager. 1. Type a descriptive name for the Storage Center in the Storage Center Name field. 2.
Enter Key Management Server Settings Specify key management server settings, such as hostname and port. 1. In the Hostname field, type the host name or IP address of the key management server. 2. In the Port field, type the number of a port with open communication with the key management server. 3. In the Timeout field, type the amount of time in seconds after which the Storage Center should stop attempting to reconnect to the key management server after a failure. 4.
NOTE: If the Storage Center is not cabled correctly to create fault domains, the Cable Ports page opens and explains the issue. Click Refresh after cabling more ports. Steps 1. Review the fault domains that have been created. 2. (Optional) Click Copy to clipboard to copy the fault domain information. 3. (Optional) Review the information on the Zoning, Hardware, and Cabling Diagram tabs. NOTE: The ports must already be zoned. 4. Click Next.
Inherit Settings Use the Inherit Settings page to copy settings from a Storage Center that is already configured. Prerequisite You must be connected through a Data Collector. Steps 1. Select the Storage Center whose settings you want to copy. 2. Place a check next to each setting that you want to inherit, or click Select All to inherit all settings. 3. Click Next.
• Click No to return to the SupportAssist Data Collection and Storage page and accept the agreement. • Click Yes to opt out of using SupportAssist and proceed to the Update Storage Center page. Provide Contact Information Enter contact information for technical support to use when sending support-related communications from SupportAssist. 1. Specify the contact information. 2.
Discover and Configure Uninitialized SC5020 and SC7020 Storage Centers When setting up the system, use the Discover and Configure Uninitialized Storage Centers wizard to find new SC5020, SC5020F, SC7020, or SC7020F Storage Centers. The wizard helps set up a Storage Center to make it ready for volume creation.
NOTE: If the wizard does not discover the Storage Center that you want to initialize, perform one of the following actions:
• Make sure that the Storage Center hardware is physically attached to all necessary networks.
• Click Rediscover.
• Click Troubleshoot Storage Center Hardware Issue to learn more about reasons why the Storage Center is not discoverable.
• Follow the steps in Deploy the Storage Center Using the Direct Connect Method.
3. Select the Storage Center to initialize.
4.
Set Administrator Information The Set Administrator Information page allows you to set a new password and an email address for the Admin user. 1. Enter a new password for the default Storage Center administrator user in the New Admin Password and Confirm Password fields. 2. Enter the email address of the default Storage Center administrator user in the Admin Email Address field. 3. Click Next. • For a Fibre Channel or SAS storage system, the Confirm Configuration page appears.
Create a Storage Type
Select the datapage size and redundancy level for the Storage Center.
1. Select a datapage size.
• Standard (2 MB Datapage Size): The default datapage size. This selection is appropriate for most applications.
• High Performance (512 KB Datapage Size): Appropriate for applications with high performance needs, or in environments in which snapshots are taken frequently under heavy IO. Selecting this size increases overhead and reduces the maximum available space in the Storage Type.
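The tradeoff between the two datapage sizes comes down to page count: the same volume needs four times as many 512 KB pages as 2 MB pages, and every page carries tracking overhead. A back-of-the-envelope sketch in Python — the page sizes come from the options above, but the calculation itself is purely illustrative:

```python
def pages_required(volume_bytes: int, page_bytes: int) -> int:
    """Datapages needed to hold a fully allocated volume (ceiling division)."""
    return -(-volume_bytes // page_bytes)

TIB = 1024 ** 4
MIB = 1024 ** 2

standard = pages_required(1 * TIB, 2 * MIB)             # 2 MB datapages
high_performance = pages_required(1 * TIB, 512 * 1024)  # 512 KB datapages

# The same 1 TB volume needs 4x as many pages at the smaller size,
# which is the overhead the High Performance option trades away
# for finer-grained snapshots and allocation.
```

This is why the High Performance option reduces the maximum available space in the Storage Type: more pages to track means more metadata.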
• If you are setting up iSCSI fault domains, the Configure iSCSI Fault Domain page opens. • If you are setting up SAS back-end ports but not iSCSI fault domains, the Configure Back-End Ports page opens. • If you are not setting up iSCSI fault domains or SAS back-end ports, the Inherit Settings or Time Settings page opens. Configure iSCSI Ports (Configure Storage Center Wizard) Create an iSCSI fault domain to group ports for failover purposes. 1.
If you chose to inherit time and SMTP settings from another Storage Center, the Time Settings and SMTP Server Settings pages are skipped in the wizard. Configure Time Settings Configure an NTP server to set the time automatically, or set the time and date manually. 1. From the Region and Time Zone drop-down menus, select the region and time zone used to set the time. 2.
4. Type a shipping address where replacement Storage Center components can be sent.
5. Click Next.
Update Storage Center
The Storage Center attempts to contact the SupportAssist Update Server to check for updates. If you are not using SupportAssist, you must use the Storage Center Update Utility to update the Storage Center operating system before continuing.
• If no update is available, the Storage Center Up to Date page appears. Click Next.
• The Storage Center must be added to Storage Manager using a Storage Center user with the Administrator or Volume Manager privilege. • On a Storage Center with Fibre Channel IO ports, configure the Fibre Channel zoning. Steps 1. On the Configuration Complete page of the Discover and Configure Storage Center wizard, click Configure this host to access a Storage Center. The Set up localhost on Storage Center wizard appears. 2. Click Next. • 3.
Set Up a VMware vCenter Host from Initial Setup Configure a VMware vCenter host to access block-level storage on the Storage Center. Prerequisites • Client must be running on a system with a 64-bit operating system. • The Dell Storage Manager Client must be run by a Dell Storage Manager Client user with the Administrator privilege. • The Storage Center must be added to Storage Manager using a Storage Center user with the Administrator or Volume Manager privilege.
5 Storage Center Administration Storage Center provides centralized, block-level storage that can be accessed by Fibre Channel, iSCSI, or SAS. Adding and Organizing Storage Centers An individual Storage Manager user can view and manage only the Storage Centers that have been mapped to his or her account. This restriction means that the Storage Centers that are visible to one Storage Manager user are not necessarily visible to another user.
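The per-user visibility rule described above can be pictured as a mapping from user accounts to the Storage Centers mapped to them. This is a conceptual sketch only — the names below are hypothetical and do not reflect the Storage Manager data model:

```python
# Hypothetical mapping of Storage Manager user accounts to the
# Storage Centers mapped to each account (illustrative names).
USER_STORAGE_CENTERS = {
    "admin_a": {"SC-Production", "SC-Backup"},
    "admin_b": {"SC-Backup", "SC-Lab"},
}

def visible_storage_centers(user: str) -> set:
    """Storage Centers the given user can view and manage; a Storage
    Center not mapped to the account is simply invisible to that user."""
    return USER_STORAGE_CENTERS.get(user, set())
```

Here admin_a can manage SC-Production but never sees SC-Lab, matching the rule that Storage Centers visible to one user are not necessarily visible to another.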
Adding and Removing Storage Centers Use the Dell Storage Manager Client to add or remove Storage Centers. NOTE: For user interface reference information, click Help. Add a Storage Center Add a Storage Center to Storage Manager to manage and monitor the Storage Center from the Dell Storage Manager Client. Prerequisites • You must have the user name and password for a Storage Center user account.
Figure 12. Add Storage Center page
4. (Conditional) If the dialog box is displaying a list of Storage Centers, select a Storage Center from the list or add a new one.
• To add a Storage Center that does not appear in the list, make sure the Add a new Storage Center to the Data Collector check box is selected, then click Next.
• To add a Storage Center that appears in the list, clear the Add a new Storage Center to the Data Collector check box, select the appropriate Storage Center, then click Next.
5.
Reconnect to a Storage Center If Storage Manager cannot communicate with or log in to a Storage Center, Storage Manager marks the Storage Center as down. Reconnect to the Storage Center to provide the updated connectivity information or credentials. 1. Click the Storage view. 2. In the Storage pane, select the Storage Center. 3. In the Summary tab, click Reconnect to Storage Center. The Reconnect to Storage Center dialog box appears. 4. Enter Storage Center logon information.
Rename a Storage Center Folder Use the Edit Settings dialog box to change the name of a Storage Center folder. 1. Click the Storage view. 2. In the Storage pane, select the Storage Center folder you want to modify. 3. In the Summary tab, click Edit Settings. The Edit Settings dialog box opens. 4. In the Name field, type a name for the folder. 5. Click OK. Move a Storage Center Folder Use the Edit Settings dialog box to move a Storage Center folder. 1. Click the Storage view. 2.
Related links
Managing Storage Profiles
Managing Snapshot Profiles
Managing QoS Profiles
Volume Icons
The following icons appear next to volumes in the Storage tab navigation pane:
• The volume is not mapped to any servers.
• The volume is mapped to one or more servers.
• The volume is the source for a replication to a remote Storage Center.
NOTE: This icon is also displayed for volumes that have been configured to Copy, Mirror, or Migrate in the Storage Center Manager.
9.
• If more than one Storage Type is defined on the Storage Center, select the Storage Type to provide storage from the Storage Type drop-down menu.
• To set a Volume QoS Profile, either accept the default QoS Profile or click Change across from Volume QoS Profile. Then select a Volume QoS profile from the resulting list, and click OK.
• To set a Group QoS Profile, click Change across from Group QoS Profile. Then select a Group QoS profile from the resulting list, and click OK.
• To create a volume as a replication, select Replication Volume to Another Storage Center. • To create the volume as a Live Volume, select Create as Live Volume. 16. Click Next. The Volume Summary page appears. 17. Click Finish. Create Multiple Volumes Simultaneously Using Single-Step Dialog If you need to create many volumes, you can streamline the process by creating multiple volumes at a time. 1. Select a Storage Center from the Storage view.
Create Multiple Volumes Simultaneously Using the Multiple-step Wizard
If you need to create many volumes, you can streamline the process by creating multiple volumes at a time. The multiple-step wizard is the default way to create volumes for the SCv2000 series controllers, and the only method available for creating multiple volumes simultaneously on directly connected SCv2000 series controllers.
1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only)
2.
The Set Snapshot Profiles page appears.
11. Select a Snapshot Profile.
• (Optional) To create a new Snapshot Profile, click Create New Snapshot Profile.
12. Click Next. The Map to Server page appears.
13. Select a server. For more detailed options, click Advanced Mapping. To create a volume without selecting a server, click Yes in the No Server Specified dialog box. To create a new server, click New Server.
14. Click Next. The Replication Tasks page appears. This step appears only if Replication is licensed.
3. In the Storage tab navigation pane, select the volume you want to modify. 4. In the right pane, click Edit Settings. The Edit Volume dialog box opens. 5. In the Name field, type a new name for the volume. 6. When you are finished, click OK. Move a Volume to a Different Volume Folder Volumes can be organized by placing them in folders. 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. Click the Storage tab. 3.
Steps
1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only)
2. Make sure Allow Cache Selection is enabled for volumes in the Storage Center user preferences.
a. In the Summary tab, click Edit Settings. The Edit Settings dialog box opens.
b. Click the Preferences tab.
c. Make sure the Allow Cache Selection check box is selected.
d. Click OK.
3. Click the Storage tab.
4. In the Storage tab navigation pane, select the volume you want to modify.
5.
Assign a Different Storage Profile to a Volume The Storage Profile determines the RAID type and storage tiers used by the volume. 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. Click the Storage tab. 3. In the Storage tab navigation pane, select the volume you want to modify. 4. In the right pane, click Storage Profile. The Set Storage Profile dialog box opens. 5. From the Storage Profile drop-down menu, select a Storage Profile. 6.
8. Click OK to close the Edit Volume dialog box. Configure a Space Consumption Limit for a Volume Set a space consumption limit to specify the maximum space that can be used on the volume. 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. Click the Storage tab. 3. In the Storage tab navigation pane, select the volume you want to modify. 4. In the right pane, click Edit Settings. The Edit Volume dialog box appears. 5.
Copying Volumes
Copy a volume to create an identical volume for backup or reuse of the data. The destination volume of a copy, mirror, or migrate must meet the following requirements:
• Must not be mapped to a server.
• Must be the same size or larger than the source volume.
• Cannot be active on another controller.
Copy a Volume
Copying a volume copies the data from a source volume to a destination volume.
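The three destination-volume requirements amount to simple checks that must all pass before a copy, mirror, or migrate can start. A hypothetical validation sketch — the helper and its parameter names are illustrative, not part of the Storage Manager API:

```python
def copy_destination_errors(source_size: int, dest_size: int,
                            dest_mapped: bool,
                            dest_active_elsewhere: bool) -> list:
    """Check the three destination rules for a copy, mirror, or migrate.
    Returns an empty list when the destination volume is acceptable.
    (Hypothetical helper; not part of the Storage Manager API.)"""
    errors = []
    if dest_mapped:
        errors.append("destination must not be mapped to a server")
    if dest_size < source_size:
        errors.append("destination must be the same size or larger than the source")
    if dest_active_elsewhere:
        errors.append("destination cannot be active on another controller")
    return errors
```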
Migrate a Volume Migrating a volume copies a source volume with its server to volume mappings to a destination volume. After migrating the volume, the destination volume is mapped to all servers previously mapped to the source volume. 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. Click the Storage tab. 3. In the Storage tab navigation pane, select a volume. 4. In the right pane, select Local Copy → Migrate Volume.
7. Click OK.
Delete a Copy, Mirror, or Migrate Relationship
Delete a copy, mirror, or migrate relationship to prevent the source volume from copying to the destination volume. Deleting a relationship removes it from both the source and destination volumes.
Prerequisite
The volume must be involved in a copy, mirror, or migrate relationship.
Steps
1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only)
2. Click the Storage tab.
3.
Before Live Migration
Before a Live Migration, the server sends I/O requests only to the volume to be migrated.
Figure 13. Example of Configuration Before Live Migration
1. Server
2. Server I/O request to volume over Fibre Channel or iSCSI
3. Volume to be migrated
Live Migration Before Swap Role
In the following diagram, the source Storage Center is on the left and the destination Storage Center is on the right.
Figure 14. Example of Live Migration Configuration Before Swap Role
1. Server
2.
Live Migration After Swap Role
In the following diagram, a role swap has occurred. The destination Storage Center is on the left and the new source Storage Center is on the right.
Figure 15. Example of Live Migration Configuration After Swap Role
1. Server
2. Server I/O request to destination volume (forwarded to source Storage Center by destination Storage Center)
3. Destination volume
4. New source volume
Live Migration After Complete
In the following diagram, the Live Migration is complete.
Create a Live Migration for a Single Volume Use Live Migration to move a volume from one Storage Center to another Storage Center with limited or no downtime. Prerequisites • The volume to be migrated must be mapped to a server. • The volume cannot be part of a replication, Live Volume, or Live Migration. About this task NOTE: Live Migration is not supported on SCv2000 series storage systems. Steps 1. Select a Storage Center from the Storage view.
NOTE: If Fibre Channel or iSCSI connectivity is not configured between the local and remote Storage Centers, a dialog box opens. Click Yes to configure iSCSI connectivity between the Storage Centers. 7. (Optional) Modify Live Migration default settings. • In the Replication Attributes area, configure options that determine how replication behaves. • In the Destination Volume Attributes area, configure storage options for the destination volume and map the destination volume to a server. • 8. 9.
Complete a Live Migration Complete a Live Migration to stop server I/O requests to the old source Storage Center and send all I/O requests only to the destination Storage Center. The old destination Storage Center is now the new source Storage Center. You can complete a single Live Migration or multiple Live Migrations at one time. Prerequisites • Swap roles must be complete for the Live Migration. • The Live Migration must be in the Ready to be Completed state. Steps 1.
About this task NOTE: It is recommended to delete a Live Migration only when both the source and destination Storage Centers show their status as Up and are connected to Dell Storage Manager. Steps 1. Click the Replications & Live Volumes view. 2. On the Live Migrations tab, select the Live Migration you want to delete. 3. Click Delete. The Delete dialog box opens. 4. Click OK to delete the Live Migration.
Rename a Volume Folder Use the Edit Settings dialog box to rename a volume folder. 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. Click the Storage tab. 3. In the Storage tab navigation pane, select the volume folder you want to rename. 4. In the right pane, click Edit Settings. The Edit Settings dialog box opens. 5. In the Name field, type a new name for the volume folder. 6. Click OK.
• Click No to create a snapshot for the selected volume only.
6. In the Expire Time field, type the number of minutes, hours, days, or weeks to keep the snapshot before deleting it. If you do not want the snapshot to expire, select Do Not Expire.
7. (Optional) In the Description field, type a description of the snapshot. The default descriptive text is "Manually Created."
8. Click OK.
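Conceptually, the Expire Time setting turns a count of minutes, hours, days, or weeks into a deletion timestamp relative to when the snapshot was taken. A sketch using Python's standard datetime module — the zero-means-never convention below is an illustrative modeling choice, not the product's behavior:

```python
from datetime import datetime, timedelta

UNIT = {
    "minutes": timedelta(minutes=1),
    "hours": timedelta(hours=1),
    "days": timedelta(days=1),
    "weeks": timedelta(weeks=1),
}

def expire_at(created: datetime, count: int, unit: str):
    """Deletion time for a snapshot with the given Expire Time setting.
    A count of 0 models the Do Not Expire option (illustrative choice)."""
    if count == 0:
        return None
    return created + count * UNIT[unit]
```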
• To change the parent folder for the volume, select a folder in the Volume Folder pane. • To schedule snapshot creation and expiration for the volume, apply one or more Snapshot Profiles by clicking Change across from Snapshot Profiles. • To add a Volume QoS profile to be applied to the volume, click Change across from Volume QoS Profile. When the list of defined QoS profiles opens, select a profile, then click OK. You can also apply the Default QoS Profile to a volume. • 8.
Expire a Snapshot Manually
If you no longer need a snapshot and do not want to wait for it to expire based on the Snapshot Profile, you can expire it manually.
1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only)
2. Click the Storage tab.
3. In the Storage tab navigation pane, select the volume for which you want to expire a snapshot.
4.
Unmap a Volume from a Server Unmap a volume from a server if the server no longer needs to access the volume. 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. Click the Storage tab. 3. In the Storage tab navigation pane, select the volume you want to unmap from a server. 4. In the right pane, click Remove Mappings. The Remove Mappings dialog box opens. 5. Select the server(s) to unmap from the volume, then click OK.
Deploy a Bootable Volume Image to a New Server Copy a bootable volume image and map it to a new server to streamline the server deployment process. 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. Click the Storage tab. 3. In the Storage tab navigation pane, select the volume you want to copy. 4. In the right pane, click Create Boot from SAN Copy. The Create Boot from SAN Copy dialog box opens. 5.
Limit the Number of Paths That Can Be Used for a Volume/Server Mapping You can specify the maximum number of paths used by servers that support multipath IO. 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. In the Storage tab navigation pane, select the volume. 3. In the right pane, click the Mappings tab. 4. In the right pane, select the server for which you want to modify mapping settings, then click Edit Settings.
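The effect of a path limit can be sketched as the server's MPIO stack using only the first N available paths and spreading I/O requests across them. This is a conceptual illustration only; actual path selection and load balancing are done by the server's multipath software, not by Storage Manager:

```python
from itertools import cycle

def active_paths(all_paths: list, max_paths: int) -> list:
    """Paths the mapping may use once a maximum path count is set."""
    return all_paths[:max_paths]

def assign_requests(requests: list, paths: list) -> list:
    """Round-robin I/O requests across the active paths."""
    rr = cycle(paths)
    return [(req, next(rr)) for req in requests]

# With four physical paths but a limit of two, only two carry I/O:
paths = active_paths(["path1", "path2", "path3", "path4"], max_paths=2)
```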
3. In the Storage tab navigation pane, select the volume in the Recycle Bin that you want to restore. 4. In the right pane, click Restore Volume. The volume is moved from the Recycle Bin to its previous location. Empty the Recycle Bin Empty the Recycle Bin if you are sure you want to delete the recycled volume(s). About this task CAUTION: After the Recycle Bin is emptied, data on a recycled volume(s) cannot be recovered. Steps 1. Select a Storage Center from the Storage view.
5. Click Edit Advanced Volume Settings. The Edit Advanced Volume Settings dialog box opens.
6. From the Data Reduction Input drop-down menu, select a Data Reduction input.
• Inaccessible Snapshot Pages – Data frozen by a snapshot that has become inaccessible because other data has been written over it
• All Snapshot Pages – Data frozen by a snapshot
7. Click OK to close the Edit Advanced Volume Settings dialog box.
8. Click OK.
Deduplication Deduplication reduces the space used by a volume by identifying and deleting duplicate pages. Deduplication requires SSD drives. Apply Deduplication With Compression to a Volume Apply Deduplication with Compression to reduce the size of the volume. Deduplication and compression run during daily Data Progression. Prerequisite Allow Data Reduction Selection must be enabled in the Preferences tab of the Edit Storage Center Settings dialog box.
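The idea behind deduplication — identify duplicate pages and keep a single stored copy — can be illustrated with content hashing. This is a conceptual sketch only; the page size and mechanism do not reflect Storage Center's actual implementation:

```python
import hashlib

PAGE = 4096  # illustrative page size, not Storage Center's

def deduplicate(data: bytes):
    """Split data into pages and keep one copy of each distinct page.
    Returns (store, refs): store maps page digests to the single stored
    copy, and refs lists the digests needed to rebuild the data."""
    store, refs = {}, []
    for i in range(0, len(data), PAGE):
        page = data[i:i + PAGE]
        digest = hashlib.sha256(page).hexdigest()
        store.setdefault(digest, page)  # duplicates are stored only once
        refs.append(digest)
    return store, refs

def space_saved(data: bytes) -> int:
    """Bytes saved by storing each distinct page only once."""
    store, refs = deduplicate(data)
    return sum(len(store[d]) for d in refs) - sum(len(p) for p in store.values())
```

Rebuilding the original data from the reference list shows why deduplication is lossless: every page is still reachable, it is just stored once.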
3. In the Storage tab navigation pane, select a volume. 4. In the right pane, click the Statistics tab. The amount of space saved by Data Reduction on that volume is displayed at the bottom of the Statistics tab. Change the Default Data Reduction Profile The default Data Reduction profile determines what type of Data Reduction is applied to new volumes created by that Storage Manager user. Allow Data Reduction Selection allows the Data Reduction options to appear when creating volumes. 1.
Disable Data Reduction for a Volume
Disabling Data Reduction on a volume permanently uncompresses the reduced data, starting with the next Data Progression cycle.
1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only)
2. Click the Storage tab.
3. In the Storage tab navigation pane, select the volume you want to modify.
4. In the right pane, click Edit Settings. The Edit Volume dialog box opens.
5.
Consistent Snapshot Profile: Can set an alert if snapshots cannot be completed within a defined time. Snapshots not completed before the alert is generated are not taken. (This suspension can lead to incomplete groups of snapshots across volumes.)
Non-Consistent Snapshot Profile: All snapshots are taken.
Apply a Snapshot Profile to a Server To add snapshot creation and expiration schedules to all volumes mapped to a server, associate a Snapshot Profile with the server. 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. Click the Storage tab. 3. In the Storage tab navigation pane, select the Snapshot Profile. 4. In the right pane, click Apply to Server. The Apply to Server dialog box opens. 5.
3. In the Storage tab navigation pane, select the Snapshot Profile that you want to modify. 4. In the right pane, click Edit Settings. The Edit Snapshot Profile dialog box opens. 5. (Optional) Add a rule to the Snapshot Profile. a. Click Add Rule. The Add Rule dialog box appears. b. From the drop-down menu, select the frequency at which the rule runs. c. Configure the dates and times at which you want snapshots to be created. d.
5. Configure the remote snapshot expiration rule. a. Select the remote Storage Center(s) for which you want to specify an expiration rule for the snapshots. b. In the Remote Expiration field, type the number of minutes, hours, days, or weeks to keep the remote snapshot before deleting it. c. Click OK. Modify a Snapshot Profile Expiration Rule for Remote Snapshots Modify a remote expiration rule for a Snapshot Profile to change the time at which remote snapshots are expired. 1.
About this task
NOTE: SCv2000 series controllers cannot create Storage Profiles.
Steps
1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only)
2. Click the Storage tab.
3. From the Storage Center Actions menu, select Storage Profile → Create Storage Profile. The Create Storage Profile dialog box opens.
4. Configure the Storage Profile.
a. In the Name field, type a name for the Storage Profile.
b.
5. Click OK.
Related links
User Interface for Storage Center Management
Managing QoS Profiles
QoS profiles describe QoS settings that can be applied to volumes. By defining QoS profiles to apply to volumes, you potentially limit the I/Os that the volumes can perform, and also define their relative priority during times of congestion. You can also define a group QoS profile that can be applied to multiple volumes to limit the I/Os that those volumes can perform in aggregate.
Steps
1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only)
2. In the Storage tab navigation pane, expand QoS Profiles and select the profile to be deleted.
3. Right-click the profile and select Delete. A confirmation dialog box opens to request approval for the deletion.
4. Click OK.
Apply a QoS Profile to a Volume
Apply a previously defined QoS profile to a volume.
Prerequisite
The QoS profile must already exist.
Steps
1.
Steps 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. Click the Storage tab. 3. From the Storage tab navigation pane, select an iSCSI fault domain from the Fault Domains node. 4. Click Create Remote Connection. The Create Remote Connection dialog box appears. 5. In the Remote IPv4 Address field, type the IPv4 address of the external device. 6. From the iSCSI Network Type drop-down menu, select the general speed of the network. 7.
• Oracle Linux 7.0 • VMware ESXi 5.5 or later • Windows Server 2008 R2 or later Performing an Offline Import from an External Device Importing data from an external device copies data from the external device to a new destination volume in Storage Center. Complete the following task to import data from an external device.
6 Storage Center Server Administration Storage Manager allows you to allocate storage on each Storage Center for the servers in your environment. Servers that are connected to Storage Centers can also be registered to Storage Manager to streamline storage management and to run Space Recovery for Windows servers. Server Management Options To present storage to a server, a corresponding server object must be added to the Storage Center.
Managing Servers Centrally Using Storage Manager Servers that are registered to Storage Manager are managed from the Servers view. Registered servers are centrally managed regardless of which Storage Centers they are connected to. Figure 18. Servers View The following additional features are available for servers that are registered to Storage Manager: • Storage Manager gathers operating system and connectivity information from registered servers.
• Fibre Channel – Configure Fibre Channel zoning to allow the server HBAs and Storage Center HBAs to communicate. • SAS (SCv2000 series controllers only) – Directly connect the controller to a server using SAS ports configured as front-end connections. 2. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 3. Click the Storage tab. 4. Select Servers in the Storage tab navigation pane. 5. In the right pane, click Create Server.
Steps 1. Make sure the server HBAs have connectivity to the Storage Center HBAs. • iSCSI – Configure the iSCSI initiator on the server to use the Storage Center HBAs as the target. • Fibre Channel – Configure Fibre Channel zoning to allow the server HBAs and Storage Center HBAs to communicate. • SAS (SCv2000 series controllers only) – Directly connect the controller to a server using SAS ports configured as front-end connections. 2. Select a Storage Center from the Storage view.
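The iSCSI bullet above depends on the server's initiator being pointed at the Storage Center front-end ports. As an illustration only, the following Python sketch composes the open-iscsi (`iscsiadm`) commands a Linux host would typically run for that step; the portal address and target IQN are placeholders, not values from this guide.

```python
# Sketch: compose open-iscsi commands for pointing a Linux server's iSCSI
# initiator at Storage Center front-end ports. Portal and IQN are placeholders.

def iscsi_commands(portal, target_iqn=None):
    """Return iscsiadm command lines for target discovery and optional login."""
    cmds = [["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal]]
    if target_iqn:
        # Log in to one discovered target node on the same portal.
        cmds.append(["iscsiadm", "-m", "node", "-T", target_iqn,
                     "-p", portal, "--login"])
    return cmds


if __name__ == "__main__":
    for cmd in iscsi_commands("10.10.1.50:3260",
                              "iqn.2002-03.com.compellent:5000d310000abc01"):
        print(" ".join(cmd))
```

The sketch only builds the command lines; on a real host they would be run with root privileges after the Storage Center front-end ports are reachable on the iSCSI network.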
Create a Server Cluster Create a server cluster object to represent a cluster of servers in your environment. 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. Click the Storage tab. 3. Select Servers in the Storage tab navigation pane. 4. In the right pane, click Create Server Cluster. The Create Server Cluster dialog box opens. Figure 21. Create Server Cluster Dialog Box 5. Configure the server cluster attributes.
Steps 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. Click the Storage tab. 3. In the Storage tab, click Servers. 4. Click Create Server from localhost. The Set up localhost for Storage Center wizard opens. 5. If the Storage Center has iSCSI ports and the host is not connected to any interface, the Log into Storage Center via iSCSI page appears. Select the target fault domains, and then click Log In.
Create a Server from a VMware vCenter Host Configure a VMware vCenter cluster to access block level storage on the Storage Center. Prerequisites • Client must be running on a system with a 64-bit operating system. • The Dell Storage Manager Client must be run by a user with the Administrator privilege. • The Storage Center must be added to Storage Manager using a Storage Center user with the Administrator or Volume Manager privilege.
Add a Server to a Server Cluster You can add a server object to a server cluster at any time. 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. Click the Storage tab. 3. In the Storage tab navigation pane, select the server you want to add to a cluster. 4. In the right pane, click Add Server to Cluster. The Add Server to Cluster dialog box opens. 5. Select the server cluster to which you want to add the server and click OK.
Change the Operating System of a Server If you installed a new operating system or upgraded the operating system on a server, update the corresponding server object accordingly. 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. Click the Storage tab. 3. Select the server in the Storage tab navigation pane. 4. In the right pane, click Edit Settings. The Edit Settings dialog box opens. 5.
5. Select the HBAs that you want to remove. 6. When you are finished, click OK. If the HBA is used by one or more mapped volumes, a confirmation dialog box opens. Figure 22. Remove HBAs from Server Confirmation Dialog Box 7. If a confirmation dialog box opens: • Click Cancel to keep the HBA. • Click OK to remove the HBA, which might interfere with the mapped volume. Mapping Volumes to Servers Map a volume to a server to allow the server to use the volume for storage.
Create a Volume and Map it to a Server If a server requires additional storage and you do not want to use an existing volume, you can create and map a volume to the server in a single operation. 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. Click the Storage tab. 3. Select the server on which to map a new volume in the Storage tab navigation pane. 4. In the right pane, click Create Volume. The Create Volume dialog box opens. 5.
• To use specific disk tiers and RAID levels for volume data, select the appropriate Storage Profile from the Storage Profile drop-down menu. Using the Recommended Storage Profile allows the volume to take full advantage of data progression. 9. If more than one Storage Type is defined on the Storage Center, select the Storage Type to provide storage from the Storage Type drop-down menu. 10. Click OK. The Create Multiple Volumes dialog box appears and displays the newly created volume.
Deleting Servers and Server Folders Delete servers and server folders when they no longer utilize storage on the Storage Center. NOTE: For user interface reference information, click Help. Delete a Server Delete a server if it no longer utilizes storage on the Storage Center. When a server is deleted, all volume mappings to the server are also deleted. 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. Click the Storage tab. 3.
Storage Manager Server Agent for Windows Servers To register a Windows server to Storage Manager, the Storage Manager Server Agent must be installed on the server. The Server Agent allows Storage Manager to communicate with the Windows server to retrieve information, streamline storage management for the server, and perform Space Recovery. The Server Agent is required for Windows servers only. Other supported server types do not require the Server Agent.
Register a VMware vCenter Server Register a VMware vCenter Server to manage it on the Servers view. 1. Click the Servers view. 2. Select the Servers folder in the Servers pane. 3. In the right pane, click Register Server and select Add VMware vCenter Server. The Register Server dialog box opens. 4. In the Host or IP Address field, enter the host name or IP address of a vCenter Server. 5.
5. Select a parent folder for the new folder in the Parent navigation tree. 6. Click OK. Rename a Server Folder Select a different name for a server folder. 1. Click the Servers view. 2. In the Servers pane, select the server folder. 3. In the right pane, click Edit Settings. The Edit Folder dialog box opens. 4. Enter a new name for the folder in the Name field. 5. Click OK. Move a Server Folder Use the Edit Settings dialog box to move a server folder. 1. Click the Servers view. 2.
Delete a Server Folder Delete a server folder if it is no longer needed. Prerequisite The server folder must be empty. Steps 1. Click the Servers view. 2. In the Servers pane, select the server folder. 3. In the right pane, click Delete. The Delete Objects dialog box opens. 4. Click OK. Updating Server Information You can retrieve current information from servers and scan for new volumes on servers.
Change the Connection Timeout for a Windows Server You can configure the maximum time in seconds that Storage Manager waits for a response for queries sent to the Server Agent. 1. Click the Servers view. 2. In the Servers pane, select a Windows server. 3. In the right pane, click Edit Settings. The Edit Settings dialog box appears. 4. In the Connection Timeout field, type a new timeout in seconds. • The default is 300 seconds. • The minimum value is 180 seconds. 5. Click OK.
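The dialog enforces the limits described above: a default of 300 seconds and a minimum of 180 seconds. The sketch below models that validation; the function name is illustrative and is not part of Storage Manager.

```python
# Toy model of the Connection Timeout validation documented above.
DEFAULT_TIMEOUT = 300  # seconds, the value used when nothing is entered
MIN_TIMEOUT = 180      # seconds, the lowest value the dialog accepts

def effective_timeout(requested=None):
    """Return the timeout that the documented constraints would keep."""
    if requested is None:
        return DEFAULT_TIMEOUT
    # Values below the minimum are raised to the documented floor.
    return max(requested, MIN_TIMEOUT)
```

For example, entering 100 seconds would effectively be treated as 180, while 240 is accepted as-is.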
7. Select how to format the volume from the Format Type drop-down menu: • GPT: Formats the volume using the GUID Partition Table disk partitioning scheme. • MBR: Formats the volume using the master boot record disk partitioning scheme. 8. Specify how to mount the volume in the Drive or Mount Point area: • Use Next Available Drive Letter: The volume is mounted on the server using the next unused drive letter.
• To automatically choose a Storage Center based on capacity and performance, click Recommend a Storage Center. The recommended Storage Center appears in the Storage Center drop-down menu. 9. To configure advanced volume mapping options, click Advanced Mapping. 10. To configure the volume creation settings, click Volume Settings. When the Volume Settings dialog box appears, modify the options as needed, then click OK. • To specify the name of the volume, type a name in the Name field.
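The Format Type (GPT or MBR) and Drive or Mount Point choices in the preceding steps correspond to standard Windows disk-preparation operations. As a hypothetical illustration, this sketch builds an equivalent `diskpart` script; the disk number, drive letter, and mount path are placeholders invented for the example.

```python
# Sketch: generate a diskpart script mirroring the wizard's format/mount choices.
def diskpart_script(disk, style="GPT", drive_letter=None, mount_path=None):
    """Build a diskpart script: partition style, NTFS format, then mount."""
    lines = [f"select disk {disk}",
             "convert gpt" if style.upper() == "GPT" else "convert mbr",
             "create partition primary",
             "format fs=ntfs quick"]
    if drive_letter:
        # Equivalent of "Use Next Available Drive Letter" with a chosen letter.
        lines.append(f"assign letter={drive_letter}")
    elif mount_path:
        # Equivalent of mounting the volume on an empty NTFS folder.
        lines.append(f"assign mount={mount_path}")
    return "\n".join(lines)


if __name__ == "__main__":
    print(diskpart_script(2, "GPT", drive_letter="E"))
```

The script text could be saved and run with `diskpart /s script.txt` on the Windows server; the wizard performs the same class of operations on your behalf.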
6. Select the server on the Storage Center to assign to the virtual machine. 7. Click Finish. Create a Storage Center Server Object for a Virtual Machine If there is no virtual server object on the Storage Center, create one for the virtual machine. 1. Click the Servers view. 2. In the Servers pane, select the virtual machine that needs to be created on a Storage Center. 3. In the right pane, click Create Virtual Server on Storage Center. The Create SC Server for Virtual Machine dialog box appears.
Managing NAS Appliances Powered by Windows Storage Server The Servers view displays operating system and HBA connectivity information about Dell NAS appliances powered by Windows Storage Server. If the IPMI card is correctly configured, you can view hardware status, clear the system event log, and control the power. View Operating System Information about a Windows-Based NAS Appliance The Summary tab displays information about the NAS server software and hardware. 1. Click the Servers view. 2.
• IPMI card information must be configured in Storage Manager. Steps 1. Click the Servers view. 2. In the Servers pane, select a Windows-based NAS appliance. The Summary tab appears. 3. Click the IPMI tab. 4. Click Power Off. The Power Off dialog box appears. 5. Click OK. The appliance is powered off. Reset the Power for a Windows-Based NAS Appliance If the IPMI card is configured correctly, you can remotely reset power for a Windows-based NAS appliance.
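The IPMI tab's power actions (power off, reset) correspond to standard IPMI chassis commands. As a generic illustration, not Storage Manager internals, the sketch below composes an equivalent `ipmitool` invocation; the host address and user name are placeholders, and using the `-a` flag to prompt for the password interactively is an assumption for the example.

```python
# Sketch: build an ipmitool command equivalent to the IPMI tab power actions.
def ipmi_power_cmd(host, user, action):
    """Compose an ipmitool chassis power command (password prompted via -a)."""
    valid = {"on", "off", "reset", "status"}
    if action not in valid:
        raise ValueError(f"unsupported action: {action}")
    # lanplus = IPMI v2.0 over LAN, the usual interface for remote BMCs.
    return ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-a",
            "chassis", "power", action]


if __name__ == "__main__":
    print(" ".join(ipmi_power_cmd("192.0.2.10", "admin", "reset")))
```

This only illustrates what the IPMI card is doing at the protocol level; in Storage Manager you simply click Power Off or Reset Power on the IPMI tab.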
Install the Server Agent on a Server Core Installation of Windows Server Install Microsoft .NET Framework 2.0, open the required TCP ports, install the Server Agent, and register the Server Agent to the Data Collector. Prerequisites • The Server Agent must be downloaded. • The server must meet the requirements listed in Server Agent Requirements. • The server must have network connectivity to the Storage Manager Data Collector.
The InstallShield Wizard appears. 2. Complete the wizard to install the Server Agent. 3. On the last page of the wizard, select the Launch Server Agent Manager check box, then click Finish. The Properties dialog box appears. 4. Register the Server Agent with the Storage Manager Data Collector. NOTE: Server Agents can also be registered using the Server view in the Dell Storage Manager Client. a. Specify the address and port of the Storage Manager Data Collector.
Callout  Name
3  Control Buttons
4  Version and Port
5  Commands
Start the Server Agent Manager Under normal conditions, the Server Agent Manager is minimized to the Windows system tray. To open the Server Agent Manager, perform either of the following actions on the server: • If the Server Agent Manager is minimized, double-click the Server Agent Manager icon in the Windows system tray. • If the Server Agent Manager is not running, start the Storage Manager Server Agent Manager application.
Uninstalling the Server Agent Uninstall the Server Agent if you no longer need to run Space Recovery or automate storage management for the server. Uninstall the Server Agent on a Full Installation of Windows Server Use the Windows Programs and Features control panel item to uninstall the Storage Manager Server Agent application.
Component  Requirement
• Windows Server 2016 (full or core installation)
Software  Storage Manager Server Agent
Disk/Volume  • Only disks initialized as Basic (either MBR or GPT) are supported. Dynamic disks are not supported.
• Only NTFS file systems are supported.
• Cluster shared volumes and volumes that were striped or mirrored by Windows mirroring utilities are not supported.
NOTE: Live Volumes are not supported.
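The disk and volume requirements above amount to a small eligibility check. This toy function simply encodes the documented rules; the parameter names are invented for the example and are not part of the Server Agent.

```python
# Toy eligibility check encoding the Server Agent disk/volume requirements.
def volume_supported(disk_type, fs, is_csv=False, windows_mirrored=False):
    """Return True only for volumes the Server Agent supports."""
    if disk_type not in ("Basic-MBR", "Basic-GPT"):
        return False  # dynamic disks are not supported
    if fs != "NTFS":
        return False  # only NTFS file systems are supported
    if is_csv or windows_mirrored:
        return False  # CSVs and Windows-striped/mirrored volumes are excluded
    return True
```

For instance, a Basic GPT disk formatted with NTFS passes, while a dynamic disk or a ReFS volume does not.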
Globally Disable Automated Space Recovery If you want to prevent Automated Space Recovery from running without changing the Space Recovery settings for individual folders, servers, and volumes, disable Space Recovery globally. 1. Click the Servers view. 2. In the Servers pane, click Servers Properties. The Edit Settings dialog box appears. 3. Clear the Automated Space Recovery check box. 4. Click OK.
Related link Globally Enable Automated Space Recovery Enable Automated Space Recovery for a Server Folder Specify a Space Recovery Schedule for an Individual Windows Volume When you enable Automated Space Recovery for a server, Space Recovery is enabled for all volumes that are hosted by a Storage Center. Prerequisites • Automated Space Recovery must be enabled globally.
Send Space Recovery Reports by Email Use the Manage Events tab to configure Storage Manager to send Space Recovery Reports to your email address. 1. In the top pane of the Dell Storage Manager Client, click Edit User Settings. 2. Click the Manage Events tab. 3. Select the Space Recovery Report check box. 4. Click OK.
7 Managing Virtual Volumes With Storage Manager VVols is VMware’s storage management and integration framework, which is designed to deliver a more efficient operational model for attached storage. This framework encapsulates the files that make up a virtual machine (VM) and natively stores them as objects on an array. The VVols architecture enables granular storage capabilities to be advertised by the underlying storage. These storage policies can be created for vSphere Storage Policy-Based Management.
The proper steps for purging data in a lab environment only are: 1. Using VMware vCenter — Delete all respective VVols VMs 2. Using Storage Center — Perform Purge If the order is accidentally reversed, VVols metadata lingers in the database even if Storage Manager is uninstalled. This metadata must be deleted to ensure a robust operating environment if a new lab environment is to be set up and intended to use VVols.
NOTE: Storage containers are not supported outside of the virtual volumes context. You must use Storage Manager to create storage containers. Setting Up VVols Operations on Storage Manager To set up and run operations for virtual volumes (VVols) in Storage Manager, you must: • Register VMware vCenter Server in Storage Manager. • Register VMware vCenter Server in Storage Center either by using Auto manage Storage Center option in Storage Manager or by manually adding vCenter server in Storage Center.
Thick provisioning is not supported for operations such as creating or cloning a VVol VM. Only thin provisioning is supported. VASA Provider The VASA provider enables support for VMware VVols operations. A VASA provider is a software interface between the vSphere vCenter server and vendor storage arrays. Dell provides its own VASA provider that enables vCenter to work with Dell storage. This VASA provider supports the VMware VASA 2.0 API specifications.
5. Click OK. Using Storage Manager Certificates With VASA Provider When you run the Register VASA Provider wizard, the URL of the VASA provider is automatically generated. This URL identifies the host on which the Data Collector is installed. The host is identified as either an IP address or Fully-Qualified Domain Name (FQDN). Depending on how you installed or upgraded Storage Manager or if you changed the host for the Data Collector, you might need to take additional steps to update the certificates.
IP Change Action Required Storage Manager host (or both) to remove FQDN configuration. Restart Storage Manager for the changes to take effect and register VASA Provider again. NOTE: Failure to unregister the VASA Provider before making changes in name lookup service results in initialization errors on vCenter for certain services and causes VASA registration to fail. Managing Storage Containers You can create and use storage containers to organize VMware virtual volumes (VVols) in your environment.
Specifying one or both of these options indicates the data reduction preferences for VMs that are then created. You can also specify options for Data Reduction Input: • None • Compression • Deduplication with Compression These options are presented as checkboxes on the Create Storage Container wizard. NOTE: Even if the Compression Allowed and Deduplication Allowed checkboxes are selected, selecting the None profile option results in no action being taken.
Expected Behaviors for Data Reduction Scenarios The settings specified in both the storage container Data Reduction options and in the VMware Storage Profile determine the results of VM and VVol creation. If the storage container Data Reduction settings conflict with the settings in the VM Storage Profile, creation of VMs and virtual volumes could fail. The following table describes the expected behavior for new VM creation with the Compression option. Table 4.
Table 7. Expected Behavior for Compression and Deduplication Checkboxes on Storage Container
Old Checkbox Value: Compression Enabled
New Checkbox Value: Compression Disabled
Expected Behavior: Data Reduction Profile of existing volumes remains unchanged. Compliance check warns that the VM is not compliant with storage container. Clone/Fast Clone of VM to the same storage container follows rules of Table 4.
Source Datastore: Storage Container Deduplication = Supported; Default Data Reduction Policy on Container = Deduplication with Compression
Destination Datastore: Storage Container Deduplication = Not Supported; Destination VM Storage Policy = None Specified
Expected Behavior: Migration succeeds. The volumes on the destination inherit the destination storage container's default Data Reduction Profile.
Edit Storage Containers Edit the settings of a storage container to modify its values and related profiles. 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. Click the Storage tab. 3. In the navigation pane, select Volumes, then select the storage container you want to modify. 4. In the right pane, click Edit Settings. The Edit Storage Container dialog box opens. 5. Modify the fields as required. 6. Click OK.
Creating VVol Datastores Storage containers must first be defined on the Storage Center before vCenter can use the storage container. After a storage container is created, vCenter is able to create VVol-based VMs in the storage container. When you use the Create Datastore action using Storage Manager, you create datastores of the type VVOL and specify the storage container to hold the datastore.
i. If you want to specify a custom LUN, restrict mapping paths, configure multipathing, or make the volume read-only, click Advanced Mapping. 8. If you selected a VVOL datastore, continue with these steps: a. Choose an option for using a storage container: • Use Existing Storage Container – if you select this option, a list of existing storage containers opens. Select a storage container and click Finish. • Create a New Storage Container b.
If the datastore was created with the type VVOL, the VVols tab identifies the virtual volumes stored in the storage container. Protocol Endpoint Monitoring You can view details about protocol endpoints that are associated with virtual volumes (VVols). Protocol endpoints are automatically created when an ESXi 6.0 server is created in Storage Manager. Storage Manager exposes protocol endpoints in the Storage view. You can use Storage Manager to view protocol endpoint details for vSphere hosts.
If the host contains VVols, the Storage view for that host includes the following details about the protocol endpoints: • Device ID • Connectivity status • Server HBA • Mapped Via • LUN Used • Read Only (Yes or No) Managing Virtual Volumes With Storage Manager 193
8 PS Series Storage Array Administration PS Series storage arrays optimize resources by automating performance and network load balancing. Additionally, PS Series storage arrays offer all-inclusive array management software, host software, and free firmware updates. To manage PS Series storage arrays using Dell Storage Manager, the storage arrays must be running PS Series firmware version 7.0 or later. About Groups A PS Series group is a fully functional iSCSI storage area network (SAN).
A group can provide both block and file access to storage data. Access to block-level storage requires direct iSCSI access to PS Series arrays (iSCSI initiator). Access to file storage requires the FS Series NAS appliance using NFS or SMB protocols and the Dell FluidFS scale-out file system. With storage data management features, you can: • Manage a group through several built-in mechanisms such as ssh, serial line, telnet, and web-based user interfaces.
Reconnect to a PS Series Group If Storage Manager cannot communicate with a PS Series group, Storage Manager marks the PS Series group as down. You can reconnect to a PS Series group that is marked as down. 1. Click the Storage view. 2. In the Storage pane, select the down PS Series group. 3. Right-click on the PS Series group and select Reconnect to PS Group. The Reconnect PS Group dialog box opens. 4. Enter PS Series group login information. • 5.
Move a PS Series Group Into a Folder A PS Series group can be moved to a PS Group folder at any time. 1. Click the Storage view. 2. In the Storage pane, select the PS Series group to move. 3. In the Summary tab, click Move. The Select Folder dialog box opens. 4. Select the folder to which to move the PS Series group. 5. Click OK. Rename a PS Group Folder Edit the settings of a PS Group folder to change the name of the folder. 1. Click the Storage view. 2.
Launch Group Manager To manage a PS Series group using the Group Manager GUI, launch Group Manager from the PS Series group Summary tab. 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. In the Summary tab, click Launch Group Manager. Group Manager opens in the default web browser. 4. Enter the user name and password for the PS Series group. 5. Click Log In. About Volumes Volumes provide the storage allocation structure within the PS Series group.
Callout Description 4 PS Series single-member pool A PS Series array represented as a member within a pool to which it is assigned. 5 PS Series multimember pool Multiple PS Series arrays represented as individual members within a pool to which it is assigned. 6 Storage space Space received from PS Series arrays to allocate data as needed through various structures (volumes, snapshots, thin provisioning, replicas, containers, SMB/NFS, quotas, and local users and groups).
Create a Volume Create a volume to present a logical unit of storage on a PS Series group. 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Storage tab. 4. In the Storage tab navigation pane, select Volumes. 5. In the right pane, click Create Volume. The Create Volume dialog box opens. 6. In the Name field, type a name for the volume. 7. In the Volume Folder pane, select the Volumes node or a parent folder for the volume. 8.
– In the In-Use Warning Limit field, type the in-use space warning limit percentage of the volume. – To generate a warning event message when the in-use warning limit is exceeded, select the Generate initiator error when in-use warning limit is exceeded checkbox. – In the Maximum In-Use Space field, type the maximum in-use space percentage of the volume. – To set the volume offline when the maximum in-use space is exceeded, select the Set offline when maximum in-use space is exceeded checkbox.
Steps 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Storage tab. 4. In the Storage tab navigation pane, expand the Volumes node. 5. Select the volume folder to delete. 6. In the right pane, click Delete. The Delete dialog box opens. 7. Click OK. Move a Volume to a Folder Individual volumes can be organized by moving them to volume folders.
7. Click OK. Clone a Volume Clone a volume to create a copy of the volume. 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Storage tab. 4. In the Storage tab navigation pane, select a volume to clone. 5. In the right pane, click Clone. The Clone Volume dialog box opens. 6. In the Name field, type a name for the clone. 7. Click OK. Modify Volume Access Settings The read-write permission for a volume can be set to read-only or read-write.
Add Access Policy Groups to a Volume To control volume access for a group of servers, add one or more access policy groups to a volume. 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Storage tab. 4. In the Storage tab navigation pane, select a volume. 5. In the right pane, click Add Access Policy Groups. The Add Access Policy Groups to Volume dialog box opens. 6. In the Access Policy Groups area, select the access policy groups to apply to the volume.
Steps 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Storage tab. 4. In the Storage tab navigation pane, expand the Volumes node and select the volume to delete. 5. Click Delete. The Delete dialog box opens. 6. Click OK. • If the volume does not contain data, the volume is permanently deleted. • If the volume does contain data, the volume is moved to the recycle bin.
About Snapshots Snapshots enable you to capture volume data at a specific point in time without disrupting access to the volume. A snapshot represents the contents of a volume at the time of creation. If needed, a volume can be restored from a snapshot. Creating a snapshot does not prevent access to a volume, and the snapshot is instantly available to authorized iSCSI initiators.
Modify Snapshot Properties After a snapshot is created, you can modify the settings of the snapshot. 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Storage tab. 4. In the Storage tab navigation pane, expand the Volumes node and select a volume that contains a snapshot. 5. From the Snapshots tab, select a snapshot to modify. 6. Click Edit Settings. The Modify Snapshot Properties dialog box opens. 7. In the Name field, type a name for the snapshot. 8.
Restore a Volume from a Snapshot You can restore a volume to the state of a snapshot. 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Storage tab. 4. In the Storage tab navigation pane, select a volume that contains a snapshot. 5. From the Snapshots tab, select a snapshot to restore. 6. Click Restore Volume. The Restore Volume dialog box opens. 7.
12. Specify when to start the replication. • To start the replication at a set time each day, select At specific time, then select a time of day. • To repeat the replication over a set amount of time, select Repeat Interval, then select how often to start the replication and the start and end times. 13. In the Replica Settings field, type the maximum number of replications the schedule can initiate.
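Step 12 offers two scheduling modes: a fixed daily start time, or a repeat interval bounded by start and end times. The sketch below models the scheduling arithmetic only; the function names are invented for illustration and this is not Storage Manager code.

```python
from datetime import datetime, timedelta

def next_daily_run(now, at_hour, at_minute=0):
    """Next start for an 'At specific time' schedule (daily at a set time)."""
    run = now.replace(hour=at_hour, minute=at_minute, second=0, microsecond=0)
    # If today's slot has already passed, the next run is tomorrow.
    return run if run > now else run + timedelta(days=1)

def interval_runs(start, end, every_minutes):
    """All starts for a 'Repeat Interval' schedule within its time window."""
    runs, t = [], start
    while t <= end:
        runs.append(t)
        t += timedelta(minutes=every_minutes)
    return runs
```

For example, a daily 09:00 schedule checked at 10:00 fires the next morning, and an hourly interval between 08:00 and 10:00 yields three starts.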
Edit a Replication Schedule After creating a replication schedule, edit it to change how often the schedule initiates replications. 1. Click the Storage view. 2. In the Storage pane, select a PS Group. 3. Click the Storage tab. 4. From the Storage tab navigation pane, select a volume. The volume must be the source of a replication relationship. 5. From the Schedules tab, select the replication schedule to edit. 6. Click Edit. The Edit Schedule dialog box appears. 7.
About Access Policies In earlier versions of the PS Series firmware, security protection was accomplished by individually configuring an access control record for each volume to which you wanted to secure access. Each volume supported up to 16 different access control records, which together constituted an access control list (ACL). However, this approach did not work well when large numbers of volumes were present.
Modify Target Authentication A PS Series group automatically enables target authentication using a default user name and password. If needed, you can change these credentials. 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Storage tab. 4. In the Storage tab navigation pane, select the Access node. 5. In the right pane, click Modify Target Authentication. The Modify Target Authentication dialog box opens. 6.
6. In the Name field, type a name for the access policy group. 7. (Optional) In the Description field, type a description for the access policy group. 8. In the Access Policies area, click Add to add access policies to the access policy group. To remove an access policy from the access policy group, select the access policy and click Remove. 9. Click OK. Add Volumes to an Access Policy Group You can select the volumes that you want to associate with an access policy group. 1.
8. In the Access Points area, click Create to create an access point. • To edit an access point, select the access point and click Edit. The Edit Access Point dialog box opens. • To remove an access point from the access policy, select the access point and click Remove. 9. Click OK. Edit an Access Policy After an access policy is created, you can edit the settings of the access policy. 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Storage tab. 4.
10. In the text box in the IPv4 Addresses area, type the IPv4 addresses of the iSCSI initiators to which you want to provide access and then click + Add. You can enter a single IP address or a range of IP addresses. IP addresses can also be entered in a comma separated list. To remove an IP address from the IPv4 Address area, select the address and click – Remove. 11. Click OK. Delete an Extended Access Point You can delete an extended access point if it is no longer needed. 1. Click the Storage view. 2.
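Step 10 accepts a single IPv4 address, a range of addresses, or a comma-separated list. A small parser illustrating those input forms is sketched below; the `start-end` range syntax is an assumption for the example, not the dialog's documented format.

```python
import ipaddress

def parse_ipv4_entries(text):
    """Expand 'a.b.c.d', comma-separated lists, and 'start-end' ranges."""
    addrs = []
    for part in (p.strip() for p in text.split(",") if p.strip()):
        if "-" in part:
            # Expand an assumed inclusive range written as start-end.
            lo, hi = (ipaddress.IPv4Address(x.strip())
                      for x in part.split("-", 1))
            addrs.extend(str(ipaddress.IPv4Address(i))
                         for i in range(int(lo), int(hi) + 1))
        else:
            # Validate a single address; raises ValueError on bad input.
            addrs.append(str(ipaddress.IPv4Address(part)))
    return addrs
```

Using `ipaddress.IPv4Address` for validation rejects malformed entries early, which mirrors the kind of input checking the dialog performs.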
Monitoring a PS Series Group Storage Manager provides access to logs, replications, and alerts for the managed PS Series group. View Logs You can view logs for the last day, last 3 days, last 5 days, last week, last month, or a specified period of time. 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Monitoring tab. 4. In the Monitoring tab navigation pane, select the Logs node. 5. Select the date range of the log data to display.
View Replication History You can view the replication history for a PS Series group. 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Monitoring tab. 4. In the Monitoring tab navigation pane, select the Replication History node. Information about past replications is displayed in the right pane. View Alerts You can view the current alerts for a PS Series group. 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3.
9 Dell Fluid Cache for SAN Cluster Administration Dell Fluid Cache for SAN is a server-side caching accelerator that makes high speed PCIe SSDs a shared, distributed cache resource. Fluid Cache is deployed on clusters of PowerEdge servers within a SAN, connected by RoCE-enabled network adapters. Required Components and Privileges for Fluid Cache Clusters To deploy a Fluid Cache cluster, one or more Storage Centers and Storage Manager are required plus connection to Fluid Cache servers with cache devices.
Privilege  Fluid Cache Cluster Function
Reporter  Can only get data from the Data Collector. It cannot configure or modify the Fluid Cache cluster.
Adding, Deleting, and Removing Fluid Cache Clusters The following tasks describe how to add Fluid Cache clusters and how to remove them. NOTE: For user interface reference information, click Help. Create a Fluid Cache Cluster Use Storage Manager to create a Fluid Cache cluster.
CAUTION: Any data stored on the PCIe SSDs will be lost when selected to be used as cache devices. 9. Click Next. The Select Storage Centers page appears. 10. Select one or more Storage Centers to include in the Fluid Cache Cluster and click Finish. NOTE: You must have Administrator credentials for the Storage Center in order to add it to a Fluid Cache cluster. 11. Add a volume to the cluster. 12. Click Finish.
Fluid Cache Volumes A Fluid Cache volume extends a normal Storage Center volume so that its data is cached across the cache devices in a Fluid Cache cluster while remaining permanently stored on the Storage Center volume.
Delete a Volume From a Fluid Cache Cluster Use Storage Manager to delete a volume from a Fluid Cache cluster while leaving the cluster intact. 1. Click the Storage view. 2. In the Storage pane, expand Fluid Cache Clusters if necessary and select the cluster with the mapped volume to delete. 3. In the Cache tab, expand Volumes, select the volume to be deleted, and click Delete. The Delete dialog box appears. 4.
Remove a Cache Server from a Fluid Cache Cluster Use Storage Manager to remove a cache server from a Fluid Cache cluster while keeping the cluster. 1. Click the Storage view. 2. In the Storage pane, expand Fluid Cache Clusters if necessary, click the Fluid Cache cluster and in the right pane, select the Cache or Summary tab, and click Remove Server from Cluster. The Remove Server from Cluster dialog box appears.
4. By default, all available cache devices are selected. Clear the check box next to unwanted cache devices or click Unselect All and select the cache device(s) to be added. (Click Select All to use all available cache devices again.) 5. Click OK. Add a Storage Center to a Fluid Cache Cluster Use Storage Manager to add a Storage Center to a Fluid Cache cluster while keeping the cluster. 1. Click the Storage view. 2.
5. Click OK. NOTE: Some performance degradation will be experienced. Take a Fluid Cache Cluster Out of Maintenance Mode Use Storage Manager to disable maintenance mode on a Fluid Cache cluster. 1. Click the Storage view. 2. In the Storage pane, expand Fluid Cache Clusters if necessary and select the Fluid Cache cluster. 3. Click Edit Settings. The Edit Settings dialog box appears. 4. Clear Maintenance Mode. The Disable Maintenance Mode [cluster name] dialog box appears. 5. Click OK.
Enable Server Load Equalizing for Storage Center Volumes Server load equalizing dynamically adjusts queue depth for volumes experiencing high IOPS to minimize the performance impact on other volumes. Enable load equalizing on a Storage Center that hosts Fluid Cache volumes to prevent cache flushing operations from adversely affecting performance for other volumes. About this task NOTE: Enable load equalizing only for environments using Fluid Cache clusters, or if directed by Dell Technical Support. Steps 1.
Fluid Cache License File is Invalid Verify that the license has not expired and that a system change has not invalidated it. • The Fluid Cache license status can be verified on either the Events tab or the Cache tab of the Fluid Cache cluster. • An evaluation license is valid for only 90 days. Contact your Dell sales representative to purchase a Dell Fluid Cache for SAN license.
If the Storage Manager client displays a red mark over the cluster in the Storage Centers view, the Storage Center is reporting that it cannot communicate with the cluster servers over the management network. • Verify that the network is operational between the cluster servers and the Storage Center by using a network tool such as ping. • Note that it may take several minutes for the Storage Center to report the cluster status (down or up).
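The manual ping check above can also be scripted. The following sketch probes TCP reachability using only the Python standard library; the demo uses a local listener as a stand-in, since any real Storage Center management host name and port would be specific to your environment and are not given in this guide.

```python
import socket

def reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Self-contained demo against a local listener (a stand-in for a
# reachable management address on the Storage Center network):
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
host, port = server.getsockname()
print(reachable(host, port))  # the listener is up, so this prints True
server.close()
```

In practice you would call `reachable()` with the management address of each cluster server and of the Storage Center, from both sides of the link, to narrow down where connectivity is broken.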
10 Storage Center Maintenance Storage Manager can manage Storage Center settings, users and user groups, and apply settings to multiple Storage Centers. Managing Storage Center Settings Storage Manager can manage settings for individual Storage Centers and apply these settings to multiple Storage Centers.
Change the Operation Mode of a Storage Center Change the operation mode of a Storage Center before performing maintenance or installing software updates to isolate alerts from those events. 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the General tab. 4. In the Operation Mode field, select Normal or Maintenance.
Modify the Storage Center Shared Management IP Address(es) In a dual-controller Storage Center, the shared management IP address is hosted by the leader under normal circumstances. If the leader fails, the peer takes over the management IP, allowing management access when the normal leader is down. An IPv6 management IP address can also be assigned.
Modify iDRAC Interface Settings for a Controller The iDRAC interface provides out-of-band management for the controller. When you reach the Configuration Complete screen: 1. Scroll down to Advanced Steps. 2. Click the Modify BMC Settings link. The Edit BMC Settings dialog box opens. 3. Specify the iDRAC interface settings for the bottom controller and the top controller. a. In the BMC IP Address field, type an IP address for the iDRAC interface. b. In the BMC Net Mask field, type the network mask.
Set Default Cache Settings for New Volumes The default cache settings are used when a new volume is created unless the user changes them. You can prevent the default cache settings from being changed during volume creation by clearing the Allow Cache Selection check box. 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the Preferences tab. 4.
4. From the Storage Profile drop-down menu, select the Storage Profile to use as the default for new volumes. 5. To allow users to select a Storage Profile when creating a volume, select Allow Storage Profile Selection. 6. Click OK. Set the Default Storage Type for New Volumes The default Storage Type is used when a new volume is created unless the user selects a different Storage Type.
5. Click OK. Schedule or Limit Data Progression Schedule when Data Progression runs and limit how long it is allowed to run. Prerequisite The Storage Center must be added to Storage Manager using a Storage Center user with the Administrator privilege. Steps 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the Storage tab. 4.
3. In the Summary tab, click Edit Settings. The Edit Settings dialog box appears. 4. Click the Storage tab. 5. Select the Apply these settings to other Storage Centers check box. 6. Click Apply. The Select Storage Center dialog box appears. 7. Select the check box for each Storage Center to which you want to apply the settings. 8. When you are finished, click OK.
Apply Secure Console Settings to Multiple Storage Centers Secure Console settings that are assigned to a single Storage Center can be applied to other Storage Centers. Prerequisite The Storage Center must be added to Storage Manager using a Storage Center user with the Administrator privilege. Steps 1. Click the Storage view. 2. In the Storage pane, select the Storage Center that has the settings you want to apply to other Storage Centers. 3. In the Summary tab, click Edit Settings.
Apply SMTP Settings to Multiple Storage Centers SMTP settings that are assigned to a single Storage Center can be applied to other Storage Centers. Prerequisite The Storage Center must be added to Storage Manager using a Storage Center user with the Administrator privilege. Steps 1. Click the Storage view. 2. In the Storage pane, select the Storage Center that has the settings you want to apply to other Storage Centers. 3. In the Summary tab, click Edit Settings. The Edit Settings dialog box appears.
g. Select the user from the SNMP v3 Settings table. 7. Specify settings for the network management system to which Storage Center will send SNMP traps. a. Click Create Trap Destination. The Create SNMP Trap Destination dialog box opens. b. In the Trap Destination field, type the host name or IP address of the network management system that is collecting trap information. c. From the Type drop-down menu, select the notification type and the SNMP version of the trap or inform to be sent.
4. Click the SNMP Server tab. 5. Select the Apply these settings to other Storage Centers check box. 6. Click Apply. The Select Storage Center dialog box appears. 7. Select the check box for each Storage Center to which you want to apply the settings. 8. When you are finished, click OK. Configuring Storage Center Time Settings Date and time settings can be configured individually for each Storage Center or applied to multiple Storage Centers.
NOTE: For user interface reference information, click Help. Create an Access Filter for a Storage Center Create an access filter to explicitly allow administrative connections from a user privilege level, specific user, IP address, or range of IP addresses. Prerequisite The Storage Center must be added to Storage Manager using a Storage Center user with the Administrator privilege. Steps 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2.
Delete an Access Filter for a Storage Center Delete an access filter if it is no longer needed or you want to revoke administrative access to the users and IP addresses that the filter matches. Prerequisite The Storage Center must be added to Storage Manager using a Storage Center user with the Administrator privilege. Steps 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. In the Summary tab, click Edit Settings.
Configuring a Storage Center to Inherit Settings A Storage Center can be configured to inherit settings from another Storage Center to save time and ensure that Storage Centers are configured consistently. Prerequisite The Storage Center must be added to Storage Manager using a Storage Center user with the Administrator privilege. About this task NOTE: For user interface reference information, click Help. Steps 1. Select a Storage Center from the Storage view.
User Account Management and Authentication Storage Center access is granted using either of the following methods: • Local users and user groups: User accounts can be created and maintained on the Storage Center. • External directory service: In environments that use Active Directory or OpenLDAP, Storage Center can authenticate directory users. Access can be granted to individual directory users and directory user groups. These users access the Storage Center using their domain credentials.
Configure the Default User Preferences for New Storage Center Users The default user preferences are applied to new Storage Center users. The preferences can be individually customized further after the user is created. Prerequisite The Storage Center must be added to Storage Manager using a Storage Center user with the Administrator privilege. Steps 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. In the Summary tab, click Edit Settings.
5. From the Preferred Language drop-down menu, select a language. 6. Click OK. Change the Session Timeout for a Local Storage Center User The session timeout controls the maximum length of time that the local user can be idle while logged in to the Storage Center System Manager before the connection is terminated. Prerequisite • The Storage Center must be running Storage Center OS version 6.7 or below.
c. Select the check box for each local user group you want to associate with the local user. d. To remove the local user from a local group, clear the check box for the group. e. When you are finished, click OK. The Select Local User Groups dialog box closes. 6. When you are finished, click OK. The Edit Local User Settings dialog box closes. 7. Click OK to close the Edit Storage Center Settings dialog box.
Steps 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the Users and User Groups tab. 4. On the Local Users subtab, select the user, then click Change Password. The Change Password dialog box opens. 5. Enter the old password. 6. Enter and confirm a new password for the local user, then click OK.
Create a Local User Group Create a local Storage Center user group to grant access to specific volume, server, and disk folders. Prerequisite The Storage Center must be added to Storage Manager using a Storage Center user with the Administrator privilege. About this task To create a user group: Steps 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3.
Manage Directory User Group Membership for a Local Storage Center User Group Add a directory user group to a local user group to grant access to all directory users in the directory user group. Prerequisites • The Storage Center must be configured to authenticate users with an external directory service. • The directory user group(s) you want to add to a local Storage Center user group must have been granted Volume Manager or Reporter access to the Storage Center.
• To add a disk folder, select the disk folder(s) you want to add in the upper table, then click Add Disk Folders. The disk folders move from the upper table to the lower table. • To remove a disk folder, select the disk folder(s) you want to remove from the local user group in the lower table, then click Remove Disk Folders. The disk folders move from the lower table to the upper table. b. When you are done, click Finish. The wizard closes. 8. Click OK to close the Edit Settings dialog box.
ldap://server1.example.com ldap://server2.example.com:1234 NOTE: Adding multiple servers ensures continued authorization of users in the event of a resource outage. If Storage Center cannot establish contact with the first server, Storage Center attempts to connect to the remaining servers in the order listed. • In the Directory Server Connection Timeout field, enter the maximum time (in minutes) that Storage Center waits while attempting to connect to an Active Directory server.
Example URIs for two servers: ldap://server1.example.com ldap://server2.example.com:1234 NOTE: Adding multiple servers ensures continued authorization of users in the event of a resource outage. If Storage Center cannot establish contact with the first server, Storage Center attempts to connect to the remaining servers in the order listed. • In the Base DN field, type the base distinguished name for the LDAP server. The Base DN is the starting point when searching for users.
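The ordered failover behavior described above can be illustrated with a short sketch that parses the example URIs into the (host, port) sequence in which the servers would be tried, defaulting to the standard LDAP port 389 when none is given. This is an illustration of the ordering rule, not Storage Center's actual implementation.

```python
from urllib.parse import urlsplit

def ldap_endpoints(uris):
    """Parse ldap:// URIs into (host, port) pairs, preserving the
    order in which the directory servers would be contacted.
    Port defaults to 389, the standard LDAP port, when omitted."""
    endpoints = []
    for uri in uris:
        parts = urlsplit(uri)
        endpoints.append((parts.hostname, parts.port or 389))
    return endpoints

# The two example URIs from this section:
order = ldap_endpoints(["ldap://server1.example.com",
                        "ldap://server2.example.com:1234"])
print(order)  # [('server1.example.com', 389), ('server2.example.com', 1234)]
```

If contact with the first endpoint fails, the next endpoint in the returned list is the one Storage Center attempts, and so on down the list.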
Managing Directory Service Users Directory service users can be individually granted access to a Storage Center. NOTE: For user interface reference information, click Help. Grant Access to a Directory User Grant access to a directory user to allow the user to log in to the Storage Center using his or her directory credentials. Prerequisites • The Storage Center must be configured to authenticate users with an external directory service.
4. On the Directory Users subtab, select the user, then click Edit Settings. The Edit Settings dialog box opens. 5. From the Privilege drop-down menu, select the privilege level to assign to the user. • Administrator – When selected, the local user has full access to the Storage Center. • Volume Manager – When selected, the local user has read and write access to the folders associated with the assigned user groups. 6.
Steps 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the Users and User Groups tab. 4. On the Directory Users subtab, select the user, then click Edit Settings. The Edit Settings dialog box opens. 5. Modify local group membership for the user. a. In the Local User Groups area, click Change. The Select Local User Groups dialog box opens. b.
Delete a Directory Service User Delete a directory service user if he or she no longer requires access. The user that was used to add the Storage Center to Storage Manager cannot be deleted. The last user with the Administrator privilege cannot be deleted because Storage Center requires at least one Administrator. Prerequisite The Storage Center must be added to Storage Manager using a Storage Center user with the Administrator privilege. Steps 1. Select a Storage Center from the Storage view.
Steps 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the Users and User Groups tab. 4. On the Directory User Groups subtab, click Grant Access to Directory User Groups. The Grant Access to Directory User Groups dialog box opens. 5.
Steps 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the Users and User Groups tab. 4. On the Directory User Groups subtab, select the directory user group, then click Edit Settings. The Edit Settings dialog box appears. 5. Modify local group membership for the directory user group. a. In the Local User Groups area, click Change.
NOTE: Only administrator-level accounts can unlock other Storage Center accounts. Have more than one Storage Center administrator-level account so that other Storage Center accounts can be unlocked. • To require new passwords to follow complexity standards, select the Complexity Enabled checkbox. To disable the password complexity requirement, clear the Complexity Enabled checkbox. • To set the number of days before a user can change his or her password, type a value in the Minimum Age field.
6. Click OK. Managing Front-End IO Ports Front-end ports connect a Storage Center directly to a server using SAS connections, or to the Ethernet networks and Fibre Channel (FC) fabrics that contain servers that use storage. iSCSI, FC, or SAS IO ports can be designated for use as front-end ports.
Legacy Mode Legacy mode provides controller redundancy for a dual-controller Storage Center by connecting multiple primary and reserved ports to each Fibre Channel or Ethernet switch. In legacy mode, each primary port on a controller is paired with a corresponding reserved port on the other controller. During normal conditions, the primary ports process IO and the reserved ports are in standby mode.
Fault Domains in Legacy Mode In Legacy Mode, each pair of primary and reserved ports is grouped into a fault domain. The fault domain determines which ports are allowed to fail over to each other. The following requirements apply to fault domains in legacy mode on a dual-controller Storage Center: • A fault domain must contain one type of transport media (FC or iSCSI, but not both). • A fault domain must contain one primary port and one reserved port.
NOTE: For user interface reference information, click Help. Rename a Front-End IO Port Set a display name for a physical or virtual IO port to make it more identifiable. 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. Click the Hardware tab. 3. In the Hardware tab navigation pane, expand Controllers → controller name → IO Ports → transport type, then select the IO port. 4. In the right pane, click Edit Settings.
Test Network Connectivity for an iSCSI Port Test connectivity for an iSCSI I/O port by pinging a port or host on the network. About this task NOTE: If multiple virtual fault domains (VLANs) are associated with the port, the physical fault domain is used for ping tests issued from the Hardware tab. To test network connectivity for a VLAN, initiate a ping test from a physical port in a fault domain on the Storage tab. Steps 1. Select a Storage Center from the Storage view.
• The port must not already be configured. Steps 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. Click the Hardware tab. 3. In the Hardware tab navigation pane, expand Controllers → controller name → IO Ports, then select an unconfigured SAS or Fibre Channel IO port. 4. In the right pane, click Configure Port. Configure Front-End IO Ports (iSCSI) On SCv2000 series controllers, ports must be configured before they can be used as front-end ports.
3. Click Edit Settings. 4. Select a port in the fault domain. 5. Click Move Port. The Move Port dialog box opens. 6. From the New Fault Domain drop-down menu, select the Fault Domain to which the port will be moved. NOTE: If the port to be moved is in a different subnet than the destination fault domain, modify the IPv4 Address field so that the port's new address is in the same subnet as the destination fault domain.
Convert iSCSI Ports to Virtual Port Mode Use the Convert to Virtual Port Mode tool to convert all iSCSI ports on the Storage Center controllers to virtual port mode. Prerequisite The iSCSI ports must be in legacy port mode. About this task NOTE: This operation cannot be undone. After the ports are converted to virtual port mode, they cannot be converted back. Steps 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. Click the Storage tab. 3.
Rename a Fibre Channel Fault Domain The fault domain name allows administrators to identify the fault domain. 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. Click the Storage tab. 3. In the Storage tab navigation pane, expand Fault Domains → Fibre Channel, then select the fault domain. 4. In the right pane, click Edit Settings. The Edit Settings dialog box opens. 5. In the Name field, type a name for the fault domain. 6. Click OK.
Multi-VLAN Tagging Requirements The following requirements must be met for a Storage Center to support multi-VLAN tagging: • Storage Center OS: Version 6.5 or later must be installed on the Storage Center. • Storage Center controller model: Multi-VLAN tagging is not supported on SCv3000 or SCv2000 storage systems. • Storage Center iSCSI IO card hardware: Chelsio T3/T5 10G iSCSI cards must be installed in the Storage Center.
a. In the Target IPv4 Address field, type an IP address to assign to the iSCSI control port. b. In the Subnet Mask field, type the subnet mask for the well-known IP address. c. In the Gateway IPv4 Address field, type the IP address for the iSCSI network default gateway. 7. (Optional) In the Target IPv6 Address field, type an IP address to assign to the iSCSI control port. 8. (Optional) If necessary, assign a VLAN ID to the fault domain.
8. Assign a VLAN IP address to each selected port in the Ports table by editing the corresponding field in the VLAN IP Address column. Each port must have an IP address in the same network as the iSCSI control port, which is specified in the Well Known IP Address field. 9. Click OK.
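The same-network rule in step 8 can be checked ahead of time with the Python standard library's `ipaddress` module. The addresses below are made-up examples, not values from this guide; the sketch simply maps each candidate port IP to whether it falls inside the control port's subnet.

```python
import ipaddress

def in_same_subnet(control_ip, netmask, port_ips):
    """Map each candidate port IP to True/False depending on whether it
    falls in the subnet of the iSCSI control port (strict=False lets a
    host address, rather than a network address, define the subnet)."""
    subnet = ipaddress.ip_network(f"{control_ip}/{netmask}", strict=False)
    return {ip: ipaddress.ip_address(ip) in subnet for ip in port_ips}

# Hypothetical control port 10.10.5.1/24 and three candidate port IPs:
checks = in_same_subnet("10.10.5.1", "255.255.255.0",
                        ["10.10.5.21", "10.10.5.22", "10.10.6.21"])
print(checks)  # 10.10.6.21 maps to False; it is outside 10.10.5.0/24
```

Any port whose entry is False would need a different address before the fault domain configuration is valid.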
4. In the right pane, click Edit Settings. The Edit Settings dialog box opens. 5. Select the VLAN Tagged check box. 6. In the VLAN ID field, type a VLAN ID for the fault domain. Allowed values are 1–4096. 7. (Optional) To assign a priority level to the VLAN, type a value from 0-7 in the Class of Service Priority field. 0 is best effort, 1 is the lowest priority, and 7 is the highest priority. 8. Click OK.
5. Click Edit Advanced Port Settings. The Edit Port Settings dialog box opens. 6. In the Digest Settings area, enable or disable iSCSI digest settings as needed. These options are described in the online help. 7. Click OK to close the Edit Port Settings dialog box, then click OK to close the Edit Settings dialog box. Modify Timeout Settings for an iSCSI Fault Domain iSCSI timeout settings determine how the Storage Center handles idle connections. 1. Select a Storage Center from the Storage view.
• If the host uses IPv4 addressing only, type the IPv4 address in the IPv4 Address field. 6. From the Ping Size drop-down menu, select a size in bytes for the ping packets, not including overhead. If you select Other, type a value between 1 and 17000 bytes in the field below the menu. NOTE: The Ping Size drop-down menu might not appear depending on the hardware I/O cards used by the Storage Center. 7. Click OK. A message displays the results of the test.
iSCSI NAT Port Forwarding Requirements for Virtual Port Mode The following requirements must be met to configure NAT port forwarding for an iSCSI fault domain in virtual port mode. • For each Storage Center iSCSI control port and virtual port, a unique public IP address and TCP port pair must be reserved on the router that performs NAT.
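The uniqueness requirement above can be modeled as a small forwarding table that refuses duplicate public IP/port pairs. This is an illustrative sketch with hypothetical addresses, not a representation of how the router or Storage Center stores the mapping.

```python
def build_port_forwards(mappings):
    """Build a NAT forwarding table from (private_ip, public_ip, public_port)
    tuples, enforcing that every public IP/port pair is unique as required
    for iSCSI NAT port forwarding in virtual port mode."""
    table, seen = {}, set()
    for private_ip, public_ip, public_port in mappings:
        pair = (public_ip, public_port)
        if pair in seen:
            raise ValueError(f"public pair {pair} is already assigned")
        seen.add(pair)
        table[private_ip] = pair
    return table

# Hypothetical control-port and virtual-port addresses:
forwards = build_port_forwards([
    ("192.168.1.10", "203.0.113.5", 33000),  # iSCSI control port
    ("192.168.1.11", "203.0.113.5", 33001),  # iSCSI virtual port
])
print(forwards)
```

Reusing the same public pair for a second port raises an error, mirroring the rule that each control port and virtual port needs its own reserved public IP address and TCP port on the NAT router.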
e. Click OK. The Create iSCSI NAT Port Forward dialog box closes. 6. Repeat the preceding steps for each additional iSCSI control port and physical port in the fault domain. 7. In the Public Networks/Initiators area, define an iSCSI initiator IP address or subnet that requires port forwarding to reach the Storage Center because it is separated from the Storage Center by a router performing NAT. a. Click Add. The Create iSCSI NAT Initiator Configuration dialog box opens. b.
5. Select the CHAP Enabled check box. 6. Define the CHAP configuration for each server in the fault domain that initiates iSCSI connections to the Storage Center. a. Click Add. The Add Remote CHAP Initiator dialog box opens. b. In the iSCSI Name field, type the iSCSI name of the remote initiator. c. In the Remote CHAP Name field, type the CHAP name of the remote initiator.
Related link Configure an iSCSI Connection for Remote Storage Systems Grouping SAS IO Ports Using Fault Domains Front-end ports are categorized into fault domains that identify allowed port movement when a controller reboots or a port fails. Ports that belong to the same fault domain can fail over to each other because they have connectivity to the same resources. NOTE: For user interface reference information, click Help.
Disk Management for SC7020, SC5020, and SCv3000 Storage Center manages disks for SC7020, SC7020F, SC5020, SC5020F, and SCv3000 storage systems automatically. When configuring one of these storage systems, Storage Center organizes the disks into folders based on the function of each disk. FIPS-capable drives are managed into a separate folder from other disks. When Storage Center detects new disks, it manages each disk into the appropriate folder.
Related link Create Secure Data Disk Folder Delete Disk Folder Delete a disk folder if all disks have been released from the folder and the folder is not needed. Prerequisite The disk folder does not contain disks. Steps 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. Click the Storage tab. 3. In the Storage tab navigation pane, expand the Disks node. 4. Select a disk folder. 5. Click Delete. The Delete Folder dialog box appears. 6.
Enable or Disable the Disk Indicator Light The drive bay indicator light identifies a drive bay so it can be easily located in an enclosure. 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. Click the Hardware tab. 3. In the Hardware tab navigation pane, expand the enclosure and select Disks. 4. In the right pane, select the disk, then click Indicator On/Off. Release a Disk Release a disk before removing it from an enclosure.
3. In the Storage tab navigation pane, click Disks. 4. Expand a disk folder, then select a disk. 5. Click Delete. The Delete Disk dialog box appears. 6. Click OK. Related link Restore a Disk Restore a Disk After a disk fails, Storage Center does not allow that disk to be managed again. If the disk is down for testing purposes then deleted, the disk can be restored so that Storage Center can manage the disk again. Prerequisites • The Storage Center must be running version 6.
Managing Secure Data Secure Data provides data-at-rest encryption with key management for self-encrypting drives (SED). The Self-Encrypting Drives feature must be licensed to use Secure Data. How Secure Data Works Using Secure Data to manage SEDs requires an external key management server.
7. To add alternate key management servers, type the host name or IP address of another key management server in the Alternate Hostnames area. Then click Add. 8. If the key management server requires a user name to validate the Storage Center certificate, type the name in the Username field. 9. If the key management server requires a password to validate the Storage Center certificate, type the password in the Password field. 10. Configure the key management server certificates. a.
3. Click the Disks node. 4. Right-click the name of a Secure Disk disk and select Rekey Disk. A confirmation box opens. 5. Click OK. Copy Volumes to Disk Folder Copy volumes from one Secure Disk folder to another folder. The target folder can be either a secure folder or a nonsecure folder. 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. Click the Storage tab. 3. Click the Disks node. 4.
Managing Data Redundancy Manage data redundancy by modifying tier redundancy, creating Storage Types, or rebalancing RAID. Managing RAID Modifying tier redundancy, or adding or removing disks, can cause data to be unevenly distributed across disks. A RAID rebalance redistributes data over the disks in a disk folder. Rebalance RAID Rebalancing RAID redistributes data over the disks according to the Storage Type. Rebalance the RAID after releasing a disk from a disk folder, after a disk fails, or after adding a disk. 1.
Check the Status of a RAID Rebalance The RAID Rebalance dialog box displays the status of an in-progress RAID rebalance and indicates whether a rebalance is needed. 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. Click the Storage tab. 3. In the Storage tab navigation pane, click the Disks node. 4. Select Rebalance RAID. The RAID Rebalance dialog box shows the status of a RAID rebalance.
• For dual-redundant RAID levels, select Dual Redundant. 6. Click OK. A RAID rebalance starts. Managing Disk Enclosures Storage Manager can rename an enclosure, set an asset tag, clear the swap status for replaceable hardware modules in a disk enclosure, mute alarms, reset the temperature sensors, and delete an enclosure from a Storage Center. Add an Enclosure This step-by-step wizard guides you through adding a new enclosure to the system.
7. Follow the directions to disconnect the A side chain cables connecting the enclosure to the Storage Center. Click Next. 8. Reconnect the A side chain cables by following the directions to exclude the enclosure. Click Next. 9. Follow the directions to disconnect the B side chain cables connecting the enclosure to the Storage Center. Click Next. 10. Reconnect the B side chain cables by following the directions to exclude the enclosure. Click Next to validate the cabling and delete the enclosure.
Set an Asset Tag for a Disk Enclosure An enclosure asset tag can be used to identify a specific component for company records. Storage Manager allows you to set an asset tag for enclosures that support it. 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. Click the Hardware tab. 3. In the Hardware tab navigation pane, select the enclosure. 4. In the right pane, click Edit Settings. The Edit Settings dialog box appears. 5.
4. In the right pane, select the cooling fan, then click Request Swap Clear. Clear the Swap Status for an Enclosure IO Module Clear the swap status for an enclosure IO module to acknowledge that it has been replaced. 1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only) 2. Click the Hardware tab. 3. In the Hardware tab navigation pane, select I/O Modules. 4. In the right pane, select the IO module, then click Request Swap Clear.
Clear the Minimum and Maximum Recorded Values for a Temperature Sensor
Clear the minimum and maximum recorded values for a temperature sensor to reset them.
1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only)
2. Click the Hardware tab.
3. In the Hardware tab navigation pane, select Temperature Sensors.
4. In the right pane, right-click the sensor, then click Request Min/Max Temps Clear.

Add a Controller
This step-by-step wizard guides you through adding a new controller to the storage system.
1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only)
2. Click the Hardware tab.
3. In the navigation pane, select Controllers.
4. Click Add Controller. The Add New Controller wizard appears.
5. Confirm the details of your current installation, and click Next.
6. Insert the controller into the existing enclosure.
Steps
1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only)
2. Expand Enclosures in the navigation pane. Select the enclosure with the failed cooling fan sensor, then select Temperature Sensor.
3. Click Replace Failed Cooling Fan Sensor. The Replace Failed Cooling Fan Sensor wizard appears.
4. Refer to the graphic in the wizard to locate the failed cooling fan sensor. Click Next.

Plan a Hardware Change
Upon boot, the Storage Center searches back-end targets for the configuration. Because a controller cannot boot without configuration information, back-end access must be maintained during the controller replacement procedure. This can be done in two ways:
• Keep at least one common back-end slot/port defined and connected in the same manner on the new hardware configuration as it was on the old hardware configuration.
Updating Storage Center
Update a Storage Center to the latest version using the Dell Storage Manager Client connected directly to the Storage Center, or connected to a Data Collector. Updating through the Dell Storage Manager Client requires either SupportAssist to be enabled or the Storage Center Update Utility. For more information on the Storage Center Update Utility, see Using the Storage Center Update Utility.
NOTE: The Dell Storage Center Update Utility supports updating Storage Centers from version 6.6 or higher.

Configure Storage Center to Use the Storage Center Update Utility
If the Storage Center is not connected to the internet, configure it to use the Storage Center Update Utility when checking for updates. Before Storage Center can receive an update from the Storage Center Update Utility, a Storage Center distro must be loaded and the Storage Center Update Utility service must be running.
Restart All Controllers in a Storage Center
If the Storage Center has dual controllers, the controllers can be restarted in sequence or simultaneously.
1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only)
2. In the right pane, click Actions→ System→ Shutdown/Restart. The Shutdown/Restart dialog box appears.
3. From the first drop-down menu, select Restart.
6. To restart the controller after the reset, select Power on the Storage Center after resetting to factory defaults.
7. Click OK. The Storage Center resets to the factory default settings.

Managing Field Replaceable Units (FRU)
The FRU Manager maintains the status of FRUs and issues action tickets when a unit needs to be replaced. Storage Manager displays FRU tickets that contain specific information on each FRU, and provides the ability to close tickets.
11 Viewing Storage Center Information
Storage Manager provides access to summary information about managed Storage Centers, including historical IO performance and hardware status. Use this information to monitor the health and status of a Storage Center.

Viewing Summary Information
Storage Center summary plugins provide summary information for individual Storage Centers. The summary plugins can also be used to compare multiple Storage Centers.
Viewing Summary Information for a Storage Center
When a Storage Center is selected from the Storage pane, information about the Storage Center is displayed on the panes of the Summary tab.
Figure 27. Summary Tab

View Summary Plugins for a Storage Center
Use the Summary tab to view the summary plugins that are currently enabled.
1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only)
2. Click the Summary tab.

4. To reorder plugins:
• To move a plugin to the top, press Move to Top once.
• To move a plugin to the bottom, press Move to Bottom once.
5. Click OK to save changes to the plugins of the Summary tab.

Viewing Summary Information for Multiple Storage Centers
Storage Manager provides two ways to view summary information for multiple Storage Centers.
Figure 29. Storage View Comparison Tab
3. From the drop-down menu in the top right corner, select the summary plugin that you want to use to compare the Storage Centers.
Free Space: Amount of disk space available for use by a Storage Center, displayed in units of data and as a percentage of Available Space.
Used Space: Amount of disk space used by a Storage Center, displayed in units of data and as a percentage of Available Space.

Alert Information
The top portion of the Status plugin displays information about the alerts for a Storage Center. The alert icons indicate the highest active alert level.
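The Free Space and Used Space percentages above are simple ratios of Available Space. As a hedged illustration (the function and field names are mine, not part of the product's API), the calculation is:

```python
def space_percentages(available_gb, used_gb):
    """Return (used %, free %) relative to Available Space, rounded to one decimal."""
    free_gb = available_gb - used_gb
    used_pct = round(used_gb / available_gb * 100, 1)
    free_pct = round(free_gb / available_gb * 100, 1)
    return used_pct, free_pct

# For example, 250 GB used out of 1000 GB available is 25% used and 75% free.
```

The same ratio applies whatever units of data the plugin is displaying.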
Display More Information about the Threshold Alerts
Click Threshold Alerts to display the Definitions tab on the Threshold Alerts view.

Using the Storage Summary Plugin
The Storage Summary plugin displays a bar chart that shows detailed information about disk space on a Storage Center and a graph that shows the past four weeks of disk space usage for a Storage Center.
Figure 31.

Return to the Normal View of the Bar Chart
If you have changed the zoom level of the chart, you can return to the normal view.
1. Click and hold the right or left mouse button on the bar chart.
2. Drag the mouse to the left to return to the normal zoom level of the bar chart.

Save the Chart as a PNG Image
Save the chart as an image if you want to use it elsewhere, such as in a document or an email.
1. Right-click the bar chart and select Save As. The Save dialog box appears.
Save the Graph as a PNG Image
Save the graph as an image if you want to use it elsewhere, such as in a document or an email.
1. Right-click the graph and select Save As. The Save dialog box appears.
2. Select a location to save the image and enter a name for the image in the File name field.
3. Click Save to save the graph.

Print the Graph
Print the graph if you want a paper copy.
1. Right-click the graph and select Print. The Page Setup dialog box appears.

Save the Graph as a PNG Image
Save the graph as an image if you want to use it elsewhere, such as in a document or an email.
1. Right-click the graph and select Save As. The Save dialog box appears.
2. Select a location to save the image and enter a name for the image in the File name field.
3. Click Save to save the graph.

Print the Graph
Print the graph if you want a paper copy.
1. Right-click the graph and select Print. The Page Setup dialog box appears.
Using the Replication Validation Plugin
The Replication Validation plugin displays a table that lists replications and corresponding statuses. Use this plugin to monitor the status of replications from the current Storage Center to a destination Storage Center.
Figure 34.
Related link
Configuring Threshold Definitions

Update the List of Threshold Alerts
Refresh the list of threshold alerts to see an updated list of alerts. Click Refresh to update the list of alerts.

Viewing Detailed Storage Usage Information
Detailed storage usage information is available for each Storage Type that is configured for a Storage Center.

View Storage Usage by Tier and RAID Type
Storage usage by tier and RAID type is displayed for each Storage Type.

Figure 38. Storage Type Volumes Subtab

View Historical Storage Usage
Allocated space and used space over time are displayed for each Storage Type.
1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only)
2. Click the Storage tab.
3. In the Storage tab navigation pane, expand Storage Type, then select the individual storage type you want to examine.
4. Click the Historical Usage subtab to view allocated space and used space over time.
Figure 39.

View a Data Progression Pressure Report
For each storage type, the data progression pressure report displays how space is allocated, consumed, and scheduled to move across different RAID types and storage tiers. Use the data progression pressure report to make decisions about the types of disks to add to a Storage Center.
1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only)
2. Click the Storage tab.
Saved as RAID 10: Amount of space saved by moving less-accessed data to RAID 5 rather than using RAID 10 for all data.

Viewing Historical IO Performance
The IO Usage tab is used to view and monitor historical IO performance statistics for a Storage Center and associated storage objects. The Comparison View on the IO Usage tab is used to display and compare historical IO usage data from multiple storage objects.
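The Saved as RAID 10 column in the pressure report follows from quick arithmetic: RAID 10 mirrors all data (2x raw capacity), while RAID 5 adds only one parity segment per stripe. A sketch of that arithmetic, with an assumed stripe width (the width is illustrative, not a Storage Center default):

```python
def saved_vs_raid10_gb(data_gb, raid5_stripe_width=6):
    """Raw space saved by storing data as RAID 5 instead of RAID 10.

    RAID 10 needs 2x the data size in raw capacity; RAID 5 with a
    stripe width of w needs w/(w-1) times the data size.
    """
    raid10_raw = 2 * data_gb
    raid5_raw = data_gb * raid5_stripe_width / (raid5_stripe_width - 1)
    return raid10_raw - raid5_raw

# 100 GB of less-accessed data: 200 GB raw as RAID 10 vs 120 GB raw as RAID 5.
```

This is why moving cold data to RAID 5 shows up as reclaimed raw capacity in the report.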
Change the Period of Data to Display on the IO Usage Tab
You can display data for the last day, last 3 days, last 5 days, last week, last month, or a custom time period.
1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only)
2. Click the IO Usage tab.
3. Click one of the following buttons to change the period of IO usage data to display:
• Last Day: Displays the past 24 hours of IO usage data.
• Read Latency: Displays read latencies, in ms, for the selected storage objects in a single chart.
• Write Latency: Displays write latencies, in ms, for the selected storage objects in a single chart.
• Xfer Latency: Displays data transfer latencies, in ms, for the selected servers or remote Storage Centers in a single chart.
• Avg IO Size: Displays average IO sizes for the selected storage objects in a single chart.
– Disks or disk speed folder
5. To refresh the IO usage data, click Refresh on the Charting navigation pane.
6. To stop collecting IO usage data from the Storage Center, click the Stop button. To resume collecting IO usage data, click the Start button.

Change the Period of Data to Display on the Charting Tab
You can display data for the last 5 minutes, last 15 minutes, last 30 minutes, or last hour.
1. Select a Storage Center from the Storage view.
Configuring User Settings for Charts
Modify the User Settings for your user account to display alerts on the charts and change the chart colors.
NOTE: For user interface reference information, click Help.

Display Alerts on Charts
You can configure charts to display the relationships between the reported data and the configured threshold alerts and Storage Center alerts.
1. In the top pane of the Dell Storage Manager Client, click Edit User Settings. The Edit User Settings dialog box appears.

Combine Usage Data into One Chart
You can combine IO usage data into a single chart with multiple Y axes.
1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only)
2. Click the IO Usage or Charting tab.
3. Select the Combine Charts check box to combine the IO usage data into a single chart with multiple Y axes.

Scale Usage Data in a Chart
You can change the scale for MB/Sec, IO/Sec, and Latency.
1. Select a Storage Center from the Storage view.
Exporting Usage Data
You can export Storage Usage and IO Usage data to CSV, Text, Excel, HTML, XML, or PDF.

Export Storage Usage Data
You can export storage usage data for Storage Centers, volumes, and servers.
1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only)
2. Click the Storage tab.
3. Click Save Storage Usage Data on the Storage navigation pane. The Save Storage Usage Data dialog box appears.
Figure 41. Save Storage Usage Dialog Box

The Save IO Usage Data dialog box appears.
Figure 42. Save IO Usage Data Dialog Box
4. Specify the type of IO usage data to export by selecting one of the following radio buttons:
• Save ’Most Active Report’ IO Usage Information
• Save Chart IO Usage Information
5. If you selected the Save ’Most Active Report’ IO Usage Information radio button, select the check boxes of the IO usage data to export:
• Volume Most Active: Exports IO usage data for the volumes.
Monitoring Storage Center Hardware
Use the Hardware tab of the Storage view to monitor Storage Center hardware.
Figure 43. Hardware Tab
Related link
Monitoring a Storage Center Controller
Monitoring a Storage Center Disk Enclosure
Monitoring SSD Endurance
Viewing UPS Status
Managing Disk Enclosures
Shutting Down and Restarting a Storage Center

Monitoring a Storage Center Controller
The Hardware tab displays status information for the controller(s) in a Storage Center.
View a Diagram of a Controller
The Hardware tab displays a diagram of the back of a controller selected from the Hardware tab navigation pane.
1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only)
2. Click the Hardware tab.
3. In the Hardware tab navigation pane, expand the Controllers node, then select a controller. The right pane displays a diagram of the controller. The hardware view indicates failed components with a red overlay.
View Temperature Information for a Controller
The Temperature Sensor node on the Hardware tab displays summary and status information for temperature sensors in the controller.
1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only)
2. Click the Hardware tab.
3. In the Hardware tab navigation pane, expand the Controllers node, expand the node for a specific controller, then click Temperature Sensor.

The hardware view indicates failed components with a red overlay.
5. To view more information about hardware components, mouse over a hardware component. A tool tip appears and displays information including the name and status of the hardware component.
6. To adjust the zoom on the enclosure diagram, change the position of the zoom slider located to the right of the enclosure diagram.
• To zoom in, click and drag the zoom slider up.
• To zoom out, click and drag the zoom slider down.

View IO Module Status for an Enclosure
The I/O Modules node on the Hardware tab displays IO module status for the enclosure.
1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only)
2. Click the Hardware tab.
3. In the Hardware tab navigation pane, expand the Enclosures node, then the node for a specific enclosure.
4. Click I/O Modules. The right pane displays status information for the IO module selected from the I/O Modules tab.

NOTE: For user interface reference information, click Help.

View Current Endurance and Endurance History for an SSD
The current endurance level for an SSD is displayed as a percentage. The endurance level for an SSD is also recorded over time and can be displayed in a graph.
1. Select a Storage Center from the Storage view. (Data Collector connected Storage Manager Client only)
2. Click the Storage tab.
3. In the Storage tab navigation pane, select the SSD.
4. View endurance information for the SSD.
Viewing UPS Status
A UPS provides power redundancy to a Storage Center with the use of a backup battery. If the power to a Storage Center is cut off, the UPS immediately switches over to the battery, giving a Storage Center administrator time to properly power down the Storage Center or fix the power issue. When the UPS switches to the battery, it sends an on battery message to the Storage Center.
12 SMI-S
The Storage Management Initiative Specification (SMI-S) is a standard interface specification developed by the Storage Networking Industry Association (SNIA). Based on the Common Information Model (CIM) and Web-Based Enterprise Management (WBEM) standards, SMI-S defines common protocols and data models that enable interoperability between storage vendor software and hardware.

Dell SMI-S Provider
The Dell SMI-S Provider is included with the Storage Manager Data Collector.
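Because SMI-S runs over WBEM, a management client addresses the provider as host:port, defaulting to HTTPS. A small sketch of that addressing (the helper name and the pywbem usage in the comment are illustrative, not from this product's documentation; 5988 and 5989 are the standard WBEM HTTP/HTTPS ports, which this guide also lists as the provider defaults):

```python
def smis_endpoint(host, use_https=True):
    """Build the host:port string an SMI-S client (such as SCVMM) expects.

    5989 is the standard WBEM HTTPS port; 5988 is the HTTP port.
    """
    port = 5989 if use_https else 5988
    return f"{host}:{port}"

# A WBEM client library such as pywbem could then connect with something like:
#   conn = pywbem.WBEMConnection("https://" + smis_endpoint("hostname.example.com"),
#                                creds=("smis_user", "password"))
```

The same host:port string is what you would type into the SCVMM storage provider wizard described later in this chapter.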
• Server
• Software
• Thin Provisioning

Setting Up SMI-S
To set up SMI-S, enable SMI-S for the Data Collector, then add the required SMI-S user. HTTPS is the default protocol for the SMI-S provider.
1. Verify SMI-S Prerequisites
2. Enable SMI-S for the Data Collector

Verify SMI-S Prerequisites
Before you configure SMI-S, make sure the required software is installed on the server that hosts the Storage Manager Data Collector and open the required ports.

4. Use SCVMM 2012 to Discover the Dell SMI-S Provider

Verify SCVMM 2012 Prerequisites
Verify that the following requirements are met before you use Microsoft SCVMM 2012 to discover the Dell SMI-S provider and Storage Centers.
• Microsoft SCVMM 2012 server and the Storage Manager Data Collector must be installed on separate servers, and both servers must be members of the same Active Directory domain.
• SMI-S must be enabled and configured for the Storage Manager Data Collector.
Steps
1. Start the Registry Editor application.
2. If the User Account Control dialog box appears, click Yes to continue. The Registry Editor window appears.
3. Disable CN verification for the storage provider certificate.
a. In Registry Editor, navigate to the following folder:
• Windows Server 2008 R2: Select HKEY_LOCAL_MACHINE→ SOFTWARE→ Microsoft→ Storage Management.
• Windows Server 2012: Select HKEY_LOCAL_MACHINE→ Software→ Microsoft→ Windows→ CurrentVersion→ Storage Management.

4. Complete the Specify the IP address or FQDN of the storage provider wizard page.
a. In the IP address/FQDN and port field, enter the IP address or the FQDN of the Storage Manager server, which hosts the Dell SMI-S provider, followed by the connection port. The default port for HTTP is 5988, and the default port for HTTPS is 5989. For example, enter hostname.example.com:5989 where hostname.example.com is the FQDN of the Storage Manager server and 5989 is the default HTTPS port.
Part III FluidFS v6 Cluster Management
This section describes how to use Storage Manager to manage FluidFS clusters running version 6.x.
NOTE: FluidFS Cluster Management contains two separate sections, one for FluidFS v6 and one for FluidFS v5, because the GUI procedures differ between these two versions.

13 How FS8x00 Scale-Out NAS Works
Dell FS8x00 scale-out NAS leverages the Dell Fluid File System (FluidFS) and Storage Centers to present file storage to Microsoft Windows, UNIX, and Linux clients. The FluidFS cluster supports the Windows, UNIX, and Linux operating systems installed on a dedicated server or installed on virtual systems deploying Hyper-V or VMware virtualization. The Storage Centers present a certain amount of capacity (NAS pool) to the FluidFS cluster.
Standby controller: A NAS controller that is installed with the FluidFS software but is not part of a FluidFS cluster. For example, a new or replacement NAS controller from the Dell factory is considered a standby controller.
Backup power supplies: Each NAS controller contains a backup power supply that provides backup battery power in the event of a power failure.
FluidFS cluster: One to six FS8x00 scale-out NAS appliances configured as a FluidFS cluster.

Highly available and active-active design: Redundant, hot-swappable NAS controllers in each NAS appliance. Both NAS controllers in a NAS appliance process I/O.
Multitenancy: Multitenancy enables a single physical FluidFS cluster to be connected to several separated environments and manage each environment individually.
Automatic load balancing: Automatic balancing of client connections across network ports and NAS controllers, as well as back-end I/O across Storage Center volumes.
Overview of the FS8x00 Hardware
Scale-out NAS consists of one to six FS8x00 appliances configured as a FluidFS cluster. Each NAS appliance is a rack-mounted 2U chassis that contains two hot-swappable NAS controllers in an active-active configuration. In a NAS appliance, the second NAS controller with which one NAS controller is paired is called the peer controller.

Figure 46. FS8600 Architecture

Storage Center
The Storage Center provides the FS8600 scale-out NAS storage capacity; the FS8600 cannot be used as a standalone NAS appliance. Storage Centers eliminate the need to have separate storage capacity for block and file storage. In addition, Storage Center features, such as Dynamic Capacity and Data Progression, are automatically applied to NAS volumes.

SAN Network
The FS8600 shares a back-end infrastructure with the Storage Center.
If client access to the FluidFS cluster is not through a router (in other words, a flat network), define one client VIP per NAS controller. If clients access the FluidFS cluster through a router, define a client VIP for each client interface port per NAS controller.

Data Caching and Redundancy
New and modified files are first written to the cache, and then cache data is immediately mirrored to the peer NAS controller (mirroring mode).
Scenario: Simultaneous dual-NAS controller failure in single NAS appliance cluster
System Status: Unavailable
Data Integrity: Lose data in cache
Comments: Data that has not been written to disk is lost

Scenario: Sequential dual-NAS controller failure in multiple NAS appliance cluster, same NAS appliance
System Status: Unavailable
Data Integrity: Unaffected
Comments: Sequential failure assumes enough time is available between NAS controller failures to write all data from the cache to disk (Storage Center or nonvolatile internal storage)

Scenario: Simultaneous
14 FluidFS System Management for FS Series Appliances
This section contains information about basic FluidFS cluster system management. These tasks are performed using the Dell Storage Manager Client.

NAS Access
FluidFS v6.x supports Unicode/UTF-8 encoding, allowing concurrent access from any UTF-8 compatible client. All NAS interfaces expect UTF-8 characters for file, folder/directory, share, and other names. Consequently, all names are internally maintained and managed in UTF-8 format.
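Because names are handled as UTF-8, a name's byte length can exceed its character count, which matters when clients deal with length limits. A quick illustration (the specific file name is arbitrary):

```python
# One accented or CJK character occupies multiple bytes in UTF-8,
# so character count and byte count diverge for non-ASCII names.
name = "résumé-ファイル.txt"
chars = len(name)                       # Unicode code points in the name
utf8_bytes = len(name.encode("utf-8"))  # bytes as seen by a UTF-8 interface
```

Here the 15-character name occupies 25 bytes: each "é" takes 2 bytes and each katakana character takes 3.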
Using the Dell Storage Manager Client or CLI to Connect to the FluidFS Cluster
As a storage administrator, you can use either the Dell Storage Manager Client or command-line interface (CLI) to connect to and manage the FluidFS cluster. By default, the FluidFS cluster is accessed through the client network.

Connect to the FluidFS Cluster Using the Dell Storage Manager Client
Log in to the Dell Storage Manager Client to manage the FluidFS cluster.

• From Windows using an SSH client, connect to a client VIP. From the command line, enter the following command at the login as prompt:
cli
• From a UNIX/Linux system, enter the following command from a prompt:
ssh cli@client_vip_or_name
2. Type the FluidFS cluster administrator user name at the login as prompt. The default user name is Administrator.
3. Type the FluidFS cluster administrator password at the user_name’s password prompt. The default password is Stor@ge!.
Service: Port
FTP (Passive): 44430–44439
SSH: 22
Storage Manager communication: 35451

Secured management can be enabled only after the system is deployed. To make a subnet secure:
• It must exist prior to enabling the secured management feature.
• It can reside on the client network (subnet-level isolation of management traffic) or the LOM (Lights Out Management) Ethernet port (physical isolation of management traffic).
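When scripting firewall rules for the secured subnet, the service ports listed above can be collected into a single set. A sketch covering just the ports shown in this section (the helper name is mine, and other management services may use additional ports):

```python
# Ports this section lists for secured management traffic:
# FTP passive 44430-44439, SSH 22, Storage Manager communication 35451.
MANAGEMENT_PORTS = set(range(44430, 44440)) | {22, 35451}

def is_management_port(port):
    """True if the port belongs to the secured-management service set above."""
    return port in MANAGEMENT_PORTS
```

A firewall rule generator could iterate over MANAGEMENT_PORTS to allow them only on the secured subnet.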
Change the Prefix for the Secured Management Subnet
Change the prefix for the secured management subnet.
1. In the Storage view, select a FluidFS cluster.
2. Click the File System tab.
3. In the File System view, select Cluster Connectivity, and then click the Management Network tab.
4. In the Management Network panel, click Edit Settings. The Modify Administrative Network dialog box opens.
5. In the Prefix field, type a prefix for the secured management subnet.
6. Click OK.
Enable or Disable Secured Management
Enable secured management to exclusively limit management traffic to one specific subnet.
Prerequisites
• The subnet on which you enable secured management must exist before you enable the secured management feature.
• The FluidFS cluster must be managed by Storage Manager using the subnet on which secured management will be enabled.

Rename the FluidFS Cluster
Changing the FluidFS cluster name changes the FluidFS cluster name that is displayed in Storage Manager and the name that clients use to access the FluidFS cluster.
Prerequisites
After changing the FluidFS cluster name, you must also make the following adjustments:
• Change the FluidFS cluster name on the DNS server.
• If the FluidFS cluster is joined to an Active Directory domain, leave and then rejoin the FluidFS cluster to the Active Directory domain.

View and Configure Time Settings
Provide the correct time information for the FluidFS system. An NTP server is mandatory for working with Active Directory. An NTP server is recommended for accurate snapshot and replication scheduling and for event logging. For this procedure, the time information is copied from the Storage Center setup.
1. In the Storage view, select a FluidFS cluster.
2. Click the File System tab.
3. In the File System view, select Cluster Connectivity, and then click the General tab.
Managing SNMP
Simple Network Management Protocol (SNMP) is one way to monitor the health of the system and generate alert messages (SNMP traps) for system problems. To use SNMP, the FluidFS cluster-specific Management Information Bases (MIBs) and traps must be compiled into a customer-provided SNMP management station. The MIBs are databases of information that is specific to the FluidFS cluster. FluidFS supports SNMP v3 (read requests) and v2, but does not support using both versions at the same time.
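Since v2 and v3 cannot be active simultaneously, any tooling that wraps this setting should treat the two versions as mutually exclusive. A hypothetical validation sketch (not FluidFS code; the function name is mine):

```python
def snmp_query_mode(v2_enabled, v3_enabled):
    """Resolve the SNMP query mode; FluidFS does not allow v2 and v3 at once."""
    if v2_enabled and v3_enabled:
        raise ValueError("SNMP v2 and v3 cannot be enabled at the same time")
    if v3_enabled:
        return "v3"
    return "v2" if v2_enabled else "disabled"
```

Rejecting the both-enabled combination up front mirrors the restriction stated above.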
Change the SNMP Trap System Location or Contact
Change the system location or contact person for FluidFS cluster-generated SNMP traps. By default, the SNMP trap system location and contact person are unknown.
1. In the Storage view, select a FluidFS cluster.
2. Click the File System tab.
3. In the File System view, select Cluster Maintenance.
4. Click the SNMP tab.
5. In the SNMP Trap panel, click Modify SNMP Trap. The Modify SNMP Trap Settings dialog box opens.

NOTE: Keep the health scan throttling mode set to Normal unless specifically directed otherwise by Dell Technical Support.

Change the Health Scan Settings
If enabled, the Health Scan background process scans the file system to identify potential errors.
1. In the Storage view, select a FluidFS cluster.
2. Click the File System tab.
3. In the File System view, select Cluster Maintenance.
4. Click the Internal tab.
5. In the Advanced panel, click Modify Health Scan Settings.
Display the Distribution of Clients Between NAS Controllers
Display the current distribution of clients between NAS controllers.
1. In the Storage view, select a FluidFS cluster.
2. Click the File System tab.
3. In the File System view, select Cluster Connectivity.
4. Click the Clients and Routers tab. The Filters panel displays the NAS controller and interface to which each client is connected.

3. In the File System view, select Cluster Connectivity.
4. In the Filters panel, click Failback. The Failback Clients dialog box opens.
5. Click OK.

Rebalance Client Connections Across NAS Controllers
Rebalancing client connections evenly distributes connections across all the available NAS controllers.
3. Change the FluidFS cluster operation mode to Normal:
a. In the File System view, select Cluster Management.
b. Click the Internal tab.
c. In the Advanced panel, click Modify Operation Mode. The Modify Operation Mode dialog box opens.
d. Select Normal and then click OK.

Reboot a NAS Controller
Only one NAS controller can be rebooted in a NAS appliance at a time. Rebooting a NAS controller disconnects client connections while clients are being transferred to other NAS controllers.
Validate Storage Connections
Validating storage connections gathers the latest server definitions on the FluidFS cluster and makes sure that matching server objects are defined on the Storage Centers providing the storage for the FluidFS cluster.
1. In the Storage view, select a FluidFS cluster.
2. Click the Hardware tab.
3. In the toolbar, click Actions→ Storage Centers→ Validate Storage Connections. The Validate Storage Connections dialog box opens.
4. Click OK.
15 FluidFS Networking
This section contains information about managing the FluidFS cluster networking configuration. These tasks are performed using the Dell Storage Manager Client.

Managing the Default Gateway
The default gateway enables client access across subnets. Only one default gateway can be defined for each type of IP address (IPv4 or IPv6). If client access is not through a router (a flat network), a default gateway does not need to be defined.

View DNS Servers and Suffixes
View the current DNS servers providing name resolution services for the FluidFS cluster and the associated DNS suffixes.
1. In the Storage view, select a FluidFS cluster.
2. Click the File System tab.
3. In the File System view, select Client Accessibility. The DNS panel displays the DNS servers and suffixes.

Add or Remove DNS Servers and Suffixes
Add one or more DNS servers to provide name resolution services for the FluidFS cluster and add associated DNS suffixes.
Figure 47. Routed Network

The solution is to define, in addition to a default gateway, a specific gateway for certain subnets by configuring static routes. To configure these routes, you must describe each subnet in your network and identify the most suitable gateway to access that subnet. Static routes do not have to be designated for the entire network; a default gateway is most suitable when performance is not an issue. You can select when and where to use static routes to best meet performance needs.
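The route-selection behavior described above, where a matching static route wins over the default gateway and the most specific match is preferred, can be sketched with Python's standard ipaddress module. This is an illustration of the routing logic only, not FluidFS code:

```python
import ipaddress

def next_hop(dest_ip, static_routes, default_gateway):
    """Pick the gateway for dest_ip.

    static_routes is a list of (subnet, gateway) pairs; the most specific
    (longest-prefix) matching subnet wins, otherwise the default gateway is used.
    """
    dest = ipaddress.ip_address(dest_ip)
    matches = [(ipaddress.ip_network(net), gw) for net, gw in static_routes
               if dest in ipaddress.ip_network(net)]
    if matches:
        return max(matches, key=lambda m: m[0].prefixlen)[1]
    return default_gateway

routes = [("198.51.100.0/24", "192.0.2.25"), ("198.51.100.0/28", "192.0.2.26")]
# 198.51.100.7 matches both subnets; the more specific /28 route wins.
```

Traffic to any destination outside the listed subnets falls back to the default gateway, matching the guidance above.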
6. In the Default Gateway IPvn Address field, type the gateway IP address through which to access the subnet (for example, 192.0.2.25).
7. Click OK.

Delete a Static Route
Delete a static route to send traffic for a subnet through the default gateway instead of a specific gateway.
1. In the Storage view, select a FluidFS cluster.
2. Click the File System tab.
3. In the File System view, select Cluster Connectivity.
4. Click the Client Network tab.

9. In the Comment field, type any additional information.
10. Click OK.

Change the Prefix for a Client Network
Change the prefix for a client network.
1. In the Storage view, select a FluidFS cluster.
2. Click the File System tab.
3. In the File System view, select Cluster Connectivity.
4. Click the Client Network tab.
5. In the Client Network panel, select a client network and then click Edit Settings. The Edit Client Network Settings dialog box opens.

3. In the File System view, select Cluster Connectivity.
4. Click the Client Network tab.
5. In the Client Network panel, click Edit Settings. The Edit Client Network Settings dialog box opens.
6. In the NAS Controllers IP Addresses field, select a NAS controller and then click Edit Settings. The Edit Controller IP Address dialog box opens.
7. In the IP Address field, type an IP address for the NAS controller.
8. Click OK.

Change the Client Network Bonding Mode
Change the bonding mode (Adaptive Load Balancing or Link Aggregation Control Protocol) of the client network interface to match your environment.
Prerequisites
• If you have ALB, use one client VIP per client port in the FluidFS cluster.
• If you have LACP, use one client VIP per NAS controller in the FluidFS cluster.
Steps
1. In the Storage view, select a FluidFS cluster.
2. Click the File System tab.
3. In the File System view, select Cluster Connectivity.
Managing iSCSI SAN Connectivity iSCSI SAN subnets (Storage Center fault domains) or "fabrics" are the network connections between the FluidFS cluster and the Storage Center. The SAN network consists of two subnets, named SAN and SANb. The FluidFS cluster iSCSI SAN configuration can be changed after deployment if your network changes. Add or Remove an iSCSI Port Add a Storage Center iSCSI control port for each connected subnet (Storage Center fault domain). At least one iSCSI port must remain configured.
Change the VLAN Tag for an iSCSI Fabric Change the VLAN tag for an iSCSI fabric. When a VLAN spans multiple switches, the VLAN tag specifies which ports and interfaces to send broadcast packets to. 1. In the Storage view, select a FluidFS cluster. 2. Click the NAS Pool tab. 3. Click the Network tab. 4. In the iSCSI Fabrics panel, select an appliance and then click Edit Settings. The Modify Settings for Fabric SAN dialog box opens. 5.
16 FluidFS Account Management and Authentication This section contains information about managing FluidFS cluster accounts and authentication. These tasks are performed using the Dell Storage Manager Client. Account Management and Authentication FluidFS clusters include two types of access: • Administrator-level access for FluidFS cluster management • Client-level access to SMB shares, NFS exports, and FTP folders Administrator accounts control administrator-level access.
Login Name: enableescalationaccess
Purpose: Enable escalation account
SSH Access Enabled by Default: No; SSH Access Allowed: No
VGA Console Access Enabled by Default: Yes; VGA Console Access Allowed: Yes

Login Name: escalation
Purpose: FluidFS cluster troubleshooting when unable to log in with the support account
SSH Access Enabled by Default: No; SSH Access Allowed: Yes
VGA Console Access Enabled by Default: No; VGA Console Access Allowed: Yes

Login Name: cli
Purpose: Gateway to command-line interface access
SSH Access Enabled by Default: Yes (can bypass password using SSH key); SSH Access Allowed: Yes (can bypass password using SSH key)
VGA Console Access Enabled by Default: N/A; VGA Console Access Allowed: N/A
Default Password: N/A

Administrator Account
The Administr
6. In the Password field, type a password. The password must be between 8 and 14 characters long and contain three of the following elements: a lowercase character, an uppercase character, a digit, or a special character (such as +, ?, or *). 7. In the Confirm Password field, retype the password. 8. Click OK. Enable or Disable Dell SupportAssist You can enable the FluidFS cluster to send diagnostics to Dell using SupportAssist. 1. In the Storage view, select a FluidFS cluster. 2.
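The password rule above (8 to 14 characters, at least three of the four character classes) can be expressed as a small check. This is an illustrative sketch, not product code; `string.punctuation` is used here as an approximation of the allowed special characters.

```python
import string

def meets_password_policy(password: str) -> bool:
    """Check the rule described above: 8-14 characters and at least
    three of: lowercase, uppercase, digit, special character."""
    if not 8 <= len(password) <= 14:
        return False
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return sum(classes) >= 3
```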
NAS Volume Setting (Volume Administrator Allowed to Change Setting?):
• NAS volume name – Yes
• NAS volume folder to which the NAS volume is assigned – Yes
• Access time granularity – Yes
• Permissions interoperability – Yes
• Report zero disk usage – Yes
• Data reduction – Yes
• NAS volume space settings and alert thresholds – Yes
• SMB shares and NFS exports – Yes
• Snapshots and snapshot schedules – Yes
• Restore NAS volume from snapshot – Yes
• Restore NAS volume configuration – Yes
• Quotas – Yes
• NAS volume clones – No
• Replica
h. Click Search. i. Select a user from the search results and click OK. j. Click OK. 7. Select the Global Administration Permission Enabled checkbox. 8. In the Email Address field, type an email address for the administrator. 9. Click OK. Assign NAS Volumes to a Volume Administrator By default, new volume administrators cannot manage any NAS volumes. After a volume administrator is created, you can change the NAS volumes that can be managed by the volume administrator. 1.
Change an Administrator Password You can change the password for a local administrator account only. The password for remote administrators is maintained in the external database. 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Client Accessibility. 4. Click the Local Users and Groups tab. 5. Select an administrator and click Change Password. The Change Password dialog box opens. 6.
5. In the Local Users and Groups window, select Another computer and type the FluidFS cluster name (as configured in the DNS). Alternatively, you can use the client VIP. 6. Click Finish. The new local users and groups tree is displayed in the Console Root window. 7. Select Users or Groups. 8. Select a local user or group, and select an action from the Actions pane.
Change the Secondary Local Groups to Which a Local User Is Assigned Secondary groups determine Windows (SMB share) permissions. 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Client Accessibility. 4. Click the Local Users and Groups tab. 5. Select a local user and click Edit Settings. The Edit Settings dialog box opens. 6. To add a secondary local group to assign the local user to: a. b. c. d.
Change a Local User Password Change the password for a local user account. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab, expand Environment and select Authentication. 3. Select a local user and click Change Password. The Change Password dialog box appears. 4. In the Password field, type a new password for the local user.
6. In the Local Group field, type a name for the local group. 7. In the Local Users area, select the local users that should be assigned to the local group: a. Click Add. The Select User dialog box opens. b. From the Domain drop-down list, select the domain to which the local user is assigned. c. In the User field, type either the full name of the local user or the beginning of the local user name. d. (Optional) Configure the remaining local user search options as needed.
5. Select a group and click Edit Settings. The Edit Local User Group Settings dialog box opens. 6. To assign local users to the local group: a. In the Local Users area, click Add. The Select User dialog box opens. b. From the Domain drop-down list, select the domain to which the local user is assigned. c. In the User field, type either the full name of the local user or the beginning of the local user name. d. (Optional) Configure the remaining local user search options as needed.
Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Client Accessibility. 4. Click the Local Users and Groups tab. 5. Select a group and click Delete. The Delete dialog box opens. 6. Click OK. Managing Active Directory In environments that use Active Directory (AD), you can configure the FluidFS cluster to join the Active Directory domain and authenticate Windows clients using Active Directory for access to SMB shares.
4. Click the Directory Services tab. 5. Click Edit Settings. The Edit Active Directory Settings dialog box opens. 6. Select a domain controller from the Preferred Domain Controllers list, or enter a domain controller IP Address and click Add. 7. Click OK. Modify Active Directory Authentication Settings You cannot directly modify the settings for Active Directory authentication. You must remove the FluidFS cluster from the Active Directory domain and then re-add it to the Active Directory domain. 1.
Filter Open Files You can filter open files by file name, user, protocol, or maximum number of open files to display. 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Client Activity. 4. Click Open Files. The Open Files dialog box opens. 5. In the top portion of the dialog box, fill in one or more of the fields (File name, User, Protocol, Number of Files to Display). 6. Click Apply Filter/Refresh.
Enable LDAP Authentication Configure the FluidFS cluster to communicate with the LDAP directory service. Adding multiple LDAP servers ensures continued authentication of users in the event of an LDAP server failure. If the FluidFS cluster cannot establish contact with the preferred server, it will attempt to connect to the remaining servers in order. 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Client Accessibility. 4.
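The failover behavior described above (preferred server first, then the remaining servers in order) can be sketched generically. The `connect` callable is a stand-in for a real LDAP bind; nothing here is FluidFS or LDAP-library code.

```python
def connect_with_failover(servers, connect):
    """Try the preferred server first, then the remaining servers in order.

    `servers` is an ordered list of host names; `connect` is a callable
    that returns a connection or raises ConnectionError (a stand-in for
    a real LDAP bind)."""
    errors = []
    for host in servers:
        try:
            return connect(host)
        except ConnectionError as exc:
            errors.append((host, exc))
    raise ConnectionError(f"all LDAP servers failed: {errors}")
```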
Enable or Disable LDAP on Active Directory Extended Schema Enable the extended schema option if Active Directory provides the LDAP database. 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Client Accessibility. 4. Click the Directory Services tab. 5. Click Edit Settings in the NFS User Repository section. The Edit External User Database dialog box opens. 6.
Disable LDAP Authentication Disable LDAP authentication if you no longer need the FluidFS cluster to communicate with the directory service. 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Client Accessibility. 4. Click the Directory Services tab. 5. Click Edit Settings in the NFS User Repository section. The Edit External User Database dialog box opens. 6. Select None. 7. Click OK.
Add or Remove NIS Servers At least one NIS server must be configured. 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Client Accessibility. 4. Click the Directory Services tab. 5. Click Edit Settings in the NFS User Repository section. The Edit External User Database dialog box opens. 6. Add or remove NIS servers: • To add a NIS server, type the host name or IP address of a NIS server in the NIS Servers text field and click Add.
• UNIX security style – Permissions are based on the UNIX/Linux permissions. The Windows user will adhere to the permissions of the corresponding UNIX/Linux user. • Mixed security style – Both UNIX/Linux and Windows permissions are used. Each user can override the other user's permission settings; therefore, be careful when using the Mixed security style.
e. Select a user from the search results. f. Click OK. 8. In the NFS User area, click Select User. The Select User dialog box opens. 9. Select a UNIX/Linux user: a. From the Domain drop-down list, select the domain to which the user is assigned. b. In the User field, type either the full name of the user or the beginning of the user name. c. (Optional) Configure the remaining user search options as needed. These options are described in the online help.
17 FluidFS NAS Volumes, Shares, and Exports This section contains information about managing the FluidFS cluster from the client perspective. These tasks are performed using the Dell Storage Manager Client. Managing the NAS Pool When configuring a FluidFS cluster, you specify the amount of raw Storage Center space to allocate to the FluidFS cluster (NAS pool). The maximum size of the NAS pool is: • 2 PB with one Storage Center.
• 4 PB with two Storage Centers Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the Summary tab. 3. In the right pane, click Actions → Storage Centers → Expand NAS Pool. The Expand NAS Pool dialog box opens. 4. In the NAS Pool Size field, type a new size for the NAS pool in gigabytes (GB) or terabytes (TB). NOTE: The new size is bound by the size displayed in the Minimum New Size field and the Maximum New Size field. 5. Click OK.
Enable or Disable the NAS Pool Unused Space Alert You can enable or disable an alert that is triggered when the remaining unused NAS pool space is below a specified size. 1. In the Storage view, select a FluidFS cluster. 2. Click the Summary tab. 3. In the Summary panel, click Edit NAS Pool Settings. The Set NAS Pool Space Settings dialog box opens. 4. Enable or disable the NAS pool unused space alert: • To enable the NAS pool unused space alert, select the Unused Space Alert checkbox. 5.
File Access Notifications – File access notifications are set at a clusterwide level in FluidFS v6. If multitenancy is in use, only one tenant can utilize the external audit server feature. Separation of file access notifications between different tenants requires multiple FluidFS clusters. Alternatively, you can use SACL auditing, which is separated between tenants for file access notifications.
Multitenancy – Tenant Administration Access A tenant administrator manages his or her tenants’ content. A tenant can be managed by multiple tenant administrators, and a tenant administrator can manage multiple tenants. A tenant administrator can create or delete tenants, delegate administration per tenant, and view space consumption of all tenants. About this task This procedure grants tenant administrator access to a user. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab.
Next step NOTE: • Users must be added to the administrators list before they can be made a tenant administrator or a volume administrator. • Only the following users can be administrators: – Users in the Active Directory domain or UNIX domain of the default tenant – Local users of the default tenant or any other tenant Create a New Tenant 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Tenants. 4. Click Create Tenant.
NOTE: Setting any of these limits is optional. 2. Select the Restrict Tenant Capacity Enabled checkbox. 3. Type a tenant capacity limit in gigabytes (GB). 4. Select the Restrict Number of NAS Volumes in Tenant Enabled checkbox. 5. Type the maximum number of NAS volumes for this tenant. 6. Select the Restrict Number of NFS Exports in Tenant Enabled checkbox. 7. Type the maximum number of NFS exports for this tenant. 8. Select the Restrict Number of SMB Shares in Tenant Enabled checkbox. 9.
• Quota rules • Data reduction • Snapshots • NDMP backup • Replication File Security Styles The Windows and UNIX/Linux operating systems use different mechanisms for resource access control. Therefore, you assign each NAS volume a file security style (NTFS, UNIX, or Mixed) that controls the type of access controls (permission and ownership) for the files and directories that clients create in the NAS volume.
– A single NAS volume can contain NFS exports, SMB shares, or a combination of NFS exports and SMB shares. – The minimum size of a NAS volume is 20 MB. (If the volume has already been used, the minimum size should be more than the used space or reserved space, whichever is highest.) • Business requirements – A company or application requirement for separation or for using a single NAS volume must be considered.
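The minimum-size rule above (20 MB, or no smaller than the larger of used space and reserved space for a volume already in use) can be sketched as a helper. The function name and the treatment of exactly-equal sizes are assumptions for illustration only.

```python
def minimum_volume_size_mb(used_mb: float = 0, reserved_mb: float = 0) -> float:
    """Smallest size (in MB) a NAS volume can be created or shrunk to,
    per the rule above: at least 20 MB, and no smaller than the larger
    of the volume's used space and reserved space."""
    return max(20, used_mb, reserved_mb)
```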
Example 3 NAS volumes can be created based on a feature (snapshots, replication, NDMP backup, and so on). • Advantages – The NAS volumes are created to match the exact needs for each feature. • Disadvantage – User mapping is required. A user needs to choose one security style (either NTFS or UNIX) and then, based on the security style chosen, the correct mapping for other users is set.
The Storage Profile for each Storage Center appears in the Storage Subsystems area. Change the Storage Profile for the NAS Cluster or Pool Change the Storage Center Storage Profiles configured for the NAS cluster or pool. A unique Storage Profile can be configured for each Storage Center that provides storage for the FluidFS cluster. 1. In the Storage view, select a FluidFS cluster. 2. Click the NAS Pool tab. 3. In the Storage Subsystems panel, click Change Storage Profile. 4.
NOTE: Snapshot files and folders will continue to be accessible by backup operators and local administrators even if Access to Snapshot Contents is enabled. View NAS Volumes View the current NAS volumes. 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, expand NAS Volumes and then select a NAS volume. The NAS Volumes panel displays all the current NAS volumes.
6. In the Update File Access Time area, select the interval at which file-access timestamps are updated by selecting the appropriate option: Always, Every Five Minutes, Once an Hour, and Once a Day. 7. Click OK. Change Permissions Interoperability for a NAS Volume Change the permissions interoperability (file security style) settings of a NAS volume to change the file access security style for the NAS volume.
6. Enable or disable a NAS volume used space alert: • To enable a NAS volume used space alert, select the Used Space Alert checkbox. • To disable a NAS volume used space alert, clear the Used Space Alert checkbox. 7. If a NAS volume used space alert is enabled, in the Used Space Threshold field, type a number (from 0 to 100) to specify the percentage of used NAS volume space that triggers an alert. 8. Click OK.
Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, expand NAS Volumes and then select a NAS volume. 4. In the NAS Volumes panel, click Delete. The Delete dialog box opens. 5. Click OK. Organizing NAS Volumes in Storage Manager Using Folders By default, Storage Manager displays NAS volumes in alphabetical order. To customize the organization of NAS volumes in Storage Manager, you can create folders to group NAS volumes.
3. In the File System view, expand NAS Volumes and then select a NAS volume. 4. Click Edit Settings. The Edit NAS Volume Folder Settings dialog box opens. 5. In the Folder area, select a parent folder. 6. Click OK. Delete a NAS Volume Folder Delete a NAS volume folder if you no longer want to group NAS volumes. 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, expand NAS Volumes and then select a NAS volume. 4. Click Delete.
• Data reduction cannot be enabled on a clone NAS volume. • After a NAS volume is cloned, data reduction cannot be reenabled until all clone NAS volumes have been deleted. • A clone NAS volume contains user and group recovery information, but not the NAS volume configuration. • Clone NAS volumes count toward the total number of NAS volumes in the FluidFS cluster. View NAS Volume Clones View the current NAS volume clones. 1. In the Storage view, select a FluidFS cluster. 2.
To assign other users access to an SMB share, you must log in to the SMB share using one of these administrator accounts and set access permissions and ownership of the SMB share. Share-Level Permissions The default share-level permissions (SLP) for a new share are full control for authenticated users. These permissions can be modified either: • Using the MMC tool • In the Storage Manager Security tab of the Edit Settings panel Configuring SMB Shares View, add, modify, and delete SMB shares.
Click Select Folder. The Select Folder dialog box opens and displays the top-level folders for the NAS volume. Locate the folder to share, select the folder, and click OK. – To drill down to a particular folder and view the subfolders, double-click the folder name. – To view the parent folders of a particular folder, click Up. • To specify a new directory to share, type the path to the directory to create in the Path field and select the Create Folder If It Does Not Exist checkbox.
enumeration is disabled, the SMB share and its folders and files will be visible to users and groups regardless of whether they have permissions for the SMB share. 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select SMB Shares. 4. In the SMB Shares panel, select an SMB share and click Edit Settings. The Edit Settings dialog box opens. 5. Click Content. 6. Enable or disable access-based share enumeration: • 7.
Enable or Disable SMB Message Encryption SMBv3 adds the capability to make data transfers secure by encrypting data in flight. This encryption protects against tampering and eavesdropping attacks. 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Client Accessibility. 4. Click the Protocols tab. 5. In the SMB Protocol panel, click Edit Settings. The Edit Settings dialog box opens. 6. Enable or disable message encryption: • 7.
Automatic Creation of Home Share Folders Automatic creation of home share folders automatically creates folders for users when they log in for the first time. The ownership of the home share is automatically assigned to the user, and the domain administrator is automatically granted full access to the share. About this task This procedure enables the automatic creation of home share folders. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3.
Click Select Folder. The Select Folder dialog box opens and displays the top-level folders for the NAS volume. Navigate to the folder in which to create the new folder and click Create Folder. The Create Folder dialog box opens. In the Folder Name field, type a name for the folder, then click OK to close the Create Folder dialog box. Select the new folder and click OK. – To drill down to a particular folder and view the subfolders, double-click the folder name. i.
8. Click the Permissions tab and follow Microsoft’s best practices to assign ACL permissions for users and groups to the SMB share. Change the Owner of an SMB Share Using the FluidFS Cluster Administrator Account If the FluidFS cluster is not joined to Active Directory, use the Administrator account to change the owner of an SMB share. These steps might vary slightly depending on which version of Windows you are using. 1. Start the Map network drive wizard. 2. In Folder type: \\client_vip_or_name\smb_s
3. In the File System view, select a NAS volume. 4. Click Edit Settings. 5. In the Edit NAS Volume Settings panel, click Interoperability. 6. Select the Display ACL to UNIX 777 to NFS Clients Enabled checkbox. NOTE: Actual data-access checks in FluidFS are still made against the original security ACLs. This feature applies only to NAS volumes with Windows or mixed security style (for files with Windows ACLs). Setting ACLs on an SMB Share To set ACLs, use Windows Explorer procedures.
entries, the access does not generate an auditing event. Generated events for a NAS volume can be limited to successes, failures, or both. 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, expand NAS Volumes and select a NAS volume. 4. In the NAS Volumes panel, click Edit Settings. The Edit NAS Volume Settings dialog box opens. 5. Click Data protection. 6.
Option 4 - Network Connect to the share using the Windows Network. This option does not map the share. 1. From the Start menu, select Computer. The Computer window opens. 2. Click Network. 3. Locate the NAS appliance and double-click it. 4. From the SMB shares list, select the SMB share that you want to connect to. Show Dot Files to SMB Client You can enable or disable the show dot files setting for each SMB share.
Branch cache is disabled by default. This procedure enables (or disables) branch cache. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select SMB Shares. 4. In the SMB Shares panel, select an SMB share and click Edit Settings. The Edit SMB Share Settings dialog box opens. 5. Click Advanced. 6. Select or clear the Enable branch cache checkbox. 7. Click Apply → OK.
Configuring NFS Exports View, add, modify, and delete NFS exports, and control the maximum NFS protocol level that the cluster will support. View All NFS Exports on a FluidFS Cluster View all current NFS exports for a FluidFS cluster. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. In the File System tab navigation pane, select NFS Exports. The NFS exports are displayed in the right pane.
• To change the client access settings for the NFS export, use the Add, Remove, and Edit buttons. 8. Click OK. Change the Folder Path for an NFS Export Change the path to the directory that you want to share for an NFS export. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. In the File System tab navigation pane, select NFS Exports. 4. In the right pane, select an NFS export and click Edit Settings. The Edit NFS Export Settings dialog box appears. 5.
b. In the Client Machine Trust area, select an option to specify which client machines (All Clients, Single Client, Client Machines in a Network, or Client Machines in a Netgroup) are allowed to access the NFS export. These options are described in the online help. c. Specify whether clients have read and write access or read-only access to the NFS export. • To allow read and write access, select the Allow Access for check box. • To allow read-only access, clear the Allow Access for check box. d.
3. In the right pane, click the Protocols tab, and then click Edit Settings. The Edit NFS Protocol Settings dialog box appears. 4. For the Maximum NFS Protocol Supported field, click the down-arrow and select the version of NFS that you want to use. The options are NFSv3, NFSv4.0, and NFSv4.1. 5. Click OK.
FTP User Authentication FTP users can authenticate when connecting to the FTP site, or use anonymous access (if allowed by the FTP site). When authenticated using a user name and password, the connection is encrypted. Anonymous users authenticate using anonymous as the user name and a valid email address as the password. FTP Limitations • The number of concurrent FTP sessions is limited to 800 sessions per NAS appliance.
• Symbolic links are limited to 2,000 bytes. • User and directory quotas do not apply to symbolic links. • FluidFS space counting does not count symbolic link data as regular file data. • Symbolic links are not followed when accessed from snapshot view. They appear as regular files or folders. • If a relative symbolic link was moved to another location, it might become invalid. • Cloning SMB symbolic links is not supported. File Access Symbolic links are enabled by default.
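The 2,000-byte target limit noted above can be enforced on the client side before creating a link. This is an illustrative sketch; `safe_symlink` is a hypothetical helper, not a FluidFS API.

```python
import os

MAX_SYMLINK_TARGET_BYTES = 2000  # FluidFS limit noted above

def safe_symlink(target: str, link_path: str) -> None:
    """Create a symbolic link only if the target fits the 2,000-byte limit."""
    if len(target.encode("utf-8")) > MAX_SYMLINK_TARGET_BYTES:
        raise ValueError("symlink target exceeds 2,000 bytes")
    os.symlink(target, link_path)
```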
FluidFS v6.0 or later – The distributed dictionary service detects when it reaches almost full capacity and doubles in size (depending on available system storage).
FluidFS v5.0 or earlier – The dictionary size is static and limits the amount of unique data referenced by the optimization engine.
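The v6 doubling behavior described above can be modeled with a toy structure. The real service also considers available system storage, which this sketch ignores, and the 90% growth threshold is an assumption for illustration.

```python
class GrowableDictionary:
    """Toy model of the v6 behavior: when the dictionary nears
    capacity, its size doubles (real FluidFS also checks available
    system storage, which this sketch ignores)."""

    def __init__(self, capacity=4, grow_at=0.9):
        self.capacity = capacity
        self.grow_at = grow_at  # assumed fill fraction that triggers growth
        self.entries = {}

    def add(self, key, value):
        # Double the capacity before the dictionary fills up completely.
        if len(self.entries) >= self.grow_at * self.capacity:
            self.capacity *= 2
        self.entries[key] = value
```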
Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, expand NAS Volumes and select a NAS volume. 4. In the NAS Volume panel, click Edit Settings. The Edit NAS Volume Settings dialog box opens. 5. Click Data Reduction. 6. Select the Data Reduction Enabled checkbox. 7. For the Data Reduction Method field, select the type of data reduction (Deduplication or Deduplication and Compression) to perform.
Disable Data Reduction on a NAS Volume By default, after disabling data reduction on a NAS volume, data remains in its reduced state during subsequent read operations. You have the option to enable rehydrate-on-read when disabling data reduction, which causes a rehydration (the reversal of data reduction) of data on subsequent read operations. 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, expand NAS Volumes and select a NAS volume. 4.
18 FluidFS Data Protection This section contains information about protecting FluidFS cluster data. Data protection is an important and integral part of any storage infrastructure. These tasks are performed using the Dell Storage Manager Client. Managing Antivirus The FluidFS cluster antivirus service provides real-time antivirus scanning of files stored in SMB shares. The antivirus service applies only to SMB shares; NFS is not supported.
Configuring Antivirus Scanning To perform antivirus scanning, you must add an antivirus server and then enable antivirus scanning for each SMB share. NOTE: If any of the external services are configured with IPv6 link-local addresses, the monitor will always show these services as Unavailable. Managing Snapshots Snapshots are read-only, point-in-time copies of NAS volume data. Storage administrators can restore a NAS volume from a snapshot if needed.
Managing Scheduled Snapshots You can create a schedule to generate snapshots regularly. To minimize the impact of snapshot processing on system performance, schedule snapshots during off-peak times. Snapshots created by a snapshot schedule are named using this format _YYYY_MM_DD__HH_MM Create a Snapshot Schedule for a NAS Volume Create a NAS volume snapshot schedule to take a scheduled point-in-time copy of the data. 1. In the Storage view, select a FluidFS cluster. 2.
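The timestamp pattern shown above maps directly onto strftime format codes. This sketch renders only the timestamp portion of a scheduled snapshot name (the prefix before it is omitted here):

```python
from datetime import datetime

def snapshot_suffix(when: datetime) -> str:
    """Render the timestamp part of a scheduled snapshot name using the
    _YYYY_MM_DD__HH_MM pattern described above."""
    return when.strftime("_%Y_%m_%d__%H_%M")
```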
The Edit Settings dialog box opens. 6. Specify the retention policy. NOTE: Replication using current snapshot – This option of the “archive” retention policy affects setting up a new replication of a volume. You can replicate using the current snapshot, rather than replicating from all the previous snapshots. 7. Click OK. Delete a Snapshot Schedule Delete a snapshot schedule if you no longer want to take a scheduled point-in-time copy of the data. 1. In the Storage view, select a FluidFS cluster. 2.
Delete a Snapshot Delete a snapshot if you no longer need the point-in-time copy of the data. 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, expand NAS Volumes and select a NAS volume. 4. In the NAS Volume Status panel, click the Snapshots & Clones tab. 5. Select a snapshot and click Delete. The Delete dialog box opens. 6. Click OK.
3. In the File System view, expand NAS Volumes and select a NAS volume. 4. In the NAS Volume Status panel, click the Snapshots & Clones tab. 5. Select a snapshot and click Restore NAS Volume. The Restore NAS Volume dialog box opens. 6. Click OK. Option 1 – Restore Files Using UNIX, Linux, or Windows This restore option allows clients to restore a file from a snapshot using copy and paste. 1. Access the NFS export or SMB share. 2. Access the .snapshots directory. 3.
The NDMP server handles all communications with the DMA servers and other NDMP devices through an XDR-encoded TCP (Transmission Control Protocol) data stream. The NDMP server supports two backup types: • dump: Generates inode-based NDMP file history • tar: Generates path-based NDMP file history The backup type is controlled by the NDMP environment variable TYPE.
Table 15. Supported NDMP Environment Variables describes the NDMP environment variables that are supported by FluidFS. Refer to the Data Management Application (DMA) documentation for a listing of the variables supported by the DMA. If the DMA does not set any of the variables, the NDMP server operates with the default value. Table 15. Supported NDMP Environment Variables
Variable Name: TYPE – Specifies the type of backup/restore application.
During recovery, if this variable is set and if the backup data stream was generated with this variable turned on, the NDMP server handles deleting files and directories that are deleted between incremental backups. Setting this variable requires additional processing time and enlarges the backup data stream size (how much it changes depends on the number of elements in the backup data set). If this feature is not important to the end user, it should not be set.
Figure 49. Two-Way Configuration NOTE: If a controller loses the connectivity to the tape, the NDMP session assigned to the controller will fail. Configuring and Adjusting NDMP Two-Way Backup Tape Connectivity You must define the zoning so that the FC-attached tape drive can be seen by the HBAs on all NAS controllers. Drives must be available through every HBA port so that you can choose which port to use for each backup, and balance the load between HBA ports.
To work around this problem, change the behavior during backup. If a backup is started with the DEREF_HARD_LINK environment variable set to Y, the backup will back up all instances of the hard link files as if they were regular files, rather than just backing up the first instance of the hard link files. In this case, a selective restore will always have the file data. The disadvantage of this option is that backups might take longer and more space is required to back up a data set with hard link files.
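Because hard-linked files share a single inode, a backup tool can detect them before deciding whether to dereference. This illustrative sketch (not FluidFS or NDMP code) groups paths by device and inode number:

```python
import os

def hard_link_groups(paths):
    """Group file paths that share the same inode (i.e., are hard links
    to the same data), the situation DEREF_HARD_LINK addresses."""
    groups = {}
    for p in paths:
        st = os.stat(p)
        groups.setdefault((st.st_dev, st.st_ino), []).append(p)
    return [g for g in groups.values() if len(g) > 1]
```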
Environment Variable: TYPE (Used In: Backup and Restore; Default Value: dump) – Specifies the type of backup and restore application. The valid values are:
• dump – NDMP server generates inode-based file history
• tar – NDMP server generates file-based file history
Environment Variable: FILESYSTEM (Used In: Backup; Default Value: None) – Specifies the path to be used for the backup. The path must be a directory.
Environment Variable: LEVEL (Used In: Backup) – Specifies the dump level for the backup operation. The valid values are 0 to 9.
…data set). If this feature is not important in your environment, this variable should not be set.

Environment Variable: BASE_DATE
Description: Specifies whether a token-based backup is performed. Token-based backup is used by Tivoli Storage Manager as an alternative to backups using the LEVEL environment variable. The valid values are:
• -1 – Specifies that token-based backup is disabled
• 0 – Specifies that a token-based backup is performed
Used In: Backup
Default Value: -1

Environment Variable: DEREF_HARD_LINK
Used In: Backup
Default Value: N
3. In the File System view, click Cluster Connectivity.
4. Click the Backup tab.
5. In the NDMP pane, click Change Backup User Password. The Change Backup User Password dialog box opens.
6. In the Password field, type an NDMP password. The password must be at least seven characters long and contain three of the following elements: a lowercase character, an uppercase character, a digit, or a special character (such as +, ?, or *).
7. In the Confirm Password field, retype the NDMP password.
8.
• Product – Compellent FS8600 • Vendor – Dell Most backup applications automatically list the available NAS volumes to back up. Otherwise, you can manually type in the NAS volume path. The FluidFS cluster exposes backup NAS volumes at the following path: /NAS_volume_name To improve data transfer speed, increase the number of concurrent backup jobs to more than one per NAS controller, distributing the load across the available NAS controllers.
Viewing NDMP Jobs and Events All NDMP jobs and events can be viewed using Storage Manager. View Active NDMP Jobs View all NDMP backup and restore operations being processed by the FluidFS cluster. 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, click Cluster Connectivity. 4. Select Backup. The NDMP Sessions area displays the NDMP jobs.
Replication Scenarios
• Fast backup and restore – Maintains full copies of data for protection against data loss, corruption, or user mistakes
• Remote data access – Applications can access mirrored data in read-only mode, or in read-write mode if NAS volumes are promoted or cloned
• Online data migration – Minimizes downtime associated with data migration
• Disaster recovery – Mirrors data to remote locations for failover during a disaster
Configuring replication is a three-step process:
• Add a rep
Figure 53.
After a partner relationship is established, replication between the partners can be bidirectional. One system could hold target NAS volumes for the other system as well as source NAS volumes to replicate to that other system. A replication policy can be set up to run according to a set schedule or on demand. Replication management flows through a secure SSH tunnel from system to system over the client network.
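Because replication management flows over SSH on the client network, a quick reachability check from an administration host can rule out basic connectivity problems before examining replication settings. A sketch, with the partner address as a placeholder (assumes the `nc` utility is available):

```shell
# Verify that TCP port 22 (SSH) on the replication partner is reachable.
# <partner_client_vip> is a placeholder for the remote cluster's client VIP.
nc -zv -w 5 <partner_client_vip> 22
```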
Change the Local or Remote Networks for a Replication Partnership Change the local or remote replication network or IP address for a replication partnership. NAS volumes can be replicated only between tenants that are mapped on the local and remote FluidFS clusters. 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, click Replications. 4. Click the Remote Clusters tab, select a remote cluster, and then click Edit Settings.
• The maximum number of active incoming replications is 100. If more than 100 replications are active, they are queued. • The maximum number of replication partners is 100. • The maximum number of replicated NAS volumes or containers (source and target) on a cluster is 1024. • The maximum number of replication schedules per system is 1024. Define a QoS Node Create a QoS (Quality of Service) definition to bind a QoS node (network level) of outgoing traffic to a replication. 1.
Change Replication Throttling To disable replication throttling on a QoS node: 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, click Replications. 4. Click the Replication NAS Volumes tab, select a replication, and then right-click. 5. Select Replication Actions. 6. From the drop-down list, select Edit Replication QoS. 7. Clear the Enable QoS checkbox to disable using a QoS node. 8. Click OK.
When using cascaded replication together with replications that are not alike, a replication that is not part of the cascade can be limited.
Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, expand NAS Volumes and select a NAS volume. 4. Click the Replications tab. 5. In the Replication Status area, click Delete. The Delete dialog box opens. 6. Click OK. Run Replication On Demand After a replication is created, you can replicate a NAS volume on demand. You can run replication only from the source FluidFS cluster. 1. In the Storage view, select a FluidFS cluster. 2.
• To run replication based on day and time, select the Replicate on checkbox and select one or more days and times.
7. Click OK.

Delete a Replication Schedule
Delete a replication schedule if you no longer want replication to run regularly. You can delete a replication schedule only from the source FluidFS cluster.
1. In the Storage view, select a FluidFS cluster.
2. Click the File System tab.
3. In the File System view, expand NAS Volumes and select a NAS volume.
4. Click the Replications tab.
5.
View Replication Events Events related to replication can be viewed using Storage Manager. 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Replications. 4. Click the Replication Events tab. The replication events are displayed. You can search for specific replication events by typing search text in the box at the bottom of the Replications panel.
The following considerations apply when using replication for disaster recovery: • If the original source NAS volume is no longer available, you can configure the recovery NAS volume to replicate to another NAS volume in the original source FluidFS cluster. However, if the original source NAS volume is available, fail back to it. Failing back to the original source NAS volume usually takes less time than failing back to a new NAS volume.
NOTE: If NFS exports are used, the NAS volume names of the source and target should be the same, because the export path name includes the NAS volume name. This is not relevant for SMB shares.
…
Source volume An (Cluster A) to target volume Bn (Cluster B)
3. Ensure that at least one successful replication has occurred for all the source volumes in Cluster A. If the replication fails, fix the problems encountered and restart the replication process.
4. Record all Cluster A settings for future use.
Source volume B1 (Cluster B) to target volume A1 (Cluster A)
Source volume B2 (Cluster B) to target volume A2 (Cluster A)
…
Source volume Bn (Cluster B) to target volume An (Cluster A)
5. Manually perform replication on the promoted recovery volumes in Cluster B (B1, B2, …, Bn). Proceed to the next step when replication completes. If the replication fails, fix the problems encountered and restart the replication process. Ensure that all the NAS volumes are successfully replicated to Cluster A.
Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, expand Environment and then click Data Protection. 4. In the Data Protection panel, click the Auditing tab. 5. Click Edit Settings. The Modify File Access Notification dialog box opens. 6. Select the File Access Notification Enabled checkbox. 7. Provide the information for the Subscriber Name and Auditing Server Hosts fields. 8. Click OK.
19 FluidFS Monitoring This section contains information about monitoring the FluidFS cluster. These tasks are performed using the Dell Storage Manager Client. Monitoring NAS Appliance Hardware Storage Manager displays an interactive, graphical representation of the front and rear views of NAS appliances.
View the Status of the Fans View the status of the fans in a NAS appliance. 1. In the Storage view, select a FluidFS cluster. 2. Click the Hardware tab. 3. In the Hardware view, expand Appliances and select an appliance ID. 4. Select Fans. The status of each fan is displayed. View the Status of the Power Supplies View the status of the power supplies in a NAS appliance. 1. In the Storage view, select a FluidFS cluster. 2. Click the Hardware tab. 3.
Viewing FluidFS Cluster Storage Usage Storage Manager displays a line chart that shows storage usage over time for a FluidFS cluster, including total capacity, unused reserved space, unused unreserved space, and used space. 1. In the Storage view, select a FluidFS cluster. 2. Click the Summary tab. The Summary view displays the FluidFS cluster storage usage.
20 FluidFS Maintenance
This section contains information about performing FluidFS cluster maintenance operations. These tasks are performed using the Dell Storage Manager Client.

Connecting Multiple Data Collectors to the Same Cluster
You can have multiple data collectors connected to the same FluidFS cluster.
About this task
To designate the Primary data collector and/or whether it receives events:
Steps
1. In the Storage view, select a FluidFS cluster.
2. Click the Summary tab.
3.
Remove a FluidFS Cluster From Storage Manager Remove a FluidFS cluster if you no longer want to manage it using Storage Manager. For example, you might want to move the FluidFS cluster to another Storage Manager Data Collector. 1. Click the Storage view and select a FluidFS cluster. 2. Click the Summary tab. 3. In the right pane, click Delete. The Delete dialog box appears. 4. Click OK.
Delete a FluidFS Cluster Folder Delete a FluidFS cluster folder if it is not being used. Prerequisite The folder must be empty. Steps 1. In the Storage view, select a FluidFS cluster folder. 2. Click the Summary tab. 3. Click Delete. The Delete dialog box opens. 4. Click OK. Adding a Storage Center to a FluidFS Cluster The back-end storage for a FluidFS cluster can be provided by one or two Storage Centers.
b. In the IP Address field, type an IP address for the NAS controller.
c. Click OK.
d. Repeat the preceding steps for each NAS controller.
e. To specify a VLAN tag, type a VLAN tag in the VLAN Tag field. When a VLAN spans multiple switches, the VLAN tag is used to specify to which ports and interfaces to send broadcast packets.
f. Click Next.
8. To verify connectivity between the FluidFS cluster and the Storage Center, use the Connectivity Report page.
NOTE: Due to the complexity and precise timing required, schedule a maintenance window to add the NAS appliance(s). Steps 1. (Directly cabled internal network only) If the FluidFS cluster contains a single NAS appliance, with a direct connection on the internal network, re-cable the internal network as follows. a. b. c. d. e. Cable the new NAS appliance(s) to the internal switch. Remove just one of the internal cables from the original NAS appliance.
happens, wait 30 seconds, then click Refresh to update the Connectivity Report. When the iSCSI logins are complete and the Connectivity Report has been refreshed, the status for each FluidFS cluster iSCSI initiator shows Up. • For Fibre Channel NAS appliances, when the Connectivity Report initially appears, the FluidFS cluster HBAs show the status Not Found/Disconnected. You must record the WWNs and manually update fabric zoning on the Fibre Channel switch.
Attach a NAS Controller Attach a new NAS controller when replacing an existing NAS controller. After it is attached, the new NAS controller inherits the FluidFS cluster configuration settings of the existing NAS controller. Prerequisite Verify that the NAS controller being attached is in standby mode and powered on. A NAS controller is on and in standby mode if the power LED is flashing green at around two flashes per second. Steps 1. In the Storage view, select a FluidFS cluster. 2.
Managing Service Packs The FluidFS cluster uses a service pack methodology to upgrade the FluidFS software. Service packs are cumulative, meaning that each service pack includes all fixes and enhancements provided in earlier service packs. View the Upgrade History View a list of service pack upgrades that have been installed on the FluidFS cluster. 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System tab navigation pane, select Cluster Maintenance. 4.
Prerequisites • Contact Dell Technical Support to make service packs available for download to the FluidFS cluster. • The Storage Manager Data Collector must have enough disk space to store the service pack. If there is not enough space to store the service pack, a message will be displayed shortly after the download starts. You can delete old service packs to free up space if needed. • Installing a service pack causes the NAS controllers to reboot during the installation process.
NOTE: The installation process is a long-running operation. If you close the wizard, the installation process will continue to run in the background. You can view the installation progress using the File System tab → Maintenance → Internal → Background Processes tab.

Managing Firmware Updates
Firmware is automatically updated on NAS controllers during service pack updates and after a failed NAS controller is replaced. After a firmware update is complete, the NAS controller reboots.
5. In the right pane, click Restore Settings. The Restore Settings dialog box appears.
6. Select the settings to restore from backup:
• To restore SMB shares, select the SMB Shares check box.
• To restore NFS exports, select the NFS Exports check box.
• To restore snapshot schedules, select the Snapshot Scheduling check box.
• To restore quota rules, select the Quota Rules check box.
7. Click OK.
Restoring Local Groups Restoring the local groups configuration provides an effective way to restore all local groups without having to manually reconfigure them. This is useful in the following circumstances: • After recovering a system • When failing over to a replication target NAS volume Local Groups Configuration Backups Whenever a change in the local groups configuration is made, it is automatically saved in a format that allows you to restore it later.
WARNING: Reinstalling the FluidFS software on all NAS controllers will revert your system to factory defaults. All data on the FluidFS cluster will be unrecoverable after performing the procedure. Steps 1. Press and release the recessed power button at the back of the NAS controller to shut down the NAS controller. NOTE: Power off only the NAS controller on which you are reinstalling the FluidFS software. Do not power off the remaining NAS controllers.
21 FS Series VAAI Plugin The VAAI plugin allows ESXi hosts to offload some specific storage-related tasks to the underlying FluidFS appliances.
Plugin Verification
To check whether the VAAI plugin is installed on an ESXi host, type the following command in the ESXi console:
# esxcli software vib list | grep Dell_FluidFSNASVAAI
When running versions earlier than FluidFS v5.0.300109, a positive reply should return:
Dell_FluidFSNASVAAI 1.1.0-301 DELL VMwareAccepted 2015-05-17
When running versions 5.0.300109 or later, a positive reply should return:
Dell_FluidFSNASVAAI 1.1.
22 FluidFS Troubleshooting This section contains information about troubleshooting problems with the FluidFS cluster. These tasks are performed using the Dell Storage Manager Client. Viewing the Event Log A FluidFS cluster generates events when normal operations occur and also when problems occur. Events allow you to monitor the FluidFS cluster, detect and solve problems. Events are logged to the Event Log. View the Event Log View events contained in the Event Log. 1.
• To prevent the search from wrapping, clear the Wrap check box.
NOTE: By default, when a search reaches the bottom of the list and Find Next is clicked, the search wraps around to the first match in the list. When a search reaches the top of the list and Find Previous is clicked, the search wraps around to the last match in the list.
• To match whole phrases within the events, select the Full Match check box.
• To highlight all of the matches of the search, select the Highlight check box.
5.
NOTE: For some of the options, additional parameters might be required, such as a client IP address or a user path.
Steps
1. In the Storage view, select a FluidFS cluster.
2. Click the File System tab.
3. In the File System tab navigation pane, select Cluster Maintenance.
4. In the right pane, click the Support tab.
5. In the Diagnostic Tools area, click Run Diagnostic. The Run Diagnostic wizard opens.
6. Select the type of diagnostic to run.
7. Select the secondary type (authentication or file access).
NOTE: Power off only the NAS controller on which you are running the embedded system diagnostics. Do not power off the remaining NAS controllers. Powering off a NAS controller disconnects client connections while their clients are being transferred to other NAS controllers. Clients will then automatically reconnect to the FluidFS cluster. 2. Press and release the recessed power button at the back of the NAS controller to turn on the NAS controller. 3.
Steps 1. Connect a network cable to the LOM (Lights Out Management) Ethernet port on a NAS controller. The LOM Ethernet port is located on the lower right side of the back panel of a NAS controller. 2. Connect a Windows client to the iBMC. a. Connect a Windows client to the same network used for the LOM Ethernet port. b. Open a web browser. In the address bar of the web browser, type the iBMC IP address of the NAS controller. The iBMC login page appears. c. In the Username field, type ADMIN. d.
4. The FluidFS cluster and Active Directory server must use a common source of time. Configure NTP and verify that the system time is in sync with the domain controller time.

Active Directory Configuration Issues
Description: Unable to add Active Directory users and groups to SMB shares.
Cause: Probable causes might be:
• Unable to ping the domain using a FQDN.
• DNS might not be configured.
• NTP might not be configured.
Workaround:
Troubleshooting an NDMP Internal Error
Description: Backup or restore fails with an internal error.
Cause: NDMP internal errors indicate that a file system is not accessible or a NAS volume is not available.
Workaround: If the backup application cannot connect to a FluidFS cluster:
1. Verify that NDMP is enabled.
2. Verify that the backup application IP address is configured in NDMP.
If the backup application can connect to a FluidFS cluster, but cannot log in:
1. 2.
Workaround: Check the current ACL setting in the Windows client. Redefine the ACLs for the files by using a Windows client, the same way you initially defined them. Verify that you set the ACLs as the owner of the files, directories, and SMB shares. If you cannot redefine your ACLs because you currently do not have permissions, perform the following steps:
1. 2. 3. Restore the files from snapshots or a backup.
Workaround: This is an informative event. The administrator can contact the locking client and request that the application referencing this file be closed. The application that opened the file might not have shut down gracefully. If possible, it is recommended to reboot the client.

SMB Locking Inconsistency
Description: The SMB service is interrupted due to SMB interlocking issues.
Cause: There are various SMB client interlocking scenarios.
• The FluidFS cluster is restored from a backup or remote replication. During restore time, the directory structure is not complete and a few directories might not exist.
• A client with authorization to access a higher directory in the same path deletes or alters a directory that is being mounted by another client.
Workaround: When multiple clients are accessing the same data set, it is recommended to apply a strict permission level to avoid this scenario.
1.
Workaround: If the issue is due to NFS over UDP and a firewall, check whether the client mounts using UDP (usually the default) and whether there is a firewall in the path. If a firewall exists, add an appropriate exception to the firewall.
If the issue is due to permissions:
• Verify that the path you provided is correct.
• Check that you are trying to mount as root.
• Check that the system's IP address, IP range, domain name, or netgroup is in the NFS exports list.
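When a firewall in the path is dropping UDP, forcing the mount to use TCP is often the quickest test. A hedged sketch for a Linux client; the cluster VIP, NAS volume name, and mount point are placeholders:

```shell
# Mount the export over TCP instead of UDP (run as root).
# <client_vip> and <NAS_volume_name> are placeholders.
mount -o proto=tcp <client_vip>:/<NAS_volume_name> /mnt/nas
```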
NFS Insecure Access to Secure Export Description A client tries to access a secure export from an insecure port. Cause The secure NFS export requirement means that the accessing clients must use a well-known port (below 1024), which usually means that they must be root (uid=0) on the client. Workaround Identify the relevant NFS export and verify that it is set as secure (requires secure client port).
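On the client side, you can confirm whether the mount is using a reserved source port. A sketch for a Linux client with the iproute2 tools installed; NFS uses TCP port 2049 on the server side:

```shell
# List established connections to the NFS port; the local port shown
# should be below 1024 when a secure (reserved-port) mount is in effect.
# Mounts initiated by root on Linux use a reserved port by default.
ss -tn state established '( dport = :2049 )'
```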
• A client with authorization to access a higher directory in the same path deletes or alters a directory that is being mounted by another client. When multiple clients are accessing the same data set, it is recommended to apply a strict permission scheme to avoid this scenario.
Workaround:
1. If the FluidFS cluster is being restored, communicate the current status to the client and instruct the client to wait for the restore process to complete.
NFS Access Denied to a File or Directory Description A client cannot access the NFS file or directory despite the fact that the user belongs to the group owning the NFS object and the group members are permitted to perform the operation. Cause NFS servers (versions 2 and 3) use the Remote Procedure Call (RPC) protocol for authentication of NFS clients. Most RPC clients have a limitation, by design, of up to 16 groups passed to the NFS server.
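From any UNIX/Linux client you can check how many groups a user carries; with AUTH_SYS, groups beyond the sixteenth are not passed to the NFS server, so access granted only through such a group fails. A quick sketch:

```shell
# Count the supplementary groups for the current user. If the count
# exceeds 16, NFSv2/v3 with AUTH_SYS silently truncates the list.
id -G | wc -w
```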
smbclient //<cluster_name>/<share_name> -U user%password -c ls

Workaround: It is recommended that you use the NFS protocol interfaces to access the FluidFS cluster file system from UNIX/Linux clients. To work around this issue:
1. Ensure that the administrator creates NFS exports to the same locations that you access using SMB, and connect to them using the mount command from UNIX/Linux clients.
2. Use NFS-based interfaces to access the FluidFS cluster.
Tx_pause for eth(x) on node 1 is off. Cause Flow control is not enabled on the switch(es) connected to a FluidFS cluster controller. Workaround See the switch vendor's documentation to enable flow control on the switch(es). Troubleshoot Replication Issues This section contains probable causes of and solutions to common replication problems.
Replication Target Volume is Busy Reclaiming Space Description Replication between the source NAS volume and the target NAS volume fails because the target NAS volume is busy freeing up space. Cause Replication fails because the target NAS volume is busy freeing up space. Workaround The replication continues automatically when the space is available. Verify that the replication automatically continues after a period of time (an hour).
Replication Source FluidFS Cluster is Busy Description Replication between the source NAS volume and the target NAS volume fails because the file system of the source NAS volume is busy replicating other NAS volumes. Cause Replication fails because the file system of the source NAS volume is busy replicating other NAS volumes. Workaround The replication continues automatically when the file system releases part of the resources.
issues and possible authentication problems. In many cases, the domain controller is also the NTP server.
4. Verify that the NTP server is up and provides the NTP service.
5. Check the network path between the FluidFS cluster and the NTP server, using ping, for example. Verify that the response time is in the millisecond range.

Troubleshooting System Shutdown
Description: During a system shutdown using Storage Manager, the system does not stop and the NAS controllers do not shut down after 20 minutes.
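Both checks can be run from any host that reaches the NTP server; a sketch assuming the `ntpdate` utility is installed, with a placeholder for the server name:

```shell
# Query the offset against the NTP server without setting the clock,
# then verify basic reachability and latency with ping.
# <ntp_server> is a placeholder for the NTP server host name or address.
ntpdate -q <ntp_server>
ping -c 3 <ntp_server>
```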
Controller Taking Long Time to Boot Up After Service Pack Upgrade
Description: The NAS controller takes a long time to boot up after upgrading the service pack of the NAS controller firmware.
Cause: The upgrade process can take up to 60 minutes to complete.
Workaround:
• Connect a keyboard and monitor to the NAS controller that is taking a long time to boot up.
• If the system is booting, and is at the boot phase, let the upgrades finish. This can take up to 60 minutes to complete.
Part IV FluidFS v5 Cluster Management
This section describes how to use Storage Manager to manage FluidFS clusters running version 5.x.
NOTE: FluidFS Cluster Management contains two separate sections, one for FluidFS v6 and one for FluidFS v5, because the GUI procedures differ between these two versions.
23 FS8x00 Scale-Out NAS with FluidFS Overview This section contains an overview of FS8x00 scale-out Network Attached Storage (NAS). How FS8x00 Scale-Out NAS Works Dell FS8x00 scale-out NAS leverages the Dell Fluid File System (FluidFS) and Storage Centers to present file storage to Microsoft Windows, UNIX, and Linux clients. The FluidFS cluster supports the Windows, UNIX, and Linux operating systems installed on a dedicated server or installed on virtual systems deploying Hyper-V or VMware virtualization.
• Controller (NAS controller) – The two primary components of a NAS appliance, each of which functions as a separate member in the FluidFS cluster.
• Peer controller – The NAS controller with which a specific NAS controller is paired in a NAS appliance.
• Standby controller – A NAS controller that is installed with the FluidFS software but is not part of a FluidFS cluster. For example, a new or replacement NAS controller from the Dell factory is considered a standby controller.
• High-performance, scale-out NAS – Support for a single namespace spanning up to four NAS appliances (eight NAS controllers).
• Capacity scaling – Ability to scale a single namespace up to 4-PB capacity with two Storage Centers.
• Connectivity options – Offers 1GbE and 10GbE copper and optical options for connectivity to the client network.
• Highly available and active-active design – Redundant, hot-swappable NAS controllers in each NAS appliance.
Overview of the FS8x00 Hardware Scale-out NAS consists of one to four FS8x00 appliances configured as a FluidFS cluster. Each NAS appliance is a rack-mounted 2U chassis that contains two hot-swappable NAS controllers in an active-active configuration. In a NAS appliance, the second NAS controller with which one NAS controller is paired is called the peer controller.
– Internal network
– LAN/client network
The following figure shows an overview of the scale-out FS8600 architecture.
Figure 55. FS8600 Architecture

Storage Center
The Storage Center provides the FS8600 scale-out NAS storage capacity; the FS8600 cannot be used as a standalone NAS appliance. Storage Centers eliminate the need to have separate storage capacity for block and file storage.
VIPs) on the client network that allow clients to access the FluidFS cluster as a single entity. The client VIP also enables load balancing between NAS controllers, and ensures failover in the event of a NAS controller failure. If client access to the FluidFS cluster is not through a router (in other words, a flat network), define one client VIP per NAS controller. If clients access the FluidFS cluster through a router, define a client VIP for each client interface port per NAS controller.
the cache to disk (Storage Center or nonvolatile internal storage)
• Scenario: Simultaneous dual-NAS controller failure in a single NAS appliance cluster – System status: Unavailable – Data integrity: Lose data in cache – Comments: Data that has not been written to disk is lost
• Scenario: Sequential dual-NAS controller failure in a multiple NAS appliance cluster, same NAS appliance – System status: Unavailable – Data integrity: Unaffected – Comments: Sequential failure assumes enough time is available between NAS controller failures to write all data from the cache to
24 FluidFS System Management for FS Series Appliances This section contains information about basic FluidFS cluster system management. These tasks are performed using the Dell Storage Manager Client. Using the Dell Storage Manager Client or CLI to Connect to the FluidFS Cluster As a storage administrator, you can use either the Dell Storage Manager Client or command-line interface (CLI) to connect to and manage the FluidFS cluster. By default, the FluidFS cluster is accessed through the client network.
Connect to the FluidFS Cluster CLI Using a VGA Console
Log on to the CLI using a VGA console to manage the FluidFS cluster. Connect a monitor to a NAS controller’s VGA port and connect a keyboard to one of the NAS controller’s USB ports.
1. From the command line, type the following command at the first login as prompt:
cli
2. Type the FluidFS cluster administrator user name at the next login as prompt. The default user name is Administrator.
3.
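The CLI can also be reached over the client network with SSH; a minimal sketch assuming the default Administrator account, with a placeholder for a client VIP:

```shell
# Open the FluidFS CLI over SSH with password authentication.
# <client_vip> is a placeholder for one of the cluster's client VIPs.
ssh Administrator@<client_vip>
```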
Related link
Connect to the FluidFS Cluster CLI Through SSH Using a Password

Managing Secured Management
By default, all FluidFS cluster management ports are open on all subnets, along with the other ports needed for client access (SMB/NFS), replication, and NDMP. Secured management, when enabled, exclusively limits all management traffic to one specific subnet.
8. (Optional) Configure the remaining FluidFS management subnet attributes as needed. These options are described in the online help.
• To change the netmask or prefix of the network, type a netmask or prefix length in the Netmask or Prefix Length field.
• To specify a VLAN tag, type a VLAN tag in the VLAN Tag field. When a VLAN spans multiple switches, the VLAN tag is used to specify which ports and interfaces to send broadcast packets to.
9. Click OK.
Change the Netmask or Prefix for the Secured Management Subnet Change the netmask (IPv4) or prefix (IPv6) for the secured management subnet. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. In the File System panel, expand Environment, select Network, and then click the Management Network tab. 4. In the right pane, click Edit Settings. The Edit Management Network Settings dialog box appears. 5.
8. Click OK. Delete the Secured Management Subnet Delete the secured management subnet if you no longer want to exclusively limit management traffic to one specific subnet. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. In the File System panel, expand Environment, select Network, and then click the Management Network tab. 4. In the right pane, click Delete. The Delete Management Network dialog box appears. 5. Select the Management Network to delete. 6.
Managing Licensing The license determines which NAS features are available in the FluidFS cluster. View License Information All FluidFS cluster features are automatically included in the license for FS8600 scale-out NAS. Storage Manager displays FluidFS cluster license information, but the license cannot be modified. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. Select Maintenance in the File System panel. 4. In the right pane, click the License tab.
4. In the right pane, click Edit Settings. The Modify Time Settings dialog box appears. 5. The time zone is displayed in the Time Zone drop-down menu. 6. To set a time zone, select a time zone from the Time Zone drop-down menu. 7. Click OK. View the Time View the current time for the FluidFS cluster. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. In the File System panel, expand Environment and select Time. 4. In the right pane, click Edit Settings.
• To add an NTP server, type the host name or IP address of an NTP server in the NTP Servers text field and click Add.
• To remove an NTP server, select an NTP server from the NTP Servers list and click Remove.
6. Click OK.

Enable or Disable NTP
Enable NTP to add one or more NTP servers with which to synchronize the FluidFS cluster time. Disable NTP if you prefer to manually set the FluidFS cluster time.
1. Click the Storage view and select a FluidFS cluster.
2. Click the File System tab.
3.
must be compiled into a customer-provided SNMP management station. The MIBs are databases of information specific to the FluidFS cluster. Obtain SNMP MIBs and Traps The SNMP MIBs and traps for the FluidFS cluster are available for download from the FluidFS cluster FTP server. To download the MIB file, use either of the following options: Prerequisite The FTP server must be enabled. Steps 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab and select Maintenance. 3.
Add or Remove SNMP Trap Recipients Add or remove hosts that receive the FluidFS cluster-generated SNMP traps. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab and select Maintenance. 3. Click the SNMP tab in the Maintenance panel. 4. In the right pane, click Edit Settings in the SNMP Trap section. The Modify SNMP Trap Settings dialog box appears. 5. Add or remove SNMP trap recipients. • 6.
8. Click OK. Managing the Operation Mode The FluidFS cluster has three operation modes: • Normal: System is serving clients using SMB and NFS protocols and operating in mirroring mode. • Write-Through Mode: System is serving clients using SMB and NFS protocols, but is forced to operate in journaling mode. This mode of operation might have an impact on write performance, so it is recommended only in situations such as repeated electrical power failures.
Assign or Unassign a Client to a NAS Controller You can permanently assign one or more clients to a particular NAS controller. For effective load balancing, do not manually assign clients to NAS controllers, unless specifically directed to do so by your technical support representative. Assigning a client to a NAS controller disconnects the client’s connection. Clients will then automatically reconnect to the assigned NAS controller. 1. Click the Storage view and select a FluidFS cluster. 2.
Steps 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. In the File System tab navigation pane, select Client Activity. 4. In the right pane, click Rebalance Clients. The Rebalance Clients dialog box appears. 5. Click OK. View Open Files You can view up to 1,000 open files. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab, select Client Activity. 3. In the Client Activity tab navigation pane, select Open Files.
Start Up the FluidFS Cluster Start up a FluidFS cluster to resume operation after shutting down all NAS controllers in a FluidFS cluster. Prerequisite Before turning on the system, ensure that all cables are connected, and all components are connected to a power source. Steps 1. If previously shut down, turn the Storage Centers back on before starting the FluidFS cluster. 2. Press and release the recessed power button at the back of each NAS controller to turn on the NAS controllers.
3. In the Hardware tab navigation pane, expand Appliances and select a NAS controller. 4. In the right pane, click Blink. The Blink dialog box appears. 5. Enable or disable NAS controller blinking. • To enable NAS controller blinking, select Blink controller in slot 1 or Blink controller in slot 2. • To disable NAS controller blinking, clear Blink controller in slot 1 or Blink controller in slot 2. 6. Click OK.
25 FluidFS Networking This section contains information about managing the FluidFS cluster networking configuration. These tasks are performed using the Dell Storage Manager Client. Managing the Default Gateway The default gateway enables client access across subnets. Only one default gateway can be defined for each type of IP address (IPv4 and IPv6). If client access is not through a router (in other words, a flat network), a default gateway does not need to be defined.
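The default-gateway rule above can be checked from a Linux client. The commands and all addresses below are placeholders for your environment, not values from this guide; they only illustrate how a client decides whether traffic to a client VIP goes through a router.

```shell
# Sketch: inspect routing from a Linux client that accesses the cluster.
# Both addresses are hypothetical examples.
client_vip='172.22.69.21'            # hypothetical FluidFS client VIP
gateway='172.22.64.1'                # hypothetical router between client and cluster
echo "ip route get ${client_vip}"    # shows whether the path includes a 'via <gateway>' hop
# On a flat network the route shows no gateway hop, so no default gateway
# needs to be defined on the cluster. Otherwise, a route such as the
# following (on the client) mirrors the gateway configured on the cluster:
echo "ip route add 172.22.69.0/24 via ${gateway}"
```

If `ip route get` reports the VIP as directly reachable (no `via` hop), the client and cluster share a subnet and the cluster-side default gateway is optional.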
View DNS Servers and Suffixes View the current DNS servers providing name resolution services for the FluidFS cluster and the associated DNS suffixes. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. In the File System tab navigation pane, select Network. 4. In the right pane, the DNS servers and suffixes are displayed in the DNS section.
DNS Settings Dialog Box Use this dialog box to add or remove DNS servers and suffixes for a FluidFS cluster. Field/Option Description DNS Servers IP Addresses Specifies the IP addresses of the DNS servers that provide name resolution services for the FluidFS cluster. DNS Suffixes Specifies the DNS suffixes to associate with the FluidFS cluster.
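A quick client-side check that the DNS servers entered in this dialog resolve the cluster name can be done with standard tools. The server address and cluster name below are placeholders.

```shell
# Sketch: confirm a DNS server listed in the dialog resolves the cluster name.
dns_server='192.168.1.10'              # placeholder DNS server IP
cluster_fqdn='fluidfs1.example.com'    # placeholder cluster FQDN (name + DNS suffix)
echo "nslookup ${cluster_fqdn} ${dns_server}"   # query that specific server
```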
Add a Static Route When adding a static route, you must specify the subnet properties and the gateway through which to access this subnet. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. In the File System tab navigation pane, select Network. 4. In the right pane, click Create Static Route. The Create Static Route dialog box appears. 5. In the Target Network IP Address field, type a network IP address (for example, 100.10.55.00). 6.
View the Client Networks View the current client networks. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. In the File System tab navigation pane, select Network. 4. The client networks are displayed in the right pane in the Client Networks section. Create a Client Network Create a client network on which clients will access SMB shares and NFS exports. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3.
5. In the VLAN Tag field, type a VLAN tag for the client network. 6. Click OK. Change the Client VIPs for a Client Network Change the client VIPs through which clients will access SMB shares and NFS exports. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. In the File System tab navigation pane, select Network. 4. In the right pane, right-click a client network and select Edit Settings. The Edit Client Network Settings dialog box appears. 5.
Change the Client Network MTU Change the maximum transmission unit (MTU) of the client network to match your environment. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. In the File System tab navigation pane, select Network. 4. In the right pane, click Edit Settings in the Client Interface section. 5. In the MTU field, type a new MTU. If your network hardware supports jumbo frames, enter 9000; otherwise, enter 1500. 6. Click OK.
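Before committing to an MTU of 9000, it is worth verifying that every switch and NIC on the path actually passes jumbo frames. The sketch below is a common Linux-client check (the VIP is a placeholder); it subtracts the IPv4 and ICMP header overhead to find the largest payload that fits in one unfragmented frame.

```shell
# Sketch: verify end-to-end jumbo frame support from a Linux client
# before setting the client network MTU to 9000.
mtu=9000
overhead=28                      # 20-byte IPv4 header + 8-byte ICMP header
payload=$((mtu - overhead))      # largest ICMP payload that fits in one frame
# -M do forbids fragmentation, so the ping fails if any hop cannot pass the frame.
echo "ping -M do -s ${payload} -c 3 <client_VIP>"
```

If the ping reports "Message too long" or similar, a device on the path is not configured for jumbo frames and the MTU should remain 1500.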
26 FluidFS Account Management and Authentication This section contains information about managing FluidFS cluster accounts and authentication. These tasks are performed using the Dell Storage Manager Client. Account Management and Authentication FluidFS clusters include two types of access: • Administrator-level access for FluidFS cluster management • Client-level access to SMB shares, NFS exports, and FTP folder Administrator accounts control administrator-level access.
Login Name | Purpose | SSH Access Enabled by Default | SSH Access Allowed | VGA Console Access Enabled by Default | VGA Console Access Allowed
enableescalationaccess | Enable escalation account | No | No | Yes | Yes
escalation | FluidFS cluster troubleshooting when unable to log in with support account | No | Yes | No | Yes
cli | Gateway to command-line interface access | Yes (can bypass password using SSH key) | Yes (can bypass password using SSH key) | N/A | N/A
Default Password: N/A
Administrator Account The Administr
7. Click OK. Enable or Disable Dell SupportAssist You can enable the Dell Storage Manager Client to send FluidFS cluster diagnostics using Dell SupportAssist. 1. Click the Storage view and select a FluidFS cluster. 2. Click Maintenance. 3. In the right pane, click the Support tab. 4. In the Support Assist section, click Modify Support Assist Settings. The Modify Support Assist Settings dialog box appears. 5. Enable or disable SupportAssist. • 6.
• NAS Volume Administrator: The following table summarizes which settings a volume administrator can change for the NAS volumes to which they are assigned. They can also view, but not change, the rest of the FluidFS cluster configuration.
g. Click OK. 6. From the Privilege drop-down menu, select the permission level of the administrator: • FluidFS Cluster Administrator: These administrators can manage any aspect of the FluidFS cluster. • NAS Volume Administrator: These administrators can only manage the NAS volumes to which they are assigned and view the FluidFS cluster configuration. 7. In the Email Address field, type an email address for the administrator. 8. Click OK.
Change an Administrator Password You can change the password for a local administrator account only. The password for remote administrators is maintained in the external database. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab, expand Environment, and select Authentication. 3. In the right pane, click the Local Users and Groups tab. 4. Select an administrator and click Change Password. The Change Password dialog box appears. 5.
6. Click Finish. The new local users and groups tree is displayed in the Console Root window. 7. Select Users or Groups. 8. Select a local user or group, and select an action from the Actions pane. Managing Local Users You can create local users that can access SMB shares and NFS exports, or that will become a FluidFS cluster administrator.
3. In the right pane, click the Local Users and Groups tab. 4. Select a local user and click Edit Settings. The Edit Settings dialog box appears. 5. To add a secondary local group to which the local user is assigned: a. In the Additional Groups area, click Add. The Select Group dialog box appears. b. From the Domain drop-down menu, select the domain to which the local group is assigned. c. In the Group field, type either the full name of the local group or the beginning of the local group name. d.
4. In the Password field, type a new password for the local user. The password must be at least seven characters long and contain three of the following elements: a lowercase character, an uppercase character, a digit, or a special character (such as +, ?, or *). 5. In the Confirm Password field, retype the password for the local user. 6. Click OK.
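The password policy above can be expressed as a small check. This is a sketch written for illustration only; the example password and the acceptable/rejected labels are hypothetical, and the cluster performs its own validation.

```shell
# Sketch: check a candidate password against the stated policy
# (at least seven characters, and three of: lowercase, uppercase, digit, special).
pw='Xy7+abc'                              # example candidate, not a real credential
classes=0
case "$pw" in *[a-z]*) classes=$((classes + 1)) ;; esac   # lowercase present?
case "$pw" in *[A-Z]*) classes=$((classes + 1)) ;; esac   # uppercase present?
case "$pw" in *[0-9]*) classes=$((classes + 1)) ;; esac   # digit present?
case "$pw" in *[!a-zA-Z0-9]*) classes=$((classes + 1)) ;; esac   # special char present?
if [ "${#pw}" -ge 7 ] && [ "$classes" -ge 3 ]; then
  result='acceptable'
else
  result='rejected'
fi
echo "$result"
```

For the example value `Xy7+abc` all four character classes are present and the length is seven, so the check prints `acceptable`.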
a. Click Add. The Select User dialog box appears. b. From the Domain drop-down menu, select the domain to which the remote user is assigned. c. In the User field, type either the full name of the remote user or the beginning of the remote user name. d. (Optional) Configure the remaining remote user search options as needed. These options are described in the online help.
b. From the Domain drop-down menu, select the domain to which the remote user is assigned. c. In the User field, type either the full name of the remote user or the beginning of the remote user name. d. (Optional) Configure the remaining remote user search options as needed. These options are described in the online help. To change the maximum number of search results to return, select the maximum number of search results from the Max Results drop-down menu. e. Click Search. f.
ensure that the FluidFS cluster uses a specific domain controller. Adding multiple domain controllers ensures continued authentication of users in the event of a domain controller failure. If the FluidFS cluster cannot establish contact with the preferred server, it will attempt to connect to the remaining servers in order. Prerequisites • An Active Directory directory service must be deployed in your environment. • The FluidFS cluster must have network connectivity to the directory service.
Modify Active Directory Authentication Settings You cannot directly modify the settings for Active Directory authentication. You must remove the FluidFS cluster from the Active Directory domain and then re-add it to the Active Directory domain. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab, expand Environment and select Authentication. 3. In the right pane, click the Directory Services tab. 4. Click Leave Domain. The Leave Domain dialog box appears. 5.
Filter Open Files You can filter open files by file name, user, protocol, or maximum number of open files to display. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab, select Client Activity. 3. In the Client Activity tab navigation pane, select Open Files. 4. The Open Files dialog box appears. 5. In the top portion of the dialog box, fill in one or more of the fields listed (File name, User, Protocol, or Number of Files to Display). 6.
4. Click Configure External User Database in the NFS User Repository section. The Edit External User Database dialog box appears. 5. Select LDAP. 6. In the Base DN field, type an LDAP base distinguished name to represent where in the directory to begin searching for users. The name is usually in this format: dc=domain, dc=com. 7. In the LDAP Servers text field, type the host name or IP address of an LDAP server and click Add. Repeat this step for any additional LDAP servers. 8.
• To indicate that Active Directory provides the LDAP database, select the Use LDAP on Active Directory Extended Schema check box. • To indicate that an LDAP server provides the LDAP database, clear the Use LDAP on Active Directory Extended Schema check box. 6. Click OK. Enable or Disable Authentication for the LDAP Connection Enable authentication for the connection from the FluidFS cluster to the LDAP server if the LDAP server requires authentication. 1.
Managing NIS In environments that use Network Information Service (NIS), you can configure the FluidFS cluster to authenticate clients using NIS for access to NFS exports. Enable or Disable NIS Authentication Configure the FluidFS cluster to communicate with the NIS directory service. Adding multiple NIS servers ensures continued authentication of users in the event of a NIS server failure.
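Before pointing the cluster at an LDAP or NIS user repository, a sanity check from a Linux host can confirm the servers respond and the Base DN or NIS maps are correct. The server names and Base DN below are placeholders, and the commands assume the standard OpenLDAP client and yp-tools packages.

```shell
# Sketch: verify the external user databases from a Linux host.
base_dn='dc=domain,dc=com'         # Base DN format from the dialog above
ldap_server='ldap1.example.com'    # placeholder LDAP server
# List a few POSIX accounts to confirm the server and Base DN are usable:
echo "ldapsearch -x -H ldap://${ldap_server} -b '${base_dn}' '(objectClass=posixAccount)' uid uidNumber"
# For NIS, confirm the host binds to the expected server and can read the map:
echo "ypwhich && ypcat passwd | head -3"
```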
Change the Order of Preference for NIS Servers If the FluidFS cluster cannot establish contact with the preferred server, it will attempt to connect to the remaining servers in order. 1. Click the Storage view and select a FluidFS cluster. 2. In the File System pane, expand Environment and select Authentication. 3. In the Authentication pane, click the Directory Services tab. 4. Click Configure External User Database in the NFS User Repository section.
Managing the User Mapping Policy Configure the FluidFS cluster mapping policy to automatically map all users or to allow mappings between specific users only. Automatically Map Windows and UNIX/Linux Users Automatically map all Windows users in Active Directory to the identical UNIX/Linux users in LDAP or NIS, and map all UNIX/Linux users to the identical Windows users. Mapping rules will override automatic mapping. 1. Click the Storage view and select a FluidFS cluster. 2.
To change the maximum number of search results to return, select the maximum number of search results from the Max Results drop-down menu. d. Click Search. e. Select a user from the search results. f. Click OK. 9. Select the direction of the user mapping: • The two users will have identical file access permissions (via any protocol) • Enable Unix To Windows Mapping • Enable Windows To Unix Mapping 10. Click OK.
27 FluidFS NAS Volumes, Shares, and Exports This section contains information about managing the FluidFS cluster from the client perspective. These tasks are performed using the Dell Storage Manager Client. Managing the NAS Pool When configuring a FluidFS cluster, you specify the amount of raw Storage Center space to allocate to the FluidFS cluster (NAS pool). The maximum size of the NAS pool is: • 2 PB with one Storage Center.
Steps 1. Click the Storage view and select a FluidFS cluster. 2. Click the Summary tab. 3. In the right pane, click Actions → Storage Centers → Expand NAS Pool. The Expand NAS Pool dialog box appears. 4. In the NAS Pool Size field, type a new size for the NAS pool in gigabytes (GB) or terabytes (TB). NOTE: The new size is bound by the size displayed in the Minimum New Size field and the Maximum New Size field. 5. Click OK.
3. In the Summary tab navigation pane, select Edit NAS Pool Settings. 4. The Set NAS Pool Space Settings dialog box appears. 5. Enable or disable the NAS pool unused space alert. • To enable the NAS pool unused space alert, select the Unused Space Alert check box. • To disable the NAS pool unused space alert, clear the Unused Space Alert check box. 6.
Thin and Thick Provisioning for NAS Volumes In addition to the thin provisioning applied to the NAS pool, NAS volumes can be thin‑provisioned. With thin provisioning (the default), storage space is consumed on the Storage Centers only when data is physically written to the NAS volume, not when the NAS volume is initially allocated. Thin provisioning offers the flexibility to modify NAS volumes to account for future increases in usage.
Department | Security Style | Snapshots | Replication | NDMP Backup | Number of SMB/NFS Clients | Read/Write Mix | Hourly Change % of Existing Data
Broadcast | Mixed | No | No | Weekly | 10 | 90/10 | None
Press | NTFS | Daily | No | No | 5 | 10/90 | 5%
Marketing | NTFS | Daily | Yes | No | 5 | 50/50 | None
An average read/write mix is 20/80. An average hourly change rate for existing data is less than 1%. Example 1 Create NAS volumes based on departments. The administrator breaks up storage and management into functional groups.
Term Description Overcommitted space Storage space allotted to a thin-provisioned volume over and above the actually available physical capacity of the NAS pool. The amount of overcommitted space for a NAS volume is: (Total volume space) – (NAS pool capacity). With thin provisioning, storage space is consumed only when data is physically written to the NAS volume, not when the NAS volume is initially allocated.
f. Click OK. 2. Migrate the data from the existing NAS product to the FluidFS cluster. 3. Configure the NAS volumes to resume normal operation and write data according to the configured Storage Profile. a. Click the Storage view and select a FluidFS cluster. b. Click the Summary tab. c. In the right pane, click Edit FluidFS Cluster Settings. The Edit FluidFS Cluster Settings dialog box appears. d. Click the SC Storage Profile tab. e. Clear the Import to Lowest Tier check box. f. Click OK.
6. Enable or disable a user’s access to a snapshot. • To enable a user’s access to a NAS volume snapshot, clear the Limit Access to Specific Client Networks check box. • To disable a user’s access to a NAS volume snapshot, select the Limit Access to Specific Client Networks check box. – Enter a Network ID in the Allow Access Only to Users Coming from These Client Networks box, and click Add. 7. Click OK.
Change Permissions Interoperability for a NAS Volume Change the permissions interoperability (file security style) settings of a NAS volume to change the file access security style for the NAS volume. Modifying the file security style of a NAS volume affects only the files and directories created after the modification. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab, expand NAS Volumes and select a NAS volume. 3. In the right pane, click Edit Settings.
Enable or Disable a NAS Volume Used Space Alert You can enable an alert that is triggered when a specified percentage of the NAS volume space has been used. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab, expand NAS Volumes and select a NAS volume. 3. In the right pane, click Edit Settings. The Edit NAS Volume Settings dialog box appears. 4. Click Space in the left navigation pane. 5. Enable or disable a NAS volume used space alert.
• Ensure that the NAS volume is not mounted and warn affected clients that the data will be deleted. Steps 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab, expand NAS Volumes and select a NAS volume. 3. In the right pane, click Delete. The Delete dialog box appears. 4. Click OK. Organizing NAS Volumes in Storage Manager Using Folders By default Storage Manager displays NAS volumes in alphabetical order.
Delete a NAS Volume Folder Delete a NAS volume folder if you no longer want to group NAS volumes. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab, expand NAS Volumes and select a NAS volume folder. 3. In the right pane, click Delete. The Delete dialog box appears. 4. Click OK. If the folder contains NAS volumes, they are moved into the (default) root parent folder of the NAS volume folder.
3. In the right pane, click the Snapshots & Clones tab. The NAS volume clones are displayed in the Cloned NAS Volume list. Create a NAS Volume Clone Cloning a NAS volume creates a writable copy of the NAS volume. Prerequisites • The snapshot from which the clone NAS volume will be created must already exist. • Data reduction must be disabled on the base volume. • The snapshot space consumption threshold alert must be disabled on the base volume. Steps 1.
Configuring SMB Shares View, add, modify, and delete SMB shares. View All SMB Shares on the FluidFS Cluster View all current SMB shares for the FluidFS cluster. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab, select SMB Shares. The SMB shares are displayed in the right pane. View SMB Shares on a NAS Volume View the current SMB shares for a NAS volume. 1. Click the Storage view and select a FluidFS cluster. 2.
Delete an SMB Share If you delete an SMB share, the data in the shared directory is no longer shared but it is not removed. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab, select SMB Shares. 3. In the right pane, select an SMB share and click Delete. The Delete dialog box appears. 4. Click OK.
3. Click the Protocols tab. 4. Click Edit SMB Security Settings in the SMB Protocol section. A dialog box appears. 5. To enable required message signing, select the Force SMB Clients Signing check box. 6. To disable required message signing, clear the Force SMB Clients Signing check box. 7. Click OK. Enable or Disable SMB Message Encryption SMBv3 adds the capability to make data transfers secure by encrypting data in-flight, to protect against tampering and eavesdropping attacks. 1.
Automatic Creation of Home Share Folders Automatic creation of home share folders automatically creates folders for users when they log in for the first time. The ownership of the home share is automatically assigned to the user, and the domain administrator is automatically granted full access to the share. To enable automatic creation of home share folders: 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab, select SMB Shares. 3.
– To view the parent folders of a particular folder, click Up. h. From the Folder template drop-down menu, select the form that the user’s folders should take: • Select /Domain/User if you want the user’s folders to take the form: /<domain>/<user>. • Select /User if you want the user’s folders to take the form: /<user>. i. (Optional) Configure the remaining SMB home shares attributes as needed. These options are described in the online help.
Change the Owner of an SMB Share Using the FluidFS Cluster Administrator Account If the FluidFS cluster is not joined to Active Directory, use the Administrator account to change the owner of an SMB share. These steps might vary slightly depending on which version of Windows you are using. 1. Start the Map network drive wizard. 2. In Folder, type: \\<client_VIP_or_name>\<SMB_share_name> 3. Select Connect using different credentials. 4. Click Finish. 5.
5. Select the ACL to UNIX 777 Mapping Enabled checkbox. NOTE: Actual data-access checks in FluidFS are still made against the original security ACLs. This feature applies only to NAS volumes with Windows or mixed security style (for files with Windows ACLs). Setting ACLs on an SMB Share To set ACLs, use Windows Explorer procedures. When defining an ACL for a local user account, you must use this format: <FluidFS_cluster_name>\<local_user_name> Setting SLPs on an SMB Share Using MMC To set SLPs, you can us
View Audit SACL Access You can view SACL (System Access Control List) access to ensure that an auditing event is generated when a file or directory is accessed. To view Audit SACL Access: 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. In the File System tab navigation pane, expand NAS Volumes and select a NAS volume. 4. Click Edit Settings. The Edit NAS Volume Settings dialog box appears. 5. Click the Auditing tab.
3. In the right pane, select an SMB share and click Edit Settings. The Edit SMB Share Settings dialog box appears. 4. Select Content in the vertical tab. 5. Enable or disable showing files with names starting with a dot. • To enable showing files with names starting with a dot, select the Show files with name starting with a dot check box. • To disable showing files with names starting with a dot, clear the Show files with name starting with a dot check box. 6. Click Apply, then click OK.
Managing NFS Exports Network File System (NFS) exports provide an effective way of sharing files across a UNIX or Linux network with authorized clients. After creating NFS exports, NFS clients then need to mount each NFS export. The FluidFS cluster fully supports NFS protocol version 3 and all requirements of NFS protocol versions 4.0 and 4.1.
NOTE: A folder name must be less than 100 characters long and cannot contain the following characters: >, ", \, |, ?, and *. • To share the root of the NAS volume, leave the Folder Path field set to the default value of /. • To use an existing directory to share, type the path to the directory in the Folder Path field. • To browse to an existing directory to share: Click Select Folder. The Select Folder dialog box appears and displays the top-level folders for the NAS volume.
– To view the parent folders of a particular folder, click Up. 6. Click OK. Change the Client Authentication Methods for an NFS Export Change the authentication method(s) that clients use to access an NFS export. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. In the File System tab navigation pane, select NFS Exports. 4. In the right pane, select an NFS export and click Edit Settings. The Edit NFS Export Settings dialog box appears. 5.
Enable or Disable Secure Ports for an NFS Export Requiring secure ports limits client access to an NFS export to ports lower than 1024. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. In the File System tab navigation pane, select NFS Exports. 4. In the right pane, select an NFS export and click Edit Settings. The Edit NFS Export Settings dialog box appears. 5. Enable or disable secure ports. • 6.
Setting Permissions for an NFS Export To assign users access to an NFS export, you must log in to the NFS export using a trusted client machine account and set access permissions and ownership of the NFS export using the chmod and chown commands on the NFS mount point. Accessing an NFS Export Clients use the mount command to connect to NFS exports using UNIX or Linux. NOTE: The parameters shown in the command lines are recommended parameters.
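The mount, chown, and chmod steps above can be sketched as client-side commands. The VIP, NAS volume, export path, mount point, and user/group names are placeholders for your environment; the mount options shown (NFSv3, hard mount) are illustrative examples, not values mandated by this guide.

```shell
# Sketch: mount an NFS export from a Linux client, then set ownership
# and permissions from a trusted client machine. All names are placeholders.
vip='<client_VIP>'
export_path='/<NAS_volume>/<exported_folder>'
mount_cmd="mount -t nfs -o vers=3,hard ${vip}:${export_path} /mnt/fluidfs"
echo "$mount_cmd"
# On the mounted export, assign access with standard UNIX tools:
echo "chown user1:group1 /mnt/fluidfs"
echo "chmod 775 /mnt/fluidfs"
```

Because the export honors the permissions set on its root directory, running chown and chmod once from a trusted client is how access for other NFS clients is granted.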
using the CLI. See the Dell FluidFS Version 5.0 FS8600 Appliance CLI Reference Guide for detailed information about global namespace commands. Global Namespace Limitations • Global namespace is supported on SMB2.x, SMB3.x, and NFSv4.x clients only.
– Names matching any of the following are not allowed: • . and .. • @Internal&Volume!%File – Names that have a suffix of four, or a multiple of three, characters between two ~ signs are not allowed. For example, ~1234~ and ~123123~ are not allowed. Enable or Disable FTP 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. In the File System pane, expand Environment and then click Authentication. 4. In the right pane, click the Protocols tab. 5.
Managing Quota Rules Quota rules allow you to control the amount of NAS volume space a user or group can utilize. Quotas are configured on a per NAS volume basis. When a user reaches a specified portion of the quota size (soft quota limit) an alert is sent to the storage administrator. When the maximum quota size (hard quota limit) is reached, users cannot write data to the SMB shares and NFS exports on the NAS volume, but no alert is generated.
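When choosing soft and hard quota limits, a common convention (an assumption here, not a FluidFS requirement) is to set the soft limit at a fraction of the hard limit so the administrator alert fires well before writes start failing. The values below are examples only.

```shell
# Sketch: derive a soft quota limit from a hard quota limit.
hard_mb=10240                       # hard quota limit: 10 GB expressed in MB
soft_mb=$((hard_mb * 80 / 100))     # soft quota limit: alert at 80% of the hard limit
echo "soft=${soft_mb}MB hard=${hard_mb}MB"
```

With a 10 GB hard limit this yields an 8 GB soft limit, leaving users roughly 2 GB of headroom after the alert is raised.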
Configuring Quota Rules Quota rules allow you to control the amount of NAS volume space a user or group can utilize. View Quota Rules for a NAS Volume View the current quota rules for a NAS volume. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. In the File System tab navigation pane, expand NAS Volumes and select a NAS volume. 4. In the right pane, click the Quotas tab. The quota rules are displayed.
d. (Optional) Configure the remaining user search options as needed. These options are described in the online help. To change the maximum number of search results to return, select the maximum number of search results from the Max Results drop-down menu. e. Click Search. f. Select a user from the search results. g. Click OK. 7. To enable a soft quota limit, select the Soft Quota check box and type a soft quota limit in megabytes (MB), gigabytes (GB), or terabytes (TB) at which an alert will be issued. 8.
To change the maximum number of search results to return, select the maximum number of search results from the Max Results drop-down menu. e. Click Search. f. Select a group from the search results. g. Click OK. 8. To enable a soft quota limit, select the Soft Quota check box and type a soft quota limit in megabytes (MB), gigabytes (GB), or terabytes (TB) at which an alert will be issued. 9.
Create a Directory Quota Rule Quota rules can be set on empty directories only. After the rule is set, it can be edited or deleted, but cannot be turned off. When a rule is deleted, the directory reverts back to normal directory behavior. To create a directory quota rule: 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. In the File System tab navigation pane, expand NAS Volumes and select a NAS volume. 4.
• Data compression – Uses algorithms to reduce the size of stored data. When using data reduction, note the following limitations: • The minimum file size to be considered for data reduction processing is 65 KB. • Because quotas are based on logical rather than physical space consumption, data reduction does not affect quota calculations. • If you disable data reduction, data remains in its reduced state during subsequent read operations by default.
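FluidFS selects data reduction candidates internally, but you can preview which files on a mounted NAS volume would meet the 65 KB minimum and an age threshold using standard tools. The mount point and the 30-day access age below are placeholder examples.

```shell
# Sketch: list files that would plausibly qualify for data reduction
# (at least 65 KB and not accessed for the given number of days).
min_size_kb=65
age_days=30                         # example threshold; configurable per NAS volume
echo "find /mnt/fluidfs -type f -size +${min_size_kb}k -atime +${age_days}"
```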
Configuring Data Reduction Data reduction must be enabled at the system level and configured on a per NAS volume basis. Enable or Disable Data Reduction on the FluidFS Cluster Data reduction must be enabled at the system level before it will run on NAS volumes on which data reduction is enabled. To minimize the impact of data reduction processing on system performance, schedule data reduction to run during off-peak times. 1. Click the Storage view and select a FluidFS cluster. 2.
Change the Candidates for Data Reduction for a NAS Volume Change the number of days after which data reduction is applied to files that have not been accessed or modified for a NAS volume. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. In the File System tab navigation pane, expand NAS Volumes and select a NAS volume. 4. In the right pane, click Edit Settings. The Edit NAS Volume Settings dialog box appears. 5.
28 FluidFS Data Protection This section contains information about protecting FluidFS cluster data. Data protection is an important and integral part of any storage infrastructure. These tasks are performed using the Dell Storage Manager Client. Managing Antivirus The FluidFS cluster antivirus service provides real-time antivirus scanning of files stored in SMB shares. The antivirus service applies only to SMB shares; NFS is not supported.
Configuring AntiVirus Scanning To perform antivirus scanning, you must add an antivirus server and then enable antivirus scanning on a per SMB share basis. NOTE: If any of the external services are configured with IPv6 link-local addresses, the monitor will always show these services as Unavailable. Add an Antivirus Server Add one or more antivirus servers. Add multiple antivirus servers to achieve high-availability of virus scanning, and reduce the latencies for file access.
7. To enable Virus Scan, select the Enabled check box. To disable Virus Scan, clear the Enabled check box.
8. (Optional) If you are enabling Virus Scan, configure the remaining antivirus scanning attributes as needed. These options are described in the online help.
• To exempt directories from antivirus scanning, select the Folders Filtering check box and specify the directories in the Directories excluded from scan list.
– To view the parent folders of a particular folder, click Up.
• To exempt a directory from antivirus scanning by path, type the directory path (for example, /folder/subfolder) in the Folders text field, and then click Add.
• To remove a directory from the antivirus scanning exemption list, select the directory and click Remove.
10. Click OK.
Dedicated FluidFS Replay Profiles For FluidFS deployments, Storage Manager creates a dedicated FluidFS Replay Profile that is automatically assigned to FluidFS LUNs (storage volumes). The profile frequency defaults to Daily, and the retention policy deletes replays after 25 hours. Creating On-Demand Snapshots Create a NAS volume snapshot to take an immediate point-in-time copy of the data. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3.
Change the Snapshot Frequency for a Snapshot Schedule Change how often to create snapshots for a snapshot schedule. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. In the File System tab navigation pane, expand NAS Volumes and select a NAS volume. 4. In the right pane, click the Snapshots & Clones tab. 5. Select a snapshot schedule and click Edit Settings. The Edit Snapshot Schedule dialog box appears. 6. Specify when to create snapshots. 7. Click OK.
4. In the right pane, click the Snapshots & Clones tab. 5. Select a snapshot and click Edit Settings. The Edit Snapshot Settings dialog box appears. 6. In the Name field, type a new name for the snapshot. 7. Click OK. Change the Retention Policy for a Snapshot Specify whether to retain the snapshot indefinitely or expire the snapshot after a period of time. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3.
Restore a NAS Volume from a Snapshot The storage administrator can restore an entire NAS volume from a snapshot. The restored NAS volume will contain all the NAS volume data that existed at the time the snapshot was created. Each file in the restored NAS volume will have the properties, such as permission and time, that existed when you (or a schedule) created the snapshot.
Managing NDMP The FluidFS cluster supports Network Data Management Protocol (NDMP), which is an open standard protocol that facilitates backup operations for network attached storage, including FluidFS cluster NAS volumes. NDMP should be used for longer-term data protection, such as weekly backups with long retention periods.
Figure 58. Two-Way configuration NOTE: If a controller loses connectivity to the tape, the NDMP session assigned to the controller fails. Configuring and Adjusting NDMP Two-Way Backup Tape Connectivity You must define the zoning so that the FC-attached tape drive can be seen by the HBAs on all NAS controllers. Drives must be available through every HBA port so that you can choose which port to use for each backup and balance the load between HBA ports.
To work around this problem, change the behavior during backup. If a backup is started with the DEREF_HARD_LINK environment variable set to Y, the backup will back up all instances of the hard link files as if they were regular files, rather than just backing up the first instance of the hard link files. In this case, a selective restore will always have the file data. The disadvantage of this option is that backups might take longer and more space is required to back up a data set with hard link files.
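Before deciding whether to set DEREF_HARD_LINK, it can help to see which files in the data set actually have multiple hard links, since those are the files affected. A generic sketch using the standard find utility (the scan path is a placeholder):

```shell
#!/bin/sh
# List regular files with more than one hard link under a path;
# these are the files affected by the DEREF_HARD_LINK behavior.
# find's -links +1 matches files whose link count is 2 or more.
hard_linked_files() {
    find "$1" -type f -links +1
}
```

Each instance of a hard-linked file is listed, which also hints at how much extra space a DEREF_HARD_LINK=Y backup would consume.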
The following NDMP environment variables are supported; each entry lists the operations it is used in and its default value.
• TYPE — Specifies the type of backup and restore application. The valid values are dump (the NDMP server generates inode-based file history) and tar (the NDMP server generates file-based file history). Used in: Backup and Restore. Default value: dump.
• FILESYSTEM — Specifies the path to be used for the backup. The path must be a directory. Used in: Backup. Default value: None.
• LEVEL — Specifies the dump level for the backup operation. The valid values are 0 to 9. Used in: Backup.
data set). If this feature is not important in your environment, this variable should not be set.
• BASE_DATE — Specifies whether a token-based backup is performed. Token-based backup is used by Tivoli Storage Manager as an alternative to backups using the LEVEL environment variable. The valid values are -1 (token-based backup is disabled) and 0 (a token-based backup is performed). Used in: Backup. Default value: -1.
• DEREF_HARD_LINK — Specifies whether all instances of hard-linked files are backed up as regular files (set to Y to dereference hard links). Used in: Backup. Default value: N.
5. In the right pane, click Change Backup User Password. The Change Backup User Password dialog box appears. 6. In the Password field, type an NDMP password. The password must be at least seven characters long and contain at least three of the following elements: a lowercase character, an uppercase character, a digit, or a special character (such as +, ?, or *). 7. In the Confirm Password field, retype the NDMP password. 8. Click OK.
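The password rule above can be sanity-checked locally before the dialog rejects it; a minimal sketch that encodes exactly the stated policy (at least seven characters and at least three of the four character classes):

```shell
#!/bin/sh
# Check an NDMP password candidate against the stated policy:
# >= 7 characters and >= 3 of: lowercase, uppercase, digit, special.
ndmp_password_ok() {
    pw=$1
    [ "${#pw}" -ge 7 ] || return 1
    classes=0
    case $pw in *[a-z]*) classes=$((classes + 1)) ;; esac
    case $pw in *[A-Z]*) classes=$((classes + 1)) ;; esac
    case $pw in *[0-9]*) classes=$((classes + 1)) ;; esac
    case $pw in *[!a-zA-Z0-9]*) classes=$((classes + 1)) ;; esac
    [ "$classes" -ge 3 ]
}
```

The function returns success (0) for a compliant password and failure otherwise, so it can gate a provisioning script.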
To improve data transfer speed, increase the number of concurrent backup jobs to more than one per NAS controller, distributing the load across the available NAS controllers. NDMP Include/Exclude Path When you define a backup using DMA, you can select specific directories from the virtual NAS volume to include in, or exclude from, backup jobs. Requirements The following requirements must be met to include or exclude NDMP paths: • The path specified can be a directory or a file.
View NDMP Events View events related to NDMP backups. 1. Click the Storage view. 2. In the Storage pane, select a FluidFS cluster. 3. Click the System tab. 4. In the System tab navigation pane, select Connections. 5. In the right pane, select NDMP Backups. 6. In the right pane, click the NDMP Events tab. The NDMP events are displayed.
Replication Scenarios Description Disaster recovery Mirrors data to remote locations for failover during a disaster Configuring replication is a three-step process: • Add a replication partnership between two FluidFS clusters. • Add replication for a NAS volume. • Run replication on demand or schedule replication. How Replication Works Replication leverages snapshots. The first time you replicate a NAS volume, the FluidFS cluster copies the entire contents of the NAS volume.
Figure 62.
After a partner relationship is established, replication between the partners can be bidirectional. One system could hold target NAS volumes for the other system as well as source NAS volumes to replicate to that other system. A replication policy can be set up to run according to a set schedule or on demand. Replication management flows through a secure SSH tunnel from system to system over the client network.
4. In the right pane, click the Remote Cluster tab, select a remote cluster, then click Edit Settings. The Edit Settings dialog box appears. 5. Configure the VIP of the remote cluster and the port to use for replication (10560 or 3260). The chosen port must be open in any firewall between the clusters. 6. Click OK. Delete a Replication Partnership When you delete a replication partnership, the replication relationship between the source and target FluidFS clusters is discontinued.
6. Enter a name and choose the bandwidth limit for the node in KB/s. 7. Click OK. 8. The Edit Replication QoS Schedule dialog box appears. 9. Drag the mouse to select an area, right-click on it, and choose the percentage of the bandwidth limit to allow in these day and hour combinations. 10. Click OK. Change a QoS Node Change a QoS (Quality of Service) node (network level) of outgoing traffic bound to a replication. 1. Click the Storage view and select a FluidFS cluster. 2.
Replicating NAS Volumes You can perform manual and scheduled replication operations, and pause, resume, delete, and monitor replication. Add Replication for a NAS Volume Adding replication creates a replication relationship between a source NAS volume and a target NAS volume. After adding replication, you can set up a replication policy to run according to a set schedule or on demand. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3.
Run Replication On Demand After a replication is created, you can replicate a NAS volume on demand. You can run replication only from the source FluidFS cluster. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. In the File System tab navigation pane, expand NAS Volumes and select a NAS volume. 4. In the right pane, click the Replications tab. 5. In the Replication Status area, click Start Manual Replication. The Start Manual Replication dialog box appears.
Pause Replication When you pause replication, any replication operations for the NAS volume that are in progress are suspended. While replication is paused, scheduled replications do not take place. If you require multiple replications to be paused, perform the following steps for each replication. You can pause replication only from the source FluidFS cluster. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3.
replication operations for the NAS volume that are in progress are suspended. You can promote a target NAS volume from either the source or target FluidFS cluster. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. In the File System tab navigation pane, expand NAS Volumes and select a NAS volume. 4. In the right pane, click the Replication tab. 5. In the Replication Status area, click Promote Destination. The Promote Destination dialog box appears. 6.
Marketing can access the Marketing NAS volume or SMB share using \\FluidFSmarketing\marketing, and Sales can access the Sales NAS volume or SMB share using \\FluidFSsales\sales. Initially, both DNS entries, FluidFSmarketing and FluidFSsales, point to the same set of client VIPs. At this point, both the marketing and sales SMB shares can be accessed from either one of the DNS names, FluidFSmarketing or FluidFSsales.
5. Ensure that Cluster B is used to temporarily serve client requests during the failover time. a. Choose one of the following options: • IP address-based failovers: Change the IP addresses for Cluster B to match the IP addresses used by Cluster A. Existing client connections might break and might need to be re-established. • DNS-based failovers: Point the DNS names from your DNS server to Cluster B instead of Cluster A.
• IP address-based failovers: Change the IP addresses for Cluster A to match the IP addresses originally used by Cluster A and change the IP addresses for Cluster B to match the IP addresses originally used by Cluster B. Existing client connections might break and might need to be re-established. • DNS-based failovers: Point the DNS names from your DNS server to Cluster A instead of Cluster B.
29 FluidFS Monitoring This section contains information about monitoring the FluidFS cluster. These tasks are performed using the Dell Storage Manager Client. Monitoring NAS Appliance Hardware Storage Manager displays an interactive, graphical representation of the front and rear views of NAS appliances.
Figure 64. Appliance View Tab Tool Tip
7. To adjust the zoom on the NAS appliance diagram, change the position of the zoom slider located to the right of the NAS appliance diagram.
• To zoom in, click and drag the zoom slider up.
• To zoom out, click and drag the zoom slider down.
8. To move the NAS appliance diagram in the Controller View tab, click and drag the NAS appliance diagram.
Figure 66. Controller View Tab 5. To view more information about hardware components in the NAS controller diagram, mouse over a hardware component in the NAS controller diagram. A tool tip appears and displays information including the name and status of the hardware component. The following graphic shows an example of a tool tip that appears after hovering the mouse cursor over a network port. Figure 67. Controller View Tab Tool Tip 6.
View the Status of the Fans View the status of the fans in a NAS appliance. 1. Click the Storage view and select a FluidFS cluster. 2. Click the Hardware tab. 3. In the Hardware tab navigation pane, expand Appliances→ Appliance ID, then select Fans. The status of each fan is displayed in the right pane. View the Status of the Power Supplies View the status of the power supplies in a NAS appliance. 1. Click the Storage view and select a FluidFS cluster. 2. Click the Hardware tab. 3.
Viewing NAS Volume Storage Usage Storage Manager displays a line chart that shows storage usage over time for a particular NAS volume, including NAS volume size, used space, snapshot space, unused reserved space, and unused unreserved space. 1. Click the Storage view. 2. In the Storage pane, select a FluidFS cluster. 3. Click the File System tab. 4. In the File System tab navigation pane, expand NAS Volumes and select a NAS volume. 5. In the right pane, click the Historical Storage Usage tab.
• To combine the data into a single chart with multiple Y axes, click Combine Charts. • To change the data metrics to display, select one or more of the following data metrics: – Total MB/Sec: Displays all read and write traffic in Megabytes per second. – SMB Write MB/Sec: Displays SMB write traffic in Megabytes per second. – SMB Read MB/Sec: Displays SMB read traffic in Megabytes per second. – NDMP Write MB/Sec: Displays NDMP write traffic in Megabytes per second.
30 FluidFS Maintenance This section contains information about performing FluidFS cluster maintenance operations. These tasks are performed using the Dell Storage Manager Client. Connecting Multiple Data Collectors to the Same Cluster You can have multiple data collectors connected to the same FluidFS cluster. To designate the Primary data collector and/or whether it receives events: 1. Click the Storage view and select a FluidFS cluster. 2. Click the Summary tab. 3.
Remove a FluidFS Cluster From Storage Manager Remove a FluidFS cluster if you no longer want to manage it using Storage Manager. For example, you might want to move the FluidFS cluster to another Storage Manager Data Collector. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. In the right pane, click Delete. The Delete dialog box appears. 4. Click OK.
5. Click OK. Delete a FluidFS Cluster Folder Delete a FluidFS cluster folder if it is unused. Prerequisite The folder must be empty. Steps 1. Click the Storage view and select a FluidFS cluster folder. 2. Click the Summary tab. 3. In the right pane, click Delete. The Delete dialog box appears. 4. Click OK. Adding a Storage Center to a FluidFS Cluster The back-end storage for a FluidFS cluster can be provided by up to two Storage Centers.
e. Click Next. 9. Use the Connectivity Report page to verify connectivity between the FluidFS cluster and the Storage Center. The NAS controller ports must show the status Up before you can complete the wizard. If you click Finish and the NAS controller ports do not have the status Up, an error will be displayed.
a. Cable the new NAS appliance(s) to the internal switch.
b. Remove just one of the internal cables from the original NAS appliance.
c. Connect a cable from each NAS controller port vacated in Step b to the internal switch.
d. Remove the second internal cable from the original NAS appliance.
e. Connect a cable from each NAS controller port vacated in Step d to the internal switch.
2. Click the Storage view and select a FluidFS cluster. 3. Click the Hardware tab. 4.
Then, click Refresh to update the Connectivity Report. When the zoning is configured correctly and the Connectivity Report has been refreshed, the status for each FluidFS cluster HBA shows Up. 14. Click Finish.
Steps 1. Click the Storage view and select a FluidFS cluster. 2. Click the Hardware tab. 3. In the Hardware tab navigation pane, expand Appliances→ NAS appliance ID, then select NAS controller ID. 4. In the right pane, click Attach. The Attach dialog box appears. 5. Click OK. The progress of the attach process is displayed in the Attach dialog box. If you close the dialog box, the process will continue to run in the background.
Managing Service Packs The FluidFS cluster uses a service pack methodology to upgrade the FluidFS software. Service packs are cumulative, meaning that each service pack includes all fixes and enhancements provided in earlier service packs. View the Upgrade History View a list of service pack upgrades that have been installed on the FluidFS cluster. 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. In the File System tab navigation pane, select Maintenance. 4.
Prerequisites • Contact Dell Technical Support to make service packs available for download to the FluidFS cluster. • The Storage Manager Data Collector must have enough disk space to store the service pack. If there is not enough space to store the service pack, a message will be displayed shortly after the download starts. You can delete old service packs to free up space if needed. • Installing a service pack causes the NAS controllers to reboot during the installation process.
NOTE: The installation process is a long-running operation. If you close the wizard, the installation process will continue to run in the background. You can view the installation progress using the File System tab→Maintenance → Internal→ Background Processes tab.
Restore the NAS Volume Configuration When you restore a NAS volume configuration, it overwrites and replaces the existing configuration. Clients that are connected to the FluidFS cluster are disconnected. Clients will then automatically reconnect to the FluidFS cluster. 1. Ensure the .clusterConfig folder has been copied to the root folder of the NAS volume on which the NAS volume configuration will be restored.
3. Click the File System tab and select Authentication. 4. In the right pane, click the Local Users and Groups tab. 5. Click Restore Local User. The Restore Local Users dialog box appears. 6. From the Backup Source drop-down menu, select the backup from which to restore local users. 7. Click OK. Restoring Local Groups Restoring the local groups configuration provides an effective way to restore all local groups without having to manually reconfigure them.
Reinstalling FluidFS from the Internal Storage Device Each NAS controller contains an internal storage device from which you can reinstall the FluidFS factory image. If you experience general system instability or a failure to boot, you might have to reinstall the image on one or more NAS controllers. Prerequisites • If the NAS controller is still an active member in the FluidFS cluster, you must first detach it.
31 FS Series VAAI Plugin The VAAI plugin allows ESXi hosts to offload some specific storage-related tasks to the underlying FluidFS appliances.
5. Install the plugin by typing the following command:
~ # esxcli software vib install -v file:///tmp/FluidFSNASVAAI_For_Esx_v5.5.vib
6. Reboot the ESXi host.
Plugin Verification To check whether the VAAI plugin is installed on an ESXi host, type the following command in the ESXi console: # esxcli software vib list | grep Dell_FluidFSNASVAAI A positive reply should return: Dell_FluidFSNASVAAI 1.1.
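The verification step lends itself to scripting; the helper below simply scans captured `esxcli software vib list` output for the plugin name (the sample output line in the usage is illustrative, not a guaranteed exact format):

```shell
#!/bin/sh
# Return 0 if captured `esxcli software vib list` output (stdin)
# contains the FluidFS VAAI plugin entry, non-zero otherwise.
vaai_plugin_installed() {
    grep -q 'Dell_FluidFSNASVAAI'
}
```

On the ESXi host this would be invoked as `esxcli software vib list | vaai_plugin_installed && echo "plugin present"`.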
32 FluidFS Troubleshooting This section contains information about troubleshooting problems with the FluidFS cluster. These tasks are performed using the Dell Storage Manager Client. Viewing the Event Log A FluidFS cluster generates events when normal operations occur and also when problems occur. Events allow you to monitor the FluidFS cluster, detect and solve problems. Events are logged to the Event Log. View the Event Log View events contained in the Event Log. 1.
• To prevent the search from wrapping, clear the Wrap check box. NOTE: By default, when a search reaches the bottom of the list and Find Next is clicked, the search wraps around to the first match in the list. When a search reaches the top of the list and Find Previous is clicked, the search wraps around to the last match in the list.
5. Set additional search options as needed: • To match whole phrases within the events, select the Full Match check box. • To highlight all of the matches of the search, select the Highlight check box.
7. Enter any requested diagnostic parameters and click OK. The diagnostic parameters are described in the online help. After the diagnostics have been run, Storage Manager will send diagnostic data using Dell SupportAssist. Related link Managing the FTP Server Run Embedded System Diagnostics on a NAS Controller The embedded system diagnostics (also known as Enhanced Pre-boot System Assessment (ePSA) diagnostics) provide a set of options for particular device groups or devices.
Configuring the BMC Network You can configure the baseboard management controller (BMC) local area network (LAN) port to provide KVM (keyboard, video, and mouse) service for the FluidFS controller serial console I/O. The BMC KVM service enables the administrator or support engineer to access the FluidFS console I/O to troubleshoot various issues over a computer network. The FluidFS appliance hardware provides a special physical port known as the Lights-Out Management (LOM) port.
c. In the Username field, type ADMIN. d. In the Password field, type the iBMC password. e. Click OK. The iBMC Properties page appears. 3. Launch the iBMC virtual KVM. a. In the navigation pane, expand vKVM & vMedia and click Launch. b. In the right pane, click Launch Java KVM Client. The Video Viewer appears and displays the FluidFS cluster console. Troubleshooting Common Issues This section contains probable causes of and solutions to common problems encountered when using a FluidFS cluster.
Cause • Unable to ping the domain using a FQDN. • DNS might not be configured. • NTP might not be configured. Workaround When configuring the FluidFS cluster to connect to an Active Directory domain: 1. Ensure that you use a FQDN and not the NetBIOS name of the domain or the IP address of the domain controller. 2. Ensure that the user has permissions to add systems to the domain. 3. Use the correct password. 4. Configure DNS. 5. Ensure that the FluidFS cluster and the Active Directory server use a common source of time.
If the backup appliance can connect to a FluidFS cluster, but cannot log in: 1. Use the default user name “backup_user” configured in Storage Manager for the NDMP client while setting up the NDMP backup/restore in your backup application. 2. Use the password configured in Storage Manager for the NDMP client while setting up the NDMP backup/restore in your backup application.
SMB Client Clock Skew Description SMB client clock skew errors. Cause The client clock must be within 5 minutes of the Active Directory clock. Workaround Configure the client to clock-synch with the Active Directory server (as an NTP server) to avoid clock skew errors. SMB Client Disconnect on File Read Description The SMB client is disconnected on file read. Cause Extreme SMB workload during NAS controller failover. Workaround The client needs to reconnect and open the file again.
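The five-minute tolerance in the SMB Client Clock Skew issue above can be checked numerically once you have epoch timestamps from the client and the Active Directory server (how the server timestamp is obtained, for example via an NTP query, is outside this sketch):

```shell
#!/bin/sh
# Return 0 if two epoch timestamps (in seconds) differ by no more
# than the 5-minute (300-second) tolerance, non-zero otherwise.
within_clock_skew() {
    diff=$(( $1 - $2 ))
    [ "$diff" -lt 0 ] && diff=$(( 0 - diff ))
    [ "$diff" -le 300 ]
}
```

A client-side check might compare `date +%s` against the server's reported time before troubleshooting authentication failures further.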
Workaround The system recovers automatically; an event is issued when it has recovered. SMB Maximum Connections Reached Description The maximum number of SMB connections per NAS controller has been reached. Cause Each NAS appliance is limited to a certain number of connections. Workaround If the system is in an optimal state (all NAS controllers are online) and the number of SMB clients accessing one of the NAS controllers reaches the maximum, consider adding another NAS appliance.
2. List all available SMB shares on the FluidFS cluster and identify the problematic SMB share. It must have an indication that it is not accessible.
3. Take the applicable action:
• Manually create the missing directories to enable access.
• If clients receive errors when trying to access existing data in a deleted path, remove the SMB share and communicate this to the client.
SMB Write to Read Only NAS Volume Description A client tries to modify a file on a read-only NAS volume.
If the FluidFS cluster is not responding due to a port mapper failure: • • • Check the FluidFS cluster status. Check the network connection by trying to NFS mount from some other system. Verify whether other clients experience the same problem. If the FluidFS cluster is not responding due to the program not being registered, check if the port mapper on your client is up.
• If a secure NFS export is not required (for example, the network is not public), ensure that the export is insecure and retry accessing it. NFS Mount Fails Due to Export Options Description This event is issued when an NFS mount fails due to export options. Cause The export list filters client access by IP address, network, or netgroup, and screens the accessing client. Workaround 1. Verify the relevant NFS export details. Write down all existing options so that you are able to revert to them.
2. List all available NFS exports on the FluidFS cluster and identify the problematic NFS export. It must have an indication that it is not accessible.
3. Take the applicable action:
• Manually create the missing directories to enable the mount.
• If clients receive errors when trying to access existing data in a deleted path, remove the NFS export and communicate this to the client.
NFS Owner Restricted Operation Description An NFS client is not permitted to perform the requested action to the specific file.
Workaround A possible way to verify this problem is to use newgrp to temporarily change the primary group of the user and thus ensure it is passed to the server. The simple workaround, although not always feasible, is to remove the user from unnecessary groups, leaving 16 groups or fewer. Troubleshoot NAS File Access and Permissions Issues This section contains probable causes of and solutions to common NAS file access and permissions problems.
Strange UID and GID Numbers on Dell NAS System Files Description New files created from Ubuntu 7.x clients get the UID and GID of 4294967294 (nfsnone). Cause By default, Ubuntu 7.x NFS clients do not specify RPC credentials on their NFS calls. As a result, files created from these clients, by any user, are owned by 4294967294 (nfsnone) UID and GID. Workaround To force UNIX credentials on NFS calls, add the sec=sys option to the FluidFS cluster mounts in the Ubuntu fstab file.
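In the Ubuntu client's /etc/fstab, the resulting entry looks like the following (cluster name, export path, and mount point are placeholders):

```
# Force AUTH_SYS (UNIX) credentials on NFS calls to the FluidFS cluster
fluidfs-cluster:/nfs_export  /mnt/nas  nfs  defaults,sec=sys  0  0
```

After editing fstab, remount the export so that subsequent file creations carry the client user's real UID and GID.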
Troubleshoot Replication Issues This section contains probable causes of and solutions to common replication problems. Replication Configuration Error Description Replication between the source and target NAS volumes fails because the source and target FluidFS cluster topologies are incompatible. Cause The source and target systems are incompatible for replication purposes. Workaround Verify that both the source and target have the same number of NAS controllers.
Workaround The replication continues automatically when the space is available. Verify that the replication automatically continues after a period of time (an hour). Replication Target Volume is Detached Description Replication between the source NAS volume and the target NAS volume fails because the target NAS volume is detached from the source NAS volume. Cause Replication fails because the target NAS volume was previously detached from the source NAS volume.
Workaround The replication continues automatically when the file system releases part of the resources. Verify that the replication automatically continues after a period of time (an hour). Replication Source is Down Description Replication between the source NAS volume and the target NAS volume fails because the file system of the source NAS volume is down. Cause The file system of the source NAS volume is down. Workaround Check whether the FluidFS cluster is down in the source system.
Troubleshooting System Shutdown Description During a system shutdown using Storage Manager, the system does not stop and the NAS controllers do not shut down after 20 minutes. Cause The system shutdown procedure consists of two separate processes: • Stopping the file system • Powering down the NAS controllers The file system might take a long time to flush the cache to storage, either due to a large amount of data or due to an intermittent connection to the storage.
Workaround
• Connect a keyboard and monitor to the NAS controller that is taking a long time to boot up.
• If the system is booting and is at the boot phase, let the upgrades finish. This can take up to 60 minutes to complete.
• Do not reboot the NAS controller manually if it is in the boot phase.
Part V Storage Center Disaster Recovery This section describes how to prepare for disaster recovery and activate disaster recovery when needed. It also contains instructions about using the Dell Storage Replication Adapter (SRA), which allows sites to use VMware vCenter Site Recovery Manager with Storage Centers.
33 Remote Storage Centers and Replication QoS A remote Storage Center is a Storage Center that is configured to communicate with the local Storage Center over the Fibre Channel and/or iSCSI transport protocols. Replication Quality of Service (QoS) definitions control how bandwidth is used to send replication and Live Volume data between local and remote Storage Centers.
2. In the Storage tab navigation pane, select Remote Storage Centers.
3. In the right pane, click Configure iSCSI Connection. The Configure iSCSI Connection wizard opens.
• From a PS Group, select Actions → Replication → Configure iSCSI Connection. The Configure iSCSI Connection wizard opens.
4. Select the Storage Center or PS Group for which you want to configure an iSCSI connection, then click Next. The wizard advances to the next page.
5. Select iSCSI controller ports and select the network speed.
7. When you are done, click Finish. Creating and Managing Replication Quality of Service Definitions Replication Quality of Service (QoS) definitions control how bandwidth is used for replications, Live Volumes, and Live Migrations. Create a QoS definition before you create a replication, Live Volume, or Live Migration. Create a QoS Definition Create a QoS definition to control how bandwidth is used to send replication and Live Volume data between local and remote Storage Centers.
Enable or Disable Bandwidth Limiting for a QoS Definition Use the Edit Settings dialog box to enable or disable bandwidth limiting for a QoS Definition. 1. Click the Replications & Live Volumes view. 2. Click the QoS Nodes tab, then select the QoS definition. 3. In the right pane, click Edit Settings. The Edit Replication QoS dialog box appears. 4. Select or clear the Bandwidth Limited check box. 5. Click OK.
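When choosing a bandwidth limit for a QoS definition, a back-of-the-envelope transfer-time estimate is useful; a sketch in shell integer arithmetic (the inputs are examples, and a real replication moves only changed data after the initial pass):

```shell
#!/bin/sh
# Estimate whole hours needed to move a given amount of data at a
# QoS bandwidth limit. Size in MB, limit in KB/s (as in the dialog).
replication_hours() {
    size_mb=$1
    limit_kbs=$2
    # MB -> KB, divide by KB/s for seconds, then by 3600 for hours
    echo $(( size_mb * 1024 / limit_kbs / 3600 ))
}
```

For instance, a 100 GB initial copy at a 10240 KB/s limit works out to roughly 2 hours, before protocol overhead.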
34 Storage Center Replications and Live Volumes A replication copies volume data from one Storage Center to another Storage Center to safeguard data against local or regional data threats. A Live Volume is a replicating volume that can be mapped and active on a source and destination Storage Center at the same time. Storage Center Replications A Storage Center can replicate volumes to a remote Storage Center and simultaneously be the target of replication from a remote Storage Center.
Asynchronous Replication Asynchronous replication copies snapshots from the source volume to the destination volume after they are frozen. NOTE: By default, data is replicated from the source volume to the lowest storage tier of the destination volume. To change this default, modify the settings for a replication.
Requirement Description • Asynchronous replication: Version 5.5 or later Storage Center license The source and destination Storage Centers must be licensed for Remote Instant Snapshot. Storage Manager configuration The source and destination storage system must be added to Storage Manager Data Collector. NOTE: Replications cannot be created or managed when the Dell Storage Manager Client is directly connected to a Storage Center.
Topology Limitations for Volumes Associated with Multiple Replications

The following limitations apply to volumes that are associated with multiple replications.
• Only one synchronous replication can be configured per source volume. Subsequent replications must be asynchronous.
• For cascade mode (replications configured in series), only the first replication can be a synchronous replication. Subsequent replications in the series must be asynchronous.
• If a QoS definition has not been created, the Create Replication QoS wizard appears. Use this wizard to create a QoS definition before you configure replication.
4. In the Simulate Volume(s) to Replicate table, select the volume(s) for which you want to simulate replication, then click Next. The wizard advances to the next page.
5. (Optional) In the Replication Attributes area, modify default settings that determine how replication behaves.
6. Click Next. The wizard advances to the next page.
7.
5. In the right pane, click Replicate Volume. • If one or more QoS definitions exist, the Create Replication wizard appears. • If a QoS definition has not been created, the Create Replication QoS wizard appears. Use this wizard to create a QoS definition before you configure replication. NOTE: If the volume is a replication destination, Replication QoS settings are enforced. If the volume is a Live Volume secondary, the Replication QoS settings are not enforced. 6.
Related link
Replication Requirements
Replication Types

Migrating Volumes to Another Storage Center

Migrating a volume to another Storage Center moves the data on that volume to a volume on another Storage Center. Successfully migrating a volume mapped to a server with minimal down-time consists of the following steps.
NOTE: This method is the only way to migrate volumes for SCv2000 Storage Centers and Storage Centers running version 7.0 or earlier. For other Storage Centers running version 7.
i. Click OK.
j. Click Finish.

Modifying Replications

Modify a replication if you want to enable or disable replication options, convert it to a Live Volume, or delete it.

Change the Type for a Replication

A replication can be changed from synchronous to asynchronous or asynchronous to synchronous with no service interruption.
Prerequisite
The source and destination Storage Centers must be running version 6.5 or later.
Steps
1. Click the Replications & Live Volumes view.
2.
Select a Different QoS Definition for a Replication Select a different QoS definition for a replication to change how the replication uses bandwidth. 1. Click the Replications & Live Volumes view. 2. On the Replications tab, select the replication, then click Edit Settings. The Edit Replication Settings dialog box appears. 3. From the QoS Node drop-down menu, select a QoS definition. 4. Click OK.
Steps 1. Click the Replications & Live Volumes view. 2. On the Replications tab, select the replication, then click Convert to Live Volume. The Convert to Live Volume dialog box appears. 3. Modify the Live Volume attributes as necessary. These attributes are described in the online help. 4. When you are finished, click OK.
Related link Managed Replications for Live Volumes View the Snapshots for a Replication When a replication is selected, the Snapshots subtab displays the snapshots for the source volume and the destination volume. 1. Click the Replications & Live Volumes view. 2. On the Replications tab, select the replication. 3. In the bottom pane, click the Snapshots tab.
3. Click the Storage tab.
4. From the Storage tab navigation pane, select a volume.
5. Click Replicate Volume.
6. Select a remote storage system from the table.
7. Click Next. If a remote iSCSI connection is not configured, the Configure iSCSI Connection wizard opens. For instructions on setting up a remote iSCSI connection, see Configure an iSCSI Connection for Remote Storage Systems.
8. Configure the replication settings as needed.
NOTE: For information on the replication settings, click Help.
• If one or more QoS definitions exist, the Create Replication wizard appears. • If a QoS definition has not been created, the Create Replication QoS wizard appears. Use this wizard to create a QoS definition before you configure replication. NOTE: If the volume is a replication destination, Replication QoS settings are enforced. If the volume is a Live Volume secondary, the Replication QoS settings are not enforced. 6.
4. From the Storage tab navigation pane, select a volume. The volume must be the source of a replication relationship.
5. Click Create Schedule. The Create Schedule dialog box opens.
6. Select the Enable Schedule check box.
7. In the Name field, type a name for the schedule.
8. From the Frequency drop-down menu, select Daily Schedule.
9. Select the Replication Schedule radio button.
10. From the Start Date drop-down menu, select the start date of the schedule.
11.
Enable or Disable a Replication Schedule After creating a replication schedule, enable or disable the schedule to allow the schedule to initiate replications or prevent the schedule from initiating replications. 1. Click the Storage view. 2. In the Storage pane, select a PS Group. 3. Click the Storage tab. 4. From the Storage tab navigation pane, select a volume. The volume must be the source of a replication relationship. 5.
– SC7020 – SC7020F NOTE: SCv2000 and SCv3000 series controllers do not support Portable Volume. Portable Volume Process The general process of using portable volume disks includes: 1. Connecting the portable volume disk(s) to the source Storage Center. 2. Choosing the volumes that you want to transfer to the remote Storage Center. Selected volumes are copied to the portable volume disk(s), creating a replication baseline for each volume. 3.
Portable Volume Nodes When a portable volume disk is connected to a Storage Center or a Storage Center is the source or destination for a replication baseline, the Portable Volumes node appears in the Storage tab navigation pane. The following table describes the nodes that can appear under the Portable Volumes node. Portable Volume Node Description Unassigned Shows portable volume disks on the Storage Center that are currently unassigned.
Choose Volumes to Transfer to the Destination Storage Center On the source Storage Center, use the Start Replication Baseline wizard to select the destination Storage Center, the volumes that will be transferred, and the portable volume disk(s) that will transport the replication baselines for the volumes. 1. Click the Storage view. 2. In the Storage pane, select a Storage Center. 3. Click the Storage tab. 4. In the Storage tab navigation pane, select Portable Volumes. Figure 69.
– If you are using Dell USB disks, disconnect them and connect the remaining disks. Add new disks to the Portable Volume node. – If you are using Dell RD1000 disk bays, eject the full disk cartridges and insert new disk cartridges. Add new disks to the Portable Volume node. NOTE: If a new portable volume disk is added to an Invalid node, it contains data for a different transfer. If the data is not needed, erase the disk before adding it to the Portable Volume node.
Figure 72. Portable Volumes Unassigned Node NOTE: The Portable Volumes node appears only if one or more portable volume disks are present on the Storage Center. 6. If the portable volume disk(s) contain old or invalid data, erase them. a. In the Storage tab navigation pane, select the portable volume disk. b. In the right pane, click Erase. The Erase Portable Volume dialog box appears. c. Select an Erase Type, then click Yes. 7. In the right pane, click Manage Portable Volume Disks.
b. When you are done, click Finish. Modify the Portable Volume Schedule The portable volume Schedule allows you to define when portable volume copy and restore operations are allowed and set a priority value (Not Allowed, Low, Medium, or High) for the operations. By default, the portable volume schedule does not restrict portable volume copy/restore operations. 1. Click the Storage view. 2. In the Storage pane, select a Storage Center. 3. Click the Storage tab. 4.
• Repl Baseline From [ ]
5. In the right pane, click Edit Encryption Security Key. The Edit Encryption Security Key dialog box appears.
6. In the Encryption Security Key field, type a new security key, then click OK.

Rename a Portable Volume Disk

You can change the name assigned to the portable volume USB disk.
1. Click the Storage view.
2. In the Storage pane, select a Storage Center.
3. Click the Storage tab.
4. In the Storage tab navigation pane, select the portable volume disk.
5.
Cancel a Portable Volume Disk Restore Operation You can cancel the operation to restore a replication baseline from a portable volume disk to the destination Storage Center. 1. Click the Storage view. 2. In the Storage pane, select the destination Storage Center. 3. Click the Storage tab. 4. In the Storage tab navigation pane, select Repl Baseline From [ ]. 5.
Live Volume Types

Live Volumes can be created using asynchronous replication or synchronous replication. The following table compares the Storage Center version requirements and features of each Live Volume type.

Live Volume Type  Storage Center          Snapshot Support  Active Snapshot Support  Deduplication Support
Asynchronous      Version 5.5 and later   Yes               Yes                      Yes
Synchronous       Version 6.
Live Volume Before Swap Role In the following diagram, the primary Storage Center is on the left and the secondary Storage Center is on the right. Figure 74. Example Live Volume Configuration 1. Server 2. Server IO request to primary volume over Fibre Channel or iSCSI 3. Primary volume 4. Live Volume replication over Fibre Channel or iSCSI 5. Secondary volume 6. Server IO request to secondary volume (forwarded to primary Storage Center by secondary Storage Center) 7.
Automatic Swap Role for Live Volumes

Live Volumes can be configured to swap primary and secondary volumes automatically when certain conditions are met to avoid situations in which the secondary volume receives more IO than the primary volume.

Attributes that Control Swap Role Behavior

When automatic swap role is enabled, the following limits determine when a role swap occurs.
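A hedged sketch of how such limits might gate an automatic role swap; the parameter names and default values below are illustrative assumptions, not actual Storage Center attribute names or defaults:

```python
def should_swap_roles(total_io, secondary_io, minutes_as_primary,
                      min_io=100, min_secondary_pct=60, min_minutes=30):
    """Decide whether primary and secondary volumes should swap roles.

    Three hypothetical limits gate the swap: a minimum amount of IO (so
    idle volumes never swap), a minimum time as primary (so roles do not
    flap back and forth), and a minimum percentage of IO arriving at the
    secondary (the condition the feature exists to correct).
    """
    if total_io < min_io:
        return False                 # too little IO to justify a swap
    if minutes_as_primary < min_minutes:
        return False                 # avoid rapid back-and-forth swapping
    secondary_pct = 100.0 * secondary_io / total_io
    return secondary_pct >= min_secondary_pct
```

The percentage check is what prevents the situation described above, where the secondary Storage Center forwards most of the server IO to the primary across the replication link.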
Tiebreaker

The tiebreaker is a service running on the Data Collector that prevents the primary and secondary Live Volumes from simultaneously becoming active. If the secondary Storage Center cannot communicate with the primary Storage Center, it consults the tiebreaker to determine if the primary Storage Center is down. If the primary Storage Center is down, the secondary Live Volume activates.
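The failover decision the tiebreaker enables can be sketched as follows (illustrative only; the real service runs on the Data Collector and tracks much more state):

```python
def secondary_should_activate(primary_reachable, tiebreaker_says_primary_down):
    """Sketch of the failover decision made at the secondary Storage Center.

    The tiebreaker breaks the tie so that the primary and secondary Live
    Volumes can never both become active at once: losing contact with the
    primary alone is not enough to fail over.
    """
    if primary_reachable:
        return False                 # normal operation, no failover
    # Primary unreachable: consult the tiebreaker before activating.
    return tiebreaker_says_primary_down
```

Note the split-brain case this avoids: if only the link between the two Storage Centers fails, the tiebreaker still sees the primary and reports it up, so the secondary stays passive.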
Figure 77. Step Four NOTE: When the primary Storage Center recovers, Storage Center prevents the Live Volume from coming online. Automatic Restore of a Live Volume Enabling Automatic Restore repairs the Live Volume relationship between the primary and secondary Live Volumes after recovering from a failure. After an automatic restore, the original secondary Live Volume remains as the primary Live Volume. The following steps occur during an automatic repair of a Live Volume.
2. The primary Storage Center recognizes that the secondary Live Volume is active as the primary Live Volume. 3. The Live Volume on the secondary Storage Center becomes the primary Live Volume. 4. The Live Volume on the primary Storage Center becomes the secondary Live Volume. Figure 79.
Managed Replication Before Live Volume Swap Role In the following diagram, the primary Storage Center is on the left and the secondary Storage Center is located on the right. Figure 80. Live Volume with Managed Replication Example Configuration 1. Server 2. Server IO request to primary volume over Fibre Channel or iSCSI 3. Primary volume (Live Volume and managed replication) 4. Live Volume replication over Fibre Channel or iSCSI 5. Secondary volume (Live Volume) 6.
• The destination Storage Center (managed replication) must be running version 6.5 or later and meet the replication requirements.
Related link
Replication Requirements
Live Volume Requirements

Creating Live Volumes

Create a Live Volume to replicate a volume to another Storage Center while allowing servers to send IO for the volume to both Storage Centers. This additional flexibility can be used to perform planned outages without interrupting volume availability.
Convert Multiple Volumes to Live Volumes To convert multiple volumes to Live Volumes, create the Live Volumes from the Replications & Live Volumes view. Prerequisite The Live Volume requirements must be met. See Live Volume Requirements. About this task Fluid Cache volumes cannot be the primary or secondary volume in a Live Volume. Steps 1. Click the Replications & Live Volumes view. 2. On the Live Volumes tab, click Create Live Volumes. The Create Live Volumes wizard appears. 3.
Change the Replication Type for a Live Volume The replication type used by a Live Volume can be changed with no service interruption. Prerequisites • The source and destination Storage Centers must be running version 6.5 or later. • If the Live Volume manages a synchronous replication, the replication type for the Live Volume must be asynchronous. Steps 1. Click the Replications & Live Volumes view. 2. On the Live Volumes tab, select the Live Volume, then click Edit Settings.
Related link Managed Replications for Live Volumes Supported Live Volume with Managed Replication Topologies Live Volume with Managed Replication Example Configuration Managed Replication Requirements Include Active Snapshot Data for an Asynchronous Live Volume The Active Snapshot represents the current, unfrozen volume data. 1. Click the Replications & Live Volumes view. 2. On the Live Volumes tab, select the Live Volume, then click Edit Settings. The Edit Live Volume dialog box appears. 3.
4. Click OK. Allow a Live Volume to Automatically Swap Roles Live Volumes can be configured to swap primary and secondary volumes automatically when certain conditions are met to avoid situations in which the secondary volume receives more IO than the primary volume. 1. Click the Replications & Live Volumes view. 2. On the Live Volumes tab, select the Live Volume, then click Edit Settings. The Edit Live Volume dialog box appears. 3. Select the Automatically Swap Roles check box. 4.
Delete a Live Volume Use the Live Volumes tab to delete a Live Volume. About this task If the Live Volume manages a replication, the managed replication is converted into a standalone replication when the Live Volume is deleted. Steps 1. Click the Replications & Live Volumes view. 2. On the Live Volumes tab, select the Live Volume, then click Delete. The Delete Objects dialog box appears. 3.
A confirmation page appears.
6. Click Next. A warning page appears if Storage Manager is managing only one of the Storage Centers.
7. Click Finish. The Results Summary page appears.
8. Click OK.

Manually Bring Primary Live Volume Online

After a failure, the primary Live Volume may be offline, preventing the Live Volume relationship from being restored.
• The Live Volume must be configured as synchronous and high-availability. • Both primary and secondary Storage Centers must be managed by Storage Manager. Steps 1. Click the Replications & Live Volumes view. 2. Click the Live Volumes tab. 3. Select a Live Volume then click Edit Settings. The Edit Live Volume dialog box appears. 4. Select the Failover Automatically check box. 5. To enable automatic restore, select the Restore Automatically check box. 6. Click OK.
View the Progress Report for a Live Volume When a Live Volume is selected, the Progress Reports subtab displays charts for the amount of data waiting to be copied and the percent complete. 1. Click the Replications & Live Volumes view. 2. On the Live Volumes tab, select the Live Volume. 3. In the bottom pane, click the Progress Reports tab. View IO/sec and MB/sec Charts for a Live Volume When a Live Volume is selected, the IO Reports subtab displays charts for IO per second and MB per second.
35 Storage Center DR Preparation and Activation

Activate disaster recovery to restore access to your data in the event of an unplanned disruption.

How Disaster Recovery Works

Disaster recovery (DR) is the process of activating a replicated destination volume when the source site fails. When the source site comes back online, the source volume can be restored based on the volume at the DR site. The following diagrams illustrate each step in the DR process.
Step 2: The Source Site Goes Down When the source site goes down, the data on the source volume can no longer be accessed directly. However, the data has been replicated to the destination volume. Figure 83. Replication When the Source Site Goes Down 1. Source volume (down) 2. Replication over Fibre Channel or iSCSI (down) 3. Destination volume 4. Server mapping to source volume (down) 5.
Step 4: Connectivity is Restored to the Source Site When the outage at the source site is corrected, Storage Manager Data Collector regains connectivity to the source Storage Center. The replication cannot be restarted at this time because the destination volume contains newer data than the original source volume. Figure 85. Replication After the Source Site Comes Back Online 1. Source volume 2. Replication over Fibre Channel or iSCSI (down) 3. Destination volume (activated) 4.
Step 5B: The Activated DR Volume is Deactivated After the replication from the activated DR volume to the original source volume is synchronized, Storage Manager prompts the administrator to halt IO to the secondary volume. NOTE: IO must be halted before the destination volume is deactivated because the deactivation process unmaps the volume from the server. Figure 87. DR-Activated Volume is Deactivated 1. Source volume being recovered 2. Replication over Fibre Channel or iSCSI 3.
Related link
Remote Data Collector

Preparing for Disaster Recovery

Prepare for DR by saving restore points, predefining DR settings, and testing those settings.
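The DR lifecycle illustrated in the steps above can be sketched as a simple state sequence. The state names are illustrative, not Storage Manager terminology:

```python
# Illustrative phases of the DR process described above.
DR_STATES = [
    "replicating",             # step 1: source replicates to destination
    "source_down",             # step 2: source site fails
    "dr_activated",            # step 3: destination volume activated, servers remapped
    "restoring",               # steps 4-5A: source restored from the activated volume
    "sync_waiting_deactivate", # step 5B: mirror synced; halt IO, deactivate destination
]

def next_state(state):
    """Advance to the next DR phase; after the destination is deactivated,
    the original replication direction is reestablished and the cycle
    returns to normal replication."""
    i = DR_STATES.index(state)
    return DR_STATES[(i + 1) % len(DR_STATES)]
```

The key ordering constraint, as the NOTE in step 5B explains, is that IO must be halted before leaving the final state, because deactivation unmaps the destination volume from the server.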
• Up: The replication is up and running normally.
• Degraded: There is something wrong with the replication. See the State column for information about why the replication is no longer running. This replication is eligible for DR.
• Down: The replication is not running. See the State column for information about why the replication is no longer running. This could be because the destination system is no longer available or because the source and destination volumes are no longer up and running.
3.
Test Activating Disaster Recovery

Testing DR activation for a replication restore point creates a test-activated view volume and maps it to the appropriate server without interrupting service for the original volume. This allows you to make sure that your DR plan is viable.
• Periodically test-activate DR for restore points to ensure the restore points are viable.
• DR activation settings specified for test activation are retained for future DR activation and test activation.
Figure 89. Test Activate Disaster Recovery Dialog Box b. Select the server to which the test-activated volume will be mapped by clicking Change next to the Server label. c. Modify the remaining settings for the test-activated volume as needed, then click OK. These attributes are described in the online help. 6. When you are done, click Finish. • Storage Manager creates test-activated view volumes and maps them to the configured server(s).
Figure 90. Test Activate Disaster Recovery Dialog Box 4. In the Name field, type the name for the activated view volume. 5. Select the server to which the activated view volume will be mapped. a. Next to the Server label, click Change. The Select Server dialog box appears. b. Select the server, then click OK. 6. Modify the remaining activation settings as needed. These attributes are described in the online help.
Disaster Recovery Activation Limitations Activating DR for a replication removes any replications that use the activated volume (original destination/secondary volume) as the source volume. Related link Replicating a Single Volume to Multiple Destinations Planned vs Unplanned Disaster Recovery Activation During disaster recovery activation, you may choose whether you want to allow planned DR activation. The following table displays some of the differences between planned and unplanned DR activation.
• A recommendation about whether the destination volume is currently synchronized with the source volume is displayed below the Sync Data Status field in green or yellow text. NOTE: For high consistency mode synchronous replications that are current, the Use Active Snapshot check box is automatically selected. Figure 91. Activate Disaster Recovery Dialog Box b. (Live Volume, Storage Center 6.
Activate Disaster Recovery for a Single Restore Point To activate DR for a replication or Live Volume, use the corresponding restore point. 1. Click the Replications & Live Volumes view. 2. Click the Restore Points tab. 3. Right-click the restore point, then select Activate Disaster Recovery. The Activate Disaster Recovery dialog box appears.
9. Click OK.
• Storage Manager activates the recovery volume.
• Use the Recovery Progress tab to monitor DR activation.
Related link
Saving and Validating Restore Points

Access Data on an Original Primary Volume After DR Activation

If DR is activated for a Live Volume using the Preserve Live Volume option, the original primary Storage Center prevents the original primary volume from being active until the Live Volume is restored.
• Use the Recovery Progress tab to monitor DR activation.

Restarting Failed Replications

If a source volume is current and functional, and the destination system is available but a replication failed or was deleted, you can restart the replication. To see if a replication can be restarted, validate restore points.

Restart Replication for Multiple Restore Points

If multiple replications and/or Live Volumes hosted by a Storage Center pair failed or were deleted, you can restart them simultaneously.
Restoring Replications and Live Volumes A replication source volume or Live Volume primary volume can be restored from a replication destination volume or Live Volume secondary volume. Restoring a volume is necessary when it has been deleted or DR has been activated and data has been written to the activated volume. Volume Restore Options The options to restore a volume differ depending on whether DR was activated.
6. (Optional) Configure replication settings for each restore point. a. Select the restore point that you want to modify, then click Edit Settings. The Restore/Restart DR Volumes dialog box appears. b. (Storage Center 6.5 and later, Live Volume only) Choose a recovery method. • If the Recover Live Volume check box is available, select it to repair the Live Volume by reestablishing connectivity between the original source volume and activated volume.
8. Click OK.
• Storage Manager restores the replication or Live Volume.
• Use the Recovery Progress tab to monitor the replication or Live Volume.
9. On the Recovery Progress tab, when the restore point message displays Mirror is synced waiting for destination to be deactivated, halt IO to the destination volume.
10. Deactivate the destination volume by selecting the restore point and clicking Deactivate Destination.
36 Remote Data Collector

A remote Data Collector provides access to Storage Manager disaster recovery options when the primary Data Collector is unavailable.

Remote Data Collector Management

The Storage Manager Client can connect to the primary Data Collector or the remote Data Collector. In the event that the primary Data Collector is unavailable and you need to access Storage Manager disaster recovery options, use the Client to connect to the remote Data Collector.
Software Requirements The software requirements that apply to the primary Data Collector also apply to the remote Data Collector. However, a remote Data Collector uses the file system to store data so there is no database requirement. Related link Data Collector Requirements Dell Storage Manager Virtual Appliance Requirements The Dell Storage Manager Virtual Appliance requires the following conditions. Component Requirement Server operating system VMware vSphere 5.5, 6.0, or 6.
5. Click Finish. The Storage Manager Data Collector Setup wizard appears. Configure the Remote Data Collector with the Data Collector Setup Wizard Use the Data Collector Setup wizard to configure the remote Data Collector. 1. Configure the first page of the Data Collector Setup Wizard. Figure 93. Storage Manager Data Collector Setup Wizard a. Under Data Collector Type, select Configure as Remote Data Collector. b.
d. In the Password field, type the password for the specified user. e. Click Next. The remote Data Collector attempts to connect to the primary Data Collector. When the connection is established, the Finished setup page appears. Figure 95. Setup Complete Page 3. Click Finish. Install a Virtual Appliance as a Remote Data Collector Install the Virtual Appliance then configure it as a Remote Data Collector to use the Virtual Appliance for disaster recovery.
14. Select a server or a server cluster on which to deploy the Virtual Appliance. 15. Click Next. The Select Storage page appears. 16. Select the datastore that will hold the Virtual Appliance data. 17. Click Next. The Setup Networks page appears. 18. From the Destination drop-down menu, select a network for the Virtual Appliance. 19. Click Next. The Customize Template page appears. 20. Complete the following fields. NOTE: Some of these features are hidden. Expand the heading to view the setting.
f. 6. Click Next. The Create Administrator User page appears. 7. Enter the credentials for the new administrator user of the Remote Data Collector. a. In the User field, type the user name for the new administrator user. b. In the New Password field, type a password for the new administrator user. c. In the Confirm Password field, retype the password. 8. Click Next. The Summary page appears. 9. Click Finish. A confirmation dialog box appears. 10. Click OK. The Virtual Appliance restarts.
b. On the General Information tab, click Stop to stop the Data Collector Manager service. 2. Use the Dell Storage Manager Client to connect to the primary Data Collector and log on. 3. Click the Replications & Live Volumes view, then click the Remote Data Collector tab. 4. Click Remove Remote Data Collector. A confirmation dialog box appears. 5. Click Yes.
The Client connects to the remote Data Collector and displays the Primary Data Collector tab. Figure 97. Primary Data Collector Tab Create a User Create a user account to allow a person access to Storage Manager. 1. In the Data Collector Manager, click the Users tab. 2. Click Create User. The User Settings page opens. 3. Enter information for the new user. a. b. c. d. e. f. 4. Type the user name of the user in the User Name field.
Use a Remote Data Collector to Test Activate Disaster Recovery Testing disaster recovery functions the same way for primary and remote Data Collectors. 1. Use the Dell Storage Manager Client to connect to the remote Data Collector. 2. Click the Restore Points tab. 3. Click Test Activate Disaster Recovery.
3. In the Remote Data Collector Host or IP Address field, type the host name or IP address of the Storage Center. 4. Click OK. Enabling Email Notifications for the Remote Data Collector You can configure the primary Data Collector to send you an email notification if communication with the remote Data Collector is lost. 1. Start the Dell Storage Manager Client and log on to the primary Data Collector. 2. In the top pane, click Edit User Settings. The Edit User Settings dialog box appears. 3.
37 Storage Replication Adapter for VMware SRM

VMware vCenter Site Recovery Manager (SRM) supports storage vendors using Storage Replication Adapters. The Dell Storage Replication Adapter (SRA) allows sites to use VMware vCenter SRM on Dell Storage Centers through Dell Storage Manager.

Where to Find Dell SRA Deployment Instructions

This chapter provides overview information about using SRM on Storage Centers through Storage Manager and the Dell SRA.
Requirement Storage Center Configuration Description • Install and configure Storage Manager Primary Data Collector on the recovery site; install and configure Storage Manager Remote Data Collector on the protected site. • VMware vSphere server objects must be created on both the source and destination Storage Centers. Replication QoS Nodes must be defined on the source and destination Storage Centers.
Figure 98. SRA Configuration with a Single Data Collector 1. Protected site 2. Recovery site 3. VMware SRM server at protected site 4. VMware SRM server at recovery site 5. Primary Data Collector at recovery site 6. Storage Center at protected site 7. Storage Center at recovery site In a configuration with only one Storage Manager Data Collector, locate the Data Collector at the Recovery Site.
5. Primary Data Collector at protected site 6. Remote Data Collector at recovery site 7. Storage Center at protected site 8. Storage Center at recovery site In a configuration with a Storage Manager Remote Data Collector, locate the Remote Data Collector on the Recovery Site. This configuration allows DR activation from the remote site when the Protected Site goes down.
Part VI

Storage Center Monitoring and Reporting

This section describes using Threshold Alerts to create custom alerts, using reports, configuring Chargeback to bill departments based on storage usage, monitoring logs, and monitoring performance.
38 Storage Center Threshold Alerts

Threshold alerts are automatically generated when user-defined threshold definitions for storage object usage are crossed. Threshold queries allow you to query historical data based on threshold criteria.

Configuring Threshold Definitions

Threshold definitions monitor the usage metrics of storage objects and generate alerts if the user-defined thresholds are crossed. The types of usage metrics that can be monitored are IO usage, storage, and replication.
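The alerting behavior a threshold definition implements can be sketched as follows. The severity names and percentage defaults here are assumptions for illustration, not Storage Manager defaults:

```python
def evaluate_threshold(metric_value, warning=70, error=85, emergency=95):
    """Illustrative sketch of threshold-definition evaluation.

    A definition carries per-severity thresholds for a usage metric; an
    alert is generated at the highest severity whose threshold the metric
    has crossed, and no alert is generated if none is crossed.
    """
    for severity, limit in (("emergency", emergency),
                            ("error", error),
                            ("warning", warning)):
        if metric_value >= limit:
            return severity          # alert at the highest crossed level
    return None                      # no threshold crossed, no alert

print(evaluate_threshold(90))  # prints error
```

Evaluating from the highest severity downward ensures a metric that crosses several thresholds at once produces a single alert at the most severe level rather than one alert per threshold.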
7. Select the type of usage metric to monitor from the Alert Definition drop-down menu. 8. (Optional) Assign the threshold definition to all of the storage objects that are of the type specified in the Alert Object Type field by selecting the All Objects check box. If you select this check box, it cannot be modified after the threshold definition is created. 9.
Figure 100. Threshold Alerts Definitions Tab Edit an Existing Threshold Definition Edit a threshold definition to change the name, notification settings, or schedule settings. 1. Click the Threshold Alerts view. 2. Click the Definitions tab. 3. Select the threshold definition to edit and click Edit Settings in the bottom pane. The Edit Threshold Definition dialog box appears. 4. To change the name of the threshold definition, enter a new name in the Name field. 5.
Delete Multiple Threshold Definitions You can delete multiple threshold definitions simultaneously by selecting them and then right-clicking the selection. 1. Click the Threshold Alerts view. 2. Click the Definitions tab. 3. Use Shift+click and/or Control+click to select multiple threshold definitions. 4. Right-click on the selection and select Delete. The Delete Objects dialog box appears. 5. Click OK.
• Disks: Select the disk for which to display the assigned threshold definitions.
• Storage Profiles: Select the storage profile for which to display the assigned threshold definitions.
5. In the right pane, click Set Threshold Alert Definitions. The Set Threshold Alert Definitions dialog box appears. The threshold definitions assigned to usage metrics of the selected storage object are displayed in the dialog box.
Viewing Threshold Alerts for Threshold Definitions Use the Definitions tab to view the current threshold alerts and historical threshold alerts for a threshold definition. View the Current Threshold Alerts for a Threshold Definition When a threshold definition is selected on the Definitions tab, the Current Threshold Alerts subtab displays the active alerts for the definition. 1. Click the Threshold Alerts view. 2. Click the Definitions tab. 3. Select the threshold definition to view.
• To display threshold alerts for all of the Storage Centers, click Select All. Filter Threshold Alerts by Threshold Definition Properties You can filter the threshold alerts based on the properties of the threshold definitions that triggered the alerts. 1. Click the Threshold Alerts view. 2. Click the Alerts tab. 3. Use the Filter pane to filter threshold alerts by threshold definition properties.
Supported Threshold Definitions
Threshold Alert Recommendation Type | Alert Object Type | Alert Definition
Storage | Storage Center | Percent Used: When the used space percentage for a Storage Center exceeds the configured alert threshold, the alert recommends moving the volume to a specific Storage Center.
General Volume Advisor Requirements
Storage Centers must meet the following requirements to be considered for volume movement recommendations.
Recommendations Based on Volume Latency If the recommendation was triggered by a threshold definition that monitors volume latency, the Recommend Storage Center dialog box displays a recommendation to move a specific volume to a specific Storage Center. Figure 101. Recommended Storage Center Dialog Box If Storage Manager identified a possible reason for the increased volume latency, the reason is displayed in the Recommend Reason field.
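As a toy illustration of how a destination might be chosen (Storage Manager's actual selection logic is internal and considers more factors), recommending a Storage Center could reduce to picking the least-used system among those that meet the advisor requirements. All names and fields below are hypothetical.

```python
# Hypothetical inventory; the field names are illustrative, not a Storage Manager API.
centers = [
    {"name": "SC-A", "eligible": True,  "percent_used": 82.0},
    {"name": "SC-B", "eligible": True,  "percent_used": 41.5},
    {"name": "SC-C", "eligible": False, "percent_used": 10.0},  # fails advisor requirements
]

def recommend(centers):
    """Pick the eligible Storage Center with the lowest used-space percentage."""
    eligible = [c for c in centers if c["eligible"]]
    return min(eligible, key=lambda c: c["percent_used"])["name"] if eligible else None

print(recommend(centers))  # SC-B
```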
Creating Threshold Definitions to Recommend Volume Movement Create a threshold definition to recommend volume movement based on the rate of Storage Center front-end IO, volume latency, Storage Center controller CPU usage, or percentage of storage used for a Storage Center. Create a Threshold Definition to Monitor Front-End IO for a Storage Center When Storage Center front-end IO exceeds the value set for the error threshold, Storage Manager triggers a threshold alert with a volume movement recommendation.
10. When you are finished, click OK. • If you selected the All Objects check box, the threshold definition is created and the Create Threshold Definition dialog box closes. • If you did not select the All Objects check box, the Add Objects dialog box appears. 11. Choose the volumes that you want to monitor. a. In the table, select the Storage Center that hosts the volumes. b.
Create a Threshold Definition to Monitor the Percentage of Used Storage for a Storage Center When the Storage Center storage usage percentage exceeds the value set for the error threshold, Storage Manager triggers a threshold alert with a volume movement recommendation. 1. Click the Threshold Alerts view. 2. Click the Definitions tab. 3. Click Create Threshold Definition. The Create Threshold Definition dialog box appears. 4. In the Name field, type a name for the threshold definition. 5.
Automatically Create a Live Volume and Move the Volume Based on a Recommendation Use the Recommend Storage Center dialog box to automatically move a volume based on a recommendation. About this task NOTE: The option to create a Live Volume appears only for Storage Centers running version 7.0 or earlier. Steps 1. In the Recommend Storage Center dialog box, click Convert to a Live Volume to move the volume to the recommended Storage Center. The Convert to Live Volume dialog box opens. 2.
Steps 1. Examine the volumes hosted by the current Storage Center and decide which volume(s) to move to the recommended Storage Center. 2. Convert each volume that you want to move to a Live Volume.
• Use the recommended Storage Center as the destination.
• Map the destination volume to the server that is currently mapped to the volume.
3. After the Live Volume is synchronized, swap roles to make the recommended Storage Center the primary for the Live Volume. a.
Configure SMTP Server Settings The SMTP server settings must be configured to allow Storage Manager to send notification emails. 1. In the top pane of the Dell Storage Manager Client, click Edit Data Collector Settings. The Edit Data Collector Settings dialog box opens. 2. Click the SMTP Server tab. 3. Configure the SMTP server settings by performing the following steps: a. In the From Email Address field, enter the email address to display as the sender of emails from the Data Collector.
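The SMTP fields in this dialog map directly onto an ordinary email message. The sketch below shows the correspondence using Python's standard library; the host, port, and addresses are placeholders, and sending is left commented out so the example stays self-contained.

```python
from email.message import EmailMessage

# Placeholder values standing in for the Data Collector's SMTP settings.
SMTP_HOST = "smtp.example.com"   # Host or IP Address field
SMTP_PORT = 25                   # Port field (change if the server does not use 25)
FROM_ADDR = "dsm@example.com"    # From Email Address field

def build_notification(to_addr, subject, body):
    """Compose a notification email like the ones Storage Manager sends."""
    msg = EmailMessage()
    msg["From"] = FROM_ADDR
    msg["To"] = to_addr
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

msg = build_notification("admin@example.com", "Threshold alert",
                         "Volume latency exceeded the configured error threshold.")

# Actual delivery would go through the configured SMTP server:
# import smtplib
# with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as s:
#     s.send_message(msg)
```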
Performing Threshold Queries Threshold queries allow you to query historical data based on threshold criteria. For example, if a Storage Center experienced a spike of IO usage, you could create a threshold query to discover the threshold definition settings that would have detected the event. After you find the threshold settings you need, you can use them to create a threshold definition that will automatically monitor the Storage Center. Figure 104.
The available storage objects are dependent on the type of query selected in step b. d. Select the type of usage metric to query from the third Definition drop-down menu. The available threshold metrics are dependent on type of query selected in step b and the type of object selected in step c. e. Select the period of time to query the data from the Start Time drop-down menu. f. Enter the threshold value that the usage metric must have reached in the Threshold Value field. g.
• If only the query filter values were changed, click Save to save the changes to the query.
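The threshold-query workflow above amounts to asking which historical samples would have crossed a candidate threshold. A minimal stand-alone illustration, with invented sample data:

```python
# Hypothetical historical front-end IO samples (timestamp, IOPS); not real data.
history = [("09:00", 1200), ("09:05", 1450), ("09:10", 5100),
           ("09:15", 4900), ("09:20", 1300)]

def exceeded(history, threshold):
    """Return the timestamps at which the metric met or exceeded the threshold."""
    return [t for t, v in history if v >= threshold]

# A 4000 IOPS error threshold would have flagged the spike at 09:10 and 09:15,
# so a threshold definition at that level would have detected the event.
print(exceeded(history, 4000))
```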
39 Storage Center Reports The Reports feature allows a user to view Storage Center and Chargeback reports generated by Storage Manager. Chargeback Reports The information displayed in a Chargeback report includes a sum of charges to each department and the cost/storage savings realized by using a Storage Center as compared to a legacy SAN. The Chargeback reports are in PDF format and present the same data that can be viewed on the Chargeback view.
Report Type Description • Storage Center Summary: Displays information about storage space and the number of storage objects on the Storage Center. Displaying Reports The Reports view can display Storage Center Automated reports and Chargeback reports. View a Storage Center Automated Report The contents of Storage Center reports are configured in the Data Collector automated reports settings. 1. Click the Reports view.
Steps 1. Click the Reports view. The Automated Reports tab appears and it displays all of the Storage Center and Chargeback reports that can be viewed. 2. To display only Chargeback reports, click the plus sign (+) next to the Chargeback folder. The name of each report consists of the text Chargeback followed by the date and time that the report was generated. For example, the name of a daily report for June 12th, 2013 would be: Chargeback - 06/12/2013 23:15:00 Figure 106. Chargeback Reports 3.
• To display the previous page of the report, click Previous. Print a Report For best results, print reports using the Landscape orientation. 1. Click the Reports view. 2. Select the report to print from the Reports pane. 3. Click Print. The Print dialog box appears. 4. Select the printer to use from the Name drop-down menu. 5. Click OK. The report is printed to the selected printer. Save a Report to the Client Computer You can save a report PDF on your computer or a network share. 1.
Figure 107. Automated Reports Tab 3. 4. Select the check boxes in the Automated Report Settings area to specify how often to generate the following reports: • Storage Center Summary – Select the Weekly and/or Monthly check boxes. • Disk Class – Select the Weekly and/or Monthly check boxes. • Disk Power On Time – Select the Weekly and/or Monthly check boxes. • Alerts – Select the Daily and/or Weekly check boxes. • Volume Storage – Select the Daily, Weekly, and/or Monthly check boxes.
Related link Configure Chargeback or Modify Chargeback Settings Configure Storage Manager to Email Reports Set Up Automated Reports for an Individual Storage Center By default, Storage Centers are configured to use the global automated report settings that are specified for the Data Collector. If you want to use different report settings for a Storage Center, you can configure the automated report settings in the Storage Center properties.
6. Click OK. The reports are generated and the Generate Reports dialog box closes. NOTE: Generating a report overwrites previously generated reports in the folder for that day. To prevent these reports from being overwritten, select a different directory from the Automated Report Options area in the Automated Reports tab. 7. Click OK. Configure Storage Manager to Email Reports Storage Manager can send you automated report PDFs by email.
Steps 1. In the top pane of the Dell Storage Manager Client, click Edit User Settings. The General tab opens. 2. Click the Manage Events tab. 3. Select the check box for each event you want to be notified about. 4. Click OK.
40 Storage Center Chargeback Chargeback monitors storage consumption and calculates data storage operating costs per department. Chargeback can be configured to charge for storage based on the amount of allocated space or the amount of configured space. When cost is based on allocated space, Chargeback can be configured to charge based on storage usage (the amount of space used) or storage consumption (the difference in the amount of space used since the last automated Chargeback run).
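The difference between the two charging modes is simply which quantity is multiplied by the base price. A minimal sketch, with made-up prices expressed in cents per GB (the logic is an illustration of the model described above, not Storage Manager's billing code):

```python
def charge_for_usage(space_used_gb, price_cents_per_gb):
    """Charge based on the amount of space currently used."""
    return space_used_gb * price_cents_per_gb

def charge_for_consumption(space_used_gb, prev_space_used_gb, price_cents_per_gb):
    """Charge based on the growth in used space since the last automated Chargeback run."""
    return max(space_used_gb - prev_space_used_gb, 0) * price_cents_per_gb

print(charge_for_usage(500, 10))             # 5000 cents for 500 GB used
print(charge_for_consumption(500, 420, 10))  # 800 cents for 80 GB of growth
```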
7. Select how to assign a base cost to storage from the Assign Cost By drop-down menu.
• Global Disk Classes: Costs are assigned to each available disk class.
• Individual Storage Center Disk Tier: Costs are assigned per storage tier level for each Storage Center.
8. Select a location from the Currency Locale drop-down menu to specify the type of currency to display in Chargeback. For example, if the selected location is United States, the currency unit is dollars ($).
Figure 109. Storage Costs Per Disk Class 3. Click Finish to save the Chargeback settings. Assign Storage Costs for Storage Center Disk Tiers If the Edit Chargeback Settings wizard displays this page, assign storage cost for each Storage Center disk tier. 1. For each storage tier, select the unit of storage on which to base the storage cost from the per drop-down menu. 2. For each storage tier, enter an amount to charge per unit of storage in the Cost field. Figure 110.
Configuring Chargeback Departments Chargeback uses departments to assign base billing prices for storage, and department line items to account for individual IT-related expenses. Volumes and volume folders are assigned to departments for the purpose of charging departments for storage consumption. Setting Up Departments You can add, modify, and delete Chargeback departments as needed. Add a Department Add a Chargeback department for each organization that you want to bill for storage usage. 1.
Edit a Department You can modify the base storage price charged to a department, change the department attributes, and change the department contact information. 1. Click the Chargeback view. 2. Click the Departments tab. 3. Select the department that you want to edit from the list of departments on the Chargeback pane. 4. Click Edit Settings or right-click on the department and select Edit Settings. The Edit Settings dialog box appears. 5. Modify the department options as needed.
3. Select the department that contains the line item that you want to edit from the list of departments on the Chargeback pane. 4. Select the line item you want to edit from the Department Line Items pane. 5. Click Edit Settings or right-click on the line item and select Edit Settings. The Edit Line Item dialog box appears. Figure 113. Edit Line Item Dialog Box 6. To change the name of the line item, edit the value in the Name field. 7.
Figure 114. Add Volume Dialog Box 5. Select the volumes to assign to the department. 6. Click Add Volumes to add the selected volumes to the list of volumes to assign to the department. 7. Click OK to assign the volumes to the department. Assign Volume Folders to a Department in the Chargeback View Use the Chargeback view to assign multiple volume folders to a department simultaneously. 1. Click the Chargeback view. 2. Click the Departments tab. 3.
Figure 115. Add Volume Folders Dialog Box 5. Select the volume folders to assign to the department. 6. Click Add Volume Folders to add the selected volume folders to the list of volume folders to assign to the department. 7. Click OK to assign the volume folders to the department. Remove Volumes/Volume Folders from a Department in the Chargeback View Use the Chargeback view to remove multiple volumes and volume folders from a department simultaneously. 1. Click the Chargeback view. 2. Click the Departments tab. 3.
Remove a Volume/Volume Folder from a Department in the Storage View Use the Storage view to remove volumes and volume folders from a department one at a time. 1. Click the Storage view. 2. In the Storage pane, select a Storage Center. 3. Click the Storage tab. 4. In the Storage tab navigation pane, select the volume or volume folder. 5. In the right pane, click Edit Settings. A dialog box appears. 6. Next to Chargeback Department, click Change. The Add Chargeback Department dialog box appears.
a. Click Filter Objects. The Filter Objects dialog box appears. b. Select the check box(es) of the department(s) to display and clear the check box(es) of the department(s) to hide. • To select all of the department check boxes, click Select All. • To clear all of the department check boxes, click Unselect All. c. Click OK. The bar chart hides the departments that had their check boxes cleared in the Filter Objects dialog box.
Working with Charts You can zoom in and out on charts, save them as images, or print them. Zoom in on an Area of the Chart Zoom in on an area to see more details. 1. Use the mouse to select an area of the chart in which to zoom. a. Click and hold the right or left mouse button on the chart. b. Drag the mouse to the right to select an area of the chart. 2. Release the mouse button to zoom into the selected area of the chart.
Export Chargeback Run Data for a Single Department Chargeback run data for a department can be exported to CSV, Text, Excel, HTML, XML, or PDF. 1. Click the Chargeback view. 2. Make sure the Chargeback Runs tab is selected. 3. In the Chargeback pane, select the Chargeback run for which you want to export data. 4. Click the Table subtab. 5. Select the department for which you want to export data, then click Save Department Run Data. The Save Department Run Data dialog box appears. 6.
41 Storage Manager Log Monitoring Storage Manager provides a centralized location to view Storage Center and PS Series group alerts, events, indications, and logs collected by the Storage Manager Data Collector. System events logged by Storage Manager can also be viewed. Storage Alerts Storage alerts and indications warn you when a storage system requires attention.
Figure 117. Alerts Tab Display Storage Alerts on the Monitoring View Alerts for managed storage systems can be displayed on the Storage Alerts tab. 1. Click the Monitoring view. 2. Click the Storage Alerts tab. 3. Select the check boxes of the storage systems to display and clear the check boxes of the storage systems to hide. The Storage Alerts tab displays alerts for the selected storage systems. 4. To display indications, select the Show Indications check box. 5.
• Last Day: Displays the past 24 hours of storage alerts.
• Last 3 Days: Displays the past 72 hours of storage alerts.
• Last 5 Days: Displays the past 120 hours of storage alerts.
• Last Week: Displays the past 168 hours of storage alerts.
• Last Month: Displays the past month of storage alerts.
• Custom: Displays options that allow you to specify the start time and the end time of the storage alerts to display.
Send Storage Center Alerts and Indications to the Data Collector Immediately By default, the Data Collector retrieves alerts and indications from a Storage Center at a regular interval. However, if you want alerts and indications to appear in Storage Manager immediately when they are triggered, configure a Storage Center to send them to the Data Collector. 1. Click the Storage view. 2.
Viewing Storage Manager Events Use the Events tab to display and search Storage Manager events. Figure 118. Storage Manager Events Tab Display Storage Manager Events View Storage Manager events on the Events tab. 1. Click the Monitoring view. 2. Click the Events tab. 3. Select the check boxes of the storage systems to display and clear the check boxes of the storage systems to hide. The tab displays the events logged by Storage Manager for the selected storage systems. 4.
Select the Date Range of Storage Manager Events to Display You can view Storage Manager events for the last day, last 3 days, last 5 days, last week, last month, or specify a custom time period. 1. Click the Monitoring view. 2. Click the Events tab. 3. Select the date range of the Storage Manager events to display by clicking one of the following options:
• Last Day – Displays the past 24 hours of event log data.
• Last 3 Days – Displays the past 72 hours of event log data.
Configuring Email Alerts for Storage Manager Events To receive email notifications for Storage Manager events, configure SMTP server settings for the Data Collector, add an email address to your user account, and enable notification emails for the events. Configure SMTP Server Settings The SMTP server settings must be configured to allow Storage Manager to send notification emails. 1. In the top pane of the Dell Storage Manager Client, click Edit Data Collector Settings.
Storage Logs Storage logs are records of event activity on the managed storage systems. You can use the Storage Logs tab to display and search for events in storage system logs. NOTE: To view Storage Center logs in the Storage Logs tab, the Storage Center must be configured to send logs to the Storage Manager Data Collector. Sending Storage Center Logs to Storage Manager To view Storage Center logs in Storage Manager, the Storage Center must be configured to send logs to the Storage Manager Data Collector.
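Storage Center log forwarding, both to the Data Collector and to external syslog servers (described in the next section), is syslog-based. As a generic illustration of classic RFC 3164 framing (not Storage Manager's exact wire format), a syslog line carries a priority value computed from a facility code and a severity code:

```python
def syslog_priority(facility, severity):
    """RFC 3164 priority value: facility * 8 + severity."""
    return facility * 8 + severity

def format_syslog(facility, severity, timestamp, host, tag, message):
    """Format a classic RFC 3164-style syslog line."""
    return f"<{syslog_priority(facility, severity)}>{timestamp} {host} {tag}: {message}"

# facility 16 (local0), severity 3 (error); host, tag, and message are invented.
line = format_syslog(16, 3, "Jun 12 23:15:00", "sc-ctrl-1", "StorageCenter",
                     "Disk 1-04 reported a failure")
print(line)  # <131>Jun 12 23:15:00 sc-ctrl-1 StorageCenter: Disk 1-04 reported a failure
```

Knowing this framing can help when verifying at the receiving syslog server that forwarded Storage Center messages are arriving with the expected facility and severity.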
Send Storage Center Logs to a Syslog Server Modify the Storage Center to forward logs to a syslog server. Prerequisite The Storage Center must be added to Storage Manager using a Storage Center user with the Administrator privilege. Steps 1. Click the Storage view. 2. In the Storage pane, select the Storage Center for which you want to configure alert forwarding. 3. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 4. Click the Alerts and Logs tab. 5.
9. Connect to the Syslog server to make sure the test message was successfully sent to the server. Remove a Syslog Server Remove a syslog server if you no longer want the Data Collector to forward syslog messages to it. Prerequisite The Storage Center must be added to Storage Manager using a Storage Center user with the Administrator privilege. Steps 1. Click the Storage view. 2. In the Storage pane, select the Storage Center for which you want to configure alert forwarding. 3.
Viewing Storage Logs To display and search for events in the Storage Center logs, use the Logs tab in the Storage view or use the Storage Logs tab in the Monitoring view. To display and search for events in the PS Series group logs, use the Events Logs node in the Monitoring tab of the Storage view or use the Storage Logs tab in the Monitoring view. Figure 120. Storage Logs Tab Display Events in the Storage Logs Storage logs represent event activity on the selected storage systems. 1.
Select the Date Range of Log Events to Display You can view log events for the last day, last 3 days, last 5 days, last week, or specify a custom time period. 1. Click the Monitoring view. 2. Click the Storage Logs tab. 3. Select the date range of the event log data to display by clicking one of the following options:
• Last Day – Displays the past 24 hours of event log data.
• Last 3 Days – Displays the past 72 hours of event log data.
Audit Logs Audit logs are records of logged activity that are related to the user accounts on the PS Series group. Use the Audit Logs tab to display information specific to PS Series group user accounts. Viewing Audit Logs To display and search for PS Series group events in the audit logs, use the Audit Logs node in the Storage view or use the Audit Logs tab in the Monitoring view. Figure 121. Audit Logs Node Display Audit Logs Audit logs represent user account activity on the selected PS Series groups.
• Last Day: Displays the past 24 hours of audit log data.
• Last 3 Days: Displays the past 72 hours of audit log data.
• Last 5 Days: Displays the past 120 hours of audit log data.
• Last Week: Displays the past 168 hours of audit log data.
• Custom: Displays options that allow you to specify the start time and the end time of the audit log data to display.
4. If you clicked Custom, perform the following tasks to specify the start time and end time of the audit log data to display.
Figure 122. Save Monitoring Data Dialog Box 3. Select the Storage Centers from which to export the monitoring data.
• To select all of the listed Storage Centers, click Select All.
• To deselect all of the listed Storage Centers, click Unselect All.
4. Select the type(s) of monitoring data to export:
• Storage Center Alerts: Error messages that have been generated by the selected Storage Centers.
Part VII Storage Manager Maintenance This section describes how to manage the Data Collector, manage Storage Manager users, and configure settings for Dell SupportAssist.
42 Data Collector Management The Storage Manager Data Collector is a Windows service that collects reporting data and alerts from managed Storage Centers. The Data Collector service is managed by the Data Collector Manager. Using the Data Collector Manager Use Data Collector Manager to view the status of the Data Collector, start and stop the Data Collector service, and set Data Collector properties.
– User name (example: user)
– User Principal Name (example: user@domain)
– NetBIOS ID (example: domain\user)
3. To remember the username and password and use it the next time the Data Collector Manager is started, select the Remember Password check box. 4. Click Log In. The Data Collector Manager window appears and displays the General Information tab. The status of the Data Collector service is displayed on the General Information tab.
Access the Data Collector Website from Data Collector Manager Data Collector Manager contains a shortcut to the Data Collector website. 1. In the Data Collector Manager, click the General Information tab. 2. Click Go to Website. 3. If a certificate warning appears, acknowledge the warning to continue to the Data Collector website. Access the Data Collector Website Using the Website Address Any client that has network connectivity to the Data Collector can access the Data Collector website.
Change the Data Collector Service Type The Data Collector service type controls the type of Windows account under which the Data Collector runs. Prerequisite Local user and domain user accounts must be able to log in as a service and must have administrator privileges on the host server. Steps 1. In the Data Collector Manager, click the Service tab. 2. Select the type of Windows account under which to run the Data Collector from the Type drop-down menu.
Figure 126. Change Data Source — Page Two 10. To migrate historical data from the current database to the new database, clear the Do not migrate any data from previous data source check box. • To migrate IO usage data, select the Migrate IO Usage Data check box, then select either Days or Weeks from the drop-down menu and specify the number of days or weeks of IO usage data to move in the Migrate Last field.
5. Click OK. The Change Database Connection dialog box closes. Export the Database Schema from an SQL Database If you are using an SQL database to store Storage Manager data, you can export the database schema. 1. In the Data Collector Manager, click the Service tab. 2. Click Export Database Schema. 3. Specify the location to save the schema file. 4. Enter a name for the schema file in the File name field. 5. Click Save. A dialog box appears after the schema file is saved. 6. Click OK.
Configuring Network Settings Use the Network tab to manage Data Collector ports, configure a proxy server for Dell SupportAssist, or manually select a network adapter. Figure 128. Data Collector Manager — Network Tab Modify the Ports Used by the Data Collector The ports for the web server and legacy web service can be modified to avoid port conflicts. 1. In the Data Collector Manager, click the Network tab. 2.
Steps 1. In the Data Collector Manager, click the Network tab. 2. Clear the Automatically Select Network Adapter check box. 3. Select the network adapter to use from the Network Adapter drop-down menu. 4. Click Apply Changes. Configuring Security Settings Use the Security tab to configure a custom SSL certificate for the Data Collector or set a login banner message for the client. Figure 129.
b. Browse to the location of the public key file, and then select it. c. Click Open. The Select dialog box closes and the Public Key field is populated with the path to the public key file. 4. Upload the private key file. a. Next to the Private Key field, click Select. The Select dialog box opens. b. Browse to the location of the private key file, and then select it. c. Click Open. The Select dialog box closes and the Private Key field is populated with the path to the private key file. 5.
Figure 132. Data Collector Manager — SMTP Server Tab 2. Configure the SMTP server settings by performing the following steps: a. Enter the host name or IP address of the SMTP server in the Host or IP Address field. b. Enter the email address to display as the sender of emails from Storage Manager in the From Email Address field. c. If the port number of the SMTP server is not 25, enter the correct port number in the Port field.
3. To modify the maximum number of log files for each Data Collector debug log type, change the value in the Maximum Log Files field. 4. To modify the number of days after which a log is expired, change the value in the Log Lifetime field. 5. To modify the number of days after which an alert is expired, change the value in the Alert Lifetime field. 6. To modify the number of days after which reporting data is expired, change the value in the Reporting Data Lifetime field. 7. Click Apply Changes.
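The lifetime settings above are simple age cutoffs: once a log, alert, or reporting record is older than its configured lifetime, it is eligible for removal. A sketch of that rule (the logic shown is an assumption for illustration, not Storage Manager's code):

```python
from datetime import datetime, timedelta

def is_expired(created, lifetime_days, now=None):
    """True if a log, alert, or reporting record is older than its lifetime."""
    now = now or datetime.now()
    return now - created > timedelta(days=lifetime_days)

now = datetime(2017, 6, 12)
print(is_expired(datetime(2017, 5, 1), 30, now=now))  # True: 42 days old, 30-day lifetime
print(is_expired(datetime(2017, 6, 1), 30, now=now))  # False: 11 days old
```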
Delete an Available Storage Center Remove a Storage Center when you no longer want to manage it from Storage Manager. If a Storage Center is removed from all Storage Manager user accounts, historical data for the Storage Center is also removed. 1. In the Data Collector Manager, click the Storage Centers tab. 2. Select the Storage Center to delete. 3. Click Delete Storage Center. A warning message is displayed. 4. Click Yes.
3. In the User/PS Groups Maps pane, select the user to unmap from the PS Series group. 4. Click Delete User/PS Group Map. 5. Click Yes. Managing Available FluidFS Clusters Use the FluidFS Clusters tab to manage available FluidFS clusters. Figure 135. Data Collector Manager — FluidFS Clusters Tab Related link FluidFS Maintenance Refresh the List of FluidFS Clusters Refresh the list of FluidFS clusters to view newly added FluidFS clusters. 1.
Managing Available Fluid Cache Clusters Use the Fluid Cache Clusters tab to manage Fluid Cache clusters attached to the Data Collector. Figure 136. Data Collector Manager — Fluid Cache Clusters Tab Related link Dell Fluid Cache for SAN Cluster Administration Refresh the List of Fluid Cache Clusters Refresh the list of Fluid Cache clusters to view new Fluid Cache clusters. 1. In the Data Collector Manager, click the Fluid Cache Clusters tab. 2. Click Refresh.
Remove Fluid Cache User Mappings Remove Fluid Cache user mappings to restrict users from viewing Fluid Cache clusters. 1. In the Data Collector Manager, click the Fluid Cache Clusters tab. 2. Select a user from the User/Fluid Cache Cluster Maps table. 3. Click Delete User/Fluid Cache Cluster Map. A confirmation dialog box appears. 4. Click Yes. Managing Users Use the Users tab to manage Storage Manager users and mappings to Storage Centers, PS Series groups, and Fluid Cache Clusters. Figure 137.
Related link Managing Local User Password Requirements Viewing Log Entries Use the Logs tab to view Storage Manager log entries. Figure 139. Data Collector Manager — Logs Tab Update the List of Log Entries Refresh the Logs tab to display new log entries. 1. In the Data Collector Manager, click the Logs tab. 2. Click Refresh. Clear Log Entries Clear the log entries in the Logs tab to delete all Storage Manager Data Collector log files. 1. In the Data Collector Manager, click the Logs tab. 2.
NOTE: By default, when a search reaches the bottom of the list and Find Next is clicked, the search wraps around to the first match in the list. When a search reaches the top of the list and Find Previous is clicked, the search wraps around to the last match in the list. Gathering and Exporting Troubleshooting Information Use the Debug Loggers tab to set debug log options and to export configuration and log data for troubleshooting purposes. Figure 140.
c. Browse to the location where you want to save the export file. d. In the File name field, type the file name. e. Click Save. The dialog box closes. 5. Click OK. • If you chose to use Dell SupportAssist to send debug logs to Dell Technical Support, a progress message is displayed and the debug logs are sent. • If you chose to save information to an export file, configuration and log data is exported to the specified file.
Configure IPv6 Settings Use the Storage Manager Virtual Appliance CLI to modify the IPv6 network settings. 1. Using the VMware vSphere Client, launch the console for the Storage Manager Virtual Appliance. 2. Log in to the Storage Manager Virtual Appliance. 3. Press 2 then Enter to enter the Configuration menu. 4. Press 2 then Enter to start the Network IPv6 setup. 5. Press 1 or 2 to enable or disable DHCP. Press Enter. 6. To assign a new hostname, type a hostname. Press Enter. 7.
• For the database partition, select Hard Disk 3.
4. Modify the Provision Size of the disk to one of the suggested sizes.
• For the EM partition, change the disk size to 10 GB, 20 GB, or 40 GB.
• For the database partition, change the disk size to 20 GB, 40 GB, or 80 GB.
5. Click OK. The server expands the disk size. 6. Click Open Console to launch the console for the Storage Manager Virtual Appliance. 7. Log in to the Storage Manager Virtual Appliance. 8.
5. Press Enter to return to the Diagnostics menu. View the Hosts Table The hosts table shows network information for the Storage Manager Virtual Appliance. Use the Storage Manager Virtual Appliance CLI to view the hosts table. 1. Using the VMware vSphere Client, launch the console for the Storage Manager Virtual Appliance. 2. Log in to the Storage Manager Virtual Appliance. 3. Press 3 then Enter to enter the Diagnostics menu. 4. Press 4 then Enter.
Migrating a Microsoft SQL Server Database If the database server is Microsoft SQL Server 2008, 2012, or 2014, the Data Collector database can be migrated to a new Microsoft SQL Server. 1. Back up the database on the original Microsoft SQL Server. 2. Set up a new Microsoft SQL Server and configure it to use mixed mode authentication (SQL Server and Windows Authentication mode). 3. Perform a restore of the database on the new Microsoft SQL Server. 4.
Clean an Embedded Database on the File System • Reinstall the Storage Manager Data Collector. The embedded database on the file system is automatically cleaned up during the reinstallation process.
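The backup and restore in steps 1 and 3 of the Microsoft SQL Server migration above are ordinary SQL Server operations. As a sketch, the T-SQL statements can be generated as below; the database name and file paths are placeholders, not the actual names used by the Data Collector.

```python
def backup_statement(db, backup_path):
    # T-SQL for step 1: back up the database on the original SQL Server.
    return f"BACKUP DATABASE [{db}] TO DISK = N'{backup_path}' WITH INIT;"

def restore_statement(db, backup_path):
    # T-SQL for step 3: restore the backup on the new SQL Server.
    return f"RESTORE DATABASE [{db}] FROM DISK = N'{backup_path}' WITH RECOVERY;"

# "DsmDatabase" and the path are hypothetical; substitute your real names.
print(backup_statement("DsmDatabase", r"D:\Backup\dsm.bak"))
print(restore_statement("DsmDatabase", r"D:\Backup\dsm.bak"))
```

Run the generated statements with a tool such as SQL Server Management Studio or sqlcmd; remember that the new server must use mixed mode authentication, as noted in step 2.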
43 Storage Manager User Management Use the Data Collector Manager to add new users and manage existing users. To change preferences for your user account, use the Dell Storage Manager Client. Storage Manager User Privileges The Data Collector controls user access to Storage Manager functions and associated Storage Centers based on the privileges assigned to users: Reporter, Volume Manager, or Administrator. The following tables define Storage Manager user-level privileges using these categories.
Administrator Privileges
The Administrator privilege level is the most powerful user profile in Storage Manager. The Administrator role has full access to Storage Manager features. The only exceptions are SupportAssist properties and Data Collector properties. The Administrator can view and manage these features, but cannot add new properties.
NOTE: Storage Manager privileges for Fluid Cache describe the ability of a user to add Fluid Cache clusters in the Dell Storage Manager Client.
Steps
1. On the server that hosts the Data Collector, start the Data Collector Manager.
2. In Data Collector Manager, click the Directory Service tab.
Figure 141. Directory Service Tab
3. Click Edit. The Service Settings dialog box opens.
4. Configure LDAP settings.
a. Select the Enable Directory Services check box.
b. In the Domain field, type the name of the domain to search.
NOTE: If the server that hosts the Data Collector belongs to a domain, the Domain field is automatically populated.
c.
8. Click Apply Changes.
• If an error message appears, you must manually configure the directory service settings.
• If Kerberos authentication is not enabled, the Register TLS Certificate dialog box opens. Specify the location of the SSL public key for the directory server, then click OK.
Figure 142. Register TLS Certificate Dialog Box
The Data Collector service restarts to apply the changes, and directory service configuration is complete.
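When the Domain field is configured, directory services typically derive an LDAP search base from the DNS domain name by mapping each dot-separated label to a DC component. The small sketch below illustrates that conventional mapping; it is an illustration of the standard DN convention, not code taken from the Data Collector itself.

```python
# Sketch: map a DNS domain name (as typed in the Domain field) to the
# LDAP base DN conventionally used when searching that directory.

def domain_to_base_dn(domain: str) -> str:
    """Convert a DNS domain name into an LDAP base DN.

    Example: "corp.example.com" -> "DC=corp,DC=example,DC=com"
    """
    return ",".join(f"DC={part}" for part in domain.split("."))


print(domain_to_base_dn("corp.example.com"))  # DC=corp,DC=example,DC=com
```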
Troubleshoot Directory Service Discovery
The Data Collector attempts to automatically discover the closest directory service based on the network environment configuration. Discovered settings are written to a text file for troubleshooting purposes. If discovery fails, confirm that the text file contains values that are correct for the network environment.
1. On the server that hosts the Data Collector, use a text editor to open the file C:\Program Files (x86)\Compellent Technologies\Compellent Enterprise
Figure 144. User Groups Tab
4. Select the Storage Manager user group to which you want to add directory groups.
5. Click Add Directory Groups. The Add Directory User Groups dialog box opens.
Figure 145. Add Directory User Groups Dialog Box
6. (Multi-domain environments only) From the Domain drop-down menu, select the domain that contains the directory groups to which you want to grant access.
7. Select each directory group that you want to add to the Storage Manager user group.
8.
3. In the right pane, click the User Groups tab.
Figure 146. User Groups Tab
4. Select the Storage Manager user group to which you want to add a directory user.
5. Click Add Directory Users. The Add Directory Users dialog box opens.
Figure 147. Add Directory Users Dialog Box
6. In the Directory Users field, type the name of each directory user that you want to add. Enter each user name on a separate line.
• For OpenLDAP, the user name format is supported (example: user).
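Because the Directory Users field accepts one user name per line, stray blank lines or surrounding whitespace are best ignored when the entries are processed. The sketch below illustrates that parsing step in Python; it is an assumption about how such a field could reasonably be handled, not the Data Collector's actual implementation.

```python
# Sketch: split a multi-line Directory Users entry into individual
# user names, skipping blank lines and trimming whitespace.

def parse_directory_users(text: str) -> list[str]:
    """Return the non-empty, stripped user names, one per input line."""
    return [line.strip() for line in text.splitlines() if line.strip()]


entries = parse_directory_users("jdoe\nasmith\n\n  bjones  ")
print(entries)  # ['jdoe', 'asmith', 'bjones']
```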
Revoke Access for Directory Service Users and Groups
To revoke access to Storage Manager for a directory service user or group, remove the directory group or user from the Storage Manager user groups.

Remove a Directory Service Group from a Storage Manager User Group
Remove a directory service group from a Storage Manager user group to prevent the directory users in that group from accessing Storage Manager.
1. On the server that hosts the Data Collector, start the Data Collector Manager.
2.
Managing Local Users with the Data Collector Manager
Storage Manager users and their mappings to Storage Centers can be configured on the Users tab of the Data Collector Manager.
Figure 148. Users Tab
Related link: Starting the Data Collector Manager

Update the Information Displayed on the Users Tab
Refresh the Users tab to display changes to user accounts and user/Storage Center mappings.
1. In the Data Collector Manager, click the Users tab.
2. Click Refresh. The Users tab reappears after the data is refreshed.
Configure or Modify the Email Address of a User
An email address must be configured if you want Storage Manager to send email notifications to the user.
1. In the Data Collector Manager, click the Users tab.
2. Select the user to modify and click Edit Settings. The User Settings page opens.
3. Enter the email address of the user in the Email Address field.
4. Click OK.

Change the Privileges Assigned to a User
You can increase or decrease the privilege level of a user account.
1.
3. Click Select Storage Center Mappings. The Select Storage Center Mappings dialog box opens.
4. Select the check box of each Storage Center to map to the user. Clear the check box of each Storage Center to unmap from the user.
5. Click Next. The Users tab reappears after the Storage Center mappings are changed.

Delete a User
Delete a user account to prevent the user from viewing and managing the Storage Center.
1. In the Data Collector Manager, click the Users tab.
2.
• To set the number of previous passwords that Storage Manager checks against when validating a new password, type a value in the History Retained field. To disable previous password validation, type 0.
• To set the minimum number of characters in a new password, type a value in the Minimum Length field. The minimum password length is four characters.
• To set the number of login failures that locks out an account, type a number in the Account Lockout Threshold field.
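The History Retained and Minimum Length settings above can be expressed as a simple validation rule: a candidate password passes if it meets the minimum length and does not match any of the most recently retained passwords. The Python sketch below illustrates that logic under those assumptions; it is not the Data Collector's actual validation code.

```python
# Sketch: validate a new password against the two policy settings
# described above. history_retained == 0 disables the history check,
# matching the documented behavior of typing 0 in History Retained.

def validate_new_password(candidate: str, previous: list[str],
                          history_retained: int,
                          minimum_length: int) -> bool:
    """Return True if the candidate password satisfies the policy."""
    if len(candidate) < minimum_length:
        return False
    if history_retained and candidate in previous[-history_retained:]:
        return False
    return True


print(validate_new_password("s3cret", ["old1", "old2"], 2, 4))  # True
print(validate_new_password("old2", ["old1", "old2"], 2, 4))    # False
```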
Require Users to Change Passwords
New password requirements apply only to new user passwords. Existing user passwords might not follow the password requirements. Require users to change their passwords at the next login so that their passwords comply with the password requirements.
Prerequisite
Password Configuration must be enabled.
Steps
1. In the Data Collector Manager, click the Password Configuration tab.
2. Select the Requires Password Change check box.
3. Click Apply Changes.
Configure Client Options
The default view, storage unit formatting, and warning/error threshold percentages can be configured for the current user in the Client Options section of the General tab.

Specify the Default View to Display in the Dell Storage Manager Client
You can choose the view that is displayed first after you log in to the Client.
1. In the top pane of the Dell Storage Manager Client, click Edit User Settings. The Edit User Settings dialog box opens.
2.
44 Dell SupportAssist Management
The Storage Manager Dell SupportAssist feature sends data to Dell Technical Support for monitoring and troubleshooting purposes. You can configure Dell SupportAssist to send diagnostic data automatically, or you can send diagnostic data manually when needed. Dell SupportAssist settings can be configured for all managed Storage Centers or individually for each Storage Center.
Figure 149. Edit Settings — Dell SupportAssist Tab
3. From the Frequency drop-down menu, select how often Storage Center Dell SupportAssist data is sent.
• 4 Hours: Sends usage statistics every 4 hours.
• 12 Hours: Sends usage statistics every 12 hours.
• 1 Day: Sends usage statistics every 24 hours.
NOTE: The default collection schedule for Storage Usage data is daily at midnight. Therefore, the default Frequency setting of 4 Hours is ignored for Storage Usage reports.
4.
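The Frequency options above reduce to a fixed interval between sends. A minimal sketch of that scheduling arithmetic, assuming only that the next send is due one full interval after the last one:

```python
from datetime import datetime, timedelta

# Sketch: compute when the next SupportAssist send is due, given the
# last send time and the configured frequency (4, 12, or 24 hours).

def next_send_time(last_sent: datetime, frequency_hours: int) -> datetime:
    """Return the next time SupportAssist data is due to be sent."""
    return last_sent + timedelta(hours=frequency_hours)


print(next_send_time(datetime(2017, 1, 1, 0, 0), 4))  # 2017-01-01 04:00:00
```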
Manually Sending Diagnostic Data Using Dell SupportAssist
You can send diagnostic data manually using Dell SupportAssist for multiple Storage Centers or for a specific Storage Center. If a Storage Center does not have Internet connectivity or cannot communicate with the Dell SupportAssist servers, you can export the data to a file and send it to Dell Technical Support manually.
Steps
1. Click the Storage view.
2. In the Storage view navigation pane, select a Storage Center.
3. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens.
4. Click the Dell SupportAssist tab.
5. Click Send SupportAssist Data Now. The Send SupportAssist Data Now dialog box opens.
Figure 151. Send Dell SupportAssist Data Now Dialog Box
6. In the Reports area, select the check boxes of the Storage Center usage reports to send to Dell Technical Support.
7.
Saving SupportAssist Data to a USB Flash Drive
If the Storage Center is not configured to send, or is unable to send, SupportAssist data to the SupportAssist server, you can save the SupportAssist data to a USB flash drive and then send the data to Dell Technical Support.

USB Flash Drive Requirements
The flash drive must meet the following requirements to be used to save SupportAssist data:
• USB 2.
NOTE: Storage Manager saves the Storage Center configuration data to the USB flash drive automatically.
10. Click Finish. The Send SupportAssist Data Now dialog box displays Dell SupportAssist progress and closes when the process is complete.
NOTE: Do not remove the drive from the port on the controller until SupportAssist has finished saving data. This process may take up to five minutes.
11.
Figure 152. Edit Dell SupportAssist Contact Information Dialog Box
6. Enter the name, phone number, and email address of the Dell SupportAssist contact representative.
7. Select the Receive email notification... check box to be notified whenever a support alert is sent to Dell Technical Support.
8. Enter the address information for the Dell SupportAssist contact representative.
9. Select contact preferences.
Steps
1. Click the Storage view.
2. In the Storage view navigation pane, select a Storage Center.
3. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens.
4. Click the Dell SupportAssist tab.
5. Under Web Proxy Settings, select the Enabled check box to enable a proxy server.
6. Specify the IP address and port for the proxy server.
7.
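Once a web proxy is enabled, outbound SupportAssist traffic is routed through the address and port entered above. The Python sketch below only illustrates how an HTTPS client would be pointed at such a proxy; the IP address and port are hypothetical placeholders, and this is not how Storage Manager itself is configured.

```python
import urllib.request

# Sketch: route HTTPS requests through a web proxy. Substitute the
# IP address and port entered in the Web Proxy Settings fields;
# 192.0.2.10:3128 below is a placeholder.
proxy = urllib.request.ProxyHandler({
    "https": "http://192.0.2.10:3128",
})
opener = urllib.request.build_opener(proxy)

# The handler now carries the proxy mapping used for HTTPS requests.
print(proxy.proxies["https"])
```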