Dell EqualLogic Group Manager Administrator’s Guide PS Series Firmware Version 9.1 FS Series Firmware Version 4.
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
Copyright © 2017 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Contents About This Manual............................................................................................................16 Audience........................................................................................................................................................................... 16 Related Documentation.....................................................................................................................................................
About the NAS Reserve.................................................................................................................................................... 41 3 Set Up the iSCSI SAN................................................................................................... 43 4 Post-Setup Tasks.......................................................................................................... 44 About the Group Date and Time........................................................
Log Out of Group Manager Using Single Sign-On...................................................................................................... 69 Enable or Disable Single Sign-On................................................................................................................................69 About SNMP Access to the Group...................................................................................................................................69 About SNMP Authentication.........
Modify a Local CHAP Account.................................................................................................................................. 110 Delete a Local CHAP Account................................................................................................................................... 110 Configure CHAP for Initiator Authentication on Existing Volumes..............................................................................
Modify Groupwide Volume Settings.......................................................................................................................... 126 About Space Borrowing.................................................................................................................................................. 126 Benefits of Space Borrowing....................................................................................................................................
Enable or Disable Volume Undelete............................................................................................................................141 Display Deleted Volumes............................................................................................................................................ 141 Restore Deleted Volumes..........................................................................................................................................
12 NAS Cluster Operations............................................................................................. 163 NAS Cluster Configuration..............................................................................................................................................163 About Mixed-Model NAS Clusters............................................................................................................................ 163 Configure a NAS Cluster....................................
Create a NAS Container..................................................................................................................................................183 Modify NAS Clusterwide Default NAS Container Settings.............................................................................................. 184 Modify NAS Clusterwide Default NAS Container Permissions...................................................................................
Modify a NAS Antivirus Server................................................................................................................................. 201 Delete a NAS Antivirus Server..................................................................................................................................202 About NAS Antivirus Clusterwide Defaults...............................................................................................................
About Volume Data Protection....................................................................................................................................... 224 Protect NAS Container Data with NDMP.......................................................................................................................225 Configure NDMP for a NAS Cluster......................................................................................................................... 225 About Snapshots............
Where to Go from Here............................................................................................................................................270 Promote an Inbound Replica Set to a Recovery Volume................................................................................................. 270 Recovery Volume Options......................................................................................................................................... 271 Recovery Volume Restrictions.
Disable Synchronous Replication (SyncRep) for a Volume Collection............................................................................. 308 Change the Pool Assignment of a Synchronous Replication (SyncRep) Volume............................................................ 309 View the Distribution of a Volume Across Pools............................................................................................................. 309 About Switching and Failing Over SyncRep Pools......................
Monitor Network Hardware..................................................................................................................................... 330 Monitor iSCSI Connections to a Member..................................................................................................................331 Monitor iSCSI Connections....................................................................................................................................... 331 About Storage Performance.....
About This Manual Dell EqualLogic PS Series arrays optimize resources by automating capacity, performance, and network load balancing. Additionally, PS Series arrays offer all-inclusive array management software and firmware updates. Dell EqualLogic FS Series appliances, when combined with PS Series arrays, offer a high-performance, high-availability, scalable NAS solution.
Contacting Dell Dell provides several online and telephone-based support and service options. Availability varies by country and product, and some services might not be available in your area. To contact Dell for sales, technical support, or customer service issues, go to dell.com/support.
1 About Group Manager Group Manager is an easy-to-use SAN and NAS management tool integrated with the Dell EqualLogic PS Series firmware. Providing a comprehensive single point of management, Group Manager eliminates the need for a dedicated management workstation or server by enabling administrators to remotely manage virtually any aspect of their EqualLogic iSCSI-based SAN or NAS.
About GUI and CLI Access By default, PS Series group administrators can access the GUI remotely using a web browser or a standalone Java application. Administrators can also manage a group by using the CLI over a Telnet, SSH, or serial connection. If you use a serial connection, you must be connected to the primary controller. The Group Manager graphical user interface (GUI) is based on the Java platform.
• German (de) • Japanese (ja) • Korean (ko) • Simplified Chinese (zh) • Spanish (es) The Group Manager GUI defaults to the same language as set for the browser and operating system. If your browser and operating system are set to a non-English supported language, and you want the GUI to display in English, log in to the Group Manager GUI using the Group Manager IP with /english.html at the end (for example, http://ip_address/english.html).
1. Click Group → Group Configuration. 2. Click the Administration tab. 3. Select the Show banner before login checkbox. 4. Click Set session banner. 5. Type the banner message in the field in the dialog box. You can also copy text and paste it into the dialog box. 6. Press Return to wrap the banner text. 7. (Optional) Click Preview to see how the banner will look. 8. Click OK. To disable the session banner, clear the Show banner before login checkbox and save the change.
Location | Action | Shortcut
Table | Show context (right-click) menu for current table row | Shift+F10
Tree | Move to previous tree node | Up arrow
Tree | Move to next tree node | Down arrow
Tree | Collapse current tree node or move to parent of a collapsed node | Left arrow
Tree | Expand current tree node or move to first child of an expanded node | Right arrow
Tree | Show context (right-click) menu for selected tree node | Shift+F10
Navigation tabs | Previous tab | Ctrl+Page Up
Navigation tabs | Next tab | Ctrl+Page Down
Alarms | Show/hide Alarms panel | Ctrl+Alt
1. Click the magnifying glass icon or press Ctrl+Shift+F. The Find Objects dialog box opens. 2. Type the text that you want to search for. About Online Help for Group Manager In addition to tooltips and command-line help for the Group Manager GUI and CLI, online help is available for the Group Manager GUI. An Internet connection is required to use online help, which is served from a website in the Dell.com domain. You also have the option to install the help on your local system or a private web server.
• Spanish (es) You can download non-English versions of the online help from eqlsupport.dell.com. The readme file in each language kit includes instructions for installing and using the localized online help. Troubleshooting Online Help If you have trouble launching the online help, check the browser security settings and the installed Java version. Browser Security Depending on your browser choice and the local Internet security settings, you might need to configure browser access to the help folder.
2 Architecture Fundamentals The Dell EqualLogic product family provides a unified file and block storage platform. Block-level storage presents data as fixed-length sequences of bytes or bits, called blocks. Each block stores data (much like a hard drive), and the disk controller reads and writes data to the disks inside the storage array. Block-level access enables storage administrators to stipulate which block to send reads and writes to for the best performance.
Figure 2. PS Series Group and Pools depicts a PS Series group with three members and two storage pools. Table 3. PS Series Group and Pools explains the callouts used in the figure.

Figure 2. PS Series Group and Pools

Table 3. PS Series Group and Pools
1. PS Series group: Storage area network (SAN) comprising one or more PS Series arrays connected to an IP network. Arrays are high-performance (physical) block-storage devices.
9. Snapshots: A point-in-time copy of data on a volume. Snapshots can be taken on a single volume or on a collection.
10. Thin-provisioned volume: With thin provisioning, a minimal amount of space (10 percent by default) is reserved on a volume and then allocated when the space is needed.

FS Series Architecture You can design a unified (block and file) storage architecture by adding a Dell FluidFS NAS appliance to a PS Series SAN.
Figure 3. PS Series Group with NAS Cluster

Table 4. PS Series Group with NAS Cluster
1. PS Series group: Storage area network (SAN) comprising one or more PS Series arrays connected to an IP network. Arrays are high-performance (physical) block-storage devices.
2. NAS cluster: Collection of NAS hardware (appliances) configured as part of a PS Series group. The FluidFS software runs on the cluster.
3. NAS appliances: Hardware enclosures that contain NAS controllers.
Figure 4. PS Series Group

Table 5. PS Series Group
1. PS Series group: Storage area network (SAN) comprising one or more PS Series arrays connected to an IP network. Arrays are high-performance (physical) block storage devices.
2. PS Series members: One or more PS Series arrays, each represented as an individual member of the pool to which it provides storage space.
3. PS Series storage pools: Containers for storage resources (disk space, processing power, and network bandwidth).
members in the pool. The system automatically rebalances the load as the group scales. These operations are transparent to the servers, applications, and users. Group: Configuration Recommendations Before you configure a group, review the following recommendations. • Make sure all the network interfaces on the members are configured, functioning, and accessible. If you have any issues, contact Dell Technical Support. NOTE: Limit configuration changes when a group has members that are offline.
– Verify that the storage pool does not have the maximum number of iSCSI connections for the release in use. – Verify the access control policies for the volume. Using the iSCSI initiator name instead of an IP address can make access controls easier to manage and more secure. – Ensure that Dell EqualLogic MPIO extensions are properly installed on the supported operating systems. See the Host Integration Tools documentation for details.
Identify requirements for the various types of data:
• Requires 24/7 uptime and access
• For archival only
• Unique to specific departments (for example, the finance department might need exclusive access to certain data)

Identify application requirements:
• List all applications accessing the data.
• Calculate the disk space, network bandwidth needs, and performance characteristics for each application.
About Volumes Volumes provide the storage allocation structure within the PS Series group. To access storage in a PS Series group, you allocate portions of a storage pool to volumes. You can create a volume on a single group member or one that spans multiple group members. You assign each volume a name, size, and a storage pool. The group automatically load balances volume data across pool members. Figure 5. PS Series Volumes depicts volumes in a PS Series group. Table 6.
(Table 6 continued)
Space received from PS Series arrays to allocate data as needed through various structures (volumes, snapshots, thin provisioning, replicas, containers, SMB/NFS, quotas, and local users and groups).
7. Volumes: Storage allocated by a PS Series group as addressable iSCSI targets.
8. Collection: A set of volumes.
9. Snapshots: A point-in-time copy of data on a volume. Snapshots can be taken on a single volume or on a collection.
Table 7. Volume Attributes describes the attributes that allocate space and set the characteristics of a volume.

Table 7. Volume Attributes
Name: Volume name is unique in the group. The volume name appears at the end of the iSCSI target name, which the group generates automatically. Computer access to the volume is always through the iSCSI target name, rather than the volume name.
Description: Optional description for the volume, up to 127 characters.
Volume Types A PS Series group supports the following volume types:
• Standard: The default volume type is a standard volume. No restrictions apply to a standard volume. You can enable (and disable) thin provisioning on a standard volume.
• Template: A template volume is a type of volume that is useful if your environment requires multiple volumes that share a large amount of common data.
Figure 6. NAS Hardware Architecture

Table 8. NAS Hardware Architecture
1. PS Series group (partial): Storage area network (SAN) comprising one or more PS Series arrays connected to an IP network. Arrays are high-performance (physical) block-storage devices.
2. NAS cluster: Collection of NAS hardware (appliances) configured as part of a PS Series group.
3. NAS appliances: Hardware enclosures that contain NAS controllers.
Figure 7. NAS Software Architecture

Table 9. NAS Software Architecture
1. PS Series group (partial): Storage area network (SAN) comprising one or more PS Series arrays connected to an IP network. Arrays are high-performance (physical) block-storage devices.
2. NAS cluster: Collection of NAS hardware (appliances) configured as part of a PS Series group. The FluidFS software runs on the cluster.
3. NAS appliances: Hardware enclosures that contain NAS controllers.
(Table 9 continued)
Redundant, hot-swappable controllers in NAS appliances. The controllers interface over a fabric to the PS Series SAN storage.
A NAS cluster can serve data to multiple clients simultaneously, with no performance degradation. Clients connect to NAS storage through the NAS protocols of their operating system: • UNIX users access NAS storage through the NFS protocol. • Windows users access NAS storage through the SMB protocol. After the client establishes the preliminary connection, the NAS storage acts as a normal storage subsystem, accessed in the usual way by users or applications.
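For illustration, a typical client connection might look like the following sketch; the cluster name and container path shown are hypothetical, and the actual export and share names depend on how the NAS cluster is configured:

# UNIX/Linux client mounting a NAS container export over NFS
mount -t nfs nascluster:/container1 /mnt/container1

# Windows client mapping the equivalent SMB share
net use Z: \\nascluster\container1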
• Add a NAS appliance to a NAS cluster to increase processing power and allow more client connections
• Replace a failed controller

If a NAS controller fails, the NAS cluster is still operational, but you cannot perform most service configuration modifications until you detach the failed NAS controller. While a NAS controller is down or detached, performance might decrease because data is no longer cached.
Figure 8. NAS Reserve

Table 10. NAS Reserve
1. NAS storage space: Space allocated for storing user data as needed through various structures (volumes, snapshots, thin provisioning, replicas, containers, SMB/NFS, quotas, and local users and groups)
2. NAS reserve: Amount of available storage space allocated to the NAS cluster for storing internal data and user data.
3 Set Up the iSCSI SAN To start using the PS Series array: 1. Configure the array on the network and create a PS Series group. See the Installation and Setup Guide for more information. 2. Log In to the Group Manager GUI. 3. Set the RAID Policy and Pool for a New Member (and assign the member to the default pool). 4. Create a Volume. 5. Connect Initiators to iSCSI Targets.
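As an illustration of step 5, a Linux host running the open-iscsi initiator could discover and log in to the volume's iSCSI target as shown in the following sketch; the group IP address and the target name are hypothetical:

# Discover targets through the group (or discovery) IP address
iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260

# Log in to the target reported by discovery
iscsiadm -m node -T iqn.2001-05.com.equallogic:0-8a0906-000000000-vol1 -p 192.0.2.10:3260 --login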
4 Post-Setup Tasks After you complete the initial setup and deployment of your PS Series array, Dell strongly recommends that you perform certain tasks to finish configuring the group.
2. Click the General tab. 3. Click Add under NTP servers in the Date and Time panel. 4. Type the IP address for the NTP server. 5. Type the port number for the NTP server. 6. Click OK. 7. Select the IP address and click Modify or Delete as needed. 8. Use the arrows to move a server up or down in the list. Change the Time Zone and Clock Time To display the current date and time values: 1. Click Group → Group Configuration. 2. Click the General tab. 3.
3. Select your preferences and click OK. Set GUI Communication Policies You can set policies for managing connections between your workstation and the Group Manager GUI. 1. Click Tools. 2. Click User Preferences. 3. Click the Communication tab. 4. Select your preferences and click OK. Set Alarm Policies You can set alarm policies to control problem notification. 1. Click Tools. 2. Click User Preferences. 3. Click the Alarms tab. 4. Select your preferences and click OK.
space is insufficient). The group also generates event messages when certain normal operations occur (for example, when a user logs in to the group or creates a volume). To display events: 1. Click Monitoring. 2. Under Events, select Event Log. The events display in the window. To change which group’s events display in the window, select the group from the View dropdown menu. From the Event Log window, you can: • Display all events or events of a specific priority.
• Clear the event list. To erase all the events from the panel, click the Clear event list icon. To show the events again, click More.
• Show or hide details about a specific event:
– Move the pointer over an event. A pop-up window opens, showing event details.
– Double-click an event. The event details panel opens at the bottom of the events list.
– Select an event and click the Show/Hide details icon near the upper-right corner of the window.
Configure Email Notifications You can define the list of email recipients to whom notifications will be sent for various alert levels. You can also change the email notification configuration at any time. 1. Click Group → Group Configuration. 2. Click the Notifications tab to open the Email Event Notifications panel. 3. If it is not already selected, select the Send email to addresses checkbox. 4. In the Email recipients section, click Add to open the dialog box.
• An email address to send from (the address that appears in the From field in the notification email). You can use the group name at your company’s email address. For example: GroupA@company.com When the intended recipient receives email, the email itself specifies which group it came from. This information is helpful in multigroup environments, and reduces the chance that the email server or recipient will discard or reject notifications.
Change the syslog Notification Configuration
1. Click Group → Group Configuration.
2. Click the Notifications tab to display the Event Logs panel.
3. Make any of the following changes:
• To disable syslog notification, clear Send events to syslog servers.
• To modify the IP address for a syslog server: 1. Select the IP address and click Modify. 2. Change the address and click OK.
• To delete a syslog server, select the IP address and click Delete.
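On the receiving side, the syslog server must accept remote events. The following minimal sketch assumes an rsyslog-based server; the group IP address and log file path are hypothetical:

# /etc/rsyslog.conf on the syslog server: accept UDP syslog on port 514
module(load="imudp")
input(type="imudp" port="514")

# Write events arriving from the PS Series group to a dedicated file
:fromhost-ip, isequal, "192.0.2.10" /var/log/psgroup-events.log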
5 Data Security You can secure data at the group, volume, or NAS container level. Table 12.
6 About Group-Level Security Group Manager supports several strategies to ensure that only the people and applications that have approved credentials can log in to the PS Series group and gain access to your data. Security can be accomplished through the following methods: • Administration accounts — You can assign several predefined levels of administrative accounts to provide individuals with various levels of access to Group Manager’s features.
If your environment requires additional security, you might consider a dedicated management network. (See Configure a Management Network for more information.
Volume administrator: Volume administrators are (optionally) assigned a quota of storage to manage within one or more pools. They can create and manage volumes within their quota, and can perform all operations on volumes they own. Volume administrators cannot exceed their quotas by creating or modifying volumes, and cannot be assigned volumes by group or pool administrators if the capacity of the volume exceeds the free space within the quota.
Table 14. Differences Between Authentication Methods compares the advantages and disadvantages of the three methods: Active Directory groups, Active Directory or RADIUS users, and local accounts. For Active Directory groups, the advantages include good scalability for large environments with many users; you can quickly add many administrator accounts to the group. For example, if a company hires new IT staff, and the “IT Users” group has access to the group, no extra action is required on the part of the group administrator.
NOTE: Dell recommends that administrator account names not be reused after they have been deleted. All accounts can always view their own audit log information, and new accounts with previously used account names will be able to view audit records for the old account.
Password: Password for the account can be 3 to 13 ASCII characters and is case-sensitive. Punctuation characters are allowed, but spaces are not.
• A maximum key length of 4096 bits.
• A minimum key length of 128 bytes.
• Local users only.
To create or view the SSH public key:
1. Click Group → Group Configuration.
2. Click the Administration tab.
3. In the Accounts and Groups panel, select either:
• All accounts and groups to view both local and remote accounts.
• Local accounts to view local accounts only.
• Locally authenticated users to view users that have been locally authenticated.
4. Select the account and click Modify.
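A key pair within these limits can be generated on the administrator's workstation; a minimal sketch using OpenSSH follows (the output file name is arbitrary):

# Generate a 2048-bit RSA key pair, within the limits described above
ssh-keygen -t rsa -b 2048 -N "" -f ~/.ssh/eql_admin

# Paste the contents of ~/.ssh/eql_admin.pub into the account's
# SSH public key field in Group Manager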
NOTE: • Account Name, Password, and Contact information must be ASCII characters only. Description can be up to 127 Unicode characters. Fewer characters are accepted for this field if you type the value as a Unicode character string, which takes up a variable number of bytes, depending on the specific character. • Dell recommends that administrator account names not be reused after they have been deleted.
4. Click Delete and confirm that you want to delete the account. NOTE: Dell recommends that administrator account names not be reused after they have been deleted. All accounts can always view their own audit log information, and new accounts with previously used account names will be able to view audit records for the old account.
• You plan to select the Require vendor-specific RADIUS attribute option when you configure the group to use a RADIUS authentication server. You must specify the EQL-Admin-Privilege attribute. Table 16. Vendor-Specific Attributes describes the Dell vendor-specific attributes and values for RADIUS attributes. Table 16.
(previous attribute, continued) VSA syntax: String (3 to 247 ASCII characters)

Admin-Email: (Optional) Email address of the administrator.
  VSA vendor ID: 12740
  VSA number: 2
  VSA syntax: String (3 to 247 ASCII characters)

Admin-Phone: (Optional) Phone number for the administrator.
  VSA vendor ID: 12740
  VSA number: 3
  VSA syntax: String (3 to 247 ASCII characters)

Admin-Mobile: (Optional) Mobile phone number for the administrator.
  VSA vendor ID: 12740
  VSA syntax: String (3 to 247 ASCII characters)
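If your RADIUS server does not already define these attributes, they can be declared in a vendor dictionary. The following sketch uses FreeRADIUS dictionary syntax with the vendor ID and attribute numbers from Table 16; the vendor label is arbitrary, and attributes whose numbers are not listed in this excerpt (such as EQL-Admin-Privilege) are omitted:

# Hypothetical FreeRADIUS dictionary fragment for the Dell VSAs
VENDOR          EqualLogic      12740
BEGIN-VENDOR    EqualLogic
ATTRIBUTE       Admin-Email     2       string
ATTRIBUTE       Admin-Phone     3       string
END-VENDOR      EqualLogic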
1. Click Group → Group Configuration. 2. Click the Administration tab. 3. In the Authentication panel, under Authentication Type, select RADIUS and then click the RADIUS settings button to open the RADIUS Settings dialog box. 4. In the RADIUS Authentication Servers section, click Add. The Add RADIUS Authentication Server dialog box opens. 5. Specify the IP address of the server. If the server uses a port other than port 1812 (the default), specify the correct port number. 6.
About LDAP Authorization and Active Directory LDAP is the abbreviation for Lightweight Directory Access Protocol, which provides a simplified protocol for authenticating users. An LDAP server typically contains a database of users, user names, passwords, and related information. LDAP clients are able to interrogate the server to authenticate these users and obtain the account characteristics.
9. Select whether to use the default port for the selected protocol, or specify a different port. 10. Type the Base DN for the Active Directory server, or select Get Default to use the default value. The Base DN can be up to 254 ASCII characters. 11. Select whether to use anonymous connections to the server or type a Bind DN. 12. If a Bind DN is specified, type the Bind password. Passwords can be up to 63 ASCII characters. 13.
1. Click Group → Group Configuration. 2. Click the Administration tab. 3. In the Access panel, make sure that the Enable web access checkbox is selected and select Active Directory as the authentication type. 4. Click AD settings to open the Active Directory Settings dialog box. 5. Select the IP address of the server you want to test. 6. Confirm that the AD server is correctly configured and click the Test AD settings button. 7.
About Active Directory Groups In addition to local and RADIUS administration, administrator account sessions can be authenticated using Active Directory. Individual Active Directory users, or entire Active Directory groups, can be given access to Group Manager using the same levels of access permission available for local user accounts. Using Active Directory authentication is useful in large SAN environments in which administrators require access to multiple groups.
About Single Sign-On Single sign-on enables users who have already logged in to their PCs using Windows Active Directory credentials to automatically log in to the Group Manager GUI without having to specify the Windows Active Directory login credentials again. To use single sign-on, configure the PS Series group to direct it to the same Active Directory domain that authenticates users when they log in to their workstations.
Log Out of Group Manager Using Single Sign-On Logging out of the Group Manager GUI using single sign-on (SSO) is the same as logging out using regular login credentials, but you see additional options. 1. Click Log out at the top-right corner of the Group Manager GUI. A dialog box opens. 2. Select Log out from Group Manager GUI to log out of Group Manager, or select Log in as different user to log in using your regular login credentials. Enable or Disable Single Sign-On To enable single sign-on: 1.
6. Click Save all changes. To change or delete an SNMP community name: 1. In the SNMP Access panel, select the name. 2. Click Modify or Delete as needed. You can specify up to 5 names, and each name can be up to 64 ASCII characters long. Names cannot contain the following characters: space, tab, comma, pound sign (#). 3. Click OK. 4. Click Save all changes. Display SNMP Access to a Group 1. Click Group → Group Configuration. 2. Click the SNMP tab.
Trap Type: Network Configuration
Trap Names: MemberGatewayIPAddrChanged, NetmaskChange, eqlgroupIPv4AddrChanged, eqlgroupIPv6AddrChanged, eqlMWKAChangeNotification

Trap Type: RAID
Trap Names: eqlMemberHealthRAIDSetDoubleFaulted, eqlMemberHealthRAIDLostCache, eqlMemberHealthRAIDSetLostBlkTableFull, eqlMemberHealthRaidOrphanCache, eqlMemberHealthRaidMultipleRaidSets

Trap Type: Security
Trap Names: authenticationFailure

Trap Type: Start
Trap Names: coldStart, warmStart

Trap Type: Temperature
Trap Names: eqlMemberHealthTempSensorHighThreshold, eqlMemberHealthTempSensorLowThresh
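To verify that traps arrive, a receiver such as net-snmp's snmptrapd can be used. A minimal sketch follows; the community string must match one configured on the group:

# /etc/snmp/snmptrapd.conf: log traps sent with this community string
authCommunity log public

# Run the daemon in the foreground, logging to stdout
snmptrapd -f -Lo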
To be able to perform management operations using VDS and VSS, you must first allow these services to access your PS Series group. You use the same access control methods (access policies, access policy groups, and basic access points) to define VDS/VSS access. Display and Configure Windows Service Access to a Group To be able to perform management operations using VDS and VSS, you must first allow these services to access your PS Series group.
4. Confirm that you want to delete the policy. When you delete or modify a basic access point, you might need to update any computer that was previously accessing volumes through that access point. About IPsec IPsec is a set of standardized protocols designed to allow systems on IP-based networks to verify each other’s identities and create secured communication links. IPsec uses cryptographic security mechanisms for authentication and protection.
• Traffic is protected using certificates or pre-shared keys. NOTE: IPsec configurations cannot be modified. They must be removed and then recreated using the new configuration. Protect Communication Between Group Members To enable IPsec security for communication between group members, use the ipsec enable CLI command. After IPsec is enabled, all network traffic between group members is protected automatically. No further configuration is required.
You can generate certificates suitable for use in IPsec connections to the PS Series using any Windows, OpenSSL, or other commercial Certificate Authority product. From the Group Manager CLI, you can import, display, and delete certificates, using the ipsec certificate commands. See the Dell EqualLogic Group Manager CLI Reference Guide for more information.
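As an illustration, a self-signed certificate could be generated with OpenSSL and bundled into the PKCS#12 (.pfx) format that the ipsec certificate load command accepts; the file names, subject values, and export password below are examples only:

# Create a private key and a self-signed certificate
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -keyout psa.key -out psa.crt -subj "/C=US/ST=NH/O=Example/CN=psgroup"

# Bundle them into a .pfx file for loading onto the group
openssl pkcs12 -export -inkey psa.key -in psa.crt -out IPsecPSA.pfx -password pass:password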
• Example 4: Tunnel Mode (Host-to-Gateway) using PSK with Cisco ASA Configuration For information regarding connectivity considerations, limitations, and requirements for the various IPsec configurations, see IPsec Performance Considerations. Example 1: Transport Mode (Host-to-Host) with PSK and IPv4 Figure 10. Transport Mode (Host-to-Host) with Certificates or PSK illustrates a transport mode IPsec configuration in which one host is using IPv4 and PSK and another host is using IPv6 and certificates.
Setting | IPv4 Value
(endpoint addresses) | 10.125.56.4/32, 10.125.56.5/32
Protocol | Any
Action | RequireInRequireOut
Auth1 | ComputerCert
Auth1CAName | CN=sqaca
Auth1CertMapping | No
Auth1ExcludeCAName | No
Auth1CertType | Root
Auth1HealthCert | No
MainModeSecMethods | DHGroup14-AES256-SHA384
QuickModeSecMethods | ESP:SHA1-AES256+60min+10000000kb

Table 19. iSCSI Initiator Configuration (IPv6) lists how the Microsoft iSCSI Initiator should be configured for the IPv6 connection shown in Figure 10. Transport Mode (Host-to-Host) with Certificates or PSK.
Setting | IPv6 Value
MainModeSecMethods | DHGroup14-AES256-SHA384
QuickModeSecMethods | ESP:SHA1-AES256+60min+10000000kb

CLI Commands (IPv4) Enter the following CLI commands on the PS Series group to implement the IPv4 configuration shown in Figure 10. Transport Mode (Host-to-Host) with Certificates or PSK:
> ipsec certificate load PSAcert IPsecPSA.pfx local password password
> ipsec certificate load RootCA rootca.
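On the Windows host side, the Table 18 settings could be applied as a connection security rule. A minimal sketch using netsh follows; the rule name is arbitrary, and the CA name must match your issuing CA:

rem Run from an elevated command prompt
netsh advfirewall consec add rule name="EQL-IPv4" endpoint1=10.125.56.4/32 endpoint2=10.125.56.5/32 action=requireinrequireout auth1=computercert auth1ca="CN=sqaca" qmsecmethods=esp:sha1-aes256+60min+10000000kb

rem Main mode parameters from Table 18 are set globally
netsh advfirewall set global mainmode mmsecmethods dhgroup14:aes256-sha384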
Figure 11. Tunnel Mode Between Linux Hosts Using PSK

iSCSI Initiator Configuration (IPv4) This example uses the following configuration:
• Mint 17 (also known as Qiana)
• Linux Kernel 3.13.0-36-generic, x86_64
• strongSwan 5.1.2

The following configuration files are relevant:
• /etc/strongswan.conf is the configuration file that governs the operation of the strongSwan components (for example, debugging level, log file locations, and so on). You will not need to modify this file.
• /etc/ipsec.
#       leftcert=selfCert.der
#       leftsendcert=never
#       right=192.168.0.2
#       rightsubnet=10.2.0.0/16
#       rightcert=peerCert.der
#       auto=start

# conn sample-with-ca-cert
#       leftsubnet=10.1.0.0/16
#       leftcert=myCert.pem
#       right=192.168.0.2
#       rightsubnet=10.2.0.0/16
#       rightid="C=CH, O=Linux strongSwan CN=peer name"
#       auto=start

Begin Pre-Shared Key Authentication, IPv4
1. strongSwan host IP address is 10.127.238.154
2. array addresses are 10.124.65.38 (the wka) and 10.124.65.39 (eth0)
NOTE: strongSwan allows you to specify properties that apply to all connections (conn %default). The auto=route directive tells strongSwan to install an IPsec security policy into the host's security policy database for every defined connection. If this directive were not present here, it would need to appear in the configuration for every connection. keyexchange=ikev1 is necessary because, by default, the array will use/expect IKE version 1 for the key exchange.
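Putting these directives together, a minimal /etc/ipsec.conf and /etc/ipsec.secrets for this example might look like the following sketch; the connection name and the pre-shared key value are hypothetical, and the key must match the one configured on the array:

# /etc/ipsec.conf
conn %default
        keyexchange=ikev1
        auto=route

conn eql-psk
        type=tunnel
        authby=psk
        left=10.127.238.154
        right=10.124.65.38

# /etc/ipsec.secrets
10.127.238.154 10.124.65.38 : PSK "examplekey"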
Figure 12. Tunnel Mode Between Linux Hosts Using Certificate-Based Authentication

iSCSI Initiator Configuration (IPv4) This example uses the following configuration:
• Mint 17 (also known as Qiana)
• Linux Kernel 3.13.0-36-generic, x86_64
• strongSwan 5.1.2

The following configuration files are relevant:
• /etc/strongswan.conf is the configuration file that governs the operation of the strongSwan components (for example, debugging level, log file locations, and so on). You will not need to modify this file.
• /etc/ipsec.
#       leftcert=selfCert.der
#       leftsendcert=never
#       right=192.168.0.2
#       rightsubnet=10.2.0.0/16
#       rightcert=peerCert.der
#       auto=start

# conn sample-with-ca-cert
#       leftsubnet=10.1.0.0/16
#       leftcert=myCert.pem
#       right=192.168.0.2
#       rightsubnet=10.2.0.0/16
#       rightid="C=CH, O=Linux strongSwan CN=peer name"
#       auto=start

Begin Certificate-Based Authentication, IPv4
1. strongSwan host IP address is 10.127.238.154
2. array addresses are 10.124.65.38 (the wka) and 10.124.65.39 (eth0)
3.
Issuer: C=US, ST=New Hampshire, L=Nashua, O=Dell Equallogic, OU=Networking and iSCSI, CN=Joe Secure/emailAddress=Joe_Secure@dell.com
Validity
    Not Before: Oct 14 19:01:25 2014 GMT
    Not After : Oct 14 19:01:25 2015 GMT
Subject: C=US, ST=New Hampshire, L=Nashua, O=Dell Equallogic, OU=Networking and iSCSI, CN=Joe Secure/emailAddress=Joe_Secure@dell.
You will be prompted to enter information that will be incorporated into the certificate request. This is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value. If you enter '.', the field will be left blank.
Certificate:
    Data:
        Version: 1 (0x0)
        Serial Number: 9335600219447230923 (0x818eb6effd3601cb)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, ST=New Hampshire, L=Nashua, O=Dell Equallogic, OU=Networking and iSCSI, CN=Joe Secure/emailAddress=Joe_Secure@dell.com
        Validity
            Not Before: Oct 14 19:24:12 2014 GMT
            Not After : Oct 14 19:24:12 2015 GMT
        Subject: C=US, ST=New Hampshire, L=Nashua, O=Dell Equallogic, OU=Networking and iSCSI, CN=kirt5.lab.equallogic.com/emailAddress=Joe_Secure@dell.
Modulus: 00:ef:67:f5:d5:06:06:38:33:54:41:44:7e:bc:6d: 70:35:ea:9a:10:7e:d4:f3:a2:c9:f5:3b:8c:35:19: 59:ba:77:09:01:b8:26:9e:e8:76:5e:54:06:82:5c: f7:2c:a8:17:1a:16:bb:12:54:56:b5:3c:62:0b:58: e8:4a:30:78:aa:3f:9f:9c:39:8a:3a:d2:9e:1d:3f: dc:ea:4e:ff:e9:ae:a5:f0:c2:2c:ca:62:e2:56:00: 65:1b:96:0f:22:6a:c5:58:5c:00:d2:e3:b7:75:76: 02:1e:8e:47:59:07:8b:bc:4b:a5:b3:84:b0:ac:2e: 43:61:d2:29:a7:96:e2:60:21:5b:47:93:09:92:33: 7f:b9:94:78:6e:d3:cb:02:13:9d:18:53:62:f0:a2: 5a:27:c1:fd:31:8c:28:7a:48:8c:aa:5d:dc:6d:4
221Data traffic for this session was 6250 bytes in 4 files. Total traffic for this session was 7728 bytes in 6 transfers. 221 Thank you for using the FTP service on 10.124.65.39. 9. Drop the certificates in place on the strongSwan host side: # cp draoidoir.crt /etc/ipsec.d/certs # cp root-ca.crt /etc/ipsec.d/cacerts # cp client.key /etc/ipsec.d/private 10. Configure strongSwan to use the certificates for authentication. Here we have opted to use a Distinguished Name as the identifier on each side.
#       rightsubnet=10.2.0.0/16
#       rightid="C=CH, O=Linux strongSwan CN=peer name"
#       auto=start

"leftcert=draoidoir.crt" tells strongSwan where it can find its local certificate (in /etc/ipsec.d/certs). This is the local certificate that it will present to the array. "leftsendcert=yes" tells strongSwan that it should always send its certificate chain to any peers. "authby=pubkey" in each connection tells strongSwan that these peers will use certificate-based authentication. "rightid=...
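Combining these directives, the connection definition for this example might look like the following sketch; the connection name is arbitrary, and the rightid DN is abbreviated from the array certificate shown earlier and must match that certificate's subject exactly:

conn eql-cert
        keyexchange=ikev1
        authby=pubkey
        leftcert=draoidoir.crt
        leftsendcert=yes
        right=10.124.65.38
        rightid="C=US, ST=New Hampshire, L=Nashua, O=Dell Equallogic, OU=Networking and iSCSI, CN=kirt5.lab.equallogic.com"
        auto=route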
Example 4: Tunnel Mode (Host-to-Gateway) Using PSK In Figure 13. Tunnel Mode (Host-to-Gateway) Using PSK, a tunnel mode connection to a Cisco ASA gateway is established, using pre-shared keys to authenticate IPv4 traffic. The example uses IKEv1.

Figure 13. Tunnel Mode (Host-to-Gateway) Using PSK

Cisco ASA Configuration The following Cisco ASA configuration implements the gateway shown in Figure 13. Tunnel Mode (Host-to-Gateway) Using PSK.
ASA Version 7.2(3)
!
hostname ciscoasa
domain-name company.
interface Ethernet0/7
!
passwd <> encrypted
ftp mode passive
dns server-group DefaultDNS
 domain-name company.com
access-list 101 extended permit ip 10.125.55.0 255.255.255.0 host 10.125.56.2
pager lines 24
mtu outside 1500
mtu inside 1500
icmp unreachable rate-limit 1 burst-size 1
asdm image disk0:/asdm-523.
CLI Commands (IPsec) Enter the following CLI commands on the PS Series group to implement the configuration shown in Figure 13. Tunnel Mode (Host-to-Gateway) Using PSK:
> ipsec security-params create RemGW_PSK_Auth_Tunnel pre-shared-key key tunnel type v4 tun-ip-addr 10.125.56.1
> ipsec policy create ToRemGW_IPv4_PSK_Ikev1 type v4 ip-addr 10.125.56.0 netmask 255.255.255.
• The PS Series firmware provides no mechanism for using IPsec to protect traffic between replication partners. It is technically possible to create IPsec polices on both the primary and secondary group in which each group treats the other as an iSCSI initiator and traffic is protected accordingly. However, this configuration is not supported, and Dell recommends against implementing it in a production environment.
Table 20. Supported RDNs

OID | RDN | Meaning
2.5.4.6 | C | Country
2.5.4.7 | L | Locality Name
2.5.4.5 | serialNumber | Serial Number
2.5.4.9 | STREET | Street Address
2.5.4.8 | ST | State or Province
2.5.4.10 | O | Organization
2.5.4.11 | OU | Organizational Unit
2.5.4.3 | CN | Common Name
 | DC | Domain Component
RSA.1.9.1 | MAILTO | PKCS9 Email Address
RSA.1.9.1 | emailAddress | PKCS9 Email Address
RSA.1.9.2 | unstructuredName | PKCS9 Unstructured Name
2.5.4.4 | SN | Surname
2.5.4.12 | title | Title
2.5.4.
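These RDNs appear in certificate subjects in the usual way. For example, a certificate request whose subject uses several of them could be generated with OpenSSL; all values shown are placeholders:

openssl req -new -key array.key -out array.csr -subj "/C=US/ST=New Hampshire/L=Nashua/O=Example/OU=Storage/CN=group1.example.com"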
Algorithm Type | Supported Algorithms
(previous type, continued) | AES-CBC-256
IKE (Diffie-Hellman) Key Exchange | 2 (if legacy support is not disabled), 14, 24
IPsec Integrity | HMAC-SHA1-96, HMAC-SHA2-224, HMAC-SHA2-256, HMAC-SHA2-384, HMAC-SHA2-512
IPsec Encryption | NULL, 3DES-CBC, AES-CBC, AES-CBC-192, AES-CBC-256

NOTE: IKE (Diffie-Hellman) Key Exchange Group 2 algorithm is supported only if legacy support is not disabled.
Protect Communication Between Group Members To enable IPsec security for communication between group members, use the ipsec enable CLI command. No further configuration actions are required. Protect iSCSI Initiator Connections IP traffic between the group and iSCSI initiators is not automatically protected after IPsec has been enabled. Configure an IPsec configuration as follows: NOTE: See the Dell EqualLogic Group Manager CLI Reference Guide for command syntax and examples of the CLI commands. 1.
About Dedicated Management Networks For increased security, or if your environment requires the separation of management traffic and iSCSI traffic, you can configure a dedicated management network (DMN) that is used only for administrative access to the group. The management network is separate from the network that handles iSCSI traffic to the group.
• Obtain an IP address and default gateway information for the management network address. This address is the one to which administrators can connect. • For each group member, obtain an IP address for the management network interface. The IP address must be on the same subnet as the management network address, and this subnet should not be the same as the one used for data I/O.
• If you are running SAN Headquarters, you must update the group IP address in the application to the dedicated management address. For more information, see the SAN Headquarters documentation. • If you are using an NTP server, Dell recommends that the NTP server be on the same subnet as the dedicated management network. Display Management Network Information 1. Click Group → Group Configuration. 2. Click the Advanced tab. 3. Click Configure management network to display details.
• Pages are overwritten by a random pattern as a background operation. • Page map entries are cleared. To enable Secure Erase: 1. Click Group → Group Configuration. 2. Click the Advanced tab. 3. In the Secure Erase panel, check the Enable secure erase data checkbox.
7 About Volume-Level Security To secure your data, you must prevent access by unauthorized iSCSI initiators. By controlling access to your iSCSI targets, you can secure access to individual volumes. Group Manager provides several ways to control access to your volumes. You can use these security measures in tandem with group-level and NAS-level security to provide the required level of security for your data.
• You can specify a CHAP user name, IP address, or iSCSI initiator name.
Different access methods are available depending on the needs of your environment: • An access policy consists of a set of extended access points. Each extended access point enables users to provide a set of access attributes describing the endpoints, such as an IQN initiator name, CHAP name, and IP addresses. After an access policy is associated with a volume, all the endpoints described by the extended access points will have access to the volume.
Study 2: Apply an existing policy to a volume Scenario: A user wants to grant access to a volume using a previously specified access policy (or policy group), without having to reenter the IP address, initiator name, and CHAP user name. Solution: If the volume has not been created yet: 1. Run the Create volume wizard to define the parameters of the new volume. Complete wizard steps 1 and 2. 2. When the Define iSCSI Access Points step is reached, select Select or define access control policies. 3.
3. To remove a node from cluster A, remove the node’s access policy from the group policy. This disassociation instantly removes the node from the group. Study 6: Determine each volume a host/cluster can access Scenario: The group administrator needs to determine which volumes a host can access. Solution: 1. Click Group → Group Configuration. 2. Click the Access Policies tab. 3. In the Access Policies panel, select either the policy group or the access policy that is assigned to the host or cluster.
NOTE: Click Access to show all access policies and access points that are currently assigned to this volume. 3. In the Activities panel, click Add basic access point. 4. Specify the authorization parameters (CHAP account name, iSCSI initiator name, or IP address) for the host that you are configuring. NOTE: Asterisk characters can be used within an IP address to indicate that any value is accepted in an octet (for example: 12.16.*.*). Group Manager displays *.*.*.
4. Perform the desired action in the corresponding subpanel: Add, Modify, or Remove an Access Policy Group: • To bind an additional access policy group to the volume, click Add to open the Add Access Policy dialog box. You can then select additional groups that you want to bind to this volume. • To make changes to the access policies within an access policy group, select a group policy and click Modify to open the Edit Access Policy Group dialog box.
Create an Access Policy Group Access policy groups combine individual access policies together so that they can be managed as a single entity. 1. Click Group → Group Configuration. 2. Click the Access Policies tab. 3. In the Access Policies panel, locate the Access Policy Groups section and click New to open the New Access Policy Group dialog box. 4. Specify a policy name for the new group and (optionally) a description. 5. In the Access Policies section, click Add.
5. Select the checkbox next to each group policy name that you want to associate with the selected volume and click OK. Manage Access Controls for VDS/VSS Access To allow VDS and VSS access to the group, you must create at least one VDS/VSS access control policy that matches the access control credentials you configure on the computer by using Remote Setup Wizard or Auto-Snapshot Manager/Microsoft Edition.
Using a RADIUS server to manage CHAP accounts is helpful if you are managing a large number of accounts. However, computer access to targets depends on the availability of the RADIUS server. NOTE: If you use CHAP for initiator authentication, you can also use target authentication for mutual authentication, which provides additional security. Display Local CHAP Accounts To display local CHAP accounts: 1. Click Group → Group Configuration. 2. Click the iSCSI tab.
• Enable target authentication (for mutual authentication) Modify a Local CHAP Account To modify a local CHAP account: 1. Click Group → Group Configuration. 2. Click the iSCSI tab. 3. In the Local CHAP Account panel, select the account name that you want to edit and click Modify. The Modify CHAP Account dialog box opens. 4. Change the name or password or enable or disable the account, as needed. 5. Click OK. Delete a Local CHAP Account To delete a local CHAP account: 1.
2. Click Group → Group Configuration. 3. Click the iSCSI tab. 4. In the iSCSI Authentication panel, select Enable RADIUS authentication for iSCSI initiators. 5. (Optional) Select Enable local authentication and check local first. 6. Click RADIUS settings to configure the group to use a RADIUS server (if you have not already done so). 7. Add at least one RADIUS server by clicking the RADIUS settings button and adding the IP address of the RADIUS authentication server. 8.
Prerequisites for Configuring an iSNS Server The following considerations might apply when configuring an iSNS server: • By default, the group disables target discovery by iSNS servers. If you want iSNS servers to discover a target, you must enable this functionality on the target. Set up the iSNS server and configure the iSCSI initiator to use the iSNS server for discovery. See your iSNS server and iSCSI initiator documentation for details. • The iSNS server must be accessible to all the group members.
5. Click OK. 6. Click Save all changes. Delete an iSNS server To delete the IP address for an iSNS server to remove the server from the configuration: 1. Click Group → Group Configuration. 2. Click the iSCSI tab. 3. In the iSCSI Discovery panel, select the server’s IP address. 4. Click Delete. 5. When prompted to confirm the decision, click Yes.
About Multihost Access to Targets In a shared storage environment, you must control computer access to iSCSI targets (volumes and snapshots), because multiple computers writing to a target in an uncoordinated manner will result in volume corruption. When an initiator tries to log in to a target, the group uses access control policies to determine if access should be authorized.
About Snapshot Access Controls Online snapshots are seen on the network as iSCSI targets. It is important to protect your snapshots from unauthorized and uncoordinated access by iSCSI initiators. NOTE: When a snapshot is online and accessible, a user or application can change the contents of the snapshot. If the content changes, the snapshot no longer represents a point-in-time copy of a volume and has limited use for data recovery.
Windows clients cannot change any file or directory permissions. Read, write, and execute access is controlled by the UNIX permissions for Windows files and directories, which you set in Group Manager. • NTFS — Controls file access by Windows permissions in all protocols. A client can change the permission and ownership by using the Windows Security tab. This security style is the default style.
8 PS Series Group Operations You can perform basic and advanced operations on the PS Series SAN. Table 23.
Before modifying the group name or group IP address, make sure you understand how these changes will affect your environment: • You identify replication partners by group name and use the group IP address to perform replication. If you modify the group name or IP address, make sure replication partner administrators make the change to their partner configuration. Replication fails if the partner information is incorrect.
• Enable the management port, if needed (see Configure a Management Network) • Select the RAID policy (see Set the RAID Policy and Pool for a New Member) After you complete these tasks, you can configure the new member either through the CLI or through the GUI. To configure the member through the GUI: 1. Click Group, expand Members, and select the unconfigured member. 2. In the Warning dialog box, click Yes or No.
Convert a RAID Policy You configure a RAID policy when you add a member to a group. In most cases, you can convert the RAID policy at a later time. Prerequisites • Make sure that you can change the current RAID policy to a different one, and that you understand the conversion options. • To convert to a no-spare-drives RAID policy, you must use the Group Manager CLI. • While the RAID policy is changing, the member’s RAID status is expanding.
RAID Sets The tables below show a logical drive layout when an array is initialized for the first time. The actual physical layout of drives can change and evolve due to maintenance and administrative actions. Spare drives can move as they are used to replace failed drives and newly added drives become the spares. NOTE: It is not possible to determine which physical drives are associated with each RAID set. This information is dynamic and maintained by the Dell EqualLogic firmware. Table 25.
Table 29. 42–Drive Configuration shows the RAID set relationship for each RAID type in a 42-drive configuration.

Table 29. 42–Drive Configuration

RAID Policy | Spare Disks | RAID Set Relationship | Best Practices
RAID 6 | 2 | (12+2) (12+2) (10+2) | Yes
RAID 10 | 2 | (7+7) (7+7) (5+5) | Yes
RAID 50 | 2 | (12+2) (12+2) (10+2) | Not for business-critical data
RAID 5 | 2 | (12+2) (12+2) (10+2) | Not for business-critical data

Table 30.
• To enable a volume RAID preference, make sure that at least one member in the volume’s pool has the preferred RAID level. If no pool member has the preferred RAID level, the group ignores the RAID preference until a member exists with the preferred RAID level. • If you disable a RAID preference on a volume, the group resumes automatic performance load balancing. Procedure To enable or disable a volume RAID preference: 1. Click Volumes. 2. Expand Volumes and select the volume name. 3.
4. b. Expand Members and select the member that you want to shut down. c. Click the Maintenance tab. d. In the Power panel, click the Shut down button. e. In the Member shutdown dialog box, enter the grpadmin password and click OK. (Only group administrators can shut down members.) f. The system displays a warning message that the member is shutting down. Click OK to acknowledge the warning. To turn off array power, turn off all power switches on the array after the shutdown completes.
To cancel an in-progress member pool move operation: 1. Click Group. 2. Expand Members and then select the member name. 3. Click Cancel member move. Change a Storage Pool Name or Description You can change the name or description of any storage pool, including the default pool. 1. Click Group. 2. Expand Storage Pools and then select the pool name. 3. Click Modify pool settings. 4. Modify the pool name or description. 5. • Name can be up to 63 bytes and is case-insensitive.
You can change the default values to meet the needs of your environment. Modify Groupwide Volume Settings When you create or enable thin provisioning on a volume, the group applies defaults unless you explicitly override them for a volume. These defaults control snapshot space, snapshot behavior, thin-provisioning space, sector size, and iSCSI alias naming. You can modify the groupwide default values to meet the needs of your configuration.
The borrowing capabilities added for v8.0 and later can help in the following situations: • If a volume’s snapshot reserve or replica reserve is set too high or too low: – Too high – The system considers the unused snapshot and replica reserve as “borrowable” space, which is then available to other volumes whose reserves were set too low. This flexibility accommodates volumes whose IO patterns caused the volume reserves to be exceeded and better utilizes the available space.
You can also see these statistics through several CLI commands. Refer to the Dell EqualLogic Group Manager CLI Reference Guide for more information. Using Space-Borrowing Information Using information displayed in the GUI, you can see how space borrowing is affecting your system and answer the following questions: 1. Have I set my reserves too high or too low? 2. Am I benefiting from the borrowing feature? 3. Are any objects at risk of being deleted because they are borrowing space? 4.
Compression Prerequisites The compression feature is available only on PS6210 and PS6610 arrays running firmware version 8.0 or later. Compression is disabled by default on all arrays until it is manually activated either through the Group Manager interface or the command-line interface (CLI). NOTE: The array monitors the storage environment for eligible snapshots. After starting compression, it might be several hours before any compression activity is displayed.
Member Compression States Table 32. Member Compression States describes the possible member compression states.

Table 32. Member Compression States

Member State | Description
No-Capable-Hardware | Compression cannot be enabled on this member due to its hardware capabilities.
Not Started | Compression has never been successfully enabled on this member.
Running | Compression is currently running on this member.
Suspended | Compression has previously been enabled and is now paused.
Resume Compression To resume compression on a member after it has been suspended: 1. Click Group. 2. Expand Members and select the group member on which compression is to be resumed. 3. In the Activities panel, click Resume compression. The General Member Information panel shows Compression ... running under the General settings section, and the Activities panel again contains the Suspend compression link.
See Compression Statistics by Volume. Compression Commands in the CLI Snapshot and replica compression can be enabled using the command-line interface (CLI) on any PS6210 or PS6610 array running firmware version 8.0 or later. Table 33.
9 About Volumes
Volumes provide the storage allocation structure within the PS Series group. To access storage in a PS Series group, you allocate portions of a storage pool to volumes. You can create a volume on a single group member or one that spans multiple group members. You assign each volume a name, a size, and a storage pool. The group automatically load balances volume data across pool members.
Create a Volume
To create a new volume:
1. Click Volumes → Volumes.
2.
• You cannot set a template volume to read-write permission. To modify a volume permission: 1. Click Volumes. 2. Expand Volumes and then select the volume name. 3. In the Activities panel, click Set access type. 4. Change the permission in the Set access type dialog box. 5. (Optional) Select the Allow simultaneous connections from initiators with different IQNs checkbox. 6. Click OK. Modify a Volume Alias An alias can help administrators identify a volume.
• A simple tag has no values. For example, the tag “Backup” indicates that the volume is used as a backup volume without going into further detail.
• An extended tag uses a main tag and a value. For example, the tag “Applications” with the value “Sharepoint” indicates that the volume is used by that application.
Within Group Manager, only group administrators are allowed to create, rename, and delete tags.
To associate tags for a specific volume, you can use the Volumes panel, the Activities panel, or the General Volume Information panel.
From the Volumes panel:
1. Click Volumes → Volumes.
2. Select a volume from the list displayed in the Volumes panel.
3. In the Activities panel, click Modify tags to open the Pick Tags for Volume dialog box.
From the Activities panel:
1. Click Volumes.
2. Expand Volumes and select a volume from the tree view.
3.
Delete a Volume When you delete a volume, space that the group allocated to the volume becomes part of free pool space. The following requirements and considerations apply: • If you delete a volume, the group also deletes its snapshots. However, the group does not delete any volume replicas on the secondary group. • The volume must be set offline to perform the delete operation. The group closes any active iSCSI connections to the volume.
The volume collection is displayed under Volume Collections.
Modify a Volume Collection
To modify a volume collection:
1. Click Volumes.
2. Expand Volume Collections and then select the collection.
3. Click Modify volume collection to open the Modify Volume Collection dialog box.
4. Click the General tab to change the collection name or description.
5. Modify the name or description. The name can be up to 63 bytes and is case-insensitive; the description can be up to 127 characters.
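Because these limits are counted in bytes while you type characters, a multibyte Unicode name reaches the limit with far fewer characters than an ASCII one. The following minimal Python sketch (the helper is our own illustration, not part of Group Manager) checks a value against the byte limits described above:

def fits_limit(value, max_bytes):
    # True if the value fits within max_bytes when encoded as UTF-8
    return len(value.encode("utf-8")) <= max_bytes

print(fits_limit("backup-volumes", 63))   # True: 14 ASCII characters = 14 bytes
print(fits_limit("サーバー" * 6, 63))     # False: 24 characters but 72 bytes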
• Deleting a volume folder does not delete the volumes it contains. • A volume folder cannot contain a volume collection. • A volume folder can contain standard, thin-provisioned, template, thin clone, or synchronous replication (SyncRep) volumes. • A volume folder cannot contain failback replica sets, or promoted or cloned inbound replica sets. • When the last volume is removed from a folder, the folder is not deleted.
5. (Optional) You can also change the description of the folder in the Description field. The description can be up to 127 characters. Fewer characters are accepted for this field if you type the value as a Unicode character string, which takes up a variable number of bytes, depending on the specific character. 6. Click OK. The folder displays its new name. Add and Remove Volumes from Folders You can move volumes into, out of, or between folders. Your account must have group administrator privileges.
2. Expand Volumes and then select the name of the volume folder that you want to delete. 3. In the Activities panel, click Delete folder to open the dialog box. 4. Click Yes to delete the folder. If the deleted folder contained volumes, the volumes are displayed in the Volumes list in the tree view. About Restoring Deleted Volumes Volume undelete provides an administrator with the ability to restore volumes that might have been deleted by mistake. This feature is enabled by default.
• Deleted volumes remain in the recovery bin for up to 1 week after deletion. If a deleted volume has not been restored after 1 week, it will be purged after the date and time shown in the Volume Recovery Bin dialog box. • When a volume has been deleted, its information appears slightly differently in the CLI than in the GUI. Whereas the GUI shows the original name of the volume even after it has been deleted, the CLI shows a modified name for the volume when you list the contents of the recovery bin.
2. Click Volumes in the Volumes panel (not an individual volume name). 3. In the Activities panel, click Manage recovery bin. The Volume Recovery Bin dialog box opens. 4. Select the volume name that you want to permanently delete in the recovery bin. 5. Click Purge. To purge all volumes in the recovery bin at the same time, click Purge All. 6. When prompted to confirm the decision, click Yes to continue with the purge or No to cancel. NOTE: When a volume is purged, all of its data is lost.
Increase the Reported Size of a Volume You can increase the reported size of the volume while the volume remains online. The following considerations apply to increasing the size of a volume: • If the size you specify is not a multiple of 15MB, the group rounds up the value to the nearest multiple of 15MB. • If you do not specify a unit for the size, the unit defaults to MB. • If you configured the volume for replication, the wizard shows the delegated space on the replication partner.
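As a hedged illustration of the rounding rule in the first bullet, the following Python sketch (the function name is ours) rounds a requested size up to the next multiple of 15MB:

def rounded_volume_size_mb(requested_mb):
    # Ceiling division: round up to the nearest multiple of 15MB
    return -(-requested_mb // 15) * 15

print(rounded_volume_size_mb(100))   # 105, because 100 is not a multiple of 15
print(rounded_volume_size_mb(120))   # 120, already a multiple of 15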
• Windows 8/Windows Server 2012
Running Defrag Tools
Run defragmentation tools (such as fstrim, Windows manual defrag, or esxtool) during periods of low I/O activity, because these operations might result in large numbers of unmap operations and reduce array performance.
Unmapping Replicated Volumes
You should not run operations that result in SCSI unmap operations (for example, format or defrag) on volumes on which replication (including synchronous replication) is enabled.
To check the current setting of unmapping, issue the following command:
fsutil behavior query disabledeletenotify
To disable delete notifications (and therefore unmap operations), set the value to 1:
fsutil behavior set disabledeletenotify 1
NOTE: The disabledeletenotify setting is a global operating system setting that not only disables unmap operations from being sent to the PS Series storage arrays, but also disables TRIM to SSDs. For more information about the fsutil utility, see: technet.microsoft.com/en-us/library/cc785435(v=ws.10).
Status: offline (snap reserve met)
Description: Volume or snapshot was automatically set offline due to the selected snapshot recovery policy.
Solution: Increase the amount of reserved snapshot space.

Status: offline (thin max grow met)
Description: A thin-provisioned volume and its snapshots were automatically set offline because a write exceeded the maximum in-use space value.
Solution: Increase the value of the maximum in-use space setting or increase the volume’s reported size.
About Managing Storage Capacity Utilization On Demand (Thin Provisioning) You can use thin-provisioning technology to more efficiently allocate storage space, while still meeting application and user storage needs. With a thin-provisioned volume, the group allocates space based on volume usage, enabling you to “over-provision” group storage space (provision more space than what is physically available).
When enabling thin provisioning on an existing volume, the following considerations apply: • Enabling thin provisioning on a volume usually decreases the amount of space that the group allocates to the volume (called the volume reserve). • Enabling thin provisioning changes the amount of allocated snapshot space and replication space, because the group allocates snapshot space and replication space based on a percentage of the volume reserve.
– If the maximum in-use space value is 100 percent, and an initiator write exceeds this limit, the volume is not set offline; however, the write fails, and the group generates event messages. If you increase the reported size of the volume, writes succeed. This behavior is the same as when in-use space for a volume that is not thin-provisioned reaches its reported size.
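The following Python sketch models this behavior as described here and in the volume status table earlier in this chapter (a maximum in-use setting below 100 percent sets the volume offline with the thin max grow met status; at 100 percent, the write fails but the volume stays online). The function and its outcome strings are illustrative only, not a Dell API:

def write_outcome(in_use_mb, write_mb, reported_mb, max_in_use_pct):
    limit_mb = reported_mb * max_in_use_pct / 100
    if in_use_mb + write_mb <= limit_mb:
        return "write succeeds"
    if max_in_use_pct < 100:
        return "volume set offline (thin max grow met)"
    return "write fails; volume stays online; increase the reported size"

print(write_outcome(90000, 8000, 100000, 95))   # volume set offline (thin max grow met)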
The Pool Space table values change, based on the new volume setting. If the volume change exceeds pool capacity, the free space field displays a negative value. 6. Click OK. About Improving Pool Space Utilization (Template Volumes and Thin Clones) Some computing environments use multiple volumes that contain a large amount of common data. For example, some environments clone a standard volume and create multiple “boot volumes” that administrators use to boot different client computers.
• Volume reserve decreases to the amount of in-use space (or the minimum volume reserve, whichever is greater), and free volume reserve becomes unreserved space. • Snapshot reserve is adjusted, based on the new volume reserve. If necessary to preserve existing snapshots, the snapshot reserve percentage is increased. When you create a thin clone volume, it has the same reported size and contents as the template volume.
Attribute or Operation: Pool move
Restriction: Thin clones inherit the pool setting of the template volume. If you move the template volume to a different pool, the thin clones also move.

Attribute or Operation: Replication
Restriction: You can replicate a template volume only one time. You cannot replicate a thin clone until you replicate the template volume to which the thin clone is attached.
– Snapshot reserve setting – Thin-provisioning setting • Access control • Multihost access To create a thin clone from a template volume: 1. Click Volumes. 2. Expand Volumes and then select the template volume. 3. Click Create thin clone to open the Create Thin Clone – Volume Settings dialog box. 4. Type a unique name and, optionally, a description. 5. Click Next to open the Create Thin Clone – Space dialog box. 6. Change the snapshot reserve setting and the thin-provisioning settings.
3. Click Convert to volume. 4. Confirm that you want to convert the template volume to a standard volume. About Data Center Bridging Data center bridging (DCB) is a set of extensions to IEEE Ethernet standards, intended to improve the performance, controllability, and reliability of the Ethernet fabric. DCB can make Ethernet the unified fabric for different types and priorities of data traffic in the data center.
in the EqualLogic SAN. You can download the paper from: http://en.community.dell.com/techcenter/storage/w/wiki/ 4355.configuring-dcb-with-equallogic-sans.aspx Set the Data Center Bridging VLAN ID To set the DCB VLAN ID for the first time: 1. Click Group → Group Configuration. 2. Click the Advanced tab. 3. Enter the DCB VLAN ID in the VLAN ID field; it must be a value from 0 to 4095. This value must be the same as the VLAN ID configured in the switch for use by the iSCSI SAN.
10 VMware Group Access Panel
The VMware Group Access panel allows you to view VMware virtual volume (VVol) settings, if VVols are configured for your group. To open the VMware Group Access panel, click the VMware tab in the selection pane on the lower-left side of the Group Manager window.
NOTE: If the VMware tab is not displayed in the selection pane, move the View Drag handle (the two arrows above the list of tabs) up to expand the list. Or, click the VMware icon.
About Protocol Endpoints
A protocol endpoint is the iSCSI target used for VVol storage, and the means by which VVol storage containers are accessed. To perform VVol user operations from within vCenter, you must first establish protocol endpoint access rules.
NOTE: Configuring a protocol endpoint requires establishing one or more access policies. Dell requires the use of Virtual Storage Manager (VSM) to establish Protocol Endpoint (PE) access rules.
To view all access policies defined for a group: 1.
Storage Container Space Limits Storage container space is managed by the storage administrator. Storage administrators must allow access to enough physical storage to meet the needs of a storage container of a given logical size under a reasonable range of operating circumstances, without preallocating too much physical storage to the container. A storage container will have access to a finite amount of physical storage, which is defined as the container’s physical size.
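As a rough illustration of the distinction between logical and physical size, the following Python sketch (names and figures are ours) checks whether new VVol data still fits within a container’s physical size, even though the logical size may be much larger:

def container_can_store(physical_used_gb, new_data_gb, physical_size_gb):
    # True if the new data still fits within the container's physical size
    return physical_used_gb + new_data_gb <= physical_size_gb

# A container with a 10TB logical size backed by 2TB of physical storage:
print(container_can_store(1500, 400, 2048))   # True
print(container_can_store(1900, 400, 2048))   # False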
4. In the Activities panel, click Modify storage container. The Modify Storage Container dialog box opens. 5. On the General settings tab: a. Modify the name for the container and/or the description. b. Select a different storage pool for the container (if more than one pool is available). 6. On the Physical space tab: a. Type a value in the Container physical size field. b. Change the unit of measurement for the size (MB, GB, TB) if necessary. 7. Click OK to modify the storage container.
NOTE: The Dell Virtual Storage Manager (VSM) runs an event process that enables Group Manager to show the correlation between the virtual machine and the VVol. In some cases, the vCenter virtual machine-related event might be incorrectly propagated to Group Manager. To manually trigger a verification between the VSM and Group Manager, use the VSM system schedule command Verify Snapshots and Replicas Run Now. For details, see the Dell Virtual Storage Manager Installation and User’s Guide.
11 NAS Operations Network-attached storage (NAS) provides high-performance, high-availability, scalable resources with on-demand provisioning in a unified storage environment. You can perform basic and advanced operations on NAS storage, as shown in Table 39. Basic and Advanced NAS Operations. Table 39.
12 NAS Cluster Operations
Table 40. Basic and Advanced NAS Cluster Operations provides a list of basic and advanced NAS cluster operations.
Table 40. Basic and Advanced NAS Cluster Operations
Basic: Configure a NAS cluster; Modify NAS cluster settings; Add, modify, or delete a local group for a NAS cluster
Advanced: Diagnose problems in a NAS cluster; Delete a NAS cluster
NAS Cluster Configuration
A NAS cluster is a collection of NAS appliances configured in a PS Series group.
Table 41. NAS Cluster Expansion
Existing NAS Cluster: Two model FS7500 NAS controllers
Expansion NAS Controllers: Two FS7500, or one FS7600 (creates a mixed cluster)

Existing NAS Cluster: One model FS7600 NAS appliance
Expansion NAS Controllers: One FS7600, or two FS7500 (creates a mixed cluster)

Existing NAS Cluster: One model FS7610 NAS appliance
Expansion NAS Controllers: One FS7610

Configure a NAS Cluster
You must add two NAS controllers to a NAS cluster as a NAS controller pair; you cannot add just one NAS controller.
• SAN network
• NAS cluster management IP address
• NAS controller IP addresses
• Password for grpadmin account
NOTE:
• If you are using DNS, you must manually add the NAS cluster IP address and NAS cluster name to the DNS server.
• If you are in a routed client network and using multiple NAS cluster IP addresses, add all NAS cluster IP addresses to the DNS server and associate them with the same NAS cluster name.
NAS Cluster Configuration
1.
• NAS controllers are correctly connected to the appropriate switch stack (that is, internal, SAN, and IPMI ports are connected to one switch stack, and client ports are connected to the client switch stack).
• No duplicate IP addresses are in the network configuration. The IP addresses used in the NAS configuration and the group configuration must be unique in the network. If an IP address used in the NAS cluster is also used elsewhere, change the IP address where it is used outside of the NAS cluster.
• If the SAN network failed, the Modify SAN network page opens. • If the client network failed, the Modify client network page opens. You can change any of the settings that were not already stored in the system on these pages. Click OK to accept your changes. Click Retry. If the network configuration fails again, contact your customer support representative. Validation Failure Description: You see a system validation error or similar error message.
• Configure the NAS cluster to use DNS (Domain Name Service), which is a networking service that translates Internet domain names into IP addresses. If you want to use DNS, manually add the NAS cluster IP address and NAS cluster name to the DNS server. If you are in a routed client network and using multiple NAS cluster IP addresses, add all the NAS cluster IP addresses to the DNS server and associate them with the same NAS cluster name.
• Create a snapshot. To protect NAS container data, you can create snapshots. • Create a snapshot schedule. Use a schedule to regularly create NAS container snapshots. • Configure NAS container replication. Modify a NAS Cluster Name To modify the name of a NAS cluster: 1. Click Group, expand Group Configuration, and then select the NAS cluster. 2. Click Rename NAS cluster. 3. In the Rename NAS cluster dialog box, specify the new NAS cluster name.
1. Click Group, expand Group Configuration, and then select the NAS cluster.
2. Click the Local Users and Groups tab.
3. In the Local Groups panel, click Add.
4. In the Add local group dialog box, specify the group name.
NOTE: The group name accepts up to 20 ASCII characters, including letters, numbers, underscores, hyphens, and periods. The first character must be a letter or a number.
5. Click OK.
NOTE: You cannot modify a local group.
3. In the Local Users panel, select the user and click Delete. NOTE: You cannot delete the built-in local administrator account (Administrator). 4. Confirm that you want to delete the local user. Map Users for a NAS Cluster NOTE: To map users, you must have Active Directory and either LDAP or NIS configured in the NAS cluster. To define a mapping between a Windows user and a UNIX user: 1. Click Group, expand Group Configuration, and then select the NAS cluster. 2. Click the Authentication tab. 3.
You can specify an NTP server for the group and a DNS server for the NAS cluster when configuring Active Directory, or you can perform these tasks separately. NOTE: Configuring Active Directory interrupts client access to SMB shares. To configure Active Directory: 1. Click Group, expand Group Configuration, and then select the NAS cluster. 2. Click the Authentication tab. 3. In the Active Directory panel, click Configure Active Directory.
3. In the Active Directory panel, click Leave. Domain users will be prevented from access if you leave the Active Directory domain. The status changes to not configured. NOTE: You cannot delete the Active Directory configuration. Configure or Modify NIS or LDAP for a NAS Cluster To authenticate UNIX clients, you can use NIS or LDAP for external authentication. NOTE: Configuring NIS or LDAP interrupts client access to SMB shares.
4. Click OK to apply the changes.
5. In the Activities panel, click Modify client properties to open the dialog box.
6. Modify the following properties as needed:
• Default gateway
• MTU size
If you change the MTU size, clients are disconnected. The client usually reconnects automatically.
NOTE: Modify the MTU byte size only if directed by Dell support. For normal NAS cluster operation, a value of 1500 is required.
• Bonding mode (ALB or LACP)
7. Click OK to apply the changes.
The SAN network configuration for a NAS cluster includes the following IP addresses: • Management IP address, which allows access between the PS Series group and the NAS cluster. The management IP address must be on the same subnet as the group IP address. • IP addresses for each NAS controller, which allows access between the PS Series group and the NAS controllers. The NAS controller IP addresses must be on the same subnet as the group IP address.
While the NAS cluster is in maintenance mode, all client connections to the NAS controllers are stopped and no new connections can be made. Put the NAS cluster into maintenance mode when you: • Move the NAS hardware to a different location. • Change the PS Series group IP address because of: – Network changes in your environment – A new configuration, such as adding a management network to the group • Perform maintenance or infrastructure work.
3. After the hardware restarts, start the NAS cluster. About Deleting a NAS Cluster If you no longer need to provide NAS operations, you can delete the NAS cluster from the PS Series group. When you delete a NAS cluster: • All service data and all client data that is stored in the NAS reserve is destroyed. • The NAS reserve space is added back to the free pool space. • The NAS controllers are reset to the factory defaults. • The NAS controllers reboot.
13 NAS Controller Operations
Table 43. Basic and Advanced NAS Controller Operations provides a list of basic and advanced NAS controller operations.
Table 43. Basic and Advanced NAS Controller Operations
Basic: Add or replace NAS controllers; Update NAS controller firmware
Advanced: Shut down a NAS controller pair
Add Additional NAS Controllers
After you add NAS controllers using the wizard, you can improve the performance and availability of your network by adding up to two NAS appliances to the NAS cluster.
7. For each client network, verify the following settings for the NAS cluster, and click Auto fill:
• VLAN tagging
• IP address
• Netmask
• Default gateway
8. Click Auto fill to populate the table with NAS controller IP addresses, or type the addresses.
NOTE:
• The Auto Fill option bases new addresses on the first NAS cluster IP address. This approach results in duplicate addresses if any of the new addresses are already used on the network.
NOTE: Detach a NAS controller only when directed by your customer support representative. In some cases, your support representative might instruct you to cleanly shut down a NAS controller before detaching it. After cleanly shutting down the NAS controller, you can turn on power, wait for the NAS cluster to recognize the NAS controller, and then detach the NAS controller. You cannot detach a NAS controller if its peer NAS controller is already detached.
4. In the Attach NAS controller dialog box, select the NAS controller that you want to attach to the NAS cluster. You can identify a NAS controller by its service tag. NOTE: If the NAS controller is not listed, verify the physical state and network connections for the controller and then click Rediscover to refresh the list of controllers in the dialog box. 5. Click the Attach NAS controller button. A progress window opens, showing the progress of the attach NAS controller operation.
Upload the service pack by opening a URL using Windows Explorer (not Internet Explorer) or any other FTP client utility. For example: ftp://grpadmin@nas_cluster_management_ip_address:44421/service_pack Do not alter the service pack file name in any way. 4. When prompted, type the password for the grpadmin account. 5.
14 NAS Container Operations Table 44. Basic and Advanced NAS Container Operations provides a list of basic and advanced NAS container operations. Table 44.
Modify NAS Clusterwide Default NAS Container Settings To modify or display the NAS clusterwide default NAS container space settings: 1. Click Group, expand Group Configuration, and then select the NAS cluster. 2. Click the Defaults tab. 3. In the Default NAS Container Settings panel, modify the settings as needed. Modify NAS Clusterwide Default NAS Container Permissions To modify or display the NAS clusterwide default NAS container permission settings: 1.
2. Click Modify Settings. The Modify Settings dialog box opens. 3. Click the Space tab. 4. In the Size field, enter the new size for the NAS container. 5. Click OK. Modify the Snapshot Reserve and Warning Limit for a NAS Container NOTE: Select the Enable data reduction checkbox to activate the Modify policy button. Enabling data reduction permanently removes the snapshot reserve functionality from the NAS container.
NOTE: Deleting a NAS container deletes all the snapshots of the NAS container and all the SMB shares and NFS exports in the NAS container. After they are deleted, NAS containers cannot be recovered using the volume recovery bin. When you delete a NAS container, its replica is not deleted; the replica is promoted to a container on the destination cluster. To delete a NAS container: 1. Click NAS, expand NAS Clusters and Local Containers, and then select the NAS container name. 2.
1. Ensure that the NFS export has read-write permissions. Also, make sure that the trusted user setting is All. 2. For security, type the IP address of the export client in the Limit access to IP address field. This action ensures that only the client’s root user can access the export. 3. From a Linux or UNIX client, enter the showmount command to display the NFS exports that are hosted by the NAS cluster. For example: showmount -e nas_vip 4.
3. Select the NFS export in the NFS Exports panel and then click Modify NFS Export. The Modify NFS Export dialog box opens. 4. In the dialog box, click the Permissions tab. 5. Specify whether to allow access to all clients, if they meet other access control requirements, or to a specific IP address or subnet. You can use asterisks in the IP address. 6. Click OK. Modify the Permission for an NFS Export To modify the permission (read-write or read-only) for an NFS export: 1.
6. Click OK. Modify an NFS Export NOTE: To edit an NFS export, you must have group administrator (grpadmin) privileges. To edit the properties of an NFS export: 1. Click NAS, expand NAS Clusters and Local Containers, and then select the NAS container name. 2. Click the NFS Exports tab. 3. Select the NFS export in the NFS Exports panel and then click Modify NFS export. The Modify NFS Export dialog box opens. 4.
About SMB Shares SMB shares provide an effective way to share files located on a FluidFS cluster, such as the FS76x0, by using the Server Message Block (SMB) protocol. FluidFS v4 supports SMB protocol versions 1.0, 2.0, and 3.0. The default SMB protocol version is SMB 3.0. You can set the default to an earlier version using the CLI command nas-cluster select cluster_name smb-protocol. Refer to the Dell EqualLogic Group Manager CLI Reference Guide for more information about this command.
NOTE: You can create an initial SMB share when you create a NAS container. However, you cannot configure and enable the NAS antivirus service. You must modify this initial SMB share to configure and enable the antivirus service. 1. Click NAS, expand NAS clusters and Local Containers, and then select the NAS container name. 2. Click Create SMB share to open the wizard. 3. In the General Settings page: a. Type a name for the SMB share in the Name field.
2. Click the SMB Shares tab. 3. Select the SMB share in the SMB Shares panel and click Modify SMB share. 4. In the Modify SMB Share dialog box, click the General tab. 5. Modify the directory as needed. 6. Click OK. Delete an SMB Share To delete an SMB share and delete all the user data stored in that share: 1. Click NAS, expand NAS Clusters and Local Containers, and then select the NAS container name. 2. Click the SMB Shares tab. 3.
• Enable or disable virus scanning.
• Modify file extensions.
• Select directory paths to exclude.
7. Click OK.
NOTE: The default antivirus exclude path is no longer available. Directory paths must already exist in the SMB share before they can be excluded.
To exclude directory paths from antivirus scanning:
1. Create the SMB share without the exclude option. See Create an SMB Share.
2. Go to the SMB share and create the directory paths that you want to exclude from antivirus scanning.
3.
NOTE: Only the SMB shares created in a NAS container after setting this default property will have access-based enumeration by default. SMB shares that were created before setting this property still have the properties that were set when the shares were created. Enable Access-Based Enumeration on Newly Created SMB Shares To write data to a NAS container, you must create an SMB share. 1. Click NAS, expand NAS Clusters and Local Containers, and then select the NAS container name. 2.
For example, when client jsmith connects to the FluidFS cluster, the feature will present jsmith with any available SMB shares, as well as an SMB home share labeled jsmith. NOTE: You still must create the user folders yourself and set the permissions manually or by using an automated script if the automatic home folder creation option is not enabled in the SMB home share settings.
Check marks appear in all the Allow checkboxes. This object is the share from which you will be creating a folder for each user’s home share. 5. In the Group or user names box, select Everyone, then click Remove → Apply. Replace the Owner 1. In the Properties window on the Security tab, click the Advanced button to open the window for Advanced Security Settings for home. 2. In the Advanced Security Settings window, click the Owner tab and then click the Edit button. 3.
3. Click Yes to confirm. Modify SMB Home Share Settings To modify SMB home share settings: 1. Click NAS, expand NAS Clusters, and select SMB Home Share. 2. In the Activities panel, click Modify settings. 3. In the SMB Home Share General dialog box, you can enable or disable automatic home folder creation, and enable or disable access-based enumeration. You can also modify antivirus settings if antivirus is enabled. 4. Click OK.
Check marks appear in all the Allow checkboxes.
6. Click OK.
7. Click OK in the two remaining open windows. Both windows close, returning you to the Windows Explorer main window.
8. Close the MMC console.
Create a NAS Thin Clone
During the creation of a NAS thin clone, you cannot modify its size, minimum reserve percentage, or in-use warning limit percentage. These settings can be changed later.
1. In the NAS panel, select a local container that contains a snapshot.
2. Select the snapshot.
3.
Optimal Virtual IP Assignment To optimize availability and performance, client connections are load balanced across the available NAS controllers. NAS controllers in a NAS cluster operate simultaneously. If one NAS controller fails, clients are automatically failed over to the remaining controllers. When failover occurs, some SMB clients reconnect automatically, while in other cases, an SMB application might fail and the user must restart it.
To cancel, click No. Modifying Client Network Properties Default values for gateway IP address, bonding mode, and MTU are shared among all the client networks for a NAS cluster. Depending on the bonding mode selection, a message prompts you about the change in the number of virtual IP addresses for your client networks and the need to change the virtual IP address settings for each client network to ensure that load balancing is optimal. To modify the client network properties: 1.
NAS Antivirus Server Specifications
The following requirements apply for antivirus servers:
• FluidFS version 3.0 or later must be loaded on the cluster.
• The server must be reachable over the network. Dell recommends that the server be located on the same subnet as the NAS cluster.
• The server must run certified ICAP-enabled antivirus software.
6. In the Port field, type the port number or click Use default port. 7. Click OK to confirm your changes. Delete a NAS Antivirus Server You cannot delete the last server unless you first allow any in-progress operations to complete and you disable NAS antivirus on all SMB shares. NOTE: Reducing the number of available antivirus servers might affect file-access performance. To delete a NAS antivirus server: 1. Click Group, expand Group Configuration, and select the NAS cluster. 2.
• Use only numbers, letters, underscores, and dollar signs ($) in the file types. 2. Click Group, expand Group Configuration, and select the NAS cluster. 3. Click the Advanced tab and go to the Antivirus Defaults for SMB Shares panel. 4. In either the File Extensions to Exclude or Directory Paths to Exclude subpanel, click Add to open the Add List Item dialog box. 5. Specify a file type such as xls or ppt. Do not include the period (.) that separates a file type from the file name. 6.
Monitor the NAS Antivirus Service
If you have configured NAS antivirus, you can monitor which SMB shares are using the service.
1. Click NAS, expand NAS Clusters and Local Containers, and then select the NAS container name.
2. Click the SMB Shares tab.
3. In the Virus scanning column, determine which shares have virus scanning enabled or disabled.
NOTE: Wildcard characters (*) and question marks (?) are not supported in antivirus exclude paths. – Click Delete and then click Yes to confirm. 4. Click OK. 5. Click the Save all changes icon. Exclude Directory Paths for an SMB Share You can exclude directory paths for a specific SMB share when you create an SMB share or when you subsequently modify the NAS antivirus settings for an SMB share. 1. Click NAS, expand NAS Clusters and Local Containers, and then select the NAS container name. 2.
• Select an extension, click Modify, change the extension, and click OK. • Select an extension, click Delete, and click Yes to confirm. 6. Repeat to add, modify, or delete additional file types. 7. Click OK. Antivirus Policy Depending on the antivirus policy, a file could be deleted immediately or made inaccessible to users and programs. If the antivirus server or NAS cluster's default setting causes file deletion, you can only recover a previous (uninfected) file.
5. Select the user type. 6. In the User field, you can enter a user name (or the beginning of a user name) and click the Search button. 7. Select the user and click Next. 8. In the Create quota – Quota settings dialog box, specify the following configuration settings: • Quota size and units (MB, GB, or TB) • In-use space warning limit, as a percentage of the quota size NOTE: Specifying zero for a quota size and warning limit disables the quota. 9. Click Next. 10.
3. In the Quotas panel, select defuser and click Modify. The Modify Quota dialog box opens. 4. In the dialog box, specify the following configuration settings: 5. • Quota size and units (MB, GB, or TB) • In-use space warning limit, as a percentage of the quota size Click OK. Delete a NAS Container Quota You cannot delete the default group quota or the default user quota. To disable a default quota, set the quota size and the warning limit to zero. To delete a NAS container quota: 1.
NOTE: Dell does not recommend using a combination of local and external authentication where replication and quotas are applied. External Authentication External authentication is managed on a server whenever a user logs in to a container in the same group as the server. Using external authentication, a user can log in to different containers in the group using the same user name and password. This authentication is performed with Active Directory, LDAP, or NIS, for example.
Term: Available Space
Description: Storage space that is physically available for the NAS container. The available space for a NAS container is the amount of unused NAS container space (reserved and unreserved), provided that the NAS reserve has free space.

Term: Oversubscribed Space
Description: A portion of a thin-provisioned NAS container that is not available and not in use by the NAS container.
NOTE: • The NAS clusterwide default values are applied when new containers are created. • If you select the Container thin provisioning default checkbox, ensure that you click the Save all changes icon afterward. If you do not, updates from the array will clear your selection. Thin-Provisioned Container Attributes Thin-provisioned containers have the following characteristics: • Minimum reserve is 0% to 99%. The default is 0%. • Maximum size for thin-provisioned container is 500TB.
The data reduction settings are shown at the bottom of the status information panel. NAS Container Data Reduction Data reduction is a process that runs according to a schedule on each NAS container that has data reduction enabled. A policy that you define determines whether or not a file qualifies for data reduction, on the basis of access and modification times of that file.
Data Reduction Methods Data reduction is supported on a per-NAS-volume basis to store data more efficiently. The Dell FluidFS cluster supports two types of data reduction: • Deduplication — Performed on qualified data when data reduction is enabled on a container. You cannot disable deduplication when data reduction is enabled. Deduplication (or dedupe) provides data reduction by eliminating redundant copies of data across files in a volume by keeping only one copy of unique deduplicated data.
3. Click the Data Reduction tab. 4. Select the Enable data reduction checkbox. A confirmation message is displayed. 5. Click Yes to confirm enabling data reduction. 6. Review the data reduction policy shown on the dialog box. 7. (Optional) If you want to modify the policy settings for the container, click the Modify policy button to open the dialog box. • Compression By default, compression is disabled and only deduplication is enabled.
NOTE: The status of the filter is listed as File filters…disabled if the ignore-filters option was set through the CLI. If the status is File filters…disabled, any filters that have been configured through the GUI (or CLI) for Access Time or Modify Time have been disabled and all files are candidates for data reduction. Specifying the ignore-filters flag enables data reduction on a container with archive data without waiting for the minimum Access Time/Modify Time data-reduction policy.
• This value (the access time) must be in the range of 30 to 365 days if compression is enabled. If compression is disabled, the range is 5 to 365 days. By default, this value is 30 days.
• In Modify Time, enter a value, and then click the Save icon to save the change. The modify time is the minimum number of days that must pass since the file was last modified before the file is eligible for data reduction. This value must be in the range of 30 to 365 days if compression is enabled.
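The following minimal Python check (our own helper, not part of Group Manager) encodes only the ranges stated above: the access time must be 30 to 365 days when compression is enabled, or 5 to 365 days when it is disabled:

def valid_access_time(days, compression_enabled):
    minimum = 30 if compression_enabled else 5
    return minimum <= days <= 365

print(valid_access_time(14, compression_enabled=True))    # False: below the 30-day minimum
print(valid_access_time(14, compression_enabled=False))   # True: 5 to 365 days is allowed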
Analyzing data on a container for data reduction will take longer than scanning and reducing previously analyzed data. When defining the data reduction schedule on your system, consider when all files on a container need to have been analyzed for data reduction. The efficiency of the data reduction process is affected by the number of controllers analyzing data. Create a Data Reduction Schedule 1. Click Group, expand Group Configuration, and select a NAS cluster. 2. Click the Data Reduction tab. 3.
FS Series VAAI Plugin The VAAI plugin allows ESXi hosts to offload some specific storage-related tasks to the underlying FluidFS appliances.
To verify that an FS Series datastore has VAAI enabled, use the vmkfstools -P command in the ESXi host console. The following example illustrates the query and output for a datastore named FSseries_Datastore residing on an FS Series v4 or later system:
~ # vmkfstools -Ph /vmfs/volumes/FSseries_Datastore/
NFS-1.00 file system spanning 1 partitions
File system label (if any): FSseries_Datastore
Mode: public
Capacity 200 GB, 178.
15 Diagnose and Resolve NAS Cluster and PS Series Issues If you have to work with Dell support to resolve an issue related to a PS Series array or a NAS cluster, you can provide the support team with necessary data to facilitate successful troubleshooting of the issue without having to install software or download tools from the Dell support site.
2. Day – Last 24 hours. Data is shown for each hour.
3. Week – Last 7 days. Data is shown every 6 hours.
4. Month – Last month. Data is shown daily.
5. Year – Last year. Data is shown every 2 weeks.
Online Diagnostics
You can obtain online diagnostic information while the system is still online and running.
4. Choose the appropriate option. 5. Press the Escape key at any time during a test to stop the test. Generating PS Series and NAS Cluster Diagnostics Reports NOTE: To generate diagnostics reports, you must have group administrator (grpadmin) privileges. To generate a diagnostics report: 1. Log in to Group Manager by using your administrator login ID and password. 2. In the navigation pane, click Tools. 3. Click Diagnostics reports. 4.
The following message is displayed: Do you want to delete TCP/SCP_server_ip_address? 7. Click Yes. The server is deleted from the list and the SMTP servers section is updated. Reinstall FS Series Firmware v4 from an Internal USB FS7600 and FS7610 NAS appliances contain an internal USB from which you can reinstall the FS Series firmware v4 factory image. If you experience general system instability or a failure to boot, you might have to reinstall the image.
16 About Backing Up and Protecting Your Data A PS Series group is part of a comprehensive backup and data protection solution. Snapshots provide quick recovery and offloading backup operations. On a PS Series group, the system creates the copy instantly and maintains it on disk storage within the group. It does not disrupt access to the volume and requires minimal impact on running applications. Snapshots can provide a stable copy of data for copying to backup media.
Protect NAS Container Data with NDMP A NAS cluster supports the Network Data Management Protocol (NDMP), which facilitates backup operations for network-attached storage, including NAS containers. A NAS cluster includes an NDMP server that performs NAS container backups to an external Data Management Application (DMA) server running backup software. After you configure a DMA server for a NAS cluster, the NDMP server listens on the client network for backup requests from the DMA servers.
A snapshot represents the contents of a volume at the time of creation. You can create snapshots of standard volumes, in addition to template volumes and thin clone volumes. A volume can be restored from a snapshot or a snapshot can be cloned to create a new volume. Creating a snapshot does not prevent access to a volume, and the snapshot is instantly available to authorized iSCSI initiators.
NOTE: Generally, snapshots will not be deleted unless you take action to delete them. In some instances, however, snapshots can be deleted by the system. For example, when a new snapshot is taken and not enough snapshot reserve space is available for the new snapshot and the previous one, the older one will be deleted. About Snapshot Reserve Before you can create snapshots of a volume, you must allocate snapshot reserve for the volume. Snapshot reserve is consumed from the pool where the volume resides.
The Modify volume settings dialog box opens.
4. In the dialog box, click Space.
5. In the Snapshot Space section, modify the following values as needed:
• Snapshot reserve – The reserve is the amount of space allocated to snapshot storage for the volume. It is expressed as a percentage of the volume’s size. For example, if the volume size is 100GB and the snapshot reserve is 100 (the default), an additional 100GB of space will be allocated for snapshot storage.
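The snapshot reserve arithmetic above can be summarized in a short Python sketch (the function name is ours): the reserve is a percentage of the volume size, so a 100GB volume at the default 100 percent allocates another 100GB for snapshots:

def snapshot_reserve_gb(volume_size_gb, reserve_pct):
    # Space allocated for snapshot storage, as a percentage of volume size
    return volume_size_gb * reserve_pct / 100

print(snapshot_reserve_gb(100, 100))   # 100.0, the default described above
print(snapshot_reserve_gb(100, 40))    # 40.0, a reduced reserve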
When you select a snapshot timestamp, its full name (volume and timestamp) appears in the GUI main window and in the Snapshot iSCSI Settings panel. You can modify the snapshot name. The new name can contain up to 127 characters. (Fewer characters are accepted for this field if you type the value as a Unicode character string, which takes up a variable number of bytes, depending on the specific character.
1. Click Volumes. 2. Expand Volumes and then expand the volume name. 3. Select the snapshot timestamp. 4. In the Activities panel, click Modify snapshot properties to open the dialog box. 5. In the General tab, type the new snapshot name and, optionally, a description. 6. Click OK. NOTE: Snapshot names can be up to 127 characters.
4. In the Activities panel, click Delete snapshot. 5. When prompted to confirm the deletion, click Yes. Restore a Volume from a Snapshot You can restore a volume from a snapshot, and replace the data in the current volume with the volume data at the time you created the snapshot. The snapshot still exists after the restore operation. The following considerations and constraints apply when restoring a volume from a snapshot: • All members that contain data from a volume or snapshot must be online.
3. In the Name field, type a snapshot name:
• A snapshot name can contain up to 229 characters, including letters, numbers, and underscores. Fewer characters are accepted for this field if you type the value as a Unicode character string, which takes up a variable number of bytes, depending on the specific character.
• If you do not assign a snapshot name, the NAS cluster generates a name automatically, based on the NAS container name and the timestamp.
4. Click OK.
Modify the Name of a NAS Container Snapshot When you create a snapshot, the name automatically assigned to the snapshot is based on the volume name, and includes a timestamp and an identification number. To modify the name of a NAS container snapshot: 1. Click NAS and expand NAS Cluster. 2. Expand Local containers and select the container that is associated with the snapshot that you want to modify. A plus sign appears next to the container names that are associated with one or more snapshots. 3.
Fewer characters are accepted for this field if you type the value as a Unicode character string, which takes up a variable number of bytes, depending on the specific character. – (Optional) Collection description, up to 127 characters, including colons, hyphens, and periods. Fewer characters are accepted for this field if you type the value as a Unicode character string, which takes up a variable number of bytes, depending on the specific character. Procedure To create a custom snapshot collection: 1.
About Snapshot Space Borrowing Snapshot space borrowing enables the system to temporarily increase the available snapshot space for a volume by borrowing from other sources. If borrowed space is needed for other functions, the firmware might delete snapshots that are borrowing space. Because of this potential deletion, you should always be aware of when snapshots are borrowing space. NOTE: Borrowed space is intended to help during peaks of activity when more space is needed temporarily.
1. Click Volumes. 2. Expand Volumes and then select the volume name. 3. In the Snapshots section of the Activities panel, click Modify snapshot policy. 4. In the dialog box, select either Set volume offline or Delete oldest snapshot. NOTE: Snapshot space borrowing cannot be enabled if the Set volume offline option is selected. The Delete oldest snapshot option enables snapshot space borrowing. NOTE: You cannot select Set volume offline for recovery volumes. 5.
Eventually, the oldest replicas are deleted from the replica set to free space for new replicas. The amount of space that you allocate for storing replicas limits the number of replicas you can keep on the secondary group. NOTE: To ensure that a complete copy of volume data exists on the secondary group, the most recent, complete replica of a volume cannot be deleted. To access or recover volume data from replicas, you can: • Clone an individual replica to create a new volume on the secondary group.
• Reported size of the volume • Whether thin-provisioned or not • Estimated rate of volume changes (depends on volume usage) 2. Make sure that the primary group has enough free pool space for the local replication reserve for each replicated volume. 3. Identify a replication partner (secondary group) to store the volume replicas. This secondary group must meet the space and network connectivity requirements. 4.
Figure 16. Replication to Multiple Partners
Reciprocal Replication Between Partners
Both partners replicate volumes to each other. For example, in Figure 17, GroupA replicates Volume1 to GroupB, and GroupB replicates Volume2 to GroupA. For the replication of Volume1, GroupA is the primary group and GroupB is the secondary group. For the replication of Volume2, GroupB is the primary group and GroupA is the secondary group.
Figure 17. Reciprocal Replication Between Partners
Figure 18. Centralized Replication
About Replication Space
Volume replication between partners requires space on both the primary group (the volume location) and the secondary group (the replica location). These space requirements are classified in the following way:
• Local replication reserve – Each volume requires primary group space for use during replication and, optionally, for storing the failback snapshot.
Figure 19. Local Replication Reserve
Local replication reserve has two purposes:
• Preserve the contents of the volume at the time replication started. The primary group creates a snapshot of the volume in the local replication reserve to preserve the contents of the volume at the time replication started. If volume changes occur during replication, the snapshot tracks those changes, consuming more local replication reserve.
• If you did not enable the option to borrow free pool space (or if you enabled the option, but not enough free pool space is available), the primary group deletes the failback snapshot and generates an event message. To reestablish the failback snapshot, increase the local replication reserve and replicate the volume.
The local replication reserve size is based on a percentage (5 percent to 200 percent) of the volume reserve. For a thin-provisioned volume, the volume reserve size changes dynamically based on volume usage; therefore, the local replication reserve size also changes. The recommended local replication reserve percentage depends on whether you want to keep the failback snapshot: • No failback snapshot Specify 100 percent for the local replication reserve.
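As a hedged sketch of this guideline, the following Python helper (ours, not a Dell tool) sizes the local replication reserve as a percentage of the volume reserve, enforcing the 5 to 200 percent bounds and defaulting to the 100 percent recommendation for configurations without a failback snapshot:

def local_replication_reserve_gb(volume_reserve_gb, reserve_pct=100):
    if not 5 <= reserve_pct <= 200:
        raise ValueError("local replication reserve must be 5 to 200 percent")
    return volume_reserve_gb * reserve_pct / 100

print(local_replication_reserve_gb(500))        # 500.0, no failback snapshot
print(local_replication_reserve_gb(500, 150))   # 750.0, extra headroom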
The secondary group administrator delegates space to the primary group when configuring the group as a replication partner. The administrator can modify the partner configuration and increase or decrease delegated space as needed. When the primary group administrator configures a volume for replication, a portion of that delegated space is reserved for the volume. This space, called replica reserve, limits the number of replicas for that volume that you can keep on the secondary group.
2. The primary group increases the replica reserve if the replica volume usage increased since you enabled replication on the volume.
NOTE: If delegated space is too small to hold the increased replica reserve, the primary group generates an event message and replication pauses. Replication resumes automatically when delegated space is large enough to hold the reserve.
3. The primary group copies the contents of the volume to replica reserve, decreasing the amount of free replica reserve.
Guidelines for Sizing Replica Reserve for a Volume To determine the amount of space that the secondary group must delegate to the primary group, you must obtain the replica reserve requirement for each primary group volume that you are replicating to the secondary group. When you configure a volume for replication, you specify the replica reserve size as a percentage (minimum 105 percent) of the replica volume reserve, which approximates in-use volume space.
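A short Python sketch (names and figures are ours) applies the minimum 105 percent rule per volume and sums the results to estimate how much space the secondary group must delegate:

def replica_reserve_gb(replica_volume_reserve_gb, reserve_pct=105):
    if reserve_pct < 105:
        raise ValueError("replica reserve must be at least 105 percent")
    return replica_volume_reserve_gb * reserve_pct / 100

# Delegated space needed for three volumes at the minimum percentage:
volumes_gb = [200, 75, 40]
print(sum(replica_reserve_gb(v) for v in volumes_gb))   # 330.75 GB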
NOTE: If your system has delegated space configured across multiple storage pools, the size of the space in at least one of the pools must be greater than the volume size. Otherwise, replications will fail. For example, if you have 4 pools with 20GB of space each, but the volume size is 30GB, one or more of the pools must be changed to greater than 30GB for replications to succeed.
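The multiple-pool rule in the NOTE above reduces to a simple check, sketched here in Python (our own helper) with the same 4 x 20GB pools versus a 30GB volume example:

def replication_possible(pool_sizes_gb, volume_gb):
    # True if at least one pool's delegated space can hold the volume
    return any(size > volume_gb for size in pool_sizes_gb)

print(replication_possible([20, 20, 20, 20], 30))   # False: no single pool exceeds 30GB
print(replication_possible([40, 20, 20, 20], 30))   # True: the 40GB pool exceeds 30GB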
About Replication Partners Before you can replicate volume and NAS container data between two PS Series groups, you must configure the groups as replication partners. Each partner plays a role in the replication of a volume, and you can monitor replication activity and manage replicas from either partner: • Primary group Location of the volume. The primary group administrator configures the secondary group as a replication partner and initiates the replication operation.
Primary Group Replication Attributes
Table 49. Primary Volume Replication Attributes describes attributes that you set when you configure a volume for replication in the primary group. You can modify the replication configuration and change the attribute values.
Table 49. Primary Volume Replication Attributes
Attribute: Replication partner
Description: Partner that stores the volume replicas. The partner must have space delegated to the group.
Passwords are case sensitive and can include up to 254 ASCII characters.
Attribute: Delegated space
Description: Amount of space to delegate to the partner. Required only if the group stores replicas from the partner. See About Delegated Space and Replica Reserve.
Attribute: NAS configuration
Description: If you are replicating NAS containers, the replication partner must have a compatible NAS configuration.
2. Expand Volume Replication and then select Inbound Replicas. The Inbound Replicas panel displays information about the replicas.
You can also choose to perform the replication by using manual transfer replication. See the Dell EqualLogic Manual Transfer Utility Installation and User’s Guide for more information.
Replication Partner Fields
You need the information in the following table for both primary and secondary groups.
Data: Name
Description: Name of the primary group.
Example: GroupA
Modify Partner Contact Information To modify partner contact information: 1. Click Replication and then select the partner name. 2. Click Modify partner settings. 3. Change the contact name, email address, or phone numbers. 4. Click OK. Manage Space Delegated to a Partner You can modify the space delegated to a partner, subject to the following restrictions: • You cannot decrease the space delegated to a lower capacity than is currently reserved for the partner’s replicas.
NOTE: If the group is hosting a recovery volume from the partner, before you delete the partner either: • Demote the recovery volume to an inbound replica set (which is deleted when you delete the partner). Double-click the recovery volume in the far-left panel and click Demote to replica set. • Promote the recovery volume to a permanent volume. To delete a replication partner: 1. Determine whether NAS container replication is enabled. If so, delete all NAS container replication relationships. 2.
Pause and Resume Replication of a Volume You can pause and resume volume replication. For example, tasks such as promoting a replica set require you to first pause volume replication. To pause replication for a volume: 1. Click Volumes. 2. Expand Volumes and then select the volume name. 3. In the Activities panel, click Pause volume replication. 4. When prompted to confirm the decision, click Yes. To resume replication for a volume: 1. Click Volumes. 2.
• When you disable replication on a volume, the delegated space on the secondary group that is storing the replicas becomes unmanaged space. You cannot manage this space from the primary group. If you do not need the replicas, log in to the secondary group and delete the replica set.
• You cannot disable replication on a template volume if any attached thin clones have replication enabled.
To disable replication for one volume:
1. Click Volumes.
2. Expand Volumes and then select the volume name.
Configure a Volume Collection for Replication You can simultaneously replicate data in related volumes by replicating the volume collection. The resulting set of replicas is called a replica collection. NOTE: To replicate a volume collection, you must configure all the volumes in the collection to replicate to the same partner. 1. Click Volumes. 2. Expand Volume Collections and then select the collection that you want to replicate. 3.
A volume and its replica set are always stored in different groups connected by a robust network link. Separating the groups geographically protects volume data in the event of a complete site disaster. All replicas are thin provisioned by default. Create a Replica The first time that you replicate a volume to a partner, the primary group copies the entire volume contents to replica reserve on the secondary group.
• To selectively delete replicas from a set, select the replica set and click Delete replicas in the Activities panel, or right-click the replica set and select Delete Replicas from the menu. In the dialog box that opens, you can select the replicas to be deleted. Hold down the Control key while clicking to select multiple replicas, then click OK to delete the selected replicas.
• To delete a replica collection set, select it in the Remote Replicas panel, then click Delete replica collection set.
• To delete a replica collection, expand the replica collection set, then select the replica collection and click Delete replica collection.
• To delete a single replica from a replica collection, expand the replica collection, then select the replica and click Delete replica.
Prior to v8.0, writing a significant amount of data to a volume might result in the day’s replica being unusually large. The size of this replica might be so large that older replicas are deleted from the replica set, which means that the replica set no longer retains three replicas. As of v8.0, replication borrowing allows replica sets to borrow enough space to hold the replicas that would otherwise be deleted.
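The retention effect of borrowing can be sketched as follows. This is an illustrative model only, not the firmware's actual accounting; the replica sizes and the borrowable amount are hypothetical:

    # Illustrative model of replication borrowing (not the firmware's
    # actual accounting). Replica sizes are listed newest first.
    def replicas_retained(replica_sizes_gb, reserve_gb, borrowable_gb):
        used, kept = 0.0, []
        for size in replica_sizes_gb:
            if used + size <= reserve_gb + borrowable_gb:
                kept.append(size)
                used += size
        return kept

    # An unusually large 60 GB daily replica pushes the oldest replica
    # out of a 100 GB reserve; with 50 GB of borrowable free space,
    # all three replicas are retained.
    print(replicas_retained([60, 30, 25], 100, 0))   # [60, 30]
    print(replicas_retained([60, 30, 25], 100, 50))  # [60, 30, 25]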
About Schedules You can create schedules to automatically perform volume and NAS container operations at a specific time or on a regular basis (for example, hourly or daily). For example, you can create a schedule to create snapshots or replicas of a volume, volume collection, or NAS container. The following restrictions apply: • If a volume is part of a volume collection, make sure that a schedule for the collection does not overlap a schedule for the volume.
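One way to picture the overlap restriction: two schedules conflict if they ever fire at the same time. A minimal sketch, assuming each schedule is reduced to the minutes-of-day at which it fires (a hypothetical representation, not how the group stores schedules):

    # Minimal overlap check for the restriction above.
    def schedules_overlap(volume_fire_times, collection_fire_times):
        return bool(set(volume_fire_times) & set(collection_fire_times))

    hourly = set(range(0, 24 * 60, 60))   # fires on every hour
    daily_noon = {12 * 60}                # fires once at 12:00
    print(schedules_overlap(hourly, daily_noon))  # True: both fire at 12:00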
You can modify the following schedule attributes: • Schedule type (snapshot or replication) • Schedule frequency (one time, hourly, or daily) Attributes that are available for modification depend on the type and frequency of the schedule. To modify a schedule: 1. Click Volumes, and then either: • Expand Volumes, select the volume, and click the Schedules tab. • Expand Volume Collections, select the collection, and click the Schedules tab. 2. Select the schedule in the Snapshot and Replication Schedules panel and click Modify. 3.
NOTE: You cannot change the schedule type. 5. Click OK. Monitor NAS Snapshot Schedules 1. Select Monitoring in the navigation menu. 2. Under NAS Schedules, select NAS Snapshot schedules. 3. Right-click any of the data fields to modify, delete, enable, or disable a schedule: • Name — User-defined schedule name. • Container — Name of the source NAS container. Click the name to go to the container’s data. • Create — Schedule action (in this case, create a snapshot).
17 About Data Recovery A PS Series group is part of a comprehensive backup and data protection solution. Snapshots provide quick recovery and offload backup operations. Restore operations are more reliable because snapshots ensure the integrity of the backed-up data. Replication protects data from serious failures such as destruction of a volume during a power outage, or a complete site disaster.
How quickly you can replicate the recovery volume depends on the presence of the failback snapshot on the primary group. The failback snapshot establishes the failback baseline, which is the point in time at which the volume on the primary group and the most recent complete replica on the secondary group have the same data. If the failback snapshot exists, only the changes made to the recovery volume are replicated.
• You cannot use snapshots to restore data to template volumes. • Restoring the volume from a snapshot requires taking the volume offline and terminating any iSCSI connections to the volume. • You cannot restore a synchronous replication (SyncRep) volume from a snapshot if the snapshot’s size is different from that of the volume. Failback to Primary Operation (Manual) The Failback to Primary operation consolidates multiple tasks.
• You can use manual replication if a large amount of data must be transferred. See the Dell EqualLogic Manual Transfer Utility Installation and User’s Guide or the online help for information. To replicate a recovery volume to a partner using individual tasks: 1. Log in to the primary group and then: a. Set the original volume offline. b. Cancel any in-progress replication. c. Set any snapshots for the volume offline. d.
• Before you can make a template replica set promotion permanent, you must permanently promote all the attached thin clone replica sets. • You must specify: – A new volume name, which must be unique in the group. The name can be up to 63 bytes and is case-insensitive. You can use any printable Unicode character except for ! " # $ % & ' ( ) * + , / ; < = > ? @ [ \ ] ^ _ ` { | } ~. The first and last characters cannot be a period, hyphen, or colon.
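These rules translate directly into a validation routine. A minimal sketch (illustrative; the group performs its own validation when you submit the name):

    # Sketch of the naming rules above; Group Manager does its own checks.
    FORBIDDEN = set('!"#$%&\'()*+,/;<=>?@[\\]^_`{|}~')

    def valid_volume_name(name: str) -> bool:
        if not name or len(name.encode("utf-8")) > 63:   # up to 63 bytes
            return False
        if any(c in FORBIDDEN or not c.isprintable() for c in name):
            return False
        if name[0] in ".-:" or name[-1] in ".-:":        # first/last char
            return False
        return True

    print(valid_volume_name("recovered-db.01"))  # True
    print(valid_volume_name("-badname"))         # False: leading hyphen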
Make a Temporary Volume Available on the Secondary Group You can make a temporary copy of a volume available on the secondary group, while providing continuous access to the original volume on the primary group. Using a temporary copy is helpful when you want to perform an operation (such as a backup) on the copy with no disruption to users. When the operation is completed, you can resume replicating the volume.
4. Click Replicate to partner to open the Replicate Recovery Volume dialog box. 5. Specify the group administrator account name and password. 6. Select whether to perform the replication by using manual transfer replication. 7. Select whether to save the primary group administrator account name and password for future use in the current GUI session. 8. Click OK. Monitor the Replicate to Partner operation to make sure that all tasks complete: 1.
• Retain the iSCSI target name of the original volume. • Keep the ability to demote to the replica set. (Unless you are permanently promoting the replica set, make sure that you keep this ability.) 7. Click Next to open the Promote Replica Set – iSCSI Access panel. 8. Specify the following information: • Conditions that a computer must match to connect to the recovery volume. Type a CHAP user name, IP address, or iSCSI initiator name. • Recovery volume permission (either read-only or read-write).
2. Expand the recovery volume to display the status of each task in the operation. If an individual task fails during a Replicate to Partner or Failback to Primary operation, correct the problem. After correcting the problem, in the Failback Operations panel, right-click the failed operation and click Retry task. The operation continues automatically. Fail Back to the Primary Group When you want to return to the original volume replication configuration, you can use the Failback to Primary operation.
3. a. Demote the original volume to a failback replica set on the primary group. b. Replicate the recovery volume to the primary group. If you kept the failback snapshot for the original volume, only the changes made to the recovery volume are replicated. When you are ready to fail back to the primary group, use the Failback to Primary operation to: a. Set the recovery volume offline. b. Perform a final replication to synchronize the volume data across both groups. c.
Prerequisites for Permanently Promoting a Replica Set to a Volume The following constraints apply: • In some cases, you cannot permanently promote a replica set in a single operation. If you cannot deselect the Keep ability to demote to replica set option, you must temporarily promote the replica set and then make the promotion permanent. See Promote an Inbound Replica Set to a Recovery Volume and Make an Inbound Replica Set Promotion Permanent.
About Failing Over and Failing Back a Volume If a failure or maintenance in the primary group makes a volume unavailable, you can fail over to the secondary group and allow users to access the volume. If the primary group becomes available, you can fail back to the primary group. Restriction: You cannot replicate a recovery template volume, and you cannot demote a template volume to a failback replica set.
Figure 22. Primary Group Failure (Data Not Available) Figure 23. Step 1–Fail Over to the Secondary Group (Data Available) shows the first step in recovering data on the secondary group, which is to fail over the volume to the secondary group. To fail over the volume, promote the inbound replica set to a recovery volume and snapshots. The recovery volume contains the volume data represented by the most recent, complete replica. Users can connect to the recovery volume to resume access to volume data.
Figure 24. Step 2–Replicate to the Primary Group (Data Available and Protected) shows the second step in recovering data— replicate to the primary group. When the primary group is available: • Demote the original volume to a failback replica set. • Replicate the recovery volume to the primary group. NOTE: If the failback snapshot is not available on the primary group, the first replication transfers all the recovery volume data, instead of only the changes that users made to the recovery volume.
Figure 25. Step 3–Fail Back to the Primary Group About NAS Disaster Recovery Disaster recovery restores data on a primary storage resource and returns that resource to a full working state with minimal data loss after operation on that resource is interrupted. The interruption could be planned, such as a maintenance update, or unplanned, such as a power outage. CAUTION: If the site containing the source container incurs a catastrophic loss, contact Dell Technical Support for assistance.
opens several TCP ports to mirror differences across the network. Figure 26. Basic NAS Replication shows an example of basic NAS replication. When replication finishes, the system creates a replication snapshot and compares the replication snapshot on the destination NAS cluster to the replication snapshot on the source NAS cluster. Data flows in both directions in NAS replication, meaning that the same cluster can host both source and destination containers.
1. On the source cluster, select NAS, expand Local Volumes, and select the NAS volume that you are replicating. 2. Click Replicate. 3. Click Yes. 4. (Optional) Display the Alarms and Operations toolbar and click the Failback Operations tab. NAS Replication Network Architecture NAS replication depends on specific network capabilities, such as TCP ports opened over a secure tunnel and communication using the storage area network (SAN). NAS replication opens several TCP ports.
• All EqualLogic SAN ports by way of the EqualLogic Group IP • NAS cluster SAN Management Virtual IP (VIP) address • Physical SAN ports on every NAS controller and SAN IP address NOTE: Ports referred to here are physical and do not refer to TCP/IP port numbers opened through the TCP/IP stack using an application. Table 51. TCP/IP Port Numbers shows the ports that must be open on the firewall.
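A quick way to verify that a firewall is passing the required ports is a TCP connect test from a host on each side of the link. The sketch below is hypothetical: the port list is a placeholder (3260 is the standard iSCSI port) that you would replace with the actual numbers from Table 51, and the partner address is an example:

    import socket

    # Placeholder list -- substitute the port numbers from Table 51.
    # 3260 is the standard iSCSI port.
    REPLICATION_PORTS = [3260]

    def port_open(host, port, timeout=3.0):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for port in REPLICATION_PORTS:
        print(port, port_open("partner-group-ip.example", port))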
FS7610 Cluster Management and Replication Port CAUTION: The ports identified with an arrow in Figure 28. FS7610 NAS Cluster Management and Replication Port are used for NAS cluster management and replication functionality. It is critical that these ports remain connected and operational at all times. Figure 28. FS7610 NAS Cluster Management and Replication Port FS7600 Cluster Management and Replication Port CAUTION: The ports identified with an arrow in Figure 29.
Figure 30. FS7500 NAS Cluster Management and Replication Port Set Up Your NAS Replication Environment To help ensure successful replication, for each NAS container that you want to replicate, follow these steps to set up your replication environment: 1. Gather the following information to help you determine how much replication space you need: • Number of replicas that you want to keep • Average time span between each consecutive replica • Reported size of the volume • Whether the volume is thin-provisioned
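One rough way to combine these inputs into a space estimate is sketched below. This is not a Dell sizing formula, only a starting point; the change-rate input is an assumption you would derive from your own workload:

    # Rough sizing sketch -- not a Dell formula, just a starting point.
    def replica_space_estimate_gb(reported_size_gb, replicas_kept,
                                  changed_gb_per_interval):
        """First replica costs roughly the in-use size; each additional
        replica costs roughly the data changed between replicas."""
        return reported_size_gb + (replicas_kept - 1) * changed_gb_per_interval

    # Keep 7 daily replicas of a 200 GB container changing ~10 GB/day:
    print(replica_space_estimate_gb(200, 7, 10))  # 260 (GB)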
Pausing NAS container replication suspends any replication operations for the container that are in process. While replication is paused, scheduled replications do not take place. • You can pause NAS container replication for individual containers. Unlike volume replication, it cannot be paused for all replications to a specific partner. You can pause replication from either the source or destination group.
NOTE: • The requirements for NAS container replication are not validated when you create the replication partnership; they are enforced when you configure replication for a container. The system will not allow you to configure replication for a container if the source and destination clusters do not meet the configuration requirements. • Restoring volume configuration on the destination cluster is not supported between major revisions (such as version 3 to version 4).
The configuration of your environment determines how you change the configuration. For example, if the source cluster uses Active Directory (AD) / Lightweight Directory Access Protocol (LDAP), the destination cluster must use the same AD/LDAP. This setup ensures all user information is retained in the new configuration. • Change the Domain Name System (DNS) server to point to the destination cluster instead of the source cluster.
• Both groups must be running PS Series firmware version 7.0 or later, and the clusters on those groups must be running FS Series firmware version 3.0 or later. To perform single-step failback to primary: 1. Click NAS, expand NAS Cluster, and expand Local Containers. 2. Select the recovery container. 3. Click Failback to primary. The Failback to Primary message is displayed. 4. Click Yes. The Replicate Recovery Container message is displayed. 5.
To replicate to a container in a cluster: 1. Log in to Group Manager and delete replication for the source container. 2. From the destination cluster, configure replication for the promoted recovery container, specifying that it replicate back to the original source container. 3. Manually perform replication on the promoted recovery container. 4. After replication completes, log in to the source cluster and promote the original source container. 5.
To display replication history: 1. Click Monitoring. 2. Below NAS Replication, select Outbound Replica Containers. 3. Select the Replication History button. Activate the Source Cluster When you configure the source cluster to serve client requests, you reverse the changes that you made when you activated the destination cluster for failover. While the source cluster is being activated, client connections might fail and need to be reestablished.
1. Click Replication → Replication Partners. 2. In the Activities panel, click Configure partner. The Replication Partner Identification tab of the Configure Replication Partner wizard opens. 3. Provide the requested information in each step of the wizard and click Next. Refer to the online help resources that are available in the user interface to view descriptions of individual fields.
Attribute Tested: Authentication
• Status: Test was not run. Action: An internal error prevented this test from running. If you continue to receive this error, contact Dell customer support.
• Status: OK. Action: All of the following conditions have been validated: the configured partner’s name matches the remote group name, and the inbound and outbound passwords configured on the remote replication partner match.
• Status: Invalid.
For disaster recovery on NAS containers, after you fail over to the destination container, you can fail back to the primary container in a single-step process. A properly configured system does not require a configuration restoration to perform a failover operation. However, if the source cluster configuration needs to be applied to the destination cluster, contact Dell Technical Support for assistance.
Figure 31. NAS Replication Failover Performing a failover to the NAS destination cluster involves the following steps: • Promoting each replication destination NAS container • Activating the destination cluster After resolving the cause of the failure on the NAS source cluster, fail back to the NAS source cluster. Fail Back to a Source Volume Failing back to the source, or primary, volume reestablishes the originally configured replication relationship.
5. Activate the source cluster. 6. Recreate the replication relationship. NOTE: You have to reinstall Dell Fluid File System (FluidFS) on the source cluster only if the source cluster is entirely new. See the Installation and Setup Guide if you must reconfigure the source cluster. About Promotions and Recovery Containers If a NAS container becomes unavailable, you can promote a replica container to a recovery container, preserving host access to the container’s data.
About Cloning Volumes Cloning a volume creates a new standard volume, template volume, or thin clone volume, with a new name and iSCSI target, but the same reported size, pool, and contents as the original volume at the time of the cloning. • Template volumes are read-only (gold image) copies of a parent volume. • Thin clones are duplicate volumes that share space with their parent volume. Cloning a volume consumes space from the pool where the original volume resides.
• Cloning a thin clone replica creates a new thin clone volume, which remains attached to the thin clone replica set. By default, the new volume is set online, has read-write permissions, and uses the group default snapshot space and iSCSI settings. Cloning a replica consumes 100 percent of the original volume reserve from free secondary group pool space. If you want to create additional snapshots or replicas of the new volume, additional space is needed.
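The space requirement above reduces to a simple precondition. A minimal sketch with hypothetical names (the group itself refuses the clone when free pool space is insufficient):

    # Sketch of the space precondition for cloning a replica.
    def can_clone_replica(volume_reserve_gb, free_pool_gb):
        """Cloning a replica consumes 100% of the original volume
        reserve from free secondary-group pool space."""
        return free_pool_gb >= volume_reserve_gb

    print(can_clone_replica(100, 80))   # False: 100 GB of free space needed
    print(can_clone_replica(100, 250))  # True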
18 About Synchronous Replication Synchronous replication (SyncRep) is the simultaneous writing of data to two pools for a volume in the same PS Series group, resulting in two hardware-independent copies of the volume data. Each write must go to both pools before the write is acknowledged as complete. If one copy of the volume data is not available due to a power failure or resource outage, you can still obtain the data from the other pool.
Table 53. Comparing Synchronous Replication and Traditional Replication provides in-depth information about the differences between the two features. Table 53. Comparing Synchronous Replication and Traditional Replication
• Typical use case. Traditional replication: a point-in-time process that is conducted between two groups, often in geographically diverse locations.
• Scheduling. Traditional replication: replication operations can be scheduled using the same mechanism used for scheduling snapshots. SyncRep: replication between the SyncActive and SyncAlternate volumes is continuous.
• Pool space requirements. Traditional replication: the primary group must have enough space for the volume reserve and local replication reserve, in addition to any snapshot reserve.
(See About Synchronous Replication and Snapshots for more information.)
Figure 32. Synchronous Replication 1. The iSCSI initiator sends a write to the group. 2. Data is simultaneously written to both the SyncActive and SyncAlternate volumes. 3. The SyncActive and SyncAlternate volumes confirm to the group that the writes are complete. 4. The write is confirmed to the iSCSI initiator.
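The write path in Figure 32 can be modeled with a toy sketch: the initiator's write completes only after both pools confirm it. This is illustrative only, not the firmware's implementation:

    import concurrent.futures

    class StoragePool:
        """Toy stand-in for a PS Series pool."""
        def __init__(self):
            self.blocks = {}
        def write(self, offset, data):
            self.blocks[offset] = data          # pool confirms by returning

    # The write is acknowledged to the initiator only after BOTH the
    # SyncActive and SyncAlternate pools have confirmed it.
    def syncrep_write(sync_active, sync_alternate, offset, data):
        with concurrent.futures.ThreadPoolExecutor(max_workers=2) as ex:
            futures = [ex.submit(p.write, offset, data)
                       for p in (sync_active, sync_alternate)]
            for f in futures:
                f.result()                      # wait for each confirmation
        return "ack"

    print(syncrep_write(StoragePool(), StoragePool(), 0, b"payload"))  # ack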
• Until all tracked changes are written, the data in the SyncAlternate volume is valid only up to the point in time when the volume went out of sync. • While changes are being tracked or when tracked changes are being written back to the SyncAlternate volume, performance might be temporarily degraded. Figure 34. Tracked Changes Written to SyncAlternate Volume 5. When all tracked changes are written, the volume goes back in sync.
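The tracking mechanism can be pictured with a small model: while the volume is out of sync, writes land only on the SyncActive side and their locations are recorded; on resync, just the recorded regions are copied across. A toy sketch with hypothetical names, not the firmware's data structures:

    # Toy model of change tracking while a SyncRep volume is out of sync.
    class TrackedVolume:
        def __init__(self):
            self.active = {}       # SyncActive contents
            self.alternate = {}    # SyncAlternate contents
            self.in_sync = True
            self.dirty = set()     # offsets written while out of sync

        def write(self, offset, data):
            self.active[offset] = data
            if self.in_sync:
                self.alternate[offset] = data   # normal two-pool write
            else:
                self.dirty.add(offset)          # track the change instead

        def resync(self):
            for offset in self.dirty:           # replay only tracked changes
                self.alternate[offset] = self.active[offset]
            self.dirty.clear()
            self.in_sync = True

    v = TrackedVolume()
    v.in_sync = False          # e.g., SyncRep paused
    v.write(0, b"new data")
    v.resync()
    print(v.alternate[0])      # b'new data'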
• Disconnect the SyncActive volume, as documented in the Group Manager online help. If the disconnect fails, try logging in to a different member in the SyncActive pool. NOTE: If the disconnect operation will not succeed from any member in the SyncActive pool, contact your Dell support provider for assistance. • Log in to the Group Manager GUI using an IP address that belongs to a group member in the pool containing the SyncAlternate volume. Do not use the group IP address. • Fail over the volume. 5.
A volume can become out of sync if synchronous replication is paused, or if one of the volumes becomes unavailable or has no free space. The volume can become out of sync when the snapshot reserve in the SyncAlternate volume is full, but only when the snapshot space recovery policy sets volumes offline when the snapshot reserve is depleted.
NOTE: If data reduction has been enabled on the volume, snapshot reserve is permanently disabled. When you create a snapshot of a volume for which synchronous replication (SyncRep) is enabled, the snapshot resides in the pool that is the SyncActive pool at the time the snapshot is created. If the SyncActive pool is switched, existing snapshots remain in the pool in which they were created, and subsequent snapshots reside in the new SyncActive pool.
You can switch the synchronous replication configuration if a failure is imminent for the active pool, or if maintenance needs to be performed on the array hardware in the active pool. You can also switch pools at any time, even if a failure has occurred, provided that the volume is in sync. Aside from the brief period when the volume is offline during the switch, switching eliminates downtime during a maintenance window on the active pool.
For example, assume you have a collection for which the active pool is Pool-A and the alternate pool is Pool-B, and Pool-C is a third pool in the group. Any synchronous replication-enabled volumes that use Pool-C must have their pool assignments changed to be using Pool-A and Pool-B before they can be added to the collection. • When you create the collection, active and alternate pools for the collection’s volumes are chosen based on the pool assignment of the first volume added to the collection.
• Before converting a synchronous replication volume into a template, it must be in sync. To convert an out-of-sync volume to a template, you must first disable synchronous replication for the volume. Configure Synchronous Replication (SyncRep) on a Volume Before you can configure synchronous replication for a group, the group must include at least two different storage pools. See the following sections for more information: Create an Empty Storage Pool and Create a Storage Pool from an Existing Member.
1. Click Volumes. 2. Expand Volumes and then select the volume. 3. In the Activities panel, click Pause SyncRep. When prompted, confirm the action. While synchronous replication is paused: • The volume’s SyncRep status in the General Volume Information panel indicates that it is paused. • If any data is written to the volume while synchronous replication is paused, the writes are tracked. The tracked changes are written to the SyncAlternate volume when synchronous replication is resumed.
Change the Pool Assignment of a Synchronous Replication (SyncRep) Volume To change the pool assignment for a synchronous replication volume, use the following steps. NOTE: The same free space requirements apply to changing the pool containing the SyncActive or SyncAlternate volume that also apply when moving a volume for which synchronous replication is not enabled. 1. Click Volumes. 2. Expand Volumes and then select the volume. 3.
If you fail over to the alternate pool while the volume is out of sync, any changes written to the volume since it went out of sync will be written to a snapshot. As with other snapshots, it can be cloned or restored, but is also subject to deletion by the group’s snapshot retention policy. If the snapshot is deleted, its data will be lost and cannot be recovered. If you fail over to the alternate pool while the active pool is disconnected, all unreplicated data will be lost and unrecoverable.
19 About Self-Encrypting Drives (SEDs) and AutoSED A self-encrypting drive (SED) performs Advanced Encryption Standard (AES) encryption on all data stored within that drive. SED hardware handles this encryption in real-time with no impact on performance. To protect your data, a SED will immediately lock itself whenever it is removed from the array (or otherwise powers down). If the drive is lost or stolen, its contents are inaccessible without the encryption key.
About Self-Encrypting Drives (SED) SEDs (self-encrypting drives) are disk drives that use an encryption key to secure the data stored on the disk. This encryption protects the PS series array from data theft when a drive is removed from the array. SED operates across all disks in an array at once. If one drive in a RAID set is removed from the array, a new set of encryption key shares is generated automatically and shared among the remaining disks.
• Loss of the entire array, or simultaneous loss of half of the drives in the array. If half of the drives on an array are lost, the data on those drives is compromised. The locking mechanism for the remaining drives is also compromised, leaving the data exposed. If more than half of the drives are lost, the array is rendered inoperable.
During normal operation, the array has the information it needs to operate SED disks. The key shares are stored across the array on the non-spare disks. If a disk fails and is replaced by a spare, the configuration generates a new set of key shares, and the original key shares are discarded. If a SED disk goes offline due to power failure, removal from the array, or disk failure, the disk is automatically locked, and any data residing in memory about that disk drive is automatically wiped.
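The share-based design resembles threshold secret sharing: the key can be rebuilt only when enough shares are gathered, and any smaller set is useless. The toy 2-of-3 Shamir-style sketch below is purely illustrative; AutoSED's actual share format, field, and parameters are internal to the firmware (later in this chapter, a backup set is described as three shares with a threshold of two):

    import secrets

    # Toy 2-of-3 threshold sharing over a prime field (illustrative only).
    P = 2**127 - 1                      # a Mersenne prime

    def make_shares(key):
        slope = secrets.randbelow(P)    # random line through (0, key)
        return [(x, (key + slope * x) % P) for x in (1, 2, 3)]

    def recover(share1, share2):
        (x1, y1), (x2, y2) = share1, share2
        slope = (y2 - y1) * pow(x2 - x1, -1, P) % P   # Lagrange at x=0
        return (y1 - slope * x1) % P

    key = secrets.randbelow(P)
    shares = make_shares(key)
    assert recover(shares[0], shares[2]) == key   # any two shares suffice
    # A single share reveals nothing: its y-value is uniformly random.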
Is it safe to discard or return a locked SED? Yes. Any data that you have written to the drive will be locked and inaccessible. When you return a drive to Dell, the only information that remains readable are its operating statistics (S.M.A.R.T. data), its RAID type, and its hardware error logs. Can I add SEDs to a non-SED array, or vice versa? No. Do not mix SEDs and non-SEDs in the same array.
Security is not compromised. Array Y cannot unlock the drive because it needs the SEDset key from array X. The drive can be manually converted to a spare, and doing so will instantly erase it. 7. SED array is operating normally. A drive and a controller are removed. Security is not compromised on the drive. The SEDset key cannot be found on the controller, even if it is pulled from a running system.
When AutoSED generates a backup set, this set consists of three shares with a threshold of two, which adds security and reliability to a sensitive process. To destroy a set of shares, you could erase every share. However, if you erase only two shares from a backup set, the remaining share cannot recover the key and is useless. Example: AutoSED Key Sharing Consider an enclosure with 22 active drives and 2 spares: 1.
20 About Monitoring You can review comprehensive data about your array groups. Monitoring your PS Series array groups provides data that enables you to assess the health of your storage environment to quickly identify hardware and software problems. Monitoring provides valuable data about the time and action of users on the system, protecting against security intrusions.
• Replication schedules, replication (including inbound and outbound replication), and replication partners
• Alarms and operations (including critical and warning alarms, actions, group operations, and failback operations), and storage pool free space
• Group members (including a specific member), the member health status, and member space
• Member enclosures, including power supplies and other hardware
• Control modules
• Disk drives
• Network hardware
• Volumes, collections, and snapshots, including current status
NOTE: The “Go to” icons work only when monitoring is paused. Table 54. Performance Monitor Operation Icons
• Start polling the data.
• Stop polling the data.
• Go to the start (first item).
• Go to the previous item.
• Go to the next item.
• Go to the end (last item).
Add, Change, or Remove Statistics You can display up to four sets of statistics in the Performance Monitor window. To add more statistics: 1. Click Add statistics to open the Select Statistics dialog box. 2. Expand Members. 3.
To delete counter sets: 1. In the Performance Monitor window, click Add statistics. 2. In the Select Statistics dialog box, click Counter sets. 3. In the Counter Set Management dialog box, select the counter set that you want to delete. 4. Click the Delete link. 5. In the Delete Counter Set Confirmation dialog box, click Yes. Change How Data Is Displayed Table 55. Changing How Data Is Displayed shows the icons that you use to change the data display.
Figure 35. Performance Monitor – Select Data Point Customizing the Performance Monitor Within the Performance Monitor window, you can change the following items: • Colors used in graphs See Change the Display Colors. • Values for data points: – Length of time between which data points are collected – Number of data points to save See Change the Data Collection Values.
Figure 36. Performance Monitor – Select a Color Dialog Box Change the Data Collection Parameter Values You can change the following parameter values for data collection as needed: • Time between data points • Number of data points to save Table 56. Data Collection Values shows the parameters and their default and maximum values.
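The two parameters together determine how far back the graph reaches: the retention window is simply the polling interval multiplied by the number of saved points. The values below are hypothetical; see Table 56 for the actual defaults and maximums:

    # Retention window = interval x saved points (hypothetical values).
    def retention_minutes(seconds_between_points, points_saved):
        return seconds_between_points * points_saved / 60

    # e.g., a 5-second interval with 720 saved points:
    print(retention_minutes(5, 720))  # 60.0 minutes of history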
About SAN Headquarters SAN Headquarters (SAN HQ) is a performance-monitoring tool that enables you to monitor multiple PS Series groups from a single graphical user interface (GUI). SAN HQ gathers and formats configuration and performance data into easy-to-view charts and graphs. Analyzing this data can help you improve performance and more effectively allocate group resources.
• Click the Hide SAN HQ reminder option from within the alarm. To reverse this configuration and reenable the display of SAN HQ monitoring alarms: 1. Click Group → Group Configuration → General. 2. In the SAN Headquarters section of the panel, select Enable reminder in Alarms panel. Monitor Group Members Member hardware problems typically cause event messages and alarms. Monitor the member hardware and replace any failed components immediately.
EIP or OPS Card • Some array models include an Enclosure Interface Processor (EIP) card, and others contain an OPS (operations) card. An array continues to operate if the EIP or OPS card fails. You can replace the failed EIP or OPS card with no impact on group operation. • In the Member Enclosure window, the EIP card panel shows the EIP card status. The OPS card panel shows the OPS card status. Channel Cards • Some array models include redundant channel cards.
The various panels display information about the selected member. Monitor Control Modules Each group member has one or two control modules installed. One control module is designated as active (responsible for serving I/O to the member). On the active control module the LED labeled ACT is lit. In a dual control module array, the other control module is secondary (mirrors cache data from the active control module).
Table 57. Control Module Status
• active: Serving I/O to the member. Solution: None needed; informational.
• secondary: Mirroring cache data from the active control module. Solution: None needed; informational.
Cache Battery Status Table 58. Cache Battery or Cache-to-Flash Module Status describes status values for control module cache batteries or cache-to-flash modules, depending on the array model, and provides solutions for any problems.
For information about replacing channel cards, see the Hardware Owner's Manual for your array model or contact your PS Series support provider. NVRAM Battery Status Table 60. NVRAM Battery Status describes status values for control module NVRAM coin cell batteries and provides solutions for any problems. NOTE: Some arrays do not have an NVRAM battery. Table 60. NVRAM Battery Status
• good: Battery installed and fully charged. Solution: None needed; informational.
Status values for disk drives and solutions for any problems:
• copying to spare: Data is being written to a spare drive. Solution: None needed; informational.
• unsupported version: Drive is running an unsupported firmware version. Solution: Contact your PS Series support provider.
When a drive in a RAID set fails, a member behaves as follows: • If a spare drive is available — Data from the failed drive is reconstructed on the spare. During the reconstruction, the RAID set that contains the failed drive is temporarily degraded.
– down — Not operational, not connected to a functioning network, not configured with an IP address or subnet mask, or disabled • Port failover status (under Controller) — Current status of the controller: – Primary — no vertical port failover – Secondary — vertical port failover has occurred – Unknown — value cannot be determined • Requested status (under Network interface) — Status set by administrative action: – enabled — Configured and serving I/O – disabled — Not serving I/O, but might be configured
• Check your PS Series group capacity: A key component of the health of your PS Series group is capacity. To fully understand the capacity for new applications or support the growth of existing servers, you must examine the overall group and pool capacity, storage utilization statistics, thin-provisioned space, and space used for replication. To ensure a healthy SAN, it is important to detect any sudden or unexpected changes in capacity utilization.
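Detecting a sudden change in utilization can be as simple as comparing two samples of pool free space against an expected consumption rate. SAN Headquarters does this kind of trending for you; the sketch below is only an illustration, and the threshold is a hypothetical value to tune per environment:

    # Sketch: flag an unexpectedly fast drop in pool free space.
    def capacity_alert(prev_free_gb, curr_free_gb, hours_between,
                       max_drop_gb_per_hour=50):   # hypothetical threshold
        drop_rate = (prev_free_gb - curr_free_gb) / hours_between
        return drop_rate > max_drop_gb_per_hour

    print(capacity_alert(2000, 1500, 4))  # True: 125 GB/hour
    print(capacity_alert(2000, 1950, 4))  # False: 12.5 GB/hour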
• Are users getting the response time they expect? If not, identify which area might be causing the problem: – Operating system problem, as it interacts with storage – Network problem – Application being run or accessed – Storage environment • Use the 80/20 rule. By focusing on 20 percent of the most likely causes of a performance issue, you will solve 80 percent of the problems. • Keep the host perspective in mind when managing the arrays.
Damaged Hardware
• Detected by: Email alerts set up on the group and SAN Headquarters.
• Possible corrective actions: As a best practice, use the SAN Headquarters GUI to help identify hardware-related issues. SAN Headquarters easily tracks the array model, service tag, and serial number, plus RAID status and policy, and firmware version.
the maximum number of random IOPS that can be sustained. EqualLogic customer support or your channel partner can help size storage configurations for specific workloads. Also, review the latency on your servers. If the storage does not show a high latency but the server does, the source of the problem might be the server or network infrastructure. Consult your operating system, server, or switch vendor for appropriate actions to take.
Monitor Administrative Sessions To monitor administrative statistics: 1. Click Monitoring. 2. Select Administrative Sessions. The Active Sessions and Most Recent Login by Account panels display information about the different sessions. Monitor Snapshot Schedules To monitor snapshot schedules: 1. Click Monitoring. 2. Below Schedules, select Snapshot Schedules. To see more detail about a schedule, move the pointer over a schedule entry in the panel.
3. Expand a volume name. 4. Select a snapshot timestamp. 5. Click the tabs to display specific information about the snapshot. Information That You Should Monitor • Check the status of all volumes and snapshots. Each volume and snapshot has the following status values: – Current status — Actual status of the volume or snapshot, regardless of the requested status. – Requested status — Administrator-applied setting for the volume or snapshot.
NOTE: Replication history displays the last 10 replicas only. In addition, you should monitor the usage of delegated space. If free delegated space is not available, replica reserve cannot increase automatically. You can also monitor replica reserve for a volume. Insufficient replica reserve limits the number of replicas.
About Monitoring Replication Operations If you are replicating volume data, you should monitor replication operations to make sure that each operation completes.
1. Click Monitoring. 2. Select Replication History. The Outbound Replication History panel displays a list of volumes. You should periodically examine the replication duration information. If you see long replication times, make sure that the network connection between the partners is sufficient. A slow network link between the partners can cause long replication times. If a replication operation makes no progress, the group generates a warning event.
Monitor a Specific Partner To display details about a specific partner: 1. Click Replication. 2. Select the replication partner. The General Partner Information panel shows the partner name, IP address, status, and contact information. The Volume Replication Status panel shows the status of outbound and inbound replications between this partner and the group. In this panel, check the amount of free delegated space.
• Critical (red circle with an X) and a count of all critical alarms • Warning (yellow triangle with an exclamation mark) and a count of all warning alarms • Actions (light bulb) with a count of all actions needed Operations header icons are as follows: • Group operations (gear) with a count of all in-process operations • Failback operations (volume cylinder with an arrow) and a count of all in-process failback operations Each alarm entry includes the severity and the member that reported the alarm.
– Control module cache has lost data – Cache battery is not charging because it exceeds the temperature limit – Cache contains data that does not belong to any of the installed disk drives • Cooling component fault: – Array temperature exceeds upper or lower limit – Missing fan tray or cooling module – Both fans failed on a fan tray or cooling module • Hardware component fault: – Failed NVRAM coin cell battery – Control modules are different models – Failed critical hardware component – Missing or failed
– Active control module syncing with secondary
– No communication between control modules
• Batteries:
– Real-time-clock battery has low charge
– Cache battery has less than 72 hours of charge
Monitor Group Operations 1. Open the Alarms and Operations panel. 2. Click the Group Operations tab to view the group management operations (for example, moving a member to another pool) and actions that you might need to take.
• PS Series—Run on individual controllers on each array. These reports are automatically generated and stored on the array and can be emailed to your Dell customer support representative. • FS Series—Run on all controllers. These NAS cluster diagnostic reports are automatically generated and stored on the array. Because of the size of these reports, they cannot be emailed to your Dell customer support representative, but can be retrieved by File Transfer Protocol (FTP) or Secure Copy Protocol (SCP).
• If no SMTP server is configured, click Configure SMTP and add the IP address of the SMTP server that you want to configure. 9. Click Next. The Summary tab opens. 10. Review the information and then click one of the following:
• Copy to save the summary information in a text file
• Back to return to a previous page to change the diagnostics report settings
• Finish to perform the specified diagnostics and generate the report
To generate diagnostic reports on a system with NAS configured: 1.
To enable FTP on a NAS cluster: 1. Click Group, expand Group Configuration, and select the NAS cluster. 2. Click the Advanced tab. 3. In the NAS Cluster Access panel, select the Enable FTP Access checkbox. Troubleshooting Performance Issues To effectively manage your storage performance, follow these basic steps to troubleshoot issues: • Fix any failed hardware components in the Dell storage array: – Ensure that the array is fully populated (drives, controllers, and power supplies).
Network Infrastructure Performance Recommendations Network performance is complex and depends on a number of components working with each other. You might be able to improve network performance by following these general recommendations:
• Make sure that network components are recommended for an iSCSI SAN by Dell EqualLogic.
• Make sure that the switches and interswitch links have sufficient bandwidth for the iSCSI I/O.
virtualization functions required for automatic optimization of the SAN. In addition, when storage pool free space is low, write performance on thin-provisioned volumes is automatically reduced to slow the consumption of free space. If pool capacity is low, try one or more of the following remedies: • Move volumes from the low-space pool to a different pool. • Reduce the amount of in-use storage space by deleting unused volumes or by reducing the amount of snapshot reserve.
• Ensure that MPIO is supported and properly configured, according to the documentation for the operating system. You can also monitor multiple PS Series groups with SAN Headquarters and can launch the Group Manager GUI from there; however, you cannot directly manage the storage from SAN Headquarters.
A Third-Party Copyrights All third-party copyrights for software used in the product are listed below. This product contains portions of the NetBSD operating system: For the most part, the software constituting the NetBSD operating system is not in the public domain; its authors retain their copyright. Copyright © 1999-2001 The NetBSD Foundation, Inc. All rights reserved.
This code is derived from software contributed to The NetBSD Foundation by Jonathan Stone. This code is derived from software contributed to The NetBSD Foundation by Jason R. Thorpe. This code is derived from software contributed to The NetBSD Foundation by UCHIYAMA Yasushi. This product includes software developed for the NetBSD Project by Wasabi Systems, Inc. Copyright © 2000-2001 Wasabi Systems, Inc. All rights reserved.
Copyright © 1995 - 2000 WIDE Project. All rights reserved. © UNIX System Laboratories, Inc. All or some portions of this file are derived from material licensed to the University of California by American Telephone and Telegraph Co. or UNIX System Laboratories, Inc. and are reproduced herein with the permission of UNIX System Laboratories, Inc. Copyright © 1999 Shuichiro URATA. This product includes software developed by Matthias Pfaller. Copyright © 1996 Matthias Pfaller. Copyright © 1993 Jan-Simon Pendry.
Copyright © 1995 by Wietse Venema. All rights reserved. Copyright © 1999 The OpenSSL Project. All rights reserved. Copyright © 1992 – 1999 Theo de Raadt. All rights reserved. Copyright © 1999 Dug Song. All rights reserved. Copyright © 2000-2002 Markus Friedl. All rights reserved. Copyright © 2001 Per Allansson. All rights reserved. Copyright © 1998 CORE SDI S.A., Buenos Aires, Argentina. Copyright © 2001-2002 Damien Miller. All rights reserved. Copyright © 2001 Kevin Steves. All rights reserved.
This product includes software developed by Alistair G. Crooks. Copyright © 1999 Alistair G. Crooks. All rights reserved. Copyright © 2001 Cerbero Associates Inc. Copyright © 1995-1998 Mark Adler. Copyright © 1995-1998 Jean-loup Gailly. Copyright © 1998-1999 Brett Lymn. All rights reserved. Copyright © 1996-1999 SciTech Software, Inc. Copyright © 2001,2002 Brian Stafford. Copyright © 1999-2001 Bruno Haible. Copyright © 2001 Alex Rozin, Optical Access. All rights reserved. Copyright © 1989 TGV, Incorporated.