CLI Storage-Management Guide Part Number: 810-0044-00, Revision G Acopia Networks®, Inc.
CLI Storage-Management Guide Copyright© 2006-2007, Acopia Networks®, Inc. All Rights Reserved, Printed in U.S.A. Revision History September 2006 - Rev A October 2006 - Rev B, updates for Software Release 2.4.2 January 2007 - Rev C, updates for Software Release 2.4.3 March 2007 - Rev D, updates for Software Release 2.5.0 May 2007 - Rev E, updates for Software Release 2.5.1 August 2007 - Rev F, add forest-to-forest trusts for Software Release 2.6.
Contents Chapter 1 Introduction The ARX ...................................................................................................................1-1 Back-end Storage and Servers ...........................................................................1-2 Front-end Services .............................................................................................1-2 Policy .................................................................................................................
Configuration Instructions................................................................................. 2-5 Adapting Storage to User Demands ......................................................................... 2-6 Migration for Capacity ...................................................................................... 2-7 Configuration Instructions.......................................................................... 2-8 Migration for Class of Storage: File-Placement Policy.................
Showing One Active-Directory Domain ..................................................3-21 Showing DC Status ...................................................................................3-22 Focusing On a Single Processor ........................................................3-26 Removing an Active-Directory Forest .............................................................3-27 Authorizing Windows-Management (MMC) Access .............................................
Removing a Permit Rule .......................................................................... 4-17 Changing the Anonymous User ID ................................................................. 4-18 Changing the Anonymous Group ID ....................................................... 4-18 Reverting to the Default Group ID ................................................... 4-18 Reverting to the Default User ID ............................................................. 4-19 Adding a Deny Rule ....
Removing the Description .................................................................................6-5 Setting the CIFS Port (optional)................................................................................6-6 Reverting to the CIFS-Port Default ...................................................................6-6 Listing External Filers...............................................................................................6-6 Showing External-Filer Details............................
Enabling the Namespace (optional)........................................................................ 7-22 Enabling All Shares in the Namespace ........................................................... 7-23 Taking Ownership of All Managed Shares (optional) ............................. 7-23 Disabling All Shares................................................................................. 7-24 Disabling the Namespace ................................................................................
Removing a Direct Share .................................................................................8-19 Selecting a VPU (optional) .....................................................................................8-19 Default-VPU Assignment ................................................................................8-20 Assigning the Volume to a VPU ......................................................................8-22 Splitting Namespace Processing within a VPU ............................
Running a No-Modify Import ......................................................................... 9-11 Allowing the Volume to Modify on Re-Import ............................................... 9-11 Preventing Modification On or After Re-Import ..................................... 9-12 Preventing Modifications ................................................................................ 9-12 Automatically Synchronizing Metadata (CIFS)..............................................
Finding SID Translations at All Filers......................................................9-35 Ignoring SID Errors from the Filer (CIFS) ......................................................9-36 Acknowledging SID Errors ......................................................................9-37 Designating the Share as Critical (optional) ....................................................9-38 Removing Critical-Share Status................................................................
Removing a Managed Volume................................................................................ 9-71 Chapter 10 Configuring a Global Server Concepts and Terminology ..................................................................................... 10-2 Adding a Global Server .......................................................................................... 10-2 Setting the Windows Domain (CIFS Only).....................................................
Enabling NLM ..........................................................................................11-6 Enabling NFS Service......................................................................................11-6 Disabling NFS...........................................................................................11-6 Notifications to NLM Clients ...................................................................11-7 Listing All NFS Services ..........................................................
Sample - Configuring a CIFS Front-End Service.......................................... 11-39 Removing a CIFS Service ............................................................................. 11-40 Removing All of a Volume’s Front-End Exports ................................................. 11-41 Showing All Front-End Services .......................................................................... 11-42 Showing Front-End Services for One Global-Server....................................
Disabling the Free-Space Threshold................................................12-22 Constraining New Files..................................................................................12-22 Distributing New Files............................................................................12-23 Constraining New Directories ................................................................12-23 Constraining Directories Below a Certain Depth ............................12-24 Not Constraining Directories..
Disabling the Rule.................................................................................. 12-44 Verifying That All Files Are Removed ......................................................... 12-45 Removing the Placement Rule ...................................................................... 12-46 Removing All Policy Objects from a Namespace ................................................ 12-47 Removing All Policy Objects from a Volume ...............................................
Removing the Fileset .....................................................................................13-13 Grouping Files by Age ..........................................................................................13-13 Selecting Files Based on their Ages...............................................................13-14 Removing a File Selection ......................................................................13-15 Choosing Last-Accessed or Last-Modified.....................................
Avoid Promoting CIFS Directories Based on Last-Accessed Time....... 14-10 Matching Directory Trees (Directories and Files) ................................. 14-11 Limiting the Selection to Particular Source Share(s) (optional) ................... 14-13 Removing all Source-Share Restrictions................................................ 14-13 Choosing the Target Storage.......................................................................... 14-14 Balancing Capacity in a Target Share Farm ................
Choosing a Shadow-Volume Target.................................................................15-9 Using a Path in the Shadow Volume.......................................................15-10 Removing the Shadow-Volume Target ...................................................15-11 Applying a Schedule ......................................................................................15-12 Configuring Progress Reports........................................................................
2-xx CLI Storage-Management Guide
Chapter 1 Introduction

This manual contains instructions and best practices for setting up and managing storage on the Adaptive Resource Switch (ARX®). These instructions focus on the Command-Line Interface (CLI). Use this book after the ARX is installed and connected to its clients and servers through IP. The platform’s Hardware Installation manual explains how to install the ARX.
The ARX

Back-end Storage and Servers

The Adaptive Resource Switch aggregates heterogeneous file systems and storage into a unified pool of file storage resources. Through this unification, you can manage these resources to adapt to user demands and client applications. File storage assets can be differentiated based on user-defined attributes, enabling a class-of-storage model.
Resilient Overlay Network (RON)

You can connect multiple ARXes with a Resilient Overlay Network (RON), which can reside on top of any IP network. This provides a network for distributing and accessing file storage. ARXes can replicate storage to other switches in the same RON, updating the replicas periodically as the writable master files change.
The remaining chapters are presented in the same order that you would use to configure storage on a new ARX. Before you begin, you must follow the instructions in your Hardware Installation Guide to install the switch, set up its management IP, and prepare it for CLI provisioning. A network engineer can then use the GUI Quick Start: Network Setup manual or the CLI Network-Management Guide to set up the required networking parameters.
• choice1 | choice2 - the vertical bar ( | ) separates argument choices;
• {choice1 | choice2 | choice3} - curly braces ({ }) surround a required choice;
• [choice1 | choice2]* - an asterisk (*) means that you can choose none of them, or as many as desired (for example, “choice1 choice2” chooses both);
• {choice1 | choice2}+ - a plus sign (+) means that you must choose one or more.

CLI Overview

The Command-Line Interface (CLI) has its commands grouped into modes.
Priv-exec mode contains chassis-management commands, clock commands, and other commands that require privileges but do not change the network or storage configuration. Priv-exec has two sub-modes, cfg and gbl.

Cfg Mode

To enter cfg mode, use the config command:
bstnA6k# config
bstnA6k(cfg)#
Config mode contains all modes and commands for changing the configuration of the local switch, such as network configuration.
bstnA6k(gbl)# namespace wwmed
bstnA6k(gbl-ns[wwmed])#
This command places you into a new mode, as indicated by the new CLI prompt. The prompt shows the name of the mode, “gbl-ns,” and the name of the configuration object, a namespace called “wwmed.” Abbreviations are used for mode names (for example, “ns” instead of “namespace”) to conserve space on the command line. When you descend to lower modes in the config tree, the prompt offers more information.
bstnA6k(gbl-ns[wwmed])# enable
bstnA6k(gbl-ns[wwmed])# ...

Getting Started

For the initial login, refer to the instructions for booting and configuring the switch in the appropriate Hardware Installation Guide. For subsequent logins, use the following steps to log into the Acopia CLI:
1. If you are on-site, you can connect a serial line to the serial console port. This port is labeled ‘Console;’ you can find it on the front panel of the switch.
SWITCH> enable
SWITCH# configure
SWITCH(cfg)#
To enter gbl mode, use the global command instead:
SWITCH> enable
SWITCH# global
SWITCH(gbl)#
The command sequences in this manual all begin either in cfg mode or gbl mode.
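Putting these mode commands together, a typical descent from login into a configuration object looks like the following sketch (the namespace “wwmed” is this manual’s running example object):

```
SWITCH> enable
SWITCH# global
SWITCH(gbl)# namespace wwmed
SWITCH(gbl-ns[wwmed])# ...
```

Each prompt change confirms the mode you have entered; subsequent chapters assume you begin from cfg or gbl mode as shown here.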
Sample Network

The examples in this manual draw from a single, fictitious network. The network filers all live on a class-C subnet at 192.168.25.x. These filers are called back-end filers, since they are the storage behind the front-end services of the ARX. The filers can be heterogeneous: NAS devices and file servers (possibly with additional DAS) need only support CIFS or NFS to be on the back end of the ARX.
[Figure: back-end DAS and NAS filers on the 192.168.25.x subnet]
Contacting Customer Service

You can use the following methods to contact Acopia Customer Service:
E-mail: support@acopia.com
Telephone: 1-866-4Acopia (1-866-422-6742)
Acopia TAC Online: http://www.acopia.
Chapter 2 Product Overview

Solutions to Common Storage Problems

This chapter shows some of the problems inherent in today’s file-storage networks, then demonstrates the solutions offered by the ARX. References to relevant chapters appear at the end of each solution, so that you can configure the solutions in your network.

Today’s File Storage

Today’s storage networks typically evolve over time into storage islands with imbalanced capacities.
Balancing the capacity between these islands means moving popular files between file servers, but this can be difficult. Each client connects to back-end storage statically, through an IP address or FQDN, and chooses from a list of shares and paths that reside at that storage device. Moving files from one storage device to another means updating the client-side view of IP addresses, share names, and/or file paths. File storage is therefore static and expensive to manage.
[Figure: clients and a bank of NAS file servers (DAS)]
Acopia’s Adaptive Resource Switch can optimize, adapt, and control your storage resources through namespace configuration and file migration. The sections below summarize the configuration steps for each of these solutions.

Optimizing Storage in a Namespace Volume

A namespace is a group of file systems under a single authentication domain.
Consider three filers with one NFS export each. The figure below shows the filers behind a standard router. In this configuration, the client must issue three mounts to access all three exports.
The ARX can aggregate all three exports into a single namespace volume, “/acct” in this example. The client then only needs to mount the single, aggregated volume.
[Figure: the back-end exports /work1/accting, /exports/budget, and /data/acct2 aggregated into the volume /acct, mounted by the client as /mnt/acct over the LAN/WAN]
The client now connects to the ARX rather than the individual filers. This creates an opportunity for upgrading storage on the back-end without changing the front-end view.
1. Chapter 6, Adding an External Filer, contains instructions for adding an external (NAS or DAS-enhanced) filer to the configuration.
2. Chapter 7, Configuring a Namespace, contains instructions for aggregating external-filer storage into a namespace.
Once the namespace is configured, you must configure the server for clients to access the namespace:
1.
Migration for Capacity

The ARX can keep all filers at or above the same minimum free space, so that overburdened filers can offload their files to other filers. This is called auto-migration off of the filer that is low on free space. You configure this by declaring a share farm and establishing an auto-migrate rule in the share farm. The internal policy engine balances capacity by migrating popular files from an over-filled share to the shares with more free space.
An auto-migrate rule migrates files off of the over-burdened filers and onto the filers with more available free space. In addition, the ARX ensures that no filer is over-filled; all filers in the share farm maintain the minimum free space until/unless they all fill up to this level.
[Figure: a share farm of NAS file servers (DAS) balancing capacity]

Configuration Instructions

An auto-migrate rule is one of the share-farm rules described in Chapter 12, Policy for Balancing Capacity.
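The capacity-balancing idea behind an auto-migrate rule can be illustrated outside the CLI. The following Python sketch is illustrative only; it is not the ARX’s actual placement algorithm, and the share names and free-space floor are hypothetical:

```python
# Illustrative sketch of share-farm capacity balancing: when placing a file,
# choose the share with the most free space, so over-burdened shares are
# relieved and every share tends toward the same minimum free space.
FREE_SPACE_FLOOR_GB = 50  # assumed minimum-free-space threshold

def pick_target_share(shares):
    """Return the share with the most free space."""
    return max(shares, key=lambda s: s["free_gb"])

def place_file(shares, size_gb):
    """Place a file on the roomiest share and return that share's name."""
    target = pick_target_share(shares)
    target["free_gb"] -= size_gb
    return target["name"]

shares = [
    {"name": "nas1", "free_gb": 20},   # over-burdened: below the floor
    {"name": "nas2", "free_gb": 120},
    {"name": "nas3", "free_gb": 80},
]
print(place_file(shares, 10))  # new files land on nas2, the roomiest share
```

The real policy engine also migrates existing popular files, but the selection principle (favor the share with the most headroom) is the same.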
Migration for Class of Storage: File-Placement Policy

Consider a site with several tiers of storage: a gold tier of expensive file servers, a silver tier of more-plentiful (perhaps slower) filers, and a bronze tier of least-expensive filers. Initially, administrators distribute files among their filers based on best guesses at the usage of the various files. Periodically, files in the bronze tier become unexpectedly “hot,” impacting performance on that tier of the storage network.
File-placement policy can solve this problem. You can configure an age-based fileset that groups all files in the namespace based on their last-accessed times. This fileset could divide the files into weekly groups: files accessed this week, two-to-four weeks ago, and any time before four weeks ago. A file-placement policy could then place the files on the correct tiers based on when they were last accessed.
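The age-based grouping described above can be sketched in a few lines of Python. This is illustrative only; the tier names and week boundaries are assumptions, not ARX syntax:

```python
from datetime import datetime, timedelta

def age_tier(last_accessed, now):
    """Classify a file by last-accessed age: this week, two-to-four
    weeks ago, or any time before four weeks ago (one tier per group)."""
    age = now - last_accessed
    if age <= timedelta(weeks=1):
        return "gold"      # accessed this week
    if age <= timedelta(weeks=4):
        return "silver"    # accessed two-to-four weeks ago
    return "bronze"        # accessed more than four weeks ago

now = datetime(2007, 8, 1)
print(age_tier(now - timedelta(days=3), now))   # gold
print(age_tier(now - timedelta(days=20), now))  # silver
print(age_tier(now - timedelta(days=40), now))  # bronze
```

On the ARX, the equivalent grouping is expressed declaratively as a fileset and applied by a file-placement rule, as described in the configuration instructions below.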
Configuration Instructions

To configure a fileset, see Chapter 13, Grouping Files into Filesets. To group your namespace shares into a share farm, see “Adding a Share Farm” on page 12-15. For instructions on moving the fileset to your chosen storage, see Chapter 14, Migrating Filesets.

Controlling Costs

The ARX controls costs by optimizing network resources behind a namespace and adapting to client usage in real time.
Chapter 3 Preparing for CIFS Authentication

The ARX is a file proxy between clients and back-end filers; it must authenticate clients on the front end, and it must provide valid credentials to servers on the back end. To set up the switch proxy in a CIFS environment, you must configure two sets of authentication parameters in advance:
• A proxy user, configured with a valid username and password.
Concepts and Terminology

A namespace is an aggregated view of several back-end filers. Each namespace operates under a single authentication domain, the same domain supported by all of its back-end filers. This applies to both Windows and Unix domains. A global server is a client entry point to the services of the ARX. A global server has a fully-qualified domain name (such as myserver.mycompany.com) where clients can access namespace storage.
bstnA6k(gbl-proxy-user[acoProxy2])# ...

Specifying the Windows Domain

The first step in configuring a proxy user is to specify its Windows domain. From gbl-proxy-user mode, use the windows-domain command to specify the domain:
windows-domain name
where name is 1-64 characters.
Specifying the Username and Password

The final step in configuring a proxy user is to specify a username and password. This username/password must belong to the Backup Operators group to ensure that it has sufficient authority to move files freely from share to share. This applies to all back-end filers and shares; a user can be defined as part of the Backup Operators group on one CIFS filer but not on another.
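Taken together, a complete proxy-user definition might look like the following sketch. The domain and account names are this manual’s running examples; the exact form of the username/password command is an assumption here, not verified syntax:

```
bstnA6k(gbl)# proxy-user acoProxy2
bstnA6k(gbl-proxy-user[acoProxy2])# windows-domain MEDARCH.ORG
bstnA6k(gbl-proxy-user[acoProxy2])# user jqpublic password ********
(the user ... password line above sketches the form only; see the syntax description for the authoritative arguments)
```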
Listing All Proxy Users

You can use the show proxy-user command to get a list of all proxy users on the ARX:
show proxy-user
For example:
bstnA6k(gbl)# show proxy-user
Name        Domain      User
--------------------------------------
acoProxy1   WWMEDNET    jqprivate
acoProxy3   FDTESTNET   jqtester
acoProxy2   MEDARCH     jqpublic
bstnA6k(gbl)#

Showing One Proxy User

To focus on one proxy user, you can specify a name in the sh
For example, the following command sequence removes a proxy user called proxyNYC:
bstnA6k(gbl)# no proxy-user proxyNYC
bstnA6k(gbl)# ...

Configuring the NTLM Authentication Server

Before you configure a namespace with Windows NTLM, you must also configure an NTLM-authentication server for the namespace. The NTLM Authentication Server is the Windows Domain Controller (DC) that is the host for the Acopia Secure Agent software.
Listing NTLM Authentication Servers

Use the show ntlm-auth-server command to display summary information on one or more configured Secure Agent servers. This command shows where the servers are located in the local database.
show ntlm-auth-server [name]
where name (optional, 1-128 characters) is the name of a Secure Agent server instance. If no name is specified, this command shows names for all configured Secure Agent servers.
Displaying Detailed Server Status

Use the show ntlm-auth-server status command to display detailed status information on one or more configured Secure Agent servers. This command shows how the servers are configured for the switch.
show ntlm-auth-server status [name]
where name (optional, 1-128 characters) is the name of a Secure Agent server instance.
Account Disabled : 0
Account Expired  : 0
Password Expired : 0
Time Restricted  : 0
API Error        : 0
Filer Response Generation:
  Success count: 1
  Failure count: 0
Connection #2
  Source IP: 10.61.101.
Adding an Active-Directory Forest (Kerberos)

To prepare for a CIFS service that uses Kerberos to authenticate its clients, you must first create an Active Directory forest. This mimics the Active Directory (AD) forest in your Windows network. When a client accesses the CIFS front-end service from one of the domains in the AD forest, the switch uses this information to locate the appropriate DC.
for the forest root. From gbl-forest mode, use the forest-root command:
forest-root domain-name ip-address
where domain-name (1-256 characters) identifies the AD domain of the forest root, and ip-address is the IP address (for example, 10.120.95.56) of the forest root’s DC. For example, this command sequence selects the forest root for the ‘medarcv’ forest, ‘MEDARCH.
Use the no forest-root command to remove a DC for the forest root.
no forest-root domain-name domain-controller
where domain-name (1-256 characters) identifies the AD domain of the forest root, and domain-controller is the IP address of the DC to remove.
To prepare for dynamic DNS, you identify the dynamic-DNS servers in this forest. Later chapters explain how to configure a front-end CIFS service to use these dynamic-DNS servers.
To remove a name server from an AD domain, use the no name-server command:
no name-server domain-name domain-controller
where domain-name (1-255 characters) identifies the AD domain, and domain-controller is the IP address of the name server to remove. For example, this command sequence removes the second (redundant) name server from the ‘MEDARCH.ORG’ domain.
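Putting the forest commands together, an initial forest definition might look like the following sketch. The forest-root line matches the examples in this chapter; the affirmative name-server form is inferred from its no name-server counterpart, so treat that line as an assumption:

```
bstnA6k(gbl)# active-directory-forest medarcv
bstnA6k(gbl-forest[medarcv])# forest-root MEDARCH.ORG 192.168.25.102
bstnA6k(gbl-forest[medarcv])# name-server MEDARCH.ORG 192.168.25.104
```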
For example, this command sequence mimics the forest illustrated below:
[Figure: forest root MEDARCH.ORG, with child domain NE.MEDARCH.ORG under the root, and child domains MA.NE.MEDARCH.ORG and CT.NE.MEDARCH.ORG under NE.MEDARCH.ORG]
The first child, “NE.MEDARCH.ORG,” is a child of the root domain, “MEDARCH.ORG,” and the last two domains are children under “NE.MEDARCH.ORG:”
bstnA6k(gbl)# active-directory-forest medarcv
bstnA6k(gbl-forest[medarcv])# child-domain NE.MEDARCH.ORG 172.16.124.
Removing a Child Domain

The no child-domain command removes a child domain controller from a forest:
no child-domain domain-name domain-controller
You can do this only if the child domain has a redundant DC, or if it has no children. Otherwise you must first add a redundant DC or remove all of the child domain’s children.
Adding a Tree Domain

Some domains are outside the forest-domain namespace, but have two-way trust relationships with one or more of the forest’s domains.
[Figure: forest root MEDARCH.ORG with tree domain FDTEST.NET; child domains NE.MEDARCH.ORG, WESTCOAST.MEDARCH.ORG, and MA.NE.MEDARCH.ORG]
These are called tree domains.
You can specify redundant DCs for a tree domain; enter the tree-domain command once for each DC, using the same domain-name and a new domain-controller IP. You can later add one or more child domains to this tree: use the child-domain command described above. The parent-child relationship is established by the domain names: a tree domain of “myco.com” can be the parent of another domain named “mywan.myco.com” or “mylan.myco.com.”
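The naming rule above (a child domain sits exactly one DNS label below its parent) can be checked mechanically. This short Python sketch is illustrative only, not ARX code:

```python
def is_child_domain(child, parent):
    """True when child sits exactly one DNS label below parent."""
    suffix = "." + parent.lower()
    head = child.lower()[: -len(suffix)] if child.lower().endswith(suffix) else None
    # child must end with ".parent" and have exactly one extra label in front
    return head is not None and head != "" and "." not in head

print(is_child_domain("mywan.myco.com", "myco.com"))    # True
print(is_child_domain("a.mywan.myco.com", "myco.com"))  # False (two labels down)
```

This mirrors how the switch derives the parent-child relationship purely from the domain names you supply.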
From gbl mode, use the active-directory forest-trust command to identify a trust relationship between two AD forests:
active-directory forest-trust forest-a forest-b
where forest-a and forest-b (1-256 characters each) identify the AD forests with the trust relationship.
For example:
bstnA6k(gbl)# show active-directory
Active Directory Domains
------------------------
Forest Name: medarcv
Domain Name     Domain Type   IP Address       Service
--------------  ------------  ---------------  --------
MEDARCH.ORG     forest-root   192.168.25.102   KDC DNS
MEDARCH.ORG     forest-root   192.168.25.104   DNS
BOSTONMED.ORG   tree-domain   172.16.74.88     KDC
FDTESTNET.NET   tree-domain   172.16.168.
Showing One Active-Directory Forest

To focus on a single AD forest, use the forest keyword at the end of the show active-directory command.
show active-directory forest forest-name
where forest-name (1-256 characters) identifies the forest to show.
For example:
bstnA6k(gbl)# show active-directory domain MA.NE.MEDARCH.ORG
Active Directory Domains
------------------------
Forest Name: medarcv
Domain Name         Domain Type    IP Address       Service
------------------  -------------  ---------------  --------
MEDARCH.ORG         forest-root    192.168.25.104   DNS
MA.NE.MEDARCH.ORG   child-domain   192.168.25.
For example:
bstnA6k(gbl)# show active-directory status
Processor 1.1:
Forest   Domain Controller  Last Transition (UTC)  Domain Name      Status   Total
-------  -----------------  ---------------------  ---------------  -------  -----
vt.com   10.52.140.1        08:24:38 11/06/2007    MCNIELS.VT.COM   Active   1
vt.com   10.52.150.1        08:24:26 11/06/2007    NH.ORG           Active   1
vt.com   10.52.130.1        08:24:06 11/06/2007    VT.
Processor 5.1:
Forest   Domain Controller  Last Transition (UTC)  Domain Name      Status   Total
-------  -----------------  ---------------------  ---------------  -------  -----
vt.com   10.52.140.1        08:24:38 11/06/2007    MCNIELS.VT.COM   Active   1
vt.com   10.52.150.1        08:24:26 11/06/2007    NH.ORG           Active   1
vt.com   10.52.130.1        08:24:06 11/06/2007    VT.COM           Active   1
ny.com   10.52.120.
Forest   Domain Controller  Last Transition (UTC)  Domain Name       Status   Total
-------  -----------------  ---------------------  ----------------  -------  -----
vt.com   10.52.140.1        08:24:38 11/06/2007    MCNIELS.VT.COM    Active   1
vt.com   10.52.150.1        08:24:26 11/06/2007    NH.ORG            Active   1
vt.com   10.52.130.1        08:24:09 11/06/2007    VT.COM            Active   1
ny.com   10.52.120.1        08:23:57 11/06/2007    CATSKILLS.NY.
Focusing On a Single Processor

On an ARX®6000, you can use the optional from clause to focus on a particular processor:
show active-directory status [forest forest-name | domain domain-name] from slot.processor
where forest forest-name and domain domain-name are described above, from is a required keyword, slot (1-6) is the slot number of the desired SCM or ASM, and processor (1-6) is the processor number.
Preparing for CIFS Authentication Adding an Active-Directory Forest (Kerberos) medarcv 172.16.74.88 08:22:57 11/06/2007 BOSTONMED.ORG Active 1 medarcv 192.168.25.103 08:22:39 11/06/2007 MA.NE.MEDARCH.ORG Active 1 medarcv 192.168.202.9 08:22:21 11/06/2007 WESTCOAST.MEDARCH.ORG Active 1 medarcv 172.16.124.73 08:22:02 11/06/2007 NE.MEDARCH.ORG Active 1 medarcv 192.168.25.102 08:21:43 11/06/2007 MEDARCH.
Authorizing Windows-Management (MMC) Access

You can define a group of Windows clients and their authority to use Windows-management applications, such as the Microsoft Management Console (MMC). This group can use MMC and similar applications to view or edit CIFS shares, view and/or close open files, or view and/or close open client sessions.
bstnA6k(gbl)# windows-mgmt-auth readOnly
bstnA6k(gbl-mgmt-auth[readOnly])# user mhoward_md windows-domain MEDARCH.ORG
bstnA6k(gbl-mgmt-auth[readOnly])# user zmarx_md windows-domain MEDARCH.ORG
bstnA6k(gbl-mgmt-auth[readOnly])# user lfine_md windows-domain MEDARCH.ORG
bstnA6k(gbl-mgmt-auth[readOnly])# user choward_md windows-domain MEDARCH.ORG
bstnA6k(gbl-mgmt-auth[readOnly])# user cjderita_md windows-domain MEDARCH.
All users in the management-authorization group have the permissions you set with this command. By default, all group members can browse all directories in the namespace, but cannot add or delete CIFS shares. Also, they cannot view or change CIFS-client sessions or open files.
bstnA6k(gbl)# windows-mgmt-auth readOnly
bstnA6k(gbl-mgmt-auth[readOnly])# no permit open-file
bstnA6k(gbl-mgmt-auth[readOnly])# ...
----------------------  ----------------------
Share Monitor           Session Monitor
bstnA6k(gbl)# ...

Focusing on One Group

To show a single management-authorization group, add the group name to the end of the show windows-mgmt-auth command:
show windows-mgmt-auth name
where name (1-64 characters) identifies the group to show.
Preparing for CIFS Authentication Authorizing Windows-Management (MMC) Access Removing a Management-Authorization Group You can only remove a management-authorization group if it is not referenced by any namespace. A later chapter describes how to configure a namespace and reference a management-authorization group. To remove a management-authorization group, use the no windows-mgmt-auth command in gbl mode: no windows-mgmt-auth name where name (1-64 characters) identifies the group to remove.
Preparing for CIFS Authentication Authorizing Windows-Management (MMC) Access 3-34 CLI Storage-Management Guide
Chapter 4 Preparing for NFS Authentication You can create NFS access lists that filter clients based on their IP addresses. You can enter IP addresses directly and/or refer to pre-defined netgroups at a Network Information Service (NIS) server. A NIS netgroup defines a group of host machines, and may also contain other NIS netgroups. This chapter pertains to NFS-client authentication only; you can skip this chapter unless you plan to offer NFS service with some level of client authentication.
Preparing for NFS Authentication Adding a NIS Domain The switch supports up to eight NIS domains. From gbl mode, use the nis domain command to add a new one: nis domain domain where domain (1-256 characters) is the name of the domain (for example, “acopia” in “server.acopia.com”). This must match the name of the NIS domain defined in one or more external NIS servers. This places you into gbl-nis-dom mode, where you identify one or more NIS servers that host this NIS domain.
Preparing for NFS Authentication Adding a NIS Domain Removing a NIS Server Use the no ip address command to remove one of the NIS servers from the list: no ip address ip-address where ip-address is in dotted-decimal format (for example, 192.168.25.122). If you remove the only NIS server for the current NIS domain, support for the domain is limited. The switch keeps a local cache with all NIS netgroups, but can never refresh this cache.
Preparing for NFS Authentication Adding a NIS Domain Showing Details for a NIS Domain Add the name of an NIS domain to show details: show nis domain name where name (1-256 characters) identifies the NIS domain. For example: bstnA6k(gbl)# show nis domain wwmed.com NIS Domain: wwmed.com Server(s): 192.168.25.201 192.168.25.204 192.168.25.
Preparing for NFS Authentication Adding a NIS Domain auto_1 ... medtechs surgeons Total Netgroups: 2396 bstnA6k(gbl)# ... Showing the Members of One Netgroup For a list of members in a NIS netgroup, add the name of the netgroup to the end of the show nis netgroup command: show nis netgroup domain netgroup where domain (1-256 characters) identifies the NIS domain, and netgroup (1-1024 characters) is the specific netgroup. This shows the host machines in the netgroup, along with their IP addresses.
Preparing for NFS Authentication Adding a NIS Domain Updating the NIS Database The ARX keeps an internal copy of all the NIS netgroups and their fully-resolved hosts. The database is built when you add the NIS domain to the switch; it is used for switch operation as well as the show commands above. To avoid excessive traffic to the DNS server, the switch does not update this database automatically. You can rebuild the database manually after any large-scale DNS or NIS changes.
Preparing for NFS Authentication Adding a NIS Domain Use show reports report-name, tail, or grep to read the file. To save the report off to an external FTP site, use the copy ... ftp command from priv-exec mode. To upload the report to an SCP host, use copy ... scp. All of these commands are documented in the CLI Reference manual. This report lists all the hosts in the NIS domain that had issues, such as the name not being found at the DNS server.
Preparing for NFS Authentication Adding a NIS Domain [HN ] london in group: sixthousands [HN ] montreal in group: sixthousands [HN ] lasvegas in group: sixthousands Netgroups Processed: 2,396 Hosts Processed: 48,043 Hostnames Not Found: 539 Netgroup Parsing Errors: 0 Netgroups Not Found: 0 Watched Netgroup Changes 0 **** Elapsed time: 00:00:17 **** NIS Update Report: DONE at Wed Dec 7 09:45:20 2005 **** bstnA6k(gbl)# ...
Preparing for NFS Authentication Adding an NFS Access List Removing the NIS Domain-Server Map From gbl mode, use no nis domain to remove a NIS domain-server map: no nis domain domain where domain (1-256 characters) is the name of the domain to remove. You cannot remove a domain that is referenced by an NFS access list. The next section describes how to use an NIS domain in an access list.
Preparing for NFS Authentication Adding an NFS Access List The ARX supports up to 512 NFS access lists. From gbl mode, use the nfs-access-list command to create a new one: nfs-access-list list-name where list-name (1-64 characters) is a name you choose for the access list. The CLI prompts for confirmation before creating the new NFS access list. Enter yes to proceed. This places you in gbl-nfs-acl mode, from which you can configure various permit and deny rules for specific subnets and/or NIS netgroups.
Preparing for NFS Authentication Adding an NFS Access List Showing One NFS Access List As you configure your NFS access lists, it will be convenient to see the current list settings. Use the show nfs-access-list command with the specific access-list name to see the full configuration for one access list: show nfs-access-list list-name where list-name (1-64 characters) identifies the access list. Among other configuration details, the output shows the order of all permit and deny rules in the access list.
Preparing for NFS Authentication Adding an NFS Access List Resolving All Netgroups in the Access List If the access list contains any netgroups, you can resolve those netgroups to see all of the hosts within them. To accomplish this, add the resolve-netgroups keyword to the end of the command: show nfs-access-list list-name resolve-netgroups This provides a complete view of the access list, resolving all netgroups to the IP addresses for their hosts.
Preparing for NFS Authentication Adding an NFS Access List Each access list can support a maximum of 2048 permit and deny rules, including the individual permit rules for every host in every netgroup. If you exceed the limit (perhaps because of an overly-large netgroup), this output shows the first 2048 entries followed by an error. Setting the NIS Domain If the access list will use NIS netgroups, you must set the access list’s NIS domain. The ARX needs the NIS domain to access the local NIS server.
Preparing for NFS Authentication Adding an NFS Access List bstnA6k(gbl)# nfs-access-list westcoast bstnA6k(gbl-nfs-acl[eastcoast])# no nis domain snemed.com bstnA6k(gbl-nfs-acl[eastcoast])# ... Adding a Permit Rule By default, a new NFS access list denies access to all subnets. You can selectively allow access by configuring a permit rule for each trusted subnet.
Preparing for NFS Authentication Adding an NFS Access List Permitting a Netgroup If you have configured a NIS domain for this access list (see above), you can refer to a netgroup configured in that domain. This leverages any netgroups that were configured before the introduction of the ARX.
Preparing for NFS Authentication Adding an NFS Access List Rule Ordering The order of rules is very important in an access list. Whenever a client tries to access an NFS service with an access list, the client’s IP address is compared to the rules in the order they were entered. If the IP address matches two rules, the first rule is used and the second rule is ignored. For example, consider the two permit rules below. Clients in 192.168.10.
Preparing for NFS Authentication Adding an NFS Access List For example, the following command sequence allows root access from clients at 172.16.204.0. To control the security problem, access is read-only for this rule: bstnA6k(gbl)# nfs-access-list eastcoast bstnA6k(gbl-nfs-acl[eastcoast])# permit 172.16.204.0 255.255.255.0 read-only root allow bstnA6k(gbl-nfs-acl[eastcoast])# ...
Preparing for NFS Authentication Adding an NFS Access List Changing the Anonymous User ID When permit rules have root-squash enabled, they translate the User ID (UID) of a root user to an anonymous UID. By default, the access list uses 65534 for this UID. To change the UID for anonymous, use the anonymous-uid command: anonymous-uid id where id is a number from 1-65535.
Preparing for NFS Authentication Adding an NFS Access List bstnA6k(gbl-nfs-acl[westcoast])# ... Reverting to the Default User ID As with the GID, an access list uses the default UID of 65534 when it performs root squashing. From gbl-nfs-acl mode, use the no anonymous-uid command to revert to this default: no anonymous-uid For example: bstnA6k(gbl)# nfs-access-list westcoast bstnA6k(gbl-nfs-acl[westcoast])# no anonymous-uid bstnA6k(gbl-nfs-acl[westcoast])# ...
Preparing for NFS Authentication Adding an NFS Access List You cannot deny a NIS netgroup. We recommend a subnet-deny rule after any permit netgroup rule, to ensure that all other hosts in the netgroup’s subnet are explicitly denied. Removing a Deny Rule From gbl-nfs-acl mode, use no deny to remove a deny rule from the current access list: no deny ip-address mask where ip-address identifies the subnet for the deny rule, and mask defines the network part of the ip-address.
Preparing for NFS Authentication Adding an NFS Access List These permit and deny rules have a subtle configuration error. The intention was to allow all clients from 192.168.0.0 except clients from 192.168.77.0 or 192.168.202.0. For example, a client at IP 192.168.77.29 is supposed to be blocked by the first deny rule, “deny 192.168.77.0 ...” However, that IP address matches the Class-B network (192.168.0.0) in the earlier permit rule. The deny rules can never actually be reached.
Preparing for NFS Authentication Adding an NFS Access List deny 192.168.202.0 255.255.255.0 Add back the permit rule and show that it is now at the end of the list: bstnA6k(gbl-nfs-acl[eastcoast])# permit 192.168.0.0 255.255.0.0 bstnA6k(gbl-nfs-acl[eastcoast])# show nfs-access-list eastcoast ... deny 192.168.77.0 255.255.255.0 deny 192.168.202.0 255.255.255.0 permit 192.168.0.0 255.255.0.0 read-write root squash bstnA6k(gbl-nfs-acl[eastcoast])# ...
Preparing for NFS Authentication Adding an NFS Access List Removing an Access List From gbl mode, use the no nfs-access-list command to remove an NFS access list: no nfs-access-list list-name where list-name (1-64 characters) identifies the access list to remove. You must remove all references to the access list before you can use this command to remove the list itself. An access list is referenced from an NFS service; instructions for configuring an NFS service appear below.
Preparing for NFS Authentication Adding an NFS Access List 4-24 CLI Storage-Management Guide
Chapter 5 Examining Filers Use the show exports and probe exports commands to examine filers in the server network. These commands make queries from proxy-IP addresses to test filer connectivity, find the services supported by the filer, discover filer shares, and discover permissions settings at the shares. You can use them to troubleshoot network connectivity to filers as well as any permissions issues.
Examining Filers • Capabilities - shows the transport protocols (TCP or UDP) and port numbers for NFS and CIFS. For NFS, this shows the same information for portmapper and the mount daemon: an NFS filer must support all three services. For CIFS servers, this also shows the server-level security settings.
Examining Filers Capabilities: NFS Port Mapper TCP/111, UDP/111 Mount Daemon V1 TCP/1016, V1 UDP/1013, V2 TCP/1016, V2 UDP/1013, V3 TCP/1016, V3 UDP/1013 Server V2 TCP/2049, V2 UDP/2049, V3 TCP/2049, V3 UDP/2049 CIFS Security Mode User level, Challenge/response, Signatures disabled Server TCP/445 Max Request 16644 bytes Shares: NFS Path (Owner) Access (Status) ---------------------------------- --------------------------- /disk2 * (Mounted,rsize=32768,wsize=32768) /exports * (Mounted,rsize
Examining Filers Examining CIFS Shares Examining CIFS Shares You can only examine CIFS shares if you have sufficient permissions at the filer. Use the user and windows-domain options to provide Windows credentials to the filer: show exports host filer user username windows-domain domain where username (1-64 characters) is the username, domain (1-64 characters) is the user’s Windows domain, and the other options are explained above. The CLI prompts for the user’s password. Enter the password to continue.
Examining Filers Examining CIFS Shares 3.5 192.168.25.55 64: Success 2000: Success 8820: Success 3.6 192.168.25.
Examining Filers Examining CIFS Shares Using Proxy-User Credentials If there is a proxy user that is already configured for the filer’s domain, you can use the proxy user configuration instead of a username, domain, and password. Use show proxy-user for a full list of all proxy users (recall “Listing All Proxy Users” on page 3-5).
Examining Filers Examining CIFS Shares 3.5 192.168.25.55 64: Success 2000: Success 8820: Success 3.6 192.168.25.
Examining Filers Examining CIFS Shares Showing the Physical Paths for CIFS Shares For the physical disk and path behind each CIFS share, use the optional paths keyword after the filer hostname/IP: show exports host filer paths [user username windows-domain domain | proxy-user proxy] where the options are explained above. This shows the relationships between shares. If a share is inside the directory tree of another share, it is called a subshare of its parent share.
Examining Filers Focusing on One Share Focusing on One Share To focus on one share, use the share argument in the show exports command: show exports host filer share share-name [user username windows-domain domain | proxy-user proxy] where share-name (1-1024 characters) identifies the share, and the other options are explained above. This shows the default report, but only with entries that are relevant to the share. For example, the following command focuses on the ‘histories’ share.
Examining Filers Showing Connectivity Only Storage Space Share Total (MB) Free (MB) Serial Num ------------------------------- ---------- ---------- ---------- 17351 16144 c883-8cc0 histories Time: CIFS Filer's time is the same as the switch's time. bstnA6k> ... Showing Connectivity Only Use the connectivity keyword to show the Connectivity table alone: show exports host filer [share share-path] connectivity where the options are explained above.
Examining Filers Showing Connectivity Only 3.3 192.168.25.33 64: Success 2000: Success 8820: Success 3.4 192.168.25.34 64: Success 2000: Success 8820: Success 3.5 192.168.25.55 64: Success 2000: Success 8820: Success 3.6 192.168.25.56 64: Success 2000: Success 8820: Success bstnA6k> ... Showing Capabilities Only The capabilities keyword shows only the Capabilities table: show exports host filer [share share-path] capabilities where the options are explained earlier in the chapter.
Examining Filers Showing Connectivity Only Showing Shares Only To list only the filer’s shares, use the shares keyword: show exports host filer shares [user username windows-domain domain | proxy-user proxy] where the options are explained earlier. The output shows two tables, one for NFS shares and one for CIFS shares. Only the CIFS table appears if you enter Windows credentials. The NFS table shows each share path and the machines and/or subnets that can access the share.
Examining Filers Showing CIFS Attributes Storage Space bstnA6k> ... Showing Time Settings Namespace policy (described in later chapters) requires that the ARX has its clock synchronized with those of its back-end filers. Kerberos authentication also requires synchronized time. You should configure the ARX to use the same NTP servers that the filers use; refer to “Configuring NTP” on page 4-15 in the CLI Network-Management Guide for instructions.
Examining Filers Probing for CIFS Security Each Windows volume has up to five CIFS attributes that are relevant to namespace imports. These attributes represent support for Compressed Files, Named Streams, Persistent ACLs, Sparse Files, and/or Unicode filenames on disk. This command shows a table of supported CIFS attributes at each of the filer’s shares. show exports host filer [share share-path] attributes [user username windows-domain domain | proxy-user proxy] where the options are explained earlier.
Examining Filers Probing for CIFS Security This filer examination is more intrusive than the others, so it is not invoked as part of show exports. From priv-exec mode, use the probe exports command to test some Windows credentials at a given back-end filer: probe exports host filer [share share-path] {user username windows-domain domain | proxy-user proxy-user} where the options match those of the show exports command, above.
Examining Filers Probing for CIFS Security 5-16 CLI Storage-Management Guide
Chapter 6 Adding an External Filer A Network-Attached Storage (NAS) filer or a file server with Direct-Attached Storage (DAS) is configured as an external filer on the ARX. An external filer defines how to access the storage in an external NAS/DAS-based device. From gbl mode, use the external-filer command to create an empty external-filer instance: external-filer name where name (1-64 characters) is a name you choose for this external filer.
Adding an External Filer Providing the Filer’s IP Address Providing the Filer’s IP Address The next step in external-filer configuration is to give the IP address of the filer. The address must be on the proxy-IP subnet (“Adding a Range of Proxy-IP Addresses” on page 4-6 of the CLI Network-Management Guide) or reachable through a gateway on that subnet (via static route: see “Adding a Static Route” on page 4-9 of the same manual).
Adding an External Filer Ignoring a Directory bstnA6k(gbl-ext-filer[nas1])# ip address 192.168.25.61 secondary bstnA6k(gbl-ext-filer[nas1])# ip address 192.168.25.62 secondary bstnA6k(gbl-ext-filer[nas1])# ... Removing a Secondary Address Use the no form of the command to remove a secondary IP address from the list: no ip address ip-address secondary where ip-address is the secondary address to remove. For example: bstnA6k(gbl)# external-filer nas1 bstnA6k(gbl-ext-filer[nas1])# no ip address 192.168.25.
Adding an External Filer Ignoring a Directory • Network Appliance: .snapshot, ~snapshot • BlueArc: .snapshot, ~snapshot Ignore only special, virtual directories designed for filer backups, or directories that only appear in the share’s root. If you ignore a standard directory below the root, a client cannot delete the directory’s parent.
Adding an External Filer Adding a Description (optional) Adding a Description (optional) You can add a description to the filer for use in show commands. The description can differentiate the external filer from others. From gbl-ext-filer mode, use the description command to add a description: description text where text is 1-255 characters. Quote the text if it contains any spaces.
Adding an External Filer Setting the CIFS Port (optional) Setting the CIFS Port (optional) By default, the ARX sends its CIFS messages to port 445 or 139 at the external filer. Port 445 supports raw CIFS communication over TCP, port 139 supports CIFS through NetBIOS; the ARX tries port 445 first and uses port 139 as a fall-back. For most (if not all) CIFS configurations, this should suffice.
Adding an External Filer Listing External Filers For example, the following command lists all of the external filers known to the ARX: bstnA6k(gbl)# show external-filer Name IP Address Description ------------------------ ------------- ---------------------------- das1 192.168.25.19 financial data (LINUX filer, rack 14) fs2 192.168.25.27 bulk storage server (DAS, Table 3) fs1 192.168.25.20 misc patient records (DAS, Table 3) nasE1 192.168.25.51 NAS filer E1 fs3 192.168.25.
Adding an External Filer Samples - Adding Two Filers Filer IP 192.168.25.19 CIFS Port default (auto-detect) NFS TCP Connections 1 (default) Managed Exports -------------------------------------------------------------------------------NFS Export: /exports/budget Namespace: wwmed Volume: /acct Directories to ignore for importing -----------------------------------.snapshot bstnA6k(gbl)# ...
Adding an External Filer Removing an External Filer bstnA6k(gbl-ext-filer[das1])# exit bstnA6k(gbl)# This command sequence creates a new filer, “fs1,” with two CIFS shares: bstnA6k(gbl)# external-filer fs1 This will create a new filer. Create filer 'fs1'? [yes/no] yes bstnA6k(gbl-ext-filer[fs1])# ip address 192.168.25.20 bstnA6k(gbl-ext-filer[fs1])# exit bstnA6k(gbl)# Removing an External Filer You can remove an external filer from back-end storage by deleting its configuration.
Adding an External Filer Next 6-10 CLI Storage-Management Guide
Chapter 7 Configuring a Namespace The ARX aggregates storage from external servers into one or more namespaces. A namespace is a collection of virtual file systems, called volumes. Each volume consists of storage space from any number of Network-Attached Storage (NAS) or filer servers with Direct-Attached Storage (DAS). A volume can contain multiple shares, where each share maps to an export or share from an external (NAS/DAS) filer.
Configuring a Namespace Concepts and Terminology The purpose of the namespace is to contain one or more volumes with a common set of access protocols (CIFS/NFS), authentication mechanisms, and character encoding. This chapter explains how to create a namespace. The next chapters explain how to aggregate your storage into various types of namespace volumes. From gbl mode, use the namespace command to create a new namespace. namespace name where name (1-30 characters) is a name you choose for the namespace.
Configuring a Namespace Listing All Namespaces The shadow volume is a frequently-updated duplicate of a managed volume, possibly hosted at a different ARX in the same Resilient-Overlay Network (RON, described in the CLI Network-Management Guide).
Configuring a Namespace Listing All Namespaces For example, the following command shows the configuration of the namespace named “wwmed.
Configuring a Namespace Listing All Namespaces Metadata shares: Filer Backend Path Contains Metadata Status ------------------------------------------------------------------nas1 /vol/vol1/meta1 Yes Online Share bills Filer das8 [192.168.25.25] NFS Export /work1/accting Features unix-perm Status Online Critical Share Yes Free space on storage 17GB (18,803,621,888 B) Free files on storage 18M Transitions 1 Last Transition Wed Apr 4 03:41:05 2007 Share bills2 Filer das3 [192.168.
Configuring a Namespace Listing All Namespaces Transitions 1 Last Transition Wed Apr 4 03:41:04 2007 Share it5 Filer das7 [192.168.25.
Configuring a Namespace Listing All Namespaces New File Placement Status Enabled bstnA6k# ...
Configuring a Namespace Listing All Namespaces \\fs1\prescriptions \\fs2\bulkstorage nas1:/vol/vol1/meta3* medarcv:/test_results chemLab \\fs1\chem_results/. hematologyLab \\fs3\hematology_results/.
Configuring a Namespace Setting the Namespace Protocol(s) Showing Shares Behind One Namespace Add a namespace name to show only the shares behind that particular namespace: show namespace mapping name where name (1-30 characters) is the name of the namespace.
Configuring a Namespace Setting the Namespace Protocol(s) For example, this command set allows two forms of NFS access to the “wwmed” namespace: bstnA6k(gbl)# namespace wwmed bstnA6k(gbl-ns[wwmed])# protocol nfs2 bstnA6k(gbl-ns[wwmed])# protocol nfs3 bstnA6k(gbl-ns[wwmed])# ... Removing a Protocol Use the no form of the protocol command to remove a protocol from the namespace.
Configuring a Namespace Setting NFS Character Encoding Changing Protocols After Import We strongly recommend that you choose your protocol set carefully, before configuring any volumes in the namespace. After a managed volume and at least one of its shares is enabled (as described later in the managed-volume chapter), the managed volume imports files and directories from its enabled shares. Protocol changes after this first import require greater care, since they may affect client access to the volume.
Configuring a Namespace Setting NFS Character Encoding Improper encoding can also present problems during managed-volume import. A file with a non-mappable CIFS character is imported using its NFS-side name; this may not have any resemblance to the intended CIFS-side name. A directory with an non-mappable character can be renamed during import to preserve its resemblance with the original CIFS-side name.
Configuring a Namespace Setting NFS Character Encoding Setting CIFS Character Encoding When a volume from a CIFS namespace is exported through a virtual server (described in a later chapter), the virtual server may register its NetBIOS name with a WINS server.
Configuring a Namespace Configuring Windows Authentication (CIFS) You cannot change the character encoding after any of the namespace’s managed volumes are enabled, as described in a later chapter. Configuring Windows Authentication (CIFS) This section applies only to a namespace that supports CIFS. Skip to the next section if this is an NFS-only namespace. To configure Windows NTLM or Kerberos authentication for the namespace, you first declare the proxy user for the namespace.
Configuring a Namespace Configuring Windows Authentication (CIFS) acoProxy3 FDTESTNET jqtester acoProxy2 MEDARCH jqpublic bstnA6k(gbl-ns[medarcv])# proxy-user acoProxy2 bstnA6k(gbl-ns[medarcv])# ... Using Kerberos for Client Authentication You can configure the namespace to authenticate its clients with Kerberos instead of (or in addition to) NTLM. If you plan to use NTLM only, skip ahead to the next section.
Configuring a Namespace Configuring Windows Authentication (CIFS) Identifying the NTLM Authentication Server NTLM authentication also requires a mechanism for authenticating Windows clients at the namespace’s back-end filers. Kerberos-only sites do not require any NTLM configuration, though you can configure a namespace that supports both authentication protocols. An NTLM-authentication server is a Windows Domain Controller (DC) that is the host for an Acopia Secure Agent (ASA).
Configuring a Namespace Configuring Windows Authentication (CIFS) Name Domain Name Server Port ------------------------------------------------------------------------------dc1 MEDARCH 192.168.25.102 25805 Mapped to the Following Namespaces ------------------------------------------------------------------------------insur bstnA6k(gbl-ns[medarcv])# ntlm-auth-server dc1 bstnA6k(gbl-ns[medarcv])# ...
Configuring a Namespace Configuring Windows Authentication (CIFS) Removing an NTLM-Authentication Server From gbl-ns mode, use no ntlm-auth-server to remove a server from the namespace: no ntlm-auth-server name where name identifies the NTLM authentication server to remove from the namespace. If you remove an NTLM-authentication server from the namespace, the server’s clients will no longer be able to authenticate through NTLM.
Configuring a Namespace Configuring Windows Authentication (CIFS) Enter this command once for each authorized group. For example, this command set applies three management-authorization groups to the “medarcv” namespace: bstnA6k(gbl)# namespace medarcv bstnA6k(gbl-ns[medarcv])# show windows-mgmt-auth ... bstnA6k(gbl-ns[medarcv])# windows-mgmt-auth testGroup bstnA6k(gbl-ns[medarcv])# windows-mgmt-auth fullAccess bstnA6k(gbl-ns[medarcv])# windows-mgmt-auth readOnly bstnA6k(gbl-ns[medarcv])# ...
Configuring a Namespace Configuring Windows Authentication (CIFS) Selecting a SAM-Reference Filer CIFS clients, given sufficient permissions, can change the users and/or groups who have access to a given file. For example, the owner of the “penicillin.xls” file can possibly add “nurses” or “doctors” to the list of groups with write permission. The list of groups in the network is traditionally provided by the Security Account Management (SAM) database on the file’s server.
Configuring a Namespace Adding a Volume fs3 192.168.25.28 Hematology lab server (DAS, Table 8) fs4 192.168.25.29 prescription records (DAS, Table 3) das2 192.168.25.22 DAS (Solaris) filer 2 (rack 16) das3 192.168.25.23 DAS (Solaris) filer 3 (rack 16) nas1 192.168.25.21 NAS filer 1 (rack 31) 192.168.25.61 (secondary) 192.168.25.62 (secondary) das7 192.168.25.24 Redhat-LINUX filer 1 das8 192.168.25.25 Redhat-LINUX filer 2 nas2 192.168.25.44 NAS filer 2 (rack 31) nas3 192.168.25.
Configuring a Namespace Enabling the Namespace (optional) For a new volume, the CLI prompts for confirmation before adding it to the namespace. Enter yes to proceed. This puts you into gbl-ns-vol mode, where you must declare at least one share. For example, this command set creates a single volume (“/acct”) for the “wwmed” namespace: bstnA6k(gbl)# namespace wwmed bstnA6k(gbl-ns[wwmed])# volume /acct This will create a new volume. Create volume '/acct'? [yes/no] yes bstnA6k(gbl-ns-vol[wwmed~/acct])# ...
Configuring a Namespace Enabling the Namespace (optional) Enabling All Shares in the Namespace From gbl-ns mode, you can enable all of the namespace’s shares with a single command. Use the enable shares command to accomplish this: enable shares For example, the following command sequence enables all shares in the “wwmed” namespace: bstnA6k(gbl)# namespace wwmed bstnA6k(gbl-ns[wwmed])# enable shares bstnA6k(gbl-ns[wwmed])# ...
Configuring a Namespace Enabling the Namespace (optional) The CLI prompts for confirmation before taking ownership of any shares. Enter yes to proceed. For example, the following command sequence enables all shares in the “insur_bkup” namespace and, if necessary, takes ownership of all of them: prtlndA1k(gbl)# namespace insur_bkup prtlndA1k(gbl-ns[insur_bkup])# enable shares take-ownership This command allows the switch to virtualize shares that are used by other Acopia switches.
Configuring a Namespace Showing Namespace Configuration Showing Namespace Configuration To review the configuration settings for a namespace, use the show global-config namespace command: show global-config namespace [name] where name (optional, 1-30 characters) identifies the namespace. If you omit this, the output includes all namespaces The output shows all of the configuration options required to recreate the namespace. The options are in order, so that they can be used as a CLI script.
Configuring a Namespace Showing Namespace Configuration enable exit share budget filer das1 nfs /exports/budget enable exit share it5 filer das7 nfs /lhome/it5 enable exit share-farm fm1 share budget share bills share bills2 maintain-free-space 2G auto-migrate 2G balance Capacity enable exit place-rule docs2das8 report docsPlc verbose from fileset bulky target share bills limit-migrate 50G enable exit vpu 1 domain 1 enable exit 7-26 CLI Storage-Management Guide
Configuring a Namespace Removing a Namespace exit bstnA6k# ... Removing a Namespace From priv-exec mode, you can use the remove namespace command to remove a namespace and all of its volumes: remove namespace name [timeout seconds] [sync] where: name (1-30 characters) is the namespace to remove, seconds (optional, 300-10,000) sets a time limit on each of the removal’s component operations, and sync (optional) waits for the removal to finish before returning.
Configuring a Namespace Removing a Namespace % INFO: Removing service configuration for namespace insur_bkup % INFO: Removing CIFS browsing for namespace insur_bkup % INFO: Removing volume policies for namespace insur_bkup % INFO: destroy policy insur_bkup /insurShdw % INFO: Removing shares for namespace insur_bkup % INFO: no share backInsur % INFO: Removing volume metadata shares for namespace insur_bkup % INFO: no metadata share nas-p1 path /vol/vol1/mdata_B % INFO: Removing volumes for namespace insur_bk
Chapter 8 Adding a Direct Volume Each share in a direct volume attaches one or more of its own virtual directories to real directories at back-end shares. These attach points are analogous to mount points in NFS and network-drive connections in CIFS. The back-end directory trees are all reachable from the same volume root. The volume does not record the contents of the back-end shares, so it does not keep metadata or support storage policies. This is called a presentation volume in the GUI.
Adding a Direct Volume Declaring the Volume “Direct” A direct volume is easier to set up than a managed volume, so this chapter is offered before the managed-volume chapter. As explained earlier (in “Adding a Volume” on page 7-21), you use the gbl-ns volume command to create a volume. This puts you into gbl-ns-vol mode, where you must declare this volume for use as a direct volume and create at least one direct share. Direct volumes do not support NFSv2.
Adding a Direct Volume Manually Setting the Volume’s Free Space (optional) Reverting to a Managed Volume If a direct volume has no attach points configured, you can use no direct to revert the volume back to a managed volume: no direct For example, this command sequence ensures that “wwmed~/acct” is a managed volume: bstnA6k(gbl)# namespace wwmed bstnA6k(gbl-ns[wwmed])# volume /acct bstnA6k(gbl-ns-vol[wwmed~/acct])# no direct bstnA6k(gbl-ns-vol[wwmed~/acct])# ...
Adding a Direct Volume Setting CIFS Options You can set this any time, even after the volume is enabled. For example, this command sequence makes the ‘access~/G’ volume count the free space in all back-end shares, even multiple shares that draw from the same back-end storage: prtlndA1k(gbl)# namespace access prtlndA1k(gbl-ns[access])# volume /G prtlndA1k(gbl-ns-vol[access~/G])# freespace calculation manual prtlndA1k(gbl-ns-vol[access~/G])# ...
Adding a Direct Volume Setting CIFS Options By default, new volumes conform to the CIFS options at the first-enabled share. If you try to enable another volume share that does not support one or more of those options, you get an error and the share remains disabled. You can then disable the unsupported options (the error message tells you which ones) and retry the enable.
Adding a Direct Volume Setting CIFS Options Disabling CIFS Oplocks (optional) The CIFS protocol supports opportunistic locks (oplocks) for its files. A client application has the option to take an oplock as it opens a file. While it holds the oplock, it can write to the file (or a cached copy of the file) knowing that no other CIFS client can write to the same file. Once another client tries to access the file for writes, the server gives the first client the opportunity to finish writing.
Adding a Direct Volume Adding a Share bstnA6k(gbl)# namespace medarcv bstnA6k(gbl-ns[medarcv])# volume /test_results bstnA6k(gbl-ns-vol[medarcv~/test_results])# cifs oplocks-disable auto bstnA6k(gbl-ns-vol[medarcv~/test_results])# ...
Adding a Direct Volume Adding a Share bstnA6k(gbl)# namespace medco bstnA6k(gbl-ns[medco])# volume /vol bstnA6k(gbl-ns-vol[medco~/vol])# share corporate This will create a new share. Create share 'corporate'? [yes/no] yes bstnA6k(gbl-ns-vol-shr[medco~/vol~corporate])# ... Listing Filer Shares It is convenient to show the available back-end-filer shares as you add them into a direct volume.
Adding a Direct Volume Adding a Share • The CIFS table shows two disk-space measures and the serial number for the storage volume behind the share. If two shares have the same serial number, they are assumed to be shares for the same storage volume on the filer. For example, the following command shows all of the NFS shares on the “nas1” external filer: bstnA6k(gbl)# show exports external-filer nas1 shares Export probe of filer “nas1” at 192.168.25.
Adding a Direct Volume Adding a Share Server V2 TCP/2049, V2 UDP/2049, V3 TCP/2049, V3 UDP/2049 CIFS Security Mode User level, Challenge/response, Signatures disabled Server TCP/445 Max Request 16644 bytes bstnA6k(gbl)# ... Identifying the Filer and Share The next step in configuring a direct share is identifying its source share on an external filer.
Adding a Direct Volume Adding a Share
Shares:
NFS Path (Owner)                     Access (Status)
----------------------------------   ---------------------------
/vol/vol0                            (Mounted,rsize=32768,wsize=32768)
...
bstnA6k(gbl-ns-vol-shr[medco~/vol~corporate])# filer nas1 nfs /vol/vol0
bstnA6k(gbl-ns-vol-shr[medco~/vol~corporate])# ...
Identifying a Multi-Protocol Share In a multi-protocol (NFS and CIFS) namespace, you list both names for the share.
Adding a Direct Volume Adding a Share Using a Managed Volume as a Filer You can assign a managed volume to the share as though it were an external filer. (The next chapter describes how to configure a managed volume.) If the direct volume’s namespace supports CIFS, you can only use a managed volume from the same namespace.
Adding a Direct Volume Adding a Share Attaching a Virtual Directory to the Back-End Share The next step is to create a virtual attach-point directory, visible to clients from the root of the volume, and attach it to a physical directory on the back-end filer. For example, you can create an attach point named /vol0 (in the /vol volume) and attach it to the filer’s /usr/local directory: the client-viewable /vol/vol0 is then the same as /usr/local on the filer.
Adding a Direct Volume Adding a Share For example, this command sequence sets up the filer for the “corporate” share (as shown above), then attaches three directories to the filer: bstnA6k(gbl)# namespace medco bstnA6k(gbl-ns[medco])# volume /vol bstnA6k(gbl-ns-vol[medco~/vol])# share corporate bstnA6k(gbl-ns-vol-shr[medco~/vol~corporate])# filer nas1 nfs /vol/vol0/direct bstnA6k(gbl-ns-vol-shr[medco~/vol~corporate])# attach vol0/corp to shr bstnA6k(gbl-ns-vol-shr[medco~/vol~corporate])# attach vol0/notes t
Adding a Direct Volume Adding a Share Designating the Share as Critical (optional) If the current switch has a redundant peer, you have the option to designate the current share as critical. Skip to the next section if this switch is not configured for redundancy. If the direct volume software loses contact with one of its critical (and enabled) shares, the ARX initiates a failover.
Adding a Direct Volume Adding a Share bstnA6k(gbl-ns-vol[medco~/vol])# share generic bstnA6k(gbl-ns-vol-shr[medco~/vol~generic])# no critical bstnA6k(gbl-ns-vol-shr[medco~/vol~generic])# ... Ignoring the Share’s Free Space (optional) This option is only relevant in a volume where you are manually calculating free space (recall “Manually Setting the Volume’s Free Space (optional)” on page 8-3).
Adding a Direct Volume Adding a Share prtlndA1k(gbl)# namespace access prtlndA1k(gbl-ns[access])# volume /G prtlndA1k(gbl-ns-vol[access~/G])# share recsY2k prtlndA1k(gbl-ns-vol-shr[access~/G~recsY2k])# no freespace ignore prtlndA1k(gbl-ns-vol-shr[access~/G~recsY2k])# ... Adjusting the Free-Space Calculation You can also manually adjust the free-space that is advertised for the current share.
Adding a Direct Volume Adding a Share For example, this command sequence removes any free-space adjustment from the “corporate” share: bstnA6k(gbl)# namespace medco bstnA6k(gbl-ns[medco])# volume /vol bstnA6k(gbl-ns-vol[medco~/vol])# share corporate bstnA6k(gbl-ns-vol-shr[medco~/vol~corporate])# no freespace adjust bstnA6k(gbl-ns-vol-shr[medco~/vol~corporate])# ... Enabling the Share The final step in configuring a share is to enable it.
Adding a Direct Volume Selecting a VPU (optional) bstnA6k(gbl-ns-vol[medco~/vol])# share sales bstnA6k(gbl-ns-vol-shr[medco~/vol~sales])# no enable bstnA6k(gbl-ns-vol-shr[medco~/vol~sales])# ...
Adding a Direct Volume Selecting a VPU (optional) Each VPU supports up to 64 volumes from up to 2 namespaces. You can scale the number of namespaces and volumes on an ARX®6000 by adding more ASMs to the switch.

Table 8-1. Numbers of Supported Volumes per Platform

  Platform               # VPUs   # Namespaces   # Volumes
  ARX®500                1        2              64
  ARX®1000               1        2              64
  ARX®6000 with 1 ASM    2        4              128
  ARX®6000 with 2 ASMs   4        8              256

Default-VPU Assignment By default, a newly-enabled volume is assigned to the least-subscribed VPU.
Adding a Direct Volume Selecting a VPU (optional) The namespace software uses the following rules for assigning a volume to a VPU:

First volume in the namespace:
  a. Choose an empty VPU.
  b. Choose a VPU that is supporting only one namespace.
  c. Fail if all VPUs have two namespaces already.

Second volume in the namespace:
  a. Choose the least-subscribed VPU that is different from the first.
  b. Choose the same VPU as the first volume.
  c. Fail if all VPUs have 64 volumes already.
Adding a Direct Volume Selecting a VPU (optional) Assigning the Volume to a VPU The default-VPU assignment algorithm can artificially limit the maximum number of namespaces on your ARX. Consider the above example with a single ASM. According to Table 8-1, the single ASM has two VPUs and can therefore support up to four namespaces. However, if the namespace volumes are enabled from left to right (as above), the first two namespaces claim both VPUs before any of namespace C’s volumes get enabled.
Adding a Direct Volume Selecting a VPU (optional) Do this before the volume is enabled; once the volume is enabled (below), you cannot re-assign it to another VPU. For example, the following command sequence assigns the current volume, “medco~/vol,” to VPU 1: bstnA6k(gbl)# namespace medco bstnA6k(gbl-ns[medco])# volume /vol bstnA6k(gbl-ns-vol[medco~/vol])# vpu 1 bstnA6k(gbl-ns-vol[medco~/vol])# ... Splitting Namespace Processing within a VPU Each VPU has two domains, one per namespace.
Adding a Direct Volume Selecting a VPU (optional) Reverting to Default-VPU Assignment Before you enable the direct volume, you can remove the manual-VPU assignment. This causes the namespace software to assign the volume according to the default rules (refer back to “Default-VPU Assignment” on page 8-20).
Adding a Direct Volume Selecting a VPU (optional) These limits are evaluated on a credit system; to create a new direct volume or share, its VPU must have sufficient credits. Volume limits are enforced whenever a volume is enabled, and share limits are enforced when both the share and its volume are enabled. The enable operation is denied if the VPU lacks sufficient credits.
Adding a Direct Volume Selecting a VPU (optional) --------- ------ ------ ----- medco 2 /vol Enabled wwmed 1 /acct Enabled 2 Namespaces 2 Volumes VPU 2 ----Physical Processor: 3.
Adding a Direct Volume Enabling the Volume For example, the following command shows VPU 1, with CPU and memory details: bstnA6k(gbl-ns-vol[medarcv~/rcrds])# show vpu 1 detailed Switch: bstnA6k ---------------------------------------------------------------------VPU 1 ----Physical Processor: 3.
Adding a Direct Volume Enabling the Volume For example, this command sequence enables the “/vol” volume in the “medco” namespace: bstnA6k(gbl)# namespace medco bstnA6k(gbl-ns[medco])# volume /vol bstnA6k(gbl-ns-vol[medco~/vol])# enable bstnA6k(gbl-ns-vol[medco~/vol])# ... Enabling All Shares in the Volume From gbl-ns-vol mode, you can enable all of the volume’s shares with a single command.
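For example, this command sequence (a hypothetical invocation of the enable shares command that appears elsewhere in this guide) enables every share in the ‘access~/G’ volume with a single command:

prtlndA1k(gbl)# namespace access
prtlndA1k(gbl-ns[access])# volume /G
prtlndA1k(gbl-ns-vol[access~/G])# enable shares
prtlndA1k(gbl-ns-vol[access~/G])# ...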
Adding a Direct Volume Showing the Volume prtlndA1k(gbl-ns-vol[access~/G])# no enable shares prtlndA1k(gbl-ns-vol[access~/G])# ... Disabling the Volume You can disable a volume to stop clients from accessing it. Use no enable in gbl-ns-vol mode to disable the volume: no enable This affects client service. As mentioned above, a disabled volume does not respond to clients; different client applications react to this in different ways. Some may hang, others may log errors that are invisible to the end user.
Adding a Direct Volume Showing the Volume For example, the following command shows the configuration of the ‘medco~/vol’ volume:

bstnA6k# show namespace medco volume /vol

Namespace “medco” Configuration
Description
Metadata Cache Size: 512 MB

Domain Information
------------------

Supported Protocols
-------------------
nfsv3-tcp

Participating Switches
----------------------
bstnA6k (vpu 1) [Current Switch]

Volumes
-------
/vol
  Volume freespace: 463GB (automatic)
  Metadata size: 28k
  State: Enabled
  Host Switch: bs
Adding a Direct Volume Showing the Volume Filer nas1 [192.168.25.21] NFS Export /vol/vol0/direct Status Online Critical Share Yes Free space on storage 45GB (49,157,705,728 B) Free files on storage 1M Virtual inodes 16M Transitions 1 Last Transition Wed Apr 4 03:39:50 2007 Share generic Filer nas3 [192.168.25.
Adding a Direct Volume Showing the Volume Showing One Share To show the configuration and status of one share in a volume, add the share clause after the volume clause: show namespace name volume volume share share-name where: name (1-30 characters) is the name of the namespace, volume (1-1024 characters) is the path name of the volume, and share-name (1-64 characters) identifies the share. This output shows the share that you chose in the command along with its volume and namespace.
Adding a Direct Volume Showing the Volume
Volumes
-------
/vol
  Volume freespace: 463GB (automatic)
  Metadata size: 28k
  State: Enabled
  Host Switch: bstnA6k
  Instance: 1
  VPU: 1 (domain 2)
  Files: 1 used, 31M free

  Share corporate
    Filer nas1 [192.168.25.21]
    NFS Export /vol/vol0/direct
    Status Online
    Critical Share Yes
    Free space on storage 45GB (49,157,705,728 B)
    Free files on storage 1M
    Virtual inodes 16M
    Transitions 1
    Last Transition Wed Apr 4 03:39:50 2007
bstnA6k# ...
Adding a Direct Volume Showing the Volume Showing Filer Shares Behind One Volume You can use the show namespace mapping command to show the filer shares behind a particular namespace, as described in the namespace chapter. This shows all attach points in a direct volume and the physical directories behind them.
Adding a Direct Volume Showing the Volume Showing the Volume’s Configuration To review the configuration settings for a direct volume, identify the volume at the end of the show global-config namespace command: show global-config namespace namespace volume where namespace (1-30 characters) identifies the namespace, and volume (1-1024 characters) is the volume. The output shows all of the configuration options required to recreate the volume.
Adding a Direct Volume Sample - Configuring a Direct Volume share sales filer nas2 nfs /vol/vol1/direct attach vol1/sales to export attach vol1/mtgMinutes to mtgs enable exit vpu 1 domain 2 enable exit exit bstnA6k# ...
Adding a Direct Volume Removing a Direct Volume bstnA6k(gbl-ns-vol-shr[medco~/vol~generic])# filer nas3 nfs /vol/vol2/direct bstnA6k(gbl-ns-vol-shr[medco~/vol~generic])# attach vol2 to data bstnA6k(gbl-ns-vol-shr[medco~/vol~generic])# exit bstnA6k(gbl-ns-vol[medco~/vol])# vpu 1 domain 2 bstnA6k(gbl-ns-vol[medco~/vol])# enable bstnA6k(gbl-ns-vol[medco~/vol])# show namespace status medco Namespace: medco Description: Share Filer Status NFS Export ------------------------- ----------------------------------
Adding a Direct Volume Removing a Direct Volume From priv-exec mode, you can use the remove namespace ... volume command to remove a volume: remove namespace name volume volume [timeout seconds] [sync] where: name (1-30 characters) is the name of the namespace, volume (1-1024 characters) is the path name of the volume, seconds (optional, 300-10,000) sets a time limit on each of the removal’s component operations, and sync (optional) waits for the removal to finish before returning.
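For example, this hypothetical priv-exec command (following the syntax above) removes the ‘/vol’ volume from the ‘medco’ namespace and waits for the removal to finish before returning:

bstnA6k# remove namespace medco volume /vol sync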
Chapter 9 Adding a Managed Volume A managed volume aggregates one or more exports/shares from actual filers. The files from each filer share are imported into the top directory of the volume. During the share import, the volume catalogues all file and directory locations in its metadata. For example, an “/acct” volume with shares from three filers would aggregate files as shown in the figure below: [Figure: an “/acct” volume aggregating /budget and /chestXrays directory trees, with files such as year1.xls, year2.xls, year3.xls, amydoe.jpg, and johndoe.jpg, from three back-end filers]
Adding a Managed Volume Storing Volume Metadata on a Dedicated Share Metadata facilitates storage policies, but it requires some management. A direct volume, described in the previous chapter, has no metadata and is therefore easier to set up and tear down. As explained in the namespace chapter, you use the gbl-ns volume command to create a volume (see “Adding a Volume” on page 7-21).
Adding a Managed Volume Storing Volume Metadata on a Dedicated Share From gbl-ns-vol mode, use the metadata share command to use a dedicated metadata share for the current volume: metadata share filer {nfs3 | nfs3tcp | cifs} path where filer (1-64 characters) is the name of the external filer, nfs3 | nfs3tcp | cifs chooses the protocol to access the share (this can be nfs3 or nfs3tcp for a CIFS-only volume), and path (1-1024 characters) is the specific export/share on the filer.
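For example, this command sequence assigns a dedicated metadata share to the ‘/acct’ volume. (This is an illustrative sketch; the filer name and path here are borrowed from the metadata share shown in the namespace-removal output, and the choice of the nfs3 protocol is an assumption.)

bstnA6k(gbl)# namespace wwmed
bstnA6k(gbl-ns[wwmed])# volume /acct
bstnA6k(gbl-ns-vol[wwmed~/acct])# metadata share nas-p1 nfs3 /vol/vol1/mdata_B
bstnA6k(gbl-ns-vol[wwmed~/acct])# ...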
Adding a Managed Volume Storing Volume Metadata on a Dedicated Share Only one share is chosen during the import, and the volume uses that share to store metadata as long as it runs. After the metadata share is chosen, the volume ignores all of the remaining metadata shares. Removing a Metadata Share You can remove an unused metadata share from a managed volume. A managed volume requires metadata storage for a successful import.
Adding a Managed Volume Storing Volume Metadata on a Dedicated Share Designating the Metadata Share as Critical (optional) If the current switch has a redundant peer, you have the option to designate the volume’s metadata share as critical. Skip to the next section if this switch is not configured for redundancy. If the volume software loses contact with its metadata share, the ARX initiates a failover.
Adding a Managed Volume Dividing the Import into Multiple Scans bstnA6k(gbl)# namespace wwmed bstnA6k(gbl-ns[wwmed])# volume /acct bstnA6k(gbl-ns-vol[wwmed~/acct])# no metadata critical bstnA6k(gbl-ns-vol[wwmed~/acct])# ... Migrating Metadata to a New Share After Import After the managed volume is fully enabled, it chooses its metadata share and writes several database files onto it. You may discover that the metadata filer is not as fast or reliable as you would prefer.
Adding a Managed Volume Dividing the Import into Multiple Scans A multi-scan import is appropriate for an installation with a short cut-in window. From gbl-ns-vol mode, use the import multi-scan command to separate the file scan from the directory scan: import multi-scan This does not have any effect on an import that is currently underway; it applies to future imports only.
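For example, this command sequence (a hypothetical invocation following the syntax above) configures the ‘/acct’ volume to use multi-scan imports for all future imports:

bstnA6k(gbl)# namespace wwmed
bstnA6k(gbl-ns[wwmed])# volume /acct
bstnA6k(gbl-ns-vol[wwmed~/acct])# import multi-scan
bstnA6k(gbl-ns-vol[wwmed~/acct])# ...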
Adding a Managed Volume Dividing the Import into Multiple Scans bstnA6k(gbl)# namespace wwmed bstnA6k(gbl-ns[wwmed])# volume /acct bstnA6k(gbl-ns-vol[wwmed~/acct])# import protection bstnA6k(gbl-ns-vol[wwmed~/acct])# ... Reverting to Unprotected Metadata and Faster Import Protected metadata introduces a performance penalty during import. An unprotected import is often the best choice when it fits comfortably into the assigned cut-in window.
Adding a Managed Volume Allowing the Volume to Modify on Import bstnA6k(gbl-ns-vol[wwmed~/acct])# no import multi-scan bstnA6k(gbl-ns-vol[wwmed~/acct])# ... Safe Modes for Share Imports into Pre-Enabled Volumes After the managed volume is fully enabled, a newly added share always uses the multi-scan import with metadata protection. This is regardless of the volume-level settings for import multi-scan and/or import protection, which only apply to a full volume import.
Adding a Managed Volume Allowing the Volume to Modify on Import Redundant directories are only a problem if their file attributes (such as their permissions settings) do not match, or if they have the same name as an already-imported file. Collided directories are renamed according to the same convention as files. For a multi-protocol (NFS and CIFS) namespace, directories are also renamed if their CIFS names are not mappable to the NFS-character encoding.
Adding a Managed Volume Allowing the Volume to Modify on Import • after an import with no modify (assuming no file or directory collisions occurred on import). You cannot use the modify command if the volume is in the process of importing, if any imported shares have collisions, or if the nsck utility is being used on the volume.
Adding a Managed Volume Allowing the Volume to Modify on Import Preventing Modification On or After Re-Import Use the no reimport-modify command to keep the modify flag down after using nsck: no reimport-modify This is the default. For example: bstnA6k(gbl)# namespace wwmed bstnA6k(gbl-ns[wwmed])# volume /acct bstnA6k(gbl-ns-vol[wwmed~/acct])# no reimport-modify bstnA6k(gbl-ns-vol[wwmed~/acct])# ...
Adding a Managed Volume Allowing the Volume to Modify on Import Automatically Synchronizing Metadata (CIFS) This section only applies to volumes in namespaces that support CIFS. Skip to the next section if the namespace is NFS-only. If a file changes on a filer without the managed volume’s knowledge, the volume’s metadata is compromised. This can happen if one of the filer’s local applications, such as anti-virus software, deletes a file or moves it to a quarantine area.
Adding a Managed Volume Allowing the Volume to Modify on Import Allowing Renames on Collision An auto-sync job may discover a file that collides with another file in the volume (that is, in another share). By default, this prevents the operation from synchronizing that file; a managed volume cannot support two or more files whose path names collide. To work around these collisions, you can configure the volume to rename these files before using them.
Adding a Managed Volume Allowing the Volume to Modify on Import Disallowing Renames If auto-sync jobs are not allowed to rename files that collide, those files cannot be synchronized. The metadata for those files remains stale, so clients cannot access them.
Adding a Managed Volume Manually Setting the Volume’s Free Space (optional) Manually Setting the Volume’s Free Space (optional) The next step in creating a volume is to choose an algorithm for calculating its free space. This is the free-space calculation that is passed onto the client: whenever a user mounts a volume (NFS) or maps a network drive to it (CIFS), this total is the free space that they see.
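For example, this command sequence selects the manual free-space calculation for the ‘/acct’ managed volume. (A sketch that mirrors the direct-volume example in the previous chapter; the assumption is that the command works identically in a managed volume.)

bstnA6k(gbl)# namespace wwmed
bstnA6k(gbl-ns[wwmed])# volume /acct
bstnA6k(gbl-ns-vol[wwmed~/acct])# freespace calculation manual
bstnA6k(gbl-ns-vol[wwmed~/acct])# ...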
Adding a Managed Volume Setting CIFS Options Setting CIFS Options The next step in configuring a volume is addressing its CIFS options, if necessary. Skip to the next section if this volume is in an NFS-only namespace. There are five CIFS-volume attributes that back-end filers may or may not support. They are named streams, compressed files, persistent ACLs, Unicode file names on disk, and sparse files. Each volume can support any and all of these capabilities.
Adding a Managed Volume Setting CIFS Options Supporting Filers with Local Groups A Windows filer can support Global Groups, which are managed by Domain Controllers, and/or Local Groups, which are unique to the filer. Local groups have their own Security IDs (SIDs), unknown to any other Windows machine. When you aggregate shares from these filers into a single volume, some files tagged for local-group X are likely to migrate to another filer, which does not recognize the SID for that group (SID X).
Adding a Managed Volume Setting CIFS Options bstnA6k(gbl-ns-vol[insur~/claims])# cifs oplocks-disable bstnA6k(gbl-ns-vol[insur~/claims])# ... Allowing the Volume to Automatically Disable Oplocks You can configure the managed volume to automatically disable oplocks for a CIFS client that times out in response to an “oplock break” command. The “oplock break” command informs a client that it must finish its writes and release the oplock, so that another client can write to the file.
Adding a Managed Volume Setting CIFS Options Supporting Subshares and their ACLs Windows filers can share multiple directories in the same tree, and can apply a different share-level Access Control List (ACL) to each of them.
Adding a Managed Volume Setting CIFS Options To prepare the managed volume to pass connections through to the filer’s subshares, thereby using the subshares’ ACLs, use the gbl-ns-vol filer-subshares command: filer-subshares You cannot use this command while any of the volume’s shares are enabled. This command only prepares the volume for subshare support at the back-end. A later chapter describes how to configure the client-visible subshares in a front-end CIFS service.
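For example, this command sequence prepares the ‘/rcrds’ volume for subshare support. (A hypothetical invocation; as noted above, all of the volume’s shares must be disabled when you issue the command.)

bstnA6k(gbl)# namespace medarcv
bstnA6k(gbl-ns[medarcv])# volume /rcrds
bstnA6k(gbl-ns-vol[medarcv~/rcrds])# filer-subshares
bstnA6k(gbl-ns-vol[medarcv~/rcrds])# ...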
Adding a Managed Volume Setting CIFS Options Required Windows Permissions To read the share and ACL definitions at the back-end filers, the volume requires proxy-user credentials with admin-level access. This is a significant increase in access from the standard proxy-user requirements; you may need to increase the permissions for the proxy user on all filers, or use a more-powerful proxy user for this namespace. For instructions on editing a proxy user, recall “Adding a Proxy User” on page 3-2.
Adding a Managed Volume Setting CIFS Options Replicating Subshares at all of the Volume’s Filers The managed volume must have consistent subshares and subshare ACLs under all of its back-end shares. Consistency is required so that clients have the same access point and permissions no matter which back-end share contains their files and directories. If a subshare definition is missing from any share, or has a different ACL, the volume cannot import the top-level share.
Adding a Managed Volume Setting CIFS Options You can issue this command in an enabled volume that already supports filer subshares. In this case, the volume replicates all subshares to any newly-added shares. For example, this command sequence replicates all subshares to new shares in the “/rcrds” volume: bstnA6k(gbl)# namespace medarcv bstnA6k(gbl-ns[medarcv])# volume /rcrds bstnA6k(gbl-ns-vol[medarcv~/rcrds])# filer-subshares replicate bstnA6k(gbl-ns-vol[medarcv~/rcrds])# ...
Adding a Managed Volume Adding a Share Disabling Filer Subshares From gbl-ns-vol mode, use the no filer-subshares command to disable volume support for CIFS subshares and their share-level ACLs: no filer-subshares You can only disable this feature when no CIFS services are sharing any of the volume’s subshares. CIFS front-end services are described in a later chapter, along with instructions on sharing CIFS subshares.
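For example, this hypothetical command sequence disables subshare support in the ‘/rcrds’ volume, assuming that no CIFS service is currently sharing any of its subshares:

bstnA6k(gbl)# namespace medarcv
bstnA6k(gbl-ns[medarcv])# volume /rcrds
bstnA6k(gbl-ns-vol[medarcv~/rcrds])# no filer-subshares
bstnA6k(gbl-ns-vol[medarcv~/rcrds])# ...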
Adding a Managed Volume Adding a Share For example, this command set adds a share called “bills” to the “/acct” volume in the “wwmed” namespace: bstnA6k(gbl)# namespace wwmed bstnA6k(gbl-ns[wwmed])# volume /acct bstnA6k(gbl-ns-vol[wwmed~/acct])# share bills This will create a new share. Create share 'bills'? [yes/no] yes bstnA6k(gbl-ns-vol-shr[wwmed~/acct~bills])# ...
Adding a Managed Volume Adding a Share Identifying the Filer and Share The most important step in configuring a share is connecting it to an export/share on an external filer. The export/share must support all of the namespace’s protocols; a CIFS namespace can only import CIFS shares, and an NFSv3 namespace can only import NFSv3 exports. You use the filer command to identify the filer share behind the managed share.
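For example, this command sequence connects the ‘bills’ share to an NFS export. (An illustrative sketch: the filer name is taken from the namespace-configuration listing shown earlier, but the export path, /exports/bills, is hypothetical.)

bstnA6k(gbl)# namespace wwmed
bstnA6k(gbl-ns[wwmed])# volume /acct
bstnA6k(gbl-ns-vol[wwmed~/acct])# share bills
bstnA6k(gbl-ns-vol-shr[wwmed~/acct~bills])# filer das1 nfs /exports/bills
bstnA6k(gbl-ns-vol-shr[wwmed~/acct~bills])# ...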
Adding a Managed Volume Adding a Share Disconnecting From the Filer Before the Share is Enabled To correct a mistake, you can disconnect a share from its filer before you enable the share. (The process of enabling a share is described later.)
Adding a Managed Volume Adding a Share For example, the following command sequence allows the “medarcv~/lab_equipment” volume to skip this check while importing the ‘equip’ share. bstnA6k(gbl)# namespace medarcv bstnA6k(gbl-ns[medarcv])# volume /lab_equipment bstnA6k(gbl-ns-vol[medarcv~/lab_equipment])# share equip bstnA6k(gbl-ns-vol-shr[medarcv~/lab_equipment~equip])# import skip-managed-check bstnA6k(gbl-ns-vol-shr[medarcv~/lab_equipment~equip])# ...
Adding a Managed Volume Adding a Share choose an alternative: instead of renaming the directory, synchronize its attributes with that of its already-imported counterpart. The volume presents the two directories as a single directory, with the aggregated contents of both and the attributes of the one that was imported first. For heterogeneous multi-protocol namespaces, always enable synchronization with the import sync-attributes command.
Adding a Managed Volume Adding a Share bstnA6k(gbl-ns-vol[wwmed~/acct])# share bills bstnA6k(gbl-ns-vol-shr[wwmed~/acct~bills])# no import sync-attributes bstnA6k(gbl-ns-vol-shr[wwmed~/acct~bills])# ... Preventing Directory Renames During Import Whether or not the managed volume is allowed to synchronize attributes on this share, it has occasion to rename any directories that collide with previously-imported files.
Adding a Managed Volume Adding a Share Allowing Directory Renames on Import If the share allows directory renames, the volume renames its colliding directories as specified by the modify command (refer back to “Allowing the Volume to Modify on Import” on page 9-9). From gbl-ns-vol-shr mode, use the import rename-directories command to permit the volume to rename this share’s directories: import rename-directories This is the default setting.
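For example, this hypothetical command sequence restores the default behavior, permitting the volume to rename colliding directories in the ‘bills’ share during import:

bstnA6k(gbl)# namespace wwmed
bstnA6k(gbl-ns[wwmed])# volume /acct
bstnA6k(gbl-ns-vol[wwmed~/acct])# share bills
bstnA6k(gbl-ns-vol-shr[wwmed~/acct~bills])# import rename-directories
bstnA6k(gbl-ns-vol-shr[wwmed~/acct~bills])# ...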
Adding a Managed Volume Adding a Share The resulting name is visible through NFS and CIFS, and can be correlated to the intended CIFS name for the directory. As mentioned above, you can look at the share’s import report to see the original name and the new name for each renamed directory.
Adding a Managed Volume Adding a Share Allowing File Renames in Import If the share allows file renames, the volume renames its colliding files as specified by the modify command. import rename-files This is the default setting.
Adding a Managed Volume Adding a Share Each share whose filer uses Local Groups must have SID translation enabled.
Adding a Managed Volume Adding a Share The output displays the translation at each share.
Adding a Managed Volume Adding a Share Some file servers issue these errors for unknown SIDs but accept the file anyway. Some EMC file servers have this setting as a default. As long as the file server is configured to accept the file or directory (perhaps erasing the unknown SIDs), the volume can safely ignore these errors. Do not ignore any SID errors from a file server that rejects the file or directory. The SID errors alert the volume to the failed migration.
Adding a Managed Volume Adding a Share Designating the Share as Critical (optional) If the current switch has a redundant peer, you have the option to designate the current share as critical. Skip to the next section if this switch is not configured for redundancy. If the volume software loses contact with one of its critical (and enabled) shares, the ARX initiates a failover.
Adding a Managed Volume Adding a Share Ignoring the Share’s Free Space (optional) This option is only relevant in a volume where you are manually calculating free space (see “Manually Setting the Volume’s Free Space (optional)” on page 9-16). Such a volume’s free space is the sum of the space from all of its shares, including multiple shares from the same back-end storage volume.
Adding a Managed Volume Adding a Share Adjusting the Free-Space Calculation You can also manually adjust the free-space that is advertised for the current share. From gbl-ns-vol-share mode, use the freespace adjust command. This was described in detail for direct volumes; see “Adjusting the Free-Space Calculation” on page 8-17.
Adding a Managed Volume Adding a Share For example, the following command sequence enables the “wwmed~/acct~bills” share.

bstnA6k(gbl)# namespace wwmed
bstnA6k(gbl-ns[wwmed])# volume /acct
bstnA6k(gbl-ns-vol[wwmed~/acct])# share bills
bstnA6k(gbl-ns-vol-shr[wwmed~/acct~bills])# enable
bstnA6k(gbl-ns-vol-shr[wwmed~/acct~bills])# ...

If the managed volume is also enabled (as described below), it begins to import files and directories from the back-end share.
Adding a Managed Volume Adding a Share The CLI prompts for confirmation before taking ownership of the share. Enter yes to proceed.
Adding a Managed Volume Adding a Share The following changes were made to replicate nested shares and their attributes to the new share:

Added share “CELEBS$” to the following filer:
  Filer Name: fs1
  IP Address: 192.168.25.20
  Path: d:\exports\histories\VIP_wing

Added share “Y2004” to the following filer:
  Filer Name: fs1
  IP Address: 192.168.25.20
  Path: d:\exports\histories\2004

Added share “Y2005” to the following filer:
  Filer Name: fs1
  IP Address: 192.168.25.
Adding a Managed Volume Adding a Share Disabling the Share You can disable a share to make it inaccessible to namespace clients. This stops access to all files on the share. As in a direct volume, use no enable in gbl-ns-vol-shr mode to disable the share. This suspends all policy rules in the current volume; the rules remain enabled, but they are not enforced. To bring policy back online for the current volume, either remove the share (described below) or re-enable it.
Adding a Managed Volume Selecting a VPU (optional) Selecting a VPU (optional) The next step in configuring a volume is to choose its Virtual-Processing Unit (VPU). A VPU is a virtual CPU that can fail over from one chassis to another in a redundant configuration. Every volume type, including managed and direct, runs on a particular VPU. The namespace software chooses a default VPU if you do not explicitly choose one.
Adding a Managed Volume Selecting a VPU (optional) Splitting Namespace Processing within a VPU Each VPU has two domains; by default, all of a namespace’s volumes run in one domain. If the metadata share fails badly for one volume in a VPU domain, the other volumes in the same domain also fail. For example, consider a system with 7 volumes in a single namespace, divided between 2 VPU domains.
Adding a Managed Volume Selecting a VPU (optional) To mitigate this problem, you can assign the same namespace to both VPU domains. This divides the namespace’s volumes between the domains. Each domain runs independently; one can have a metadata failure without affecting the other.
Adding a Managed Volume Selecting a VPU (optional) For example: bstnA6k(gbl)# namespace medarcv bstnA6k(gbl-ns[medarcv])# volume /test_results bstnA6k(gbl-ns-vol[medarcv~/test_results])# no vpu bstnA6k(gbl-ns-vol[medarcv~/test_results])# ... VPU Limits for Managed Volumes and Shares Managed volumes have stricter limits on shares than direct volumes. (For the maximum shares in a direct volume, see “VPU Limits for Direct Volumes and Shares” on page 8-24.
Adding a Managed Volume Selecting a VPU (optional) Changing the Number of Reserved Files Each VPU can support a limited number of files and directories in its managed volumes. Table 9-2.
Adding a Managed Volume Selecting a VPU (optional) Reverting to the Default Number of Reserved Files The no reserve files command reverts the volume to the default number of reserved files. no reserve files For example: bstnA6k(gbl)# namespace wwmed bstnA6k(gbl-ns[wwmed])# volume /acct bstnA6k(gbl-ns-vol[wwmed~/acct])# no reserve files bstnA6k(gbl-ns-vol[wwmed~/acct])# ...
Adding a Managed Volume Selecting a VPU (optional) VPU 2 ----Physical Processor: 3.
Adding a Managed Volume Enabling the Volume File credits: 4.0M files reserved (380M credits remain of total 384M) Namespace Domain Volume State --------- ------ ------ ----- medco 2 /vol Enabled wwmed 1 /acct Enabled 2 Namespaces 2 Volumes bstnA6k(gbl-ns-vol[medarcv~/rcrds])# ... Enabling the Volume The final step in configuring a managed volume is to enable it. This is the same as for a direct volume: from gbl-ns-vol mode, use the enable command to enable the current volume.
Adding a Managed Volume Enabling the Volume bstnA6k(gbl-ns[wwmed])# volume /acct bstnA6k(gbl-ns-vol[wwmed~/acct])# enable bstnA6k(gbl-ns-vol[wwmed~/acct])# ... The import happens asynchronously, so that you can issue more CLI commands while the import happens in the background. To check the progress of the import, use show namespace [status], as described below in “Monitoring the Import.
Adding a Managed Volume Enabling the Volume You can use the optional take-ownership flag for this special case. If the managed volume finds an ownership marker in the root of any of its shares, it overwrites the marker file. Otherwise, it imports the share as usual: enable shares take-ownership Do not use this option if it is possible that another ARX is managing one of the volume’s shares. This would unexpectedly remove the share(s) from service at the other ARX.
Adding a Managed Volume Monitoring the Import Disabling the Volume You can disable a volume to stop clients from accessing it. Just as described with a direct volume, you disable the volume with no enable in gbl-ns-vol mode. (See “Disabling the Volume” on page 8-29.) For example, the following command sequence disables the “/acct” volume: bstnA6k(gbl)# namespace wwmed bstnA6k(gbl-ns[wwmed])# volume /acct bstnA6k(gbl-ns-vol[wwmed~/acct])# no enable bstnA6k(gbl-ns-vol[wwmed~/acct])# ...
Adding a Managed Volume Monitoring the Import
Share              Filer    Status    NFS Export
------------------ -------- --------- ---------------------
Volume: /acct Enabled
budget             das1     Online    NFS: /exports/budget
bills              das8     Online    NFS: /work1/accting
metadata-share     nas1     Online    NFS: /vol/vol1/meta1
bills2             das3     Online    NFS: /data/acct2
bstnA6k(gbl)# ...
The Status for each imported share should go through the following states: 1. Pending, 2. Importing, then 3. Online.
Adding a Managed Volume Monitoring the Import Canceling a Share Import From priv-exec mode, you can cancel the import of a single share with the cancel import command: cancel import namespace ns volume vol-path share share-name where: ns (1-30 characters) identifies the namespace. vol-path (1-1024 characters) is the share’s volume. share-name (1-64 characters) is the share.
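For example, using the namespace, volume, and share names from the “insur” import described elsewhere in this chapter, a cancellation might look like this:

```
bstnA6k# cancel import namespace insur volume /claims share shr1-old
```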
Adding a Managed Volume Monitoring the Import For example, this shows the import report for the “shr1-old” share. The import contains several files and directories with multi-protocol issues, highlighted in bold text: bstnA6k# show reports import.10.shr1-old.22.rpt **** Share Import Report: Started at Sat Nov 18 03:26:32 2006 **** **** Namespace: insur **** Volume: /claims **** Share: shr1-old **** IP Addr: 192.168.25.
Adding a Managed Volume Monitoring the Import **** **** Issue **** NC : Name collision. **** AC : Attribute collision. **** RA : Attributes of share root are inconsistent. **** MG : Subdirectory of this share is already imported as managed share. **** CC : Case-blind collision (MPNS and CIFS-only). **** ER : Entry removed from filer during import. **** AE : Error accessing entry. **** RV : Reserved name on filer not imported. **** @@ : Other error.
Adding a Managed Volume Monitoring the Import [ F IC] /stats/in_home:2005/age:<10yrs.csv [ F IC] /stats/in_home:2005/age:11-21yrs.csv [ D IC] /Claims:2001 [ D IC] /claims:2005 [ D IC] /:7QD4210 [ D IC] /stats/in_home:2005 [ F CC] /stats/piechart.ppt [ F CC] /stats/PieChart.
Adding a Managed Volume Showing the Volume Showing the Volume The direct-volume chapter discussed some show commands that focus on volumes; recall “Showing the Volume” on page 8-29. These same commands work on all volume types, including managed volumes. The difference is the output; managed volumes support policy (described in the next chapter), so any rules in the volume appear here. To show only one volume in a namespace, add the volume clause to show namespace command.
Adding a Managed Volume Showing the Volume Windows Management Authorization Policies ----------------------------------------readOnly fullAccess Volumes ------/rcrds CIFS : compressed files: yes; named streams: yes; persistent ACLs: yes sparse files: yes; Unicode on disk: yes; case sensitive: no Volume freespace: 504GB (automatic) Auto Sync Files: Enabled Metadata size: 120k Metadata free space: 87GB Filer Subshares: Enabled; Replicate Oplock support: Enabled Notify-change mode: Normal CIFS path cache: Not
Adding a Managed Volume Showing the Volume CIFS Maximum Request Size 16644 SID Translation Yes Ignore SID errors No Status Online Free space on storage 414GB (444,650,885,120 B) Transitions 1 Last Transition Fri Nov 2 03:29:54 2007 Share charts Filer fs1 [192.168.25.
Adding a Managed Volume Showing the Volume
-----------
Share Farm                 medFm
Share                      rx
Share                      charts
State                      Enabled
Volume Scan Status         Complete
Migration Status           Complete
New File Placement Status  Enabled
Volume Rules
---------------
Rule Name                  dailyArchive
Type                       Place Rule
State                      Enabled
Volume Scan Status         Complete
Migration Status           Complete
New File Placement Status  Enabled
bstnA6k# ...
Adding a Managed Volume Showing the Volume Domain Information ------------------ Supported Protocols ------------------nfsv3 Participating Switches ---------------------bstnA6k (vpu 1) [Current Switch] Volumes ------/acct Volume freespace: 100GB (automatic) Metadata size: 1.5M Metadata free space: 32GB Import Protection: On State: Enabled Host Switch: bstnA6k Instance: 2 VPU: 1 (domain 1) Files: 4.2k used (426 dirs), 3.
Adding a Managed Volume Showing the Volume NFS Export /work1/accting Features unix-perm Status Online Critical Share Yes Free space on storage 17GB (18,803,621,888 B) Free files on storage 18M Transitions 1 Last Transition Wed Apr 4 03:41:05 2007 Share Farms ----------Share Farm fm1 Share bills Share bills2 Share budget State Enabled Volume Scan Status Complete Migration Status Complete New File Placement Status Enabled Volume Rules --------------Rule Name docs2das8 Type Pla
Adding a Managed Volume Showing the Volume Showing Filer Shares Behind One Volume You can use the show namespace mapping command to show the filer shares behind a particular namespace, as described earlier in the namespace chapter. Add the volume clause to show only the shares behind that particular volume; this is the same for managed volumes as described earlier for direct volumes (see “Showing Filer Shares Behind One Volume” on page 8-34).
Adding a Managed Volume Showing the Volume ntlm-auth-server dc1 ntlm-auth-server dc1-oldStyle protocol cifs proxy-user acoProxy2 windows-mgmt-auth readOnly windows-mgmt-auth fullAccess sam-reference fs2 volume /rcrds filer-subshares replicate modify reimport-modify reserve files 4000000 auto sync files hosted-by bstnA6k metadata share nas1 nfs3 /vol/vol1/meta3 compressed-files named-streams persistent-acls sparse-files unicode-on-disk share bulk sid-translation filer fs2 cifs bulkstorage enable exit share c
Adding a Managed Volume Sample - Configuring a Managed Volume exit share-farm medFm share rx share charts auto-migrate 100M balance Latency enable exit place-rule dailyArchive schedule hourly from fileset dayOld target share bulk no inline-notify enable exit vpu 2 domain 1 enable exit exit bstnA6k# ... Sample - Configuring a Managed Volume For example, this command set configures the ‘/acct’ volume on the ‘wwmed’ namespace.
Adding a Managed Volume Sample - Configuring a Managed Volume This will create a new volume. Create volume '/acct'? [yes/no] yes
bstnA6k(gbl-ns-vol[wwmed~/acct])# show external-filer
Name      IP Address       Description
--------- ---------------- ----------------------------
das1      192.168.25.19    financial data (LINUX filer, rack 14)
fs2       192.168.25.27    bulk storage server (DAS, Table 3)
fs1       192.168.25.20    misc patient records (DAS, Table 3)
nasE1     192.168.25.51
fs3       192.168.25.
Adding a Managed Volume Removing a Managed Volume bstnA6k(gbl-ns-vol-shr[wwmed~/acct~bills])# exit bstnA6k(gbl-ns-vol[wwmed~/acct])# enable bstnA6k(gbl-ns-vol[wwmed~/acct])# exit bstnA6k(gbl-ns[wwmed])# exit bstnA6k(gbl)# Removing a Managed Volume As with a direct volume, use the priv-exec remove namespace ... volume command to remove a managed volume. (Recall “Removing a Direct Volume” on page 8-37).
Adding a Managed Volume Removing a Managed Volume 9-72 CLI Storage-Management Guide
Chapter 10 Configuring a Global Server A global server is a client-entry point to the ARX’s various front-end services. The global server defines a Fully-Qualified Domain Name (FQDN; for example, “www.acopia.com”) for accessing its services. A global server’s services are implemented by one virtual server on the ARX. Each virtual server listens at a unique virtual-IP (VIP) address. [Figure: one ARX hosting two global servers, “www.wwmed.com” and “insur.medarch.org,” each implemented by its own virtual server with a distinct VIP.]
Configuring a Global Server Concepts and Terminology Concepts and Terminology A front-end service is a service that is visible to clients. This is in contrast to the back-end filers and servers, whose services are aggregated by the ARX. A front-end service provides an interface for clients to access the aggregated back-end services. For example, the NFS and CIFS front-end services provide mount points and share names for accessing various back-end filers. A global-server configuration includes an FQDN.
Configuring a Global Server Adding a Global Server Create global server 'www.wwmed.com'? [yes/no] yes bstnA6k(gbl-gs[www.wwmed.com])# ... Setting the Windows Domain (CIFS Only) If the global server uses back-end servers that require Windows networking, the global server needs the Windows domain to integrate with the back-end servers. Use the windows-domain command to set the Windows domain: windows-domain domain where domain can be up to 64 characters long.
Configuring a Global Server Adding a Global Server windows-domain name (up to 15 characters, converted to uppercase). For most installations, this is sufficient. For sites that do not conform to this naming convention, you can use the pre-win2k-name option to use a different short-domain name: windows-domain domain pre-win2k-name short-name where short-name (optional) can be up to 15 characters long. The CLI converts all letters to uppercase.
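As a sketch, assuming the pairing of the “MEDARCH.ORG” domain with the short name “NTNET” that appears in the show output later in this chapter:

```
bstnA6k(gbl)# global server ac1.medarch.org
bstnA6k(gbl-gs[ac1.medarch.org])# windows-domain MEDARCH.ORG pre-win2k-name NTNET
bstnA6k(gbl-gs[ac1.medarch.org])# ...
```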
Configuring a Global Server Adding a Global Server Use the virtual server command to create a virtual server for an ARX and assign a VIP address to the switch: virtual server switch-name virtual-ip-address mask [vlan vlan-id] where switch-name (1-128 characters) is the host name of the current ARX, and virtual-ip-address is one VIP for the switch, which you create with this command. mask establishes the subnet part of the virtual-IP address.
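For example, the sample network in this chapter places VIP 192.168.25.15 on VLAN 25 at the switch named “bstnA6k.” Assuming the mask is entered in dotted-quad form (the /24 from the show output written out), the command might look like this:

```
bstnA6k(gbl)# global server ac1.medarch.org
bstnA6k(gbl-gs[ac1.medarch.org])# virtual server bstnA6k 192.168.25.15 255.255.255.0 vlan 25
```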
Configuring a Global Server Adding a Global Server If you identify a WINS server for this network, the virtual server registers its NetBIOS name with the WINS server. This makes it possible for other WINS clients to find the virtual server on this switch. Use the wins command to identify a WINS server: wins ip-address where ip-address is the address of the name server. If the WINS server supports multi-byte character encoding, set the proper character encoding at the namespace behind this virtual server.
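The sample output later in this chapter shows 192.168.25.20 as the WINS server; a minimal example, with an illustrative rendering of the gbl-gs-vs prompt, is:

```
bstnA6k(gbl-gs-vs[ac1.medarch.org])# wins 192.168.25.20
bstnA6k(gbl-gs-vs[ac1.medarch.org])# ...
```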
Configuring a Global Server Adding a Global Server Setting the NetBIOS Name (optional, CIFS) This section only applies to virtual servers that support CIFS storage. The virtual server’s NetBIOS name is the server name that appears in Windows network browsers. This appears in the “Server Name” column when you issue a Windows net view command.
Configuring a Global Server Adding a Global Server Adding a NetBIOS Alias Some installations use multiple NetBIOS names for a single CIFS server. To mimic this configuration, use the wins-alias command (in gbl-gs-vs mode) for each additional NetBIOS name: wins-alias netbios-alias where netbios-alias (1-15 bytes) is a NetBIOS alias to be advertised to the WINS server. The first character must be a letter, and the remaining characters can be letters, numbers, or underscores (_).
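For example, to advertise a hypothetical additional NetBIOS name, “INSURE2,” for the current virtual server (the prompt rendering is illustrative):

```
bstnA6k(gbl-gs-vs[ac1.medarch.org])# wins-alias INSURE2
bstnA6k(gbl-gs-vs[ac1.medarch.org])# ...
```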
Configuring a Global Server Adding a Global Server Reverting to the Default NetBIOS Name You can revert to the default NetBIOS name with the no wins-name command. The default NetBIOS name is the first component of the global server’s FQDN (for example, “\\FTP1” for global server “ftp1.government.gov”). no wins-name The CLI prompts for confirmation before deleting the name. For example, the following command sequence sets the NetBIOS name back to its default (“\\MYCO”) for the virtual server at “192.168.25.
Configuring a Global Server Adding a Global Server Disabling a Virtual Server Disabling a virtual server makes it impossible for clients to access the particular switch’s front-end services (such as CIFS or NFS) through that virtual server’s IP address. Use no enable in gbl-gs-vs mode to disable a virtual server. no enable This command gracefully terminates all client connections at the VIP address, allowing current transactions and sessions to finish while blocking any new connections.
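For example, from gbl-gs-vs mode (the prompt rendering here is illustrative):

```
bstnA6k(gbl-gs-vs[www.wwmed.com])# no enable
bstnA6k(gbl-gs-vs[www.wwmed.com])# ...
```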
Configuring a Global Server Adding a Global Server Enabling the Global Server The final step in global-server configuration is to enable it. Use the enable command to activate the global server: enable For example, the following command sequence enables the global server at “www.wwmed.com:” bstnA6k(gbl)# global server www.wwmed.com bstnA6k(gbl-gs[www.wwmed.com])# enable bstnA6k(gbl-gs[www.wwmed.com])# ...
Configuring a Global Server Adding a Global Server Domain Name State Windows Domain --------------------------------------------------------------------------ac1.medarch.org Switch Enabled State MEDARCH.ORG (NTNET) VIP VLAN WINS Server VMAC WINS Name ------------------------------------------------------------------------bstnA6k Enabled 192.168.25.15/24 25 192.168.25.
Configuring a Global Server Adding a Global Server WINS Server WINS Name ------------------------------------------------------------------------bstnA6k Enabled 192.168.25.14/24 25 192.168.25.20 00:0a:49:00:0a:c0 INSURANCE Description ------------------------------------------------------------------------CIFS and NFS server for hospital insurance claims Domain Name State Windows Domain --------------------------------------------------------------------------www.wwmed.
Configuring a Global Server Adding a Global Server bstnA6k(gbl)# show global server www.wwmed.com Domain Name State Windows Domain --------------------------------------------------------------------------www.wwmed.com Enabled Switch State VIP VLAN WINS Server VMAC WINS Name ------------------------------------------------------------------------bstnA6k Enabled 192.168.25.
Configuring a Global Server Sample - Configuring a Global Server bstnA6k(gbl)# ... Sample - Configuring a Global Server The following command sequence sets up a global server for www.wwmed.com. Create the global server: bstnA6k(gbl)# global server ac1.medarch.org This will create a new global server. Create global server 'ac1.medarch.org'? [yes/no] yes Join a Windows domain, MEDARCH.ORG: bstnA6k(gbl-gs[ac1.medarch.org])# windows-domain MEDARCH.ORG Bind to the current ARX, “bstnA6k.
Chapter 11 Configuring Front-End Services Front-end services provide client access to namespace storage. Supported front-end services include • Network File System (NFS), and • Common Internet File System (CIFS). You can enable one or more of these services on a global server, so that clients can access them through the global server’s fully-qualified domain name (FQDN) or the virtual server’s VIP. For each service, you determine which namespace volumes are available as storage resources.
Configuring Front-End Services Before You Begin Before You Begin To offer any front-end services, you must first • add one or more NAS filers, as described in Chapter 6, Adding an External Filer; • create at least one namespace as a storage resource, as described in Chapter 7, Configuring a Namespace; • create at least one volume in the namespace, a direct volume (recall Chapter 8, Adding a Direct Volume) or a managed volume (see Chapter 9, Adding a Managed Volume); and • configure a global server t
Configuring Front-End Services Configuring NFS From gbl-nfs mode, you must export at least one namespace volume and then enable the NFS service, as described in the following subsections. Exporting a Namespace Volume If a namespace volume is configured for NFS, you can offer it as an NFS export through a global server. Each NFS service can support volumes from a single namespace.
Configuring Front-End Services Configuring NFS bstnA6k(gbl-nfs[www.wwmed.com])# show global-config namespace ;=============================== namespace =============================== ... ;=============================== namespace =============================== namespace wwmed description “namespace for World-Wide Medical network” protocol nfs3 volume /acct import protection metadata critical ... bstnA6k(gbl-nfs[www.wwmed.com])# export wwmed /acct access-list eastcoast bstnA6k(gbl-nfs[www.wwmed.com])# ...
Configuring Front-End Services Configuring NFS Disabling NLM (optional) The NFS service implements the NFS Lock Manager (NLM) protocol. NLM is a voluntary protocol that NFS-client applications can use to write-protect a file or file region. NFS client A can use NLM to lock a region of a file; if clients B and C are also NLM-compliant, they will not write to that region until client A releases the lock. While the NFS service is disabled, you have the option to disable NLM.
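The command to disable NLM is no nlm enable; for example:

```
bstnA6k(gbl)# nfs www.wwmed.com
bstnA6k(gbl-nfs[www.wwmed.com])# no nlm enable
bstnA6k(gbl-nfs[www.wwmed.com])# ...
```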
Configuring Front-End Services Configuring NFS Enabling NLM While the NFS service is disabled, you can use nlm enable to re-enable NLM processing: nlm enable This causes the front-end service to answer all NLM requests. For example: bstnA6k(gbl)# nfs www.wwmed.com bstnA6k(gbl-nfs[www.wwmed.com])# nlm enable bstnA6k(gbl-nfs[www.wwmed.com])# ... Enabling NFS Service The final step in NFS configuration is to enable it.
Configuring Front-End Services Configuring NFS bstnA6k(gbl-nfs[www.wwmed.com])# no enable bstnA6k(gbl-nfs[www.wwmed.com])# ... Notifications to NLM Clients As described above, the NFS service can implement the NFS Lock Manager (NLM) protocol. If you used no nlm enable to stop NLM, skip to the next section. In an NFS service where NLM is enabled, a no enable followed by an enable triggers a notification to NLM clients.
Configuring Front-End Services Configuring NFS Showing One NFS Service Identify a particular FQDN with the show nfs-service command to focus on one NFS service: show nfs-service fqdn where fqdn (1-128 characters) is the fully-qualified domain name (for example, www.company.com) for the global server. This shows detailed configuration information for the service. For example, the following command shows the NFS configuration for the global server at “www.wwmed.com:” bstnA6k(gbl)# show nfs-service www.wwmed.
Configuring Front-End Services Configuring NFS Sample - Configuring an NFS Front-End Service The following command sequence sets up NFS service on a global server called “www.wwmed.com:” bstnA6k(gbl)# nfs www.wwmed.com bstnA6k(gbl-nfs[www.wwmed.com])# show global-config namespace wwmed ;=============================== namespace =============================== namespace wwmed description “namespace for World-Wide Medical network” protocol nfs3 volume /acct import protection ... bstnA6k(gbl-nfs[www.wwmed.
Configuring Front-End Services Configuring NFS Removing an NFS Service You can remove an NFS service from a global server to both disable the service and remove its configuration. Use the no form of the nfs command to remove an NFS-service configuration from a global server: no nfs fqdn where fqdn (1-128 characters) is the fully-qualified domain name (for example, “www.organization.org”) for the service’s global server. The CLI prompts for confirmation before removing the service; enter yes to proceed.
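For example, to remove the NFS service from “www.wwmed.com” (the exact wording of the confirmation prompt shown here is illustrative):

```
bstnA6k(gbl)# no nfs www.wwmed.com
Remove NFS service 'www.wwmed.com'? [yes/no] yes
bstnA6k(gbl)# ...
```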
Configuring Front-End Services Configuring NFS Showing the NFS/TCP Timeout Use the show nfs tcp command to view the current client-connection behavior and timeout period for NFS/TCP timeouts: show nfs tcp For example, this system has the behavior configured above: bstnA6k(gbl)# show nfs tcp Transaction Timeout Behavior: Return I/O Error Inactivity: 30 seconds bstnA6k(gbl)# ...
Configuring Front-End Services Configuring CIFS Configuring CIFS From gbl mode, use the cifs command to instantiate CIFS service for a global server: cifs fqdn where fqdn (1-128 characters) is the fully-qualified domain name for the global server (for example, “myserver.organization.org”).
Configuring Front-End Services Configuring CIFS Use the export command to share a namespace volume through the current CIFS service: export namespace vol-path [as share-name] [description description] where namespace (1-30 characters) can be any namespace that supports CIFS. vol-path (1-1024 characters) is the path to one of the namespace’s volumes (for example, “/oneVol”) or volume sub paths (“oneVol/apps/myApps”). as share-name (optional; 1-1024 characters) sets a share name for the volume.
Configuring Front-End Services Configuring CIFS protocol cifs proxy-user acoProxy2 windows-mgmt-auth readOnly windows-mgmt-auth fullAccess sam-reference fs2 volume /lab_equipment ... enable exit volume /rcrds filer-subshares replicate modify ... bstnA6k(gbl-cifs[ac1.medarch.org])# export medarcv /rcrds as ARCHIVES description “2-year-old medical records” bstnA6k(gbl-cifs[ac1.medarch.org])# ...
Configuring Front-End Services Configuring CIFS Exporting a Filer Subshare (and Using its ACL) This section only applies to managed volumes. Skip all of the “subshare” sections if you are sharing a direct volume. The CIFS service accesses each back-end share through its root, whether or not you export a directory below the root of the volume. For example, suppose you export “/rcrds/2005” as a subshare of the above “/rcrds” share.
Configuring Front-End Services Configuring CIFS The volume and filers must be properly prepared before your CIFS service can offer this subshare service. A subshare must have the same name, ACL, and position in the directory tree (relative to the share root) on every filer behind the volume.
Configuring Front-End Services Configuring CIFS Exposing Hidden Subshares A filer subshare can be hidden by ending its share name with a dollar sign ($) (for example, “myshare$”). Most views of the filer’s CIFS shares do not show these names. The CIFS front-end service can expose a hidden subshare by using a slightly different name for its front-end subshare: the back-end name without the “$” (such as “myshare”).
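For example, the sample CIFS service later in this chapter exposes the hidden back-end subshare “CELEBS$” (at /rcrds/VIP_wing) under the visible front-end name “CELEBS.” A sketch of that export, using the export syntax shown earlier:

```
bstnA6k(gbl)# cifs ac1.medarch.org
bstnA6k(gbl-cifs[ac1.medarch.org])# export medarcv /rcrds/VIP_wing as CELEBS
bstnA6k(gbl-cifs[ac1.medarch.org])# ...
```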
Configuring Front-End Services Configuring CIFS Exporting all Filer Subshares at Once This section only applies to managed volumes. Skip all of the “subshare” sections if you are sharing a direct volume. You can use a single command to export multiple filer subshares. This presumes that the volume is prepared with multiple subshares, perhaps through subshare replication (recall “Replicating Subshares at all of the Volume’s Filers” on page 9-23).
Configuring Front-End Services Configuring CIFS The warning indicates that some subshares were previously exported. In this example, the warning is expected; an earlier command sequence already exported the “Y2005” share. The report confirms this: bstnA6k# show reports cifsExportSubshares_20061221140808.rpt **** Cifs Export Subshares Report: Started at Thu Dec 21 14:08:08 2006 **** **** Software Version: 2.05.000.
Configuring Front-End Services Configuring CIFS **** Elapsed time: 00:00:00 **** Cifs Export Subshares Report: DONE at Thu Dec 21 14:08:08 2006 **** bstnA6k# ... Exposing All Hidden Subshares For filers with hidden CIFS subshares (such as “CELEBS$” from the above example), you can expose them all as shares from the front-end CIFS service. The front-end shares are renamed without the dollar sign ($) suffix (“CELEBS”). All remaining subshares are also exported, as shown above, without any name changes.
Configuring Front-End Services Configuring CIFS Adding New Subshares This section only applies to managed volumes. Skip to the next section if you are sharing a direct volume. The previous sections explained how to export pre-existing subshares, created on the back-end filers before their CIFS shares were imported. To add new subshares, you must directly connect to one of the back-end filers and create them there; a volume manages files and directories, but does not manage share definitions or ACLs.
Configuring Front-End Services Configuring CIFS bstnA6k(gbl)# cifs ac1.medarch.org bstnA6k(gbl-cifs[ac1.medarch.org])# no export medarcv /cifstest bstnA6k(gbl-cifs[ac1.medarch.org])# ... Allowing Clients to Use Windows Management (MMC) As an alternative to managing the CIFS service from the CLI, Windows clients can use Windows-management applications to manage the service from a remote PC. You can use this feature instead of the export command, above.
Configuring Front-End Services Configuring CIFS Client Experience: Using MMC to Manage a Namespace A properly-enabled client can manage this CIFS service using MMC. For example, the following client session adds a share to the “ac1.medarch.org” service from a Windows 2000 machine. The session starts from Start -> Control Panel -> Administrative Tools -> Computer Management, where you connect to the VIP for the service.
Configuring Front-End Services Configuring CIFS This shows all managed volumes in the CIFS service’s namespace under the C drive. Each direct volume in the namespace appears as a separate drive. In this example, the two managed volumes appear as folders under the C drive, and the one direct volume appears as the D drive. You can then use the interface to export the other managed volume with the share name, “EQUIPMENT.
Configuring Front-End Services Configuring CIFS bstnA6k(gbl)# cifs beta_service bstnA6k(gbl-cifs[beta_service])# no browsing bstnA6k(gbl-cifs[beta_service])# ... Setting a Server Description (optional) You can optionally set the CIFS-service description that will appear in Windows network browsers.
Configuring Front-End Services Configuring CIFS bstnA6k(gbl-cifs[ac1.medarch.org])# ... Enabling CIFS Service The next step in CIFS configuration is to enable it. Use the enable command from gbl-cifs mode to activate the CIFS service: enable For example, the following command sequence enables CIFS for the global server at “ac1.medarch.org:” bstnA6k(gbl)# cifs ac1.medarch.org bstnA6k(gbl-cifs[ac1.medarch.org])# enable bstnA6k(gbl-cifs[ac1.medarch.org])# ...
Configuring Front-End Services Configuring CIFS To enable Kerberos authentication by the CIFS service, you must join the CIFS service to the Active-Directory (AD) domain. This process is similar to adding client computers to the AD domain: this action causes the DC to declare the CIFS service as Trusted for Delegation. The CIFS service uses this authority to access back-end filers on behalf of its clients. Trusting an Acopia server for delegation poses no security threat to your network.
Configuring Front-End Services Configuring CIFS bstnA6k(gbl)# cifs ac1.medarch.org bstnA6k(gbl-cifs[ac1.medarch.org])# enable bstnA6k(gbl-cifs[ac1.medarch.org])# domain-join MEDARCH.ORG Username: acoadmin Password: aapasswd 'ac1' successfully joined the domain. bstnA6k(gbl-cifs[ac1.medarch.org])# ... Support for Both NTLM and Kerberos The domain-join operation does not preclude any clients from authenticating with NTLM; the CIFS service can support both authentication protocols concurrently.
Configuring Front-End Services Configuring CIFS RFCs 1034 and 1035 define basic DNS, and RFC 3645 defines Microsoft-specific authentication extensions for dynamic DNS. The ARX implementation of dynamic DNS adheres to all of these RFCs. Before you use dynamic DNS, the name server(s) for this service’s Windows Domain must be included in the AD forest. For instructions on setting up name servers for the AD forest, recall “Identifying a Dynamic-DNS Server” on page 3-12.
Configuring Front-End Services Configuring CIFS • \\fs2.medarch.org\lab_data • \\fs7.medarch.org\tests You can import each of these shares into a single namespace, where each share is in a separate volume. From gbl-cifs mode, you can then export each CIFS volume under its original share name (“xrays,” “lab_data,” and “tests,” respectively). Finally, you can use the dynamic-dns command to register all three of the original host names as DNS aliases for the CIFS service (“fs1,” “fs2,” and “fs7”).
Configuring Front-End Services Configuring CIFS bstnA6k(gbl-cifs[ac1.medarch.org])# end bstnA6k# dynamic-dns update ac1.medarch.org bstnA6k# ... Removing a Host Name You can use the no dynamic-dns command to remove one host name from the current CIFS service. This causes the CIFS service to withdraw all references to this host name from DNS. If you remove all registered host names for this CIFS service, you must manually update the domain’s DNS server to support Kerberos.
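For example, assuming “fs7.medarch.org” was registered as a dynamic-DNS host name earlier, removing it might look like this:

```
bstnA6k(gbl)# cifs ac1.medarch.org
bstnA6k(gbl-cifs[ac1.medarch.org])# no dynamic-dns fs7.medarch.org
bstnA6k(gbl-cifs[ac1.medarch.org])# ...
```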
Configuring Front-End Services Configuring CIFS Svc Global Server Domain Name ----------------------------------------------------------------------------CIFS ac1.MEDARCH.ORG Status MEDARCH.ORG Host Name VIP Operation Retries Last Update DNS Server --------------------------------------------------------------------------OK ac1.MEDARCH.ORG Add 192.168.25.15 0 Wed Oct 4 06:56:24 2006 192.168.25.
Configuring Front-End Services Configuring CIFS Status Host Name Operation VIP Retries Last Update DNS Server --------------------------------------------------------------------------Failed test.MEDARCH.ORG Remove OK 192.168.25.15 15 Wed Oct 4 07:09:33 2006 ac1.MEDARCH.ORG Add Retry 192.168.25.15 0 Wed Oct 4 06:56:24 2006 fs7.MEDARCH.ORG Add 192.168.25.102 192.168.25.104 192.168.25.15 19 Wed Oct 4 07:12:23 2006 192.168.25.
Configuring Front-End Services Configuring CIFS Supporting Aliases with Kerberos This section does not apply to a CIFS service that only uses NTLM authentication. You can also skip this section if you have not registered a WINS name, any WINS aliases, or any DNS aliases for your CIFS service. When a CIFS service joins its AD domain, it registers its FQDN name in the Active-Directory database.
Configuring Front-End Services Configuring CIFS For each alias, we recommend mapping both the simple host name and the full FQDN. For example, the following DOS-command sequence maps three DNS aliases to the “ac1.medarch.org” CIFS service: C:\Program Files\Resource Kit> setspn -A HOST/fs1 ac1.medarch.org C:\Program Files\Resource Kit> setspn -A HOST/fs1.medarch.org ac1.medarch.org C:\Program Files\Resource Kit> setspn -A HOST/fs2 ac1.medarch.org C:\Program Files\Resource Kit> setspn -A HOST/fs2.medarch.
Configuring Front-End Services Configuring CIFS This shows a configuration summary followed by a table of CIFS shares. For example: bstnA6k> show cifs-service ac1.medarch.org Domain Name: ac1.medarch.
Configuring Front-End Services Configuring CIFS Shares -----ARCHIVES Directory /rcrds Description 2 year-old medical records State Online Filer-subshare No Y2005 Directory /rcrds/2005 Description State Online Filer-subshare Yes CELEBS Directory /rcrds/VIP_wing Description State Online Filer-subshare Yes (hidden) Y2004 Directory /rcrds/2004 Description State Online Filer-subshare Yes CELEBS$ Directory /rcrds/VIP_wing Description State Online Filer-subshare Yes Y2006 Directory
Configuring Front-End Services Configuring CIFS Description State Online Filer-subshare Yes bstnA6k> Showing All CIFS Services To show all CIFS front-end services, use show cifs-service all: show cifs-service all [detailed] where detailed (optional) adds details to the CIFS shares. For example, this shows a summary view of every CIFS service on the ARX: bstnA6k> show cifs-service all Domain Name: ac1.medarch.
Configuring Front-End Services Configuring CIFS Share Name Directory State ---------------------------------------------------------------------------CLAIMS /claims Online ... Sample - Configuring a CIFS Front-End Service The following command sequence sets up CIFS service on a global server called “ac1.medarch.org:” bstnA6k(gbl)# cifs ac1.medarch.org bstnA6k(gbl-cifs[ac1.medarch.
Removing a CIFS Service You can remove a CIFS service from a global server to both disable the service and remove its configuration. Use the no form of the cifs command to remove a CIFS-service configuration from a global server: no cifs fqdn where fqdn is the fully-qualified domain name (for example, "www.organization.org") for the global server. The CLI prompts for confirmation before removing the service; enter yes to proceed.
Removing All of a Volume's Front-End Exports You can use a single command to remove all of the front-end exports, NFS and/or CIFS, for a given volume. This is convenient for a volume that has been exported through multiple global servers and front-end services.
% INFO: no export insur_bkup /insurShdw as CLAIMS_BKUP prtlndA1k# ... Showing All Front-End Services Front-end services are identified by the FQDN of their respective global servers. Use the show global service command to show all front-end services configured on the ARX: show global service For example: bstnA6k(gbl)# show global service Domain Name Service State -------------------------------------------------www.wwmed.
Configuring Front-End Services Showing All Front-End Services Domain Name Service State -------------------------------------------------www.wwmed.com NFS Enabled bstnA6k(gbl)# ... Showing Front-End Services per Virtual-Server You can show the front-end services running at each virtual server, with the VIP and current health of each service.
Configuring Front-End Services Showing All Front-End Services www.insurBkup.com Switch 192.168.74.92 CIFS Ready Virtual IP Address Service State prtlndA1kB -----------------------Global Server ------------------------------------------------------------------------ prtlndA1k# ...
Showing Server Maps You can show the map between front-end services and the back-end servers behind them. From any mode, use the show server-mapping command: show server-mapping This displays a two-column table, where the left column shows the client-side view and the right column shows the server side. Each front-end export has its own listing of back-end filers.
Configuring Front-End Services Showing Server Maps \\192.168.25.14\CLAIMS insur:/claims nas1:/vol/vol1/meta2* \\nas1\insurance \\nasE1\patient_records \\192.168.25.14\SPECS insur:/claims nas1:/vol/vol1/meta2* \\nas1\insurance \\nasE1\patient_records \\192.168.25.14\STATS insur:/claims nas1:/vol/vol1/meta2* \\nas1\insurance \\nasE1\patient_records \\192.168.25.15\ARCHIVES medarcv:/rcrds \\fs1\histories \\fs2\bulkstorage \\fs4\prescriptions nas1:/vol/vol1/meta3* \\192.168.25.
\\fs4\prescriptions nas1:/vol/vol1/meta3* \\192.168.25.15\Y2005 medarcv:/rcrds \\fs1\histories \\fs2\bulkstorage \\fs4\prescriptions nas1:/vol/vol1/meta3* where * denotes a metadata-only physical server. bstnA6k(gbl)# ...
Configuring Front-End Services Showing Server Maps vol0/corp 192.168.25.21:/vol/vol0/direct/shr vol0/notes 192.168.25.21:/vol/vol0/direct/notes ... bstnA6k(gbl)# ... Showing the Servers Behind One Virtual Server To focus on one virtual server, add its VIP to the end of the command: show server-mapping virtual-ip vip [ip-addresses] where vip identifies the VIP (for example, 172.16.77.75), and ip-addresses (optional) is explained above. For example, the following command shows the filers behind “192.
Configuring Front-End Services Showing Server Maps \\192.168.25.15\Y2004 medarcv:/rcrds \\fs1\histories \\fs2\bulkstorage \\fs4\prescriptions nas1:/vol/vol1/meta3* \\192.168.25.15\Y2005 medarcv:/rcrds \\fs1\histories \\fs2\bulkstorage \\fs4\prescriptions nas1:/vol/vol1/meta3* Where * denotes metadata only physical server. bstnA6k(gbl)# ...
Configuring Front-End Services Showing Server Maps For example, the following command shows the filers behind the “wwmed” namespace. This shows IP addresses instead of external-filer names: bstnA6k(gbl)# show server-mapping namespace wwmed ip-addresses Virtual Server Namespace/Volume Virtual Path Physical Server ----------------------------------------------------------------------192.168.25.10:/acct wwmed:/acct 192.168.25.19:/exports/budget 192.168.25.23:/data/acct2 192.168.25.24:/lhome/it5 192.168.
Configuring Front-End Services Showing Server Maps bstnA6k(gbl)# show server-mapping status Virtual Server Physical Server --------------------------------------------------------------192.168.25.12:/vol Status -------Ready nas1:/vol/vol0/direct/shr Online nas1:/vol/vol0/direct/notes Online nas2:/vol/vol1/direct/export Online nas2:/vol/vol1/direct/mtgs Online nas3:/vol/vol2/direct/data Online 192.168.25.
Configuring Front-End Services Showing Server Maps \\fs1\histories Online \\fs2\bulkstorage Online \\192.168.25.14\CLAIMS Ready \\nas1\insurance Online \\nasE1\patient_records Online \\192.168.25.14\SPECS Ready \\nas1\insurance Online \\nasE1\patient_records Online \\192.168.25.14\STATS Ready \\nas1\insurance Online \\nasE1\patient_records Online 192.168.25.14:/claims Ready nas1:/vol/vol1/NTFS-QTREE/insurance Online nasE1:/root_vdm_4/patient_records Online bstnA6k(gbl)# ...
Chapter 12 Policy for Balancing Capacity Namespace policy uses file migration and replication to balance the usage of various back-end filers. This chapter explains how to configure policies for managing free space.
Before You Begin You must configure a namespace and at least one managed volume before you configure the policies described in this chapter. See Chapter 7, Configuring a Namespace, and Chapter 9, Adding a Managed Volume. Concepts and Terminology A rule is a condition or set of conditions for moving files between back-end storage devices. Namespace policy is a series of namespace rules. The ARX can move files between external storage devices.
Policy for Balancing Capacity Showing All Policy Rules wwmed Complete /acct medarcv Complete /rcrds medarcv Complete /rcrds fm1 Complete dailyArchive Complete medFm Complete bstnA6k# ... Showing Details Add the details keyword to the end of the command to show details for all policies on the ARX: show policy details The output is divided into namespaces, volumes, and rules. All namespaces and volumes are listed, even those without any rules or share farms.
Policy for Balancing Capacity Showing All Policy Rules Constrain Files: No Constrain Directories: No Balance Mode: Capacity Maintain Freespace: 2G Auto Migrate: 2G State: Enabled Status: Volume Scan Status: Complete File Migration Status: Complete New File Placement Status: Enabled Cumulative Statistics: Total Files Migrated: 0 Total Directories Promoted: 0 Total Failed Migrations: 0 Total Failed Directory Promotes: 0 Total Retried Migrations: 0 Total Canceled Migrations: 0 Tot
Policy for Balancing Capacity Showing All Policy Rules Configuration: From fileset: bulky (files only) Target share: bills Report: docsPlc, Verbose Migrate limit: 50G Volume Scan: Enabled Inline Notifications: Enabled Promote Directories: Disabled State: Enabled Status: Volume Scan Status: Complete File Migration Status: Complete New File Placement Status: Enabled Cumulative Statistics: Total Files Migrated: 68 Total Directories Promoted: 0 Total Failed Migrations: 0 Total Faile
Policy for Balancing Capacity Showing All Policy Rules Last Scan Statistics: Scan Started: Wed Apr 4 03:41:53 2007 Scan Completed: Wed Apr 4 03:55:20 2007 Elapsed Time: 00:13:27 Scan Report: docsPlc_20070404034142.
Policy for Balancing Capacity Showing All Policy Rules Auto Migrate: 100M State: Enabled Status: Volume Scan Status: Complete File Migration Status: Complete New File Placement Status: Enabled Cumulative Statistics: Total Files Migrated: 0 Total Directories Promoted: 0 Total Failed Migrations: 0 Total Failed Directory Promotes: 0 Total Retried Migrations: 0 Total Canceled Migrations: 0 Total Hard Links Skipped: 0 Total Files Placed Inline: 0 Total File Renames Processed Inline: 0
Policy for Balancing Capacity Showing All Policy Rules Schedule: hourly Migrate limit: 0 Volume Scan: Enabled Inline Notifications: Disabled Promote Directories: Disabled State: Enabled Status: Volume Scan Status: Complete File Migration Status: Complete New File Placement Status: Enabled Cumulative Statistics: Total Files Migrated: 0 Total Directories Promoted: 0 Total Failed Migrations: 0 Total Failed Directory Promotes: 0 Total Retried Migrations: 0 Total Canceled Migrations:
Policy for Balancing Capacity Showing All Policy Rules Elapsed Time: 00:00:00 Scan Report: None Number of Files Scanned: 93 Number of Directories Scanned: 25 Number of Files in Fileset: 0 Number of Files Migrated: 0 Size of Files Migrated: 0 (0 on source) Number of Directories Promoted: 0 Number of Failed Migrations: 0 Number of Failed Directory Promotes: 0 Volume: /lab_equipment Namespace: insur Volume: /claims Filename Fileset: images Configuration: Name Is: Path Is: /images/
Policy for Balancing Capacity Showing All Policy Rules For example, the following command lists the rule and share farm for the “wwmed” namespace: bstnA6k# show policy wwmed Namespace: wwmed Rule Status Priority Volume --------- ------------------------- ------------------------- ------------------------- 1 Paused /acct docs2das8 Volume Paused /acct fm1 2 Rule Vol. Scan Migration Complete Volume Complete bstnA6k# ...
Policy for Balancing Capacity Showing All Policy Rules New File Placement Rule: fm1 Configuration: Constrain Files: No Constrain Directories: No Balance Mode: Capacity ... bstnA6k# ... Focusing on One Volume Add the volume name after the namespace name to focus on the volume: show policy namespace volume where namespace (optional, 1-30 characters) is the namespace, and volume (optional, 1-1024 characters) identifies the volume.
Showing Details for the Volume As with namespaces, you can add the details keyword for details about the volume: show policy namespace volume details This lists details about all the rules and share farms in the volume.
This expands the output to show the full details for the share farm or rule. These details include configuration parameters and usage statistics.
Policy for Balancing Capacity Showing All Policy Rules Total Files Placed Inline: 39 Total File Renames Processed Inline: 0 Total Directories Placed Inline: 0 Total Directory Renames Processed Inline: 0 Number of Scans Performed: 1 Queue Statistics: First-time Migrates: 0 Requeued Migrates: 0 Queued Directory Promotes: 0 Last Scan Statistics: Scan Started: Wed Apr 4 03:41:53 2007 Scan Completed: Wed Apr 4 03:55:20 2007 Elapsed Time: 00:13:27 Scan Report: docsPlc_20070404034142.
Adding a Share Farm You configure your usage-balancing policies in a share farm. A share farm is a group of shares in a volume. You can apply file-distribution rules to a share farm, with the aim of balancing the usage of its back-end shares. Instructions for setting these rules appear later in this chapter. [Figure: share farm "fm1" in volume "/acct" of namespace "wwmed," containing the shares /budget and /bills] A volume can contain one or more share farms.
Policy for Balancing Capacity Adding a Share Farm bstnA6k(gbl)# namespace wwmed bstnA6k(gbl-ns[wwmed])# volume /acct bstnA6k(gbl-ns-vol[wwmed~/acct])# share-farm fm1 bstnA6k(gbl-ns-vol-sfarm[wwmed~/acct~fm1])# ... Adding a Share to the Farm The next step in creating a share farm is to add a share to the farm. The share farm can hold multiple shares. Use the share command to add one: share name where name (1-64 characters) identifies the share.
The default weight is 1. From gbl-ns-vol-sfarm mode, use the weight clause with the share command to set the share's weight: share name weight weight where name (1-64 characters) identifies the share, and weight (0-100) is the weight of the share, relative to the weights you set for other shares in the same farm. A 0 (zero) makes the share ineligible for new files.
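The effect of relative weights can be sketched in Python. This is a conceptual illustration only, not ARX code; the share names and the `weighted_rotation` helper are invented for the example, and a weight of 0 simply drops the share from the rotation:

```python
# Conceptual sketch of weighted new-file placement (not ARX internals).
# Shares with weight 0 are ineligible; higher weights receive
# proportionally more of the new files.

def weighted_rotation(shares):
    """Expand (name, weight) pairs into a repeating placement cycle."""
    cycle = []
    for name, weight in shares:
        cycle.extend([name] * weight)   # weight 0 drops the share entirely
    return cycle

def place(shares, n):
    """Return the share chosen for each of n new files."""
    cycle = weighted_rotation(shares)
    return [cycle[i % len(cycle)] for i in range(n)]
```

With weights of 2, 1, and 0, the first share receives twice as many new files as the second, and the zero-weight share receives none.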
Policy for Balancing Capacity Adding a Share Farm Auto Migrating Existing Files You can configure an auto-migrate policy to migrate files off of a share that is low on free space. The files migrate to shares that are not low on free space, if there are any such shares in the same share farm.
Balancing New Files Based on Free Space New files, created by the volume's clients, are distributed round-robin amongst the shares in the share farm. For example, consider a share farm with shares s1 and s2: the first new file goes to s1, the second goes to s2, the third goes to s1, and so on. This is done without regard to free space on each share. You can configure the share farm to assign new files based on the current free space at each share.
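The contrast between the default round-robin placement and free-space-based placement can be sketched as follows (an illustration under assumed data structures, not the ARX implementation; shares are modeled as simple (name, free-bytes) pairs):

```python
# Sketch contrasting the two placement modes described above
# (illustrative only; the tuple model is a stand-in, not an ARX object).

def round_robin_pick(shares, file_index):
    """Default mode: rotate through shares, ignoring free space."""
    return shares[file_index % len(shares)][0]

def capacity_pick(shares):
    """Free-space mode: send the new file to the share with the most room."""
    return max(shares, key=lambda s: s[1])[0]
```

Under round-robin, a nearly full share still receives every other new file; under the free-space mode, new files consistently land on the share with the most room.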
Policy for Balancing Capacity Adding a Share Farm Based on Latency (Bandwidth) The NSM continuously updates its measure of the average latency (round-trip packet time) between its ports and each share. A low latency for a share indicates high currently-available bandwidth at the share. You can use the balance command to distribute new files based on latency measures instead of free-space measures.
Policy for Balancing Capacity Adding a Share Farm prtlndA1k(gbl-ns[nemed])# volume /acctShdw prtlndA1k(gbl-ns-vol[nemed~/acctShdw])# share-farm farm1 prtlndA1k(gbl-ns-vol-sfarm[nemed~/acctShdw~farm1])# share back1 weight 20 prtlndA1k(gbl-ns-vol-sfarm[nemed~/acctShdw~farm1])# share back2 weight 10 prtlndA1k(gbl-ns-vol-sfarm[nemed~/acctShdw~farm1])# balance round-robin prtlndA1k(gbl-ns-vol-sfarm[nemed~/acctShdw~farm1])# ...
Policy for Balancing Capacity Adding a Share Farm bstnA6k(gbl-ns-vol-sfarm[wwmed~/acct~fm1])# ... New-File Placement When All Shares Reach the Free Space Threshold If all shares fill up to their maintain-free-space measures, the share farm distributes each new file to the same share as its parent directory. Disabling the Free-Space Threshold You can allow the balance rule to continue placing new files on shares that are close to filling up.
bstnA6k(gbl-ns-vol-sfarm[ns2~/usr~fm4])# constrain-files bstnA6k(gbl-ns-vol-sfarm[ns2~/usr~fm4])# ... Distributing New Files Use the no form of the constrain-files command to balance new files in the current share farm.
Policy for Balancing Capacity Adding a Share Farm bstnA6k(gbl)# namespace ns2 bstnA6k(gbl-ns[ns2])# volume /usr bstnA6k(gbl-ns-vol[ns2~/usr])# share-farm fm4 bstnA6k(gbl-ns-vol-sfarm[ns2~/usr~fm4])# constrain-directories bstnA6k(gbl-ns-vol-sfarm[ns2~/usr~fm4])# ... Constraining Directories Below a Certain Depth You can apply the directory constraint to any level in the volume’s directory tree.
Policy for Balancing Capacity Adding a Share Farm Not Constraining Directories Use no constrain-directories to remove directory placement restrictions and have new directories distributed as directed by one of the balance commands. no constrain-directories For example: bstnA6k(gbl)# namespace ns2 bstnA6k(gbl-ns[ns2])# volume /var bstnA6k(gbl-ns-vol[ns2~/var])# share-farm fm2 bstnA6k(gbl-ns-vol-sfarm[ns2~/var~fm2])# no constrain-directories bstnA6k(gbl-ns-vol-sfarm[ns2~/var~fm2])# ...
Policy for Balancing Capacity Adding a Share Farm Stopping All Share-Farm Rules You can stop all auto migrations and/or new-file balancing on a share farm by disabling it. This reverts all shares to standard behavior; no auto migrations as free space gets low on a share, and any new file or directory is created on the same share as its parent. To do this, use the no enable command from gbl-ns-vol-sfarm mode.
Creating a Schedule Several policy rules use a schedule, which determines when (and how frequently) a rule runs. Each rule can have a unique schedule. Conversely, several rules can share the same schedule.
Policy for Balancing Capacity Creating a Schedule bstnA6k(gbl)# schedule hourly bstnA6k(gbl-schedule[hourly])# every 1 hours bstnA6k(gbl-schedule[hourly])# ... Setting the Duration (optional) The next step in creating a schedule is to set a duration. The duration is the amount of time that a rule can run. The duration is applied every time the schedule fires: if you set a 5-minute duration for the schedule, each rule that uses the schedule has 5 minutes to run every time it runs.
For example, the following command sequence removes the duration from the "hourly" schedule: bstnA6k(gbl)# schedule hourly bstnA6k(gbl-schedule[hourly])# no duration bstnA6k(gbl-schedule[hourly])# ... Setting the Start Time (optional) A schedule's start time determines the start of each interval: if a daily schedule has a start time of 2:42 PM, the schedule will fire at 2:42 PM every day.
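The interaction of start time, interval, and duration can be sketched with a small helper (a hypothetical illustration, not the ARX policy engine):

```python
# Sketch: each run fires at start + i * interval, and a configured
# duration closes the run's window that many minutes/hours later.
from datetime import datetime, timedelta

def run_windows(start, interval, duration, count):
    """Return (fire_time, window_end) pairs for the first `count` runs."""
    windows = []
    for i in range(count):
        fire = start + i * interval
        windows.append((fire, fire + duration))
    return windows
```

For a daily schedule starting at 2:42 PM with a 5-minute duration, each rule gets a window from 2:42 to 2:47 PM every day.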
Policy for Balancing Capacity Creating a Schedule bstnA6k(gbl-schedule[daily])# no start bstnA6k(gbl-schedule[daily])# ... Showing All Schedules To list all schedules on the switch, use the show policy schedule command: show policy schedule This shows each schedule’s configuration parameters as well as the time of the next scheduled run.
Policy for Balancing Capacity Creating a Schedule Showing One Schedule To focus on a single schedule, add the desired schedule name to the command: show policy schedule name where name (1-64 characters) identifies the schedule to show. For example, this shows the “hourly” schedule: bstnA6k(gbl)# show policy schedule hourly Schedule: hourly Start Time: Sun Oct 24 01:00:00 2004 Previous Run: Wed Apr 4 05:00:00 2007 Runs Next: Wed Apr 4 06:00:00 2007 Interval: 1 hours bstnA6k(gbl)# ...
Policy for Balancing Capacity Pausing All Rules in a Volume Start Time: Sun Sep 4 03:00:00 2005 Previous Run: Wed Apr 4 03:00:00 2007 Runs Next: Thu Apr 5 03:00:00 2007 Interval: 1 days Duration: 02:00:00 End Time: Thu Apr Schedule: backupWindow Start Time: Sun Nov 12 13:00:00 2006 Previous Run: Tue Apr 3 13:00:00 2007 Runs Next: Wed Apr 4 13:00:00 2007 Interval: 1 days Duration: 04:00:00 End Time: Wed Apr 5 05:00:00 2007 4 17:00:00 2007 bstnA6k(gbl)# no schedule daily4am bs
Policy for Balancing Capacity Pausing All Rules in a Volume This pauses all of the volume’s rules, so that they stop all volume scans and migrations. Clients may change files or directories so that they match a rule and therefore should be migrated; these migrations are queued until policy processing is resumed later. All file-placement rules continue to direct new files and directories to their configured storage. (New objects are created at the correct share, so no migrations are necessary.
Policy for Balancing Capacity Pausing All Rules in a Volume Pausing on a Schedule Some installations want to schedule “off hours” for file migrations; for example, you may want to pause all migrations during regularly scheduled backup windows. You can create a schedule (as described earlier) to define the off hours, then pause a volume according to that schedule. This reduces resource contention between clients and the policy engine.
bstnA6k(gbl-ns-vol[medarcv~/rcrds])# no policy pause bstnA6k(gbl-ns-vol[medarcv~/rcrds])# ... Draining One or More Shares You can move all files from one share (or share farm) to one or more other shares in the same volume. A placement rule accomplishes this, and prevents any new files from being created on the source share(s). This is a method to prepare one or more shares for removal without affecting any clients.
Policy for Balancing Capacity Draining One or More Shares Identifying the Source Share(s) The next step in configuring a placement rule is to identify the source share or shares. The placement rule places all files from the source share(s) onto the target storage.
Policy for Balancing Capacity Draining One or More Shares Choosing the Target Storage The next step in configuring a placement rule is to choose the target storage for the share’s files. You can choose one target: another share or share farm in the current volume. From gbl-ns-vol-plc mode, use one of two target rules to set the share’s storage target: target share share-name where share-name (1-64 characters) is a share from the current volume.
Policy for Balancing Capacity Draining One or More Shares To migrate files off of any share that is running low on free space, you can configure auto migration for the share farm. Refer back to “Auto Migrating Existing Files” on page 12-18 to configure auto migration. Applying a Schedule (optional) You can optionally run the placement rule at a later start time, set by a schedule. By default, placement rule runs as soon as you enable it, empties the share(s) of all files, and then keeps the share empty.
bstnA6k(gbl-ns-vol-plc[wwmed~/acct~emptyRH])# no schedule bstnA6k(gbl-ns-vol-plc[wwmed~/acct~emptyRH])# ... Limiting Each Migration (optional) You can use the limit-migrate command to put a ceiling on the amount of data migrated. The policy engine migrates files until it meets this limit; it stops migrating as soon as it discovers that the next file would exceed the limit.
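The stop-at-the-limit behavior can be sketched as a simple loop (illustrative only; file sizes here are plain byte counts, and `migrate_with_limit` is an invented helper, not an ARX command):

```python
# Sketch of the limit-migrate behavior described above: migration stops
# as soon as the next file would push the running total past the limit.

def migrate_with_limit(file_sizes, limit):
    migrated, total = [], 0
    for size in file_sizes:
        if total + size > limit:
            break           # next file would exceed the limit; stop here
        migrated.append(size)
        total += size
    return migrated, total
```

Note that the loop stops entirely at the first oversized candidate rather than skipping it and continuing, matching the "stops migrating as soon as it discovers" wording above.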
Removing the Limit By default, a placement rule migrates until the source share is drained.
Policy for Balancing Capacity Draining One or More Shares bstnA6k(gbl-ns-vol[wwmed~/acct])# place-rule emptyRH bstnA6k(gbl-ns-vol-plc[wwmed~/acct~emptyRH])# report emptyRH_ bstnA6k(gbl-ns-vol-plc[wwmed~/acct~emptyRH])# ... Generating a Verbose Report The placement report is terse by default. You should make the file verbose to give the best-possible chance of diagnosing any problems.
Disabling Reports From gbl-ns-vol-plc mode, use no report to prevent the rule from generating a report: no report This has no effect after the first (and only) run of the placement rule.
Policy for Balancing Capacity Draining One or More Shares bstnA6k(gbl)# namespace wwmed bstnA6k(gbl-ns[wwmed])# volume /acct bstnA6k(gbl-ns-vol[wwmed~/acct])# share it5 bstnA6k(gbl-ns-vol-shr[wwmed~/acct~it5])# migrate retain-files bstnA6k(gbl-ns-vol-shr[wwmed~/acct~it5])# ...
Policy for Balancing Capacity Draining One or More Shares Tentatively Enabling the Rule A tentatively-enabled rule is configured to appear in the system logs (syslog) as “tentative,” showing the potential effects of the rule if it was enabled. (The log component, POLICY_ACTION, creates the syslog messages; syslog access and log components are described in the CLI Maintenance Guide.
Policy for Balancing Capacity Draining One or More Shares Verifying That All Files Are Removed You can use a metadata-only report to verify that the share is empty. Use the nsck ... metadata-only command to generate a report about the share (as described in “Focusing on One Share” on page 5-10 of the CLI Maintenance Guide), and then use show reports to view the report.
Policy for Balancing Capacity Draining One or More Shares **** Total Files: 0 **** Total Directories: 2 **** Total Links: 0 **** Total Locking Errors: 0 **** Total items: **** Elapsed time: 2 00:00:00 **** Metadata-Only Report: DONE at Wed Sep 21 12:10:59 2005 **** Removing the Placement Rule You can remove a placement rule to both disable it and delete its configuration.
Removing All Policy Objects from a Namespace You can use a single command to remove all rules, share farms, and other policy objects from a namespace.
Policy for Balancing Capacity Migrations in a Multi-Protocol Namespace Removing All Policy Objects from a Volume The optional volume argument focuses the remove namespace command on one volume: remove namespace name volume volume policy-only [timeout seconds] [sync] where: name (1-30 characters) is the name of the namespace, volume (1-1024 characters) is the path name of the volume, policy-only is the option to remove only the policy objects, and seconds (optional, 300-10,000) sets a time limit on each of
Policy for Balancing Capacity Migrations in a Multi-Protocol Namespace File-Attribute Migrations The policies in this chapter migrate file attributes along with the files themselves. File attributes are permission settings, the name or ID of the user who owns the file, the group or groups who have access to the file, last-access times, named streams, and other external data associated with the file.
Policy for Balancing Capacity Migrations in a Multi-Protocol Namespace From a NetApp Filer, UNIX Qtree The following table shows how file attributes are migrated from a NetApp filer with a UNIX-based Qtree: Vendor for Destination Filer NetApp, UNIX Qtree UNIX Permission Bits UID, GID, time stamps, ...
Policy for Balancing Capacity Migrations in a Multi-Protocol Namespace Vendor for Destination Filer UNIX Permission Bits UID, GID, time stamps, ...
Policy for Balancing Capacity Migrations in a Multi-Protocol Namespace From an EMC Filer The following table shows how file attributes are migrated from an EMC filer: Vendor for Destination Filer NetApp, UNIX Qtree UNIX Permission Bits UID, GID, time stamps, ...
Policy for Balancing Capacity Migrations in a Multi-Protocol Namespace Vendor for Destination Filer UNIX Permission Bits UID, GID, time stamps, ... NTFS Security Descriptor (SD) NetApp, NTFS Qtree Migrates from SMB (no ACLs) to NetApp/NTFS are not supported.
Some CIFS Applications Block Out Migrations If any client holds a CIFS file open during a file migration, the migration may fail for that file. Applications have the option to universally block all CIFS-read access to a file. This is a commonly-used CIFS feature, and it blocks all users, including members of the Backup Operators group.
Chapter 13 Grouping Files into Filesets A fileset is a group of files and/or directories to which you can apply replication and migration policies. You can configure filesets based on filename, directory path, size, and/or age. You can create complex filesets by joining multiple filesets in a union or taking the intersection of two or more filesets. Direct volumes, which contain no metadata, do not support any filesets. This chapter is relevant to managed volumes only.
Grouping Files into Filesets Grouping Files by Filename Create object 'xmlFiles'? [yes/no] yes bstnA6k(gbl-ns-vol-fs-name[xmlFiles])# ... Setting a Directory Path (optional) You can set a directory path to narrow the scope of the fileset. Only matching files/subdirectories under this path are included in the fileset; the default is the root directory in the managed volume.
Grouping Files into Filesets Grouping Files by Filename For example, the following command set matches files in /www/xml, including all subdirectories: bstnA6k(gbl)# policy-filename-fileset website bstnA6k(gbl-ns-vol-fs-name[website])# path /www/xml bstnA6k(gbl-ns-vol-fs-name[website])# recurse bstnA6k(gbl-ns-vol-fs-name[website])# ...
• ? is any single character, or no character. The * and ? match any character, including the "/" character. (The "/" is the Unix delimiter between directories.) Therefore, path match /usr/*/bin matches both "/usr/local/bin" and "/usr/src/mydir/tmp/bin." This may be unexpected for Unix users. • [...] matches any one of the enclosed characters. For example, [xyz] matches x, y, or z. • [a-z] matches any character in the sorted range, a through z.
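A rough model of these wild-card semantics, where `*` and `?` cross the "/" delimiter, is a translation to a regular expression (a sketch of the semantics described above, not the ARX matcher; it omits the bracketed character classes):

```python
import re

# Sketch: '*' matches any run of characters and '?' matches one
# character or none, and both cross '/' (unlike Unix shell globbing).

def wildcard_to_regex(pattern):
    parts = []
    for ch in pattern:
        if ch == '*':
            parts.append('.*')
        elif ch == '?':
            parts.append('.?')     # one character, or no character
        else:
            parts.append(re.escape(ch))
    return re.compile('^' + ''.join(parts) + '$')
```

This reproduces the surprise noted above: /usr/*/bin matches "/usr/src/mydir/tmp/bin" because the `*` is free to consume "/" characters.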
Grouping Files into Filesets Grouping Files by Filename For example, the following command set uses a regular expression to match all hidden Unix directories (directories that start with “/.” and have something other than “.” as their second character): bstnA6k(gbl)# policy-filename-fileset hiddenFiles bstnA6k(gbl-ns-vol-fs-name[hiddenFiles])# path regexp “/\.[^\.]” bstnA6k(gbl-ns-vol-fs-name[hiddenFiles])# ...
Shorthand for Character Groups \d matches any numeric digit, 0-9. \D matches any character except a numeric digit. \t matches a tab character. \n, \f, and \r match various flavors of line break: they are the newline, form-feed, and carriage-return characters, respectively.
Regular-Expression Samples ^/var matches a path with "/var" at its root (for example, "/var/tmp" or "/variable/data," but not "/bin/var"). ^/(var|tmp)/ matches two root directories, "/var/" or "/tmp/". ^/home/[^/]+/$ matches any subdirectory of "/home" (such as "/home/juser/") but does not match any directories below that level (such as "/home/juser/misc/").
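These sample expressions behave the same way under Python's `re` module, which is close enough to sanity-check them (a verification aid only, not the ARX policy engine):

```python
import re

# Checking the sample path expressions from above with Python regexes.

def root_is_var(path):
    """True for paths whose root component begins with 'var'."""
    return bool(re.search(r'^/var', path))

def is_var_or_tmp(path):
    """True for paths under the /var/ or /tmp/ root directories."""
    return bool(re.search(r'^/(var|tmp)/', path))

def is_home_subdir(path):
    """True for an immediate subdirectory of /home, ending in '/'."""
    return bool(re.search(r'^/home/[^/]+/$', path))
```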
bstnA6k(gbl)# policy-filename-fileset forAllUsers
bstnA6k(gbl-ns-vol-fs-name[forAllUsers])# path regexp not “^/\.”
bstnA6k(gbl-ns-vol-fs-name[forAllUsers])# ...
Matching Filenames (optional)
You can use the same methods (above) for specifying the fileset’s files. These apply to any files in the chosen path(s). By default, all files match.
Excluding Files
As with paths, you can use the not keyword to select every file that does not match the string:
name not filename [ignore-case]
matches any file except the one specified. This only excludes an exact match for filename.
name match not “wild-card-string” [ignore-case]
excludes any file that fits the pattern in the wild-card string.
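For example, a sketch that excludes all files with a .tmp extension, regardless of case (the fileset name and pattern are illustrative):

```
bstnA6k(gbl)# policy-filename-fileset noTmpFiles
bstnA6k(gbl-ns-vol-fs-name[noTmpFiles])# name match not "*.tmp" ignore-case
bstnA6k(gbl-ns-vol-fs-name[noTmpFiles])# ...
```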
Grouping Files by Size
You can create filesets based on file size. Each filesize fileset contains files “larger-than” or “smaller-than” a size of your choosing, or files in a range between two sizes. From gbl mode, use the policy-filesize-fileset command to create a filesize fileset:
policy-filesize-fileset name
where name (1-64 characters) is the name that you assign to the fileset.
Selecting Files Based on their Sizes
The next step in configuring a filesize fileset is to determine a size range for its files. You can select files larger-than (or equal-to) a certain size, smaller-than a certain size, or between two sizes.
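A minimal sketch of a size selection, assuming the select-files larger-than command accepts a size argument with the same k|M|G|T units used by other commands in this guide (the 1G value is illustrative; “veryLarge” is a filesize fileset shown elsewhere in this chapter):

```
bstnA6k(gbl)# policy-filesize-fileset veryLarge
bstnA6k(gbl-ns-vol-fs-filesize[veryLarge])# select-files larger-than 1G
bstnA6k(gbl-ns-vol-fs-filesize[veryLarge])# ...
```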
bstnA6k(gbl)# policy-filesize-fileset veryLarge
bstnA6k(gbl-ns-vol-fs-filesize[veryLarge])# no select-files smaller-than
bstnA6k(gbl-ns-vol-fs-filesize[veryLarge])# ...
Removing the Fileset
Removing a fileset affects file metadata only; it does not delete any files. From gbl mode, use no policy-filesize-fileset to remove a filesize fileset:
no policy-filesize-fileset name
where name (1-64 characters) identifies the fileset to be removed.
For example, the following command sequence creates a new simple-age fileset:
bstnA6k(gbl)# policy-simple-age-fileset dayOld
This will create a new policy object.
Create object 'dayOld'? [yes/no] yes
bstnA6k(gbl-ns-vol-fs-simple-age[dayOld])# ...
Selecting Files Based on their Ages
The next step in configuring a simple-age fileset is to determine the age for its files.
Removing a File Selection
Use the no select-files command to remove a file selection:
no select-files {older-than | newer-than}
where older-than | newer-than chooses the selection to remove.
For example, this removes the “older-than” selection from a simple-age fileset:
bstnA6k(gbl)# policy-simple-age-fileset 2mo
This will create a new policy object.
Identifying a Source Fileset (optional)
By default, the fileset selects its files from all of the files in the current volume. You can narrow this scope by choosing a source fileset (for example, a filename fileset with a particular directory path). If this is set, the select-files command chooses from the pool of files in the source fileset.
Setting the Age-Evaluation Interval (optional)
By default, the simple-age fileset selects all of its files whenever it is used by a rule. For example, suppose a file-placement rule uses a simple-age fileset that selects files newer than 3 hours. (A later chapter explains how to use a file-placement rule with filesets.) Every time the file-placement rule runs, the simple-age fileset selects the files that are newer than three hours at that moment.
Reverting to the Default Evaluation Interval
By default, a simple-age fileset selects its files whenever it is used by a rule.
bstnA6k(gbl-ns-vol-fs-simple-age[dayOld])# ...
Reverting to the Default Start Time
By default, a simple-age fileset selects its files whenever it is used by a rule. If the fileset uses a schedule set by the every command, the default start time is the time that an administrator entered the every command.
Joining Filesets
You can join two or more filesets in a union fileset. A union fileset contains all the files in all of its source filesets. A file that is common to two or more of the source filesets is only included once in the resulting union. From gbl mode, use the policy-union-fileset command to create a union fileset:
policy-union-fileset name
where name (1-64 characters) is a required name that you assign to the fileset.
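For example, a sketch that joins the “xmlFiles” and “veryLarge” filesets from earlier sections into the “bulky” union (this particular pairing is illustrative):

```
bstnA6k(gbl)# policy-union-fileset bulky
bstnA6k(gbl-ns-vol-fs-union[bulky])# from fileset xmlFiles
bstnA6k(gbl-ns-vol-fs-union[bulky])# from fileset veryLarge
bstnA6k(gbl-ns-vol-fs-union[bulky])# exit
```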
bstnA6k(gbl-ns-vol-fs-union[bulky])# from fileset xmlFiles
bstnA6k(gbl-ns-vol-fs-union[bulky])# ...
Removing a Source Fileset
Use the no form of the from fileset command to remove a fileset from the list of sources:
no from fileset fileset-name
where fileset-name (1-64 characters) identifies the fileset to remove. You cannot remove the last source fileset if the union fileset is in use (that is, referenced by another fileset or used in a rule).
Removing the Fileset
Removing a fileset affects the switch configuration only; it does not delete any files. Use no policy-union-fileset to remove a union fileset from the current volume:
no policy-union-fileset name
where name (1-64 characters) identifies the fileset to be removed. You cannot remove a fileset that is referenced by another fileset or used in a rule. The next chapter has configuration instructions for referencing a fileset from a rule.
Identifying a Source Fileset
The final step in configuring an intersection fileset is to identify two or more source filesets. You can include as many source filesets as desired, but you must have at least two for the intersection to be meaningful. From gbl-ns-vol-fs-isect mode, use the from fileset command to include a source fileset:
from fileset fileset-name
where fileset-name identifies the source fileset.
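For example, a sketch that intersects two source filesets in the “paidBills” fileset (the choice of source filesets is illustrative):

```
bstnA6k(gbl)# policy-intersection-fileset paidBills
bstnA6k(gbl-ns-vol-fs-isect[paidBills])# from fileset doc_files
bstnA6k(gbl-ns-vol-fs-isect[paidBills])# from fileset dayOld
bstnA6k(gbl-ns-vol-fs-isect[paidBills])# ...
```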
Removing All Source Filesets
To remove all filesets with a single command, use no from all:
no from all
As above, you cannot remove all source filesets if the intersection fileset is in use. For example, the following command set removes all source filesets from the “paidBills” fileset:
bstnA6k(gbl)# policy-intersection-fileset paidBills
bstnA6k(gbl-ns-vol-fs-isect[paidBills])# no from all
bstnA6k(gbl-ns-vol-fs-isect[paidBills])# ...
bstnA6k(gbl)# show policy filesets
Global Policy:
Filename Fileset: website
  Configuration:
    Name Does Not Match Regular Expression: \.(wmv|avi)$
    Path Is: /www/xml/
    Recurse: Yes
Filename Fileset: hiddenFiles
  Configuration:
    Name Is:
    Path Matches Regular Expression: /\.[^\.]
    Recurse: No
Filename Fileset: fm_pdf
  Configuration:
    Name Matches Regular Expression: \.
Configuration: Select Files Larger Than Or Equal To: Fileset Union: 5.
Filename Fileset: images
  Configuration:
    Name Is:
    Path Is: /images/
    Recurse: Yes
bstnA6k(gbl)# ...
Showing One Global Fileset
To show a single global fileset, add the global-fileset argument to the end of the show policy filesets command:
show policy filesets global-fileset fileset-name
where fileset-name (optional, 1-1024 characters) chooses the fileset.
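For example, this sketch shows the “website” fileset from earlier in the chapter:

```
bstnA6k(gbl)# show policy filesets global-fileset website
```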
Showing Filesets in a Managed Volume
All filesets can be alternatively configured within a managed volume. To show the configuration for such a fileset, specify the namespace and volume name before the fileset name:
show policy filesets namespace volume fileset-name
where namespace (1-30 characters) is the fileset’s namespace, and volume (optional, 1-1024 characters) identifies the fileset’s volume.
Create object 'office_files'? [yes/no] yes
bstnA6k(gbl-ns-vol-fs-union[office_files])# from fileset xls_files
bstnA6k(gbl-ns-vol-fs-union[office_files])# from fileset doc_files
bstnA6k(gbl-ns-vol-fs-union[office_files])# exit
In the “wwmed~/acct” volume, create an age-based fileset that only takes the office files that were accessed this month:
bstnA6k(gbl)# namespace wwmed
bstnA6k(gbl-ns[wwmed])# volume /acct
bstnA6k(gbl-ns-vol[wwmed~/a
Grouping Files into Filesets Sample - Configuring Age-Based Filesets 13-30 CLI Storage-Management Guide
Chapter 14 Migrating Filesets A fileset is a group of files and/or directories to which you can apply replication and migration policies. This chapter explains how to configure policies for migrating filesets to desired back-end storage. Direct volumes, which contain no metadata, do not support any filesets. This chapter is relevant to managed volumes only. Before You Begin You must create one or more filesets for the fileset policy. See Chapter 13, Grouping Files into Filesets.
Directing File Placement
You can use fileset policies to steer files and/or directories onto specific storage targets. You choose the files/directories by configuring a fileset for them, and you choose the storage target by creating a placement rule in the volume.
Migrating Filesets Directing File Placement Identifying the Source Fileset The next step in configuring a placement rule is to identify the source fileset. This chooses a set of files and/or directories based on their names, sizes, ages, or other criteria; this set of files and/or directories changes as clients create, edit, and delete them in the volume.
Migrating Filesets Directing File Placement Note that the master copies of the directories remain on their original filer, das3; this means that new files in those directories go to das3 by default, if they are outside the fileset. All of their new subdirectories go to das3, too. In this illustration, only one new file matches and is steered onto das8. All new directories and non-matching files are created on das3, which has all of the master directories.
Migrating Filesets Directing File Placement bstnA6k(gbl-ns-vol[wwmed~/acct])# place-rule docs2das8 bstnA6k(gbl-ns-vol-plc[wwmed~/acct~docs2das8])# from fileset fm_pdf match files bstnA6k(gbl-ns-vol-plc[wwmed~/acct~docs2das8])# ... Matching Directories Only Consider a directory tree on a filer that is filled to a comfortable level, but clients are likely to add subdirectories that may overfill the share.
Migrating Filesets Directing File Placement This configuration mainly focuses on new directories and their sub-trees. The placement rule steers new directories to das8. Since the new directories are created on das8, the das8 instance of the directory is master. By default, all of its child files and directories follow it onto das8. Directory /a/b/c therefore grows on das8, as would any other new directories in the volume.
Migrating Filesets Directing File Placement bstnA6k(gbl)# policy-filename-fileset all bstnA6k(gbl-ns-vol-fs-name[all])# recurse bstnA6k(gbl-ns-vol-fs-name[all])# exit bstnA6k(gbl)# namespace wwmed bstnA6k(gbl-ns[wwmed])# volume /acct bstnA6k(gbl-ns-vol[wwmed~/acct])# place-rule noNewDirs bstnA6k(gbl-ns-vol-plc[wwmed~/acct~noNewDirs])# from fileset all match directories bstnA6k(gbl-ns-vol-plc[wwmed~/acct~noNewDirs])# ...
Migrating Filesets Directing File Placement For example, suppose das8 is supposed to be master of /a/b. You can promote it and all of its descendant directories without migrating any files. After the placement rule runs, the file already under /a/b stays on das3. The master for /a/b is now on das8. The /a directory is striped to das8 so that it can hold the /a/b directory; its master remains on das3.
As clients add new files and subdirectories to /a/b, they go onto das8 instead of das3. New files in /a, which is outside the fileset, continue to gravitate to das3.
[Figure: the /acct volume in the “wwmed” namespace, showing the /a tree mastered on das3 while new files and directories under /a/b are created on das8.]
In the from fileset command, you can add the promote-directories flag to promote the chosen directories.
Migrating Filesets Directing File Placement bstnA6k(gbl-ns[wwmed])# volume /acct bstnA6k(gbl-ns-vol[wwmed~/acct])# place-rule reset bstnA6k(gbl-ns-vol-plc[wwmed~/acct~reset])# from fileset a_b_tree match directories promote-directories bstnA6k(gbl-ns-vol-plc[wwmed~/acct~reset])# ... Promoting Directories on a Target Share Farm If the file-placement target is a share farm, the share that gets the directory also gets the directory promotion.
Migrating Filesets Directing File Placement Matching Directory Trees (Directories and Files) By combining files, directories, and directory promotion, you can move an entire directory tree from das3 to das8 and make it grow on das8. For example, you can migrate all existing files in directory /a/b in addition to ensuring that new files and directories get created on das8.
Each new file and directory, as in the previous example, follows its parent’s master directory. Directory /a/b grows on das8 while new files in /a continue to gravitate to das3.
[Figure: the /acct volume in the “wwmed” namespace, showing new files in /a landing on das3 while the /a/b tree grows on das8.]
In the from fileset command, you can use match all to match both files and directories.
Migrating Filesets Directing File Placement bstnA6k(gbl-ns-vol-plc[wwmed~/acct~mvtree])# from fileset a_b_tree match all promote-directories bstnA6k(gbl-ns-vol-plc[wwmed~/acct~mvtree])# ... Limiting the Selection to Particular Source Share(s) (optional) You can select the above files and/or directories from a particular share or share farm. By default, the selected files and directories come from all shares in the volume.
bstnA6k(gbl)# namespace wwmed
bstnA6k(gbl-ns[wwmed])# volume /acct
bstnA6k(gbl-ns-vol[wwmed~/acct])# place-rule mvtree
bstnA6k(gbl-ns-vol-plc[wwmed~/acct~mvtree])# no source
bstnA6k(gbl-ns-vol-plc[wwmed~/acct~mvtree])# ...
Choosing the Target Storage
You choose the target storage for a fileset with the same commands that are used for emptying a share. Either a share or a share farm is a valid target for the fileset.
Migrating Filesets Directing File Placement bstnA6k(gbl-ns-vol-plc[wwmed~/acct~distributeFiles])# ... Balancing Capacity in a Target Share Farm When a share farm is a file-placement target, the first configured share in the farm is the default share for placed files. Most files are placed on the same share as their parent directory, but a file defaults to the first share if its parent directory is outside the share farm. The first share in the farm can therefore take a heavier file burden over time.
Migrating Filesets Directing File Placement For a placement rule without a schedule, this limit applies to the one-and-only run of the rule. If the original fileset exceeds this limit, the left-over files from that fileset remain on their source share(s) indefinitely. New files that belong in the fileset are created at the target share(s), before they have any size, so they are not blocked by this limit.
Migrating Filesets Directing File Placement Whether or not you use a schedule, the rule places all new files as clients create them. By default, the rule also watches all client changes inline, and migrates any file that changes to match the source fileset. A schedule has no effect on new or newly-changed files. “Creating a Schedule” on page 12-27 has full details on creating a schedule.
Migrating Filesets Directing File Placement Disabling Inline Notifications (optional) Clients make changes to files that may cause them to be selected by a file-placement rule; for example, a client could rename a file or change its size. By default, the file-placement rule monitors all client changes inline and migrates any files that newly-match the source fileset. This occurs on an unscheduled basis.
Migrating Filesets Directing File Placement bstnA6k(gbl-ns-vol-plc[wwmed~/acct~emptyRH])# ... Configuring Progress Reports The next step in configuring a placement rule is optional but strongly recommended: setting up progress reports. Progress reports show all the milestones and results of a file-placement execution. The policy engine generates a report each time the schedule fires and invokes the rule. If the rule has no schedule, the rule generates a single report when it first runs.
Migrating Filesets Directing File Placement Generating Verbose Reports Placement reports are terse by default. To make them verbose, use the optional verbose flag at the end of the report command: report prefix verbose where prefix is explained above.
Migrating Filesets Directing File Placement For example, the following command sequence disables reporting for the rule, “mvTars:” bstnA6k(gbl)# namespace archives bstnA6k(gbl-ns[archives])# volume /home bstnA6k(gbl-ns-vol[archives~/home])# place-rule mvTars bstnA6k(gbl-ns-vol-plc[archives~/home~mvTars])# no report bstnA6k(gbl-ns-vol-plc[archives~/home~mvTars])# ... Enabling the Placement Rule The final step in configuring any rule is to enable it.
Migrating Filesets Directing File Placement bstnA6k(gbl)# namespace wwmed bstnA6k(gbl-ns[wwmed])# volume /acct bstnA6k(gbl-ns-vol[wwmed~/acct])# place-rule docs2das8 bstnA6k(gbl-ns-vol-plc[wwmed~/acct~docs2das8])# enable tentative bstnA6k(gbl-ns-vol-plc[wwmed~/acct~docs2das8])# ... Disabling the Rule Disabling the rule removes it from consideration. Use no enable from gbl-ns-vol-plc mode to disable a placement rule.
Removing the Placement Rule
You can remove a placement rule to both disable it and delete its configuration. Many file-placement rules can manipulate directory mastership so that directory trees grow naturally on desired filers. If all directory masters are placed correctly, the managed volume creates new files and directories under them by default; the file-placement rule is no longer needed.
Migrating Filesets Changing Rule Order You can change the rule order only for placement rules that use filesets as their sources. The priority cannot change for shadow-copy rules (which are always highest priority), placement rules that drain shares (which are second priority), and share farms (which are lowest priority). Fileset-placement rules are grouped together between the second and last priority groups; they are lower-priority than drain rules, but they take precedence over all share-farm rules.
Migrating Filesets Changing Rule Order bstnA6k(gbl-ns[wwmed])# policy order-rule placeTest after docs2das8 bstnA6k(gbl-ns[wwmed])# ... Moving the Rule to the Beginning or End You can use the first or last keyword to move the file-placement rule to the beginning or end of the list: policy order-rule rule1 {first | last} where first | last sets the new position for rule1.
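For example, a sketch that moves the “placeTest” rule to the head of the list:

```
bstnA6k(gbl-ns[wwmed])# policy order-rule placeTest first
bstnA6k(gbl-ns[wwmed])# ...
```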
Chapter 15
Shadowing a Volume
You can create a continuously-updated copy of a managed volume, called a shadow volume. The shadow volume can be on the same switch as its source volume (as shown below), or it can be hosted on another switch in the same RON (shown on the next page).
[Figure: the /etc volume and its shadow, /shadowEtc, in the “archives” namespace.]
Shadow volumes are useful for implementing a Content-Delivery Network (CDN), for backing up client data, and for disaster recovery.
The examples in this chapter configure a source volume on a single ARX®6000 and its shadow volume on a redundant pair. The redundant pair is two ARX®1000 switches.
[Figure: a RON tunnel connecting the /acct volume in the “wwmed” namespace to the /acctShdw volume in the “nemed” namespace.]
The switch with the source volume is called the source switch and any switch with a shadow volume is called a target switch.
Before You Begin
Shadow volumes are commonly deployed on separate switches from the source volume, as pictured above. Before you configure the shadow volume on a target switch, you must first
1. make a RON tunnel from the source switch to the target switch (see Chapter 5, Joining a RON, in the CLI Network-Management Guide), and
2. add a namespace and source volume at the source switch (see Chapter 7, Configuring a Namespace and Chapter 9, Adding a Managed Volume).
3.
Shadowing a Volume Adding a Shadow Volume (Target Switch) Choose a shadow volume with at least as much storage capacity as its source volume(s). The volume must be disabled when you change it into a shadow volume. For example, the following command sequence creates a shadow volume to be used for the “wwmed~/acct” volume later.
Shadowing a Volume Adding a Shadow Volume (Target Switch) Allowing Modifications The shadow-copy rule will be modifying the metadata in the target volume, so the target volume must permit metadata modifications. This does not affect clients, who will have read-only access to the volume. It only applies to the shadow-copy rule itself. Use the modify command to enable file modifications by the rule.
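A minimal sketch, assuming the modify command takes no arguments in gbl-ns-vol mode (the prompts follow the target-switch examples in this section):

```
prtlndA1k(gbl)# namespace nemed
prtlndA1k(gbl-ns[nemed])# volume /acctShdw
prtlndA1k(gbl-ns-vol[nemed~/acctShdw])# modify
prtlndA1k(gbl-ns-vol[nemed~/acctShdw])# ...
```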
Shadowing a Volume Adding a Shadow Volume (Target Switch) prtlndA1k(gbl-ns-vol-shr[nemed~/acctShdw~back2])# exit prtlndA1k(gbl-ns-vol[nemed~/acctShdw])# ... Turning Off Shadowing You can turn off shadowing to convert a shadow volume back to a managed volume. This makes the volume ineligible for shadow copies and opens it up for client writes; all the shadow-copied files become fully accessible.
Shadowing a Volume Specifying a Fileset to Copy (Source Switch) Specifying a Fileset to Copy (Source Switch) The next step in shadowing a volume is to choose a fileset to be “shadowed.” This occurs at the source volume, on the source switch. The fileset can include all files in the volume or a smaller set of files based on file names and/or file-access times. You can create complex filesets by intersecting two or more filesets (for example, choosing the *.
Shadowing a Volume Configuring a Shadow-Copy Rule (Source Switch) Configuring a Shadow-Copy Rule (Source Switch) The final step in volume shadowing is to create a shadow-copy rule. A shadow-copy rule replicates the fileset in the source volume over to the shadow volume; if a file changes later, that file is replicated again. If a file is deleted, then its copy is also deleted.
Shadowing a Volume Configuring a Shadow-Copy Rule (Source Switch) You can only use one source fileset. If you want to use additional filesets as sources, create a union fileset (see “Joining Filesets” on page 13-20). You can re-issue the from command to change from one source fileset to another.
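A hedged sketch of changing the source fileset, assuming the shadow-copy rule’s from command takes a fileset name in the same form as the union fileset’s from fileset command (“worthSaving” is the fileset used by the examples later in this chapter):

```
bstnA6k(gbl-ns-vol-shdwcp[wwmed~/acct~DRrule])# from fileset worthSaving
bstnA6k(gbl-ns-vol-shdwcp[wwmed~/acct~DRrule])# ...
```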
Shadowing a Volume Configuring a Shadow-Copy Rule (Source Switch) Switch Name HA Peer Switch Status UUID Uptime Management Addr -----------------------------------------------------------------------------bstnA6k (None) 0 days, 00:45:01 ONLINE 7eafc74e-6fa9-11d8-9ed7-a9126cfbac40 provA5c (None) ONLINE df3d1b6e-8459-11d9-8899-d2d1d3d64a34 prtlndA1k prtlndA1kB ONLINE 9a6eb9ac-6c6d-11d8-9444-9e00f495ff7e prtlndA1kB prtlndA1k ONLINE 88babb94-8373-11d8-963c-8cd82b83827e 10.1.1.
Shadowing a Volume Configuring a Shadow-Copy Rule (Source Switch) prtlndA1k(gbl-ns[testns])# volume /users prtlndA1k(gbl-ns-vol[testns~/users])# shadow-copy-rule bkup prtlndA1k(gbl-ns-vol-shdwcp[testns~/users~bkup])# target volume /shdw path /users prtlndA1k(gbl-ns-vol-shdwcp[testns~/users~bkup])# exit prtlndA1k(gbl-ns[testns])# volume /admin prtlndA1k(gbl-ns-vol[testns~/admin])# shadow-copy-rule bkAdm prtlndA1k(gbl-ns-vol-shdwcp[testns~/admin~bkAdm])# target volume /shdw path /admin prtlndA1k(gbl-ns-vol-sh
Shadowing a Volume Configuring a Shadow-Copy Rule (Source Switch) Applying a Schedule A shadow-copy rule requires a schedule. Use the gbl schedule command to create one; refer back to “Creating a Schedule” on page 12-27 for details. You cannot use a schedule with a fixed duration (see “Setting the Duration (optional)” on page 12-28). If a duration was too short for the shadow copy to finish, the shadow copy would fail.
Shadowing a Volume Configuring a Shadow-Copy Rule (Source Switch) By default, no reports are generated. From gbl-ns-vol-shdwcp mode, use the report command to generate shadow-copy reports for the current rule: report prefix where prefix (1-64 characters) is the prefix to be used for the rule’s reports. Each report has a unique name in the following format: prefixYearMonthDayHourMinute.rpt (for example, home_backup200403031200.rpt for a report with the “home_backup” prefix).
Shadowing a Volume Configuring a Shadow-Copy Rule (Source Switch) Including Identical Files in Reports If files are identical on both the source and shadow volumes, the rule does not transfer the file. By default, identical files are omitted from the shadow-copy reports. Use the optional list-identical flag to include these files in the report: report prefix [verbose] list-identical where prefix and [verbose] are explained above.
Shadowing a Volume Configuring a Shadow-Copy Rule (Source Switch) For example, the following command sequence disables reporting for the rule, “buHome:” bstnA6k(gbl)# namespace archives bstnA6k(gbl-ns[archives])# volume /home bstnA6k(gbl-ns-vol[archives~/home])# shadow-copy-rule buHome bstnA6k(gbl-ns-vol-shdwcp[archives~/home~buHome])# no report bstnA6k(gbl-ns-vol-shdwcp[archives~/home~buHome])# ...
Shadowing a Volume Configuring a Shadow-Copy Rule (Source Switch) Translating Local SIDs After all local groups are duplicated on all source and destination filers, you must configure the shadow-copy rule to translate them. When SID translation is enabled, the rule finds a file’s group name (such as “doctors”) at the source volume, then looks up the SID for that group name at the destination filer.
bstnA6k(gbl-ns-vol-shdwcp[insur~/claims~insurDR])# ...
Some file servers can be configured to return an error for an invalid SID (STATUS_INVALID_SID, STATUS_INVALID_OWNER, and/or STATUS_INVALID_PRIMARY_GROUP) but accept the file or directory anyway. You may want to discount these errors from these particular file servers.
Shadowing a Volume Configuring a Shadow-Copy Rule (Source Switch) left open for the duration of the shadow-copy run. Some common Microsoft applications, such as Microsoft Word, hold a file open for writes as long as the client application is working with the file. (Note that other applications, such as Notepad and WordPad, open the file only long enough to read it into memory; these applications rarely pose a problem for the shadow-copy operation.
Shadowing a Volume Configuring a Shadow-Copy Rule (Source Switch) bstnA6k(gbl-ns-vol-shdwcp[insur~/claims~insurDR])# ... This increases the chances of the rule failing to copy some open files; you can use two CLI commands to find and close all open files while the rule is running. The show cifs-service open-files command displays a list of all files that are held open by a client application (see “Listing Open Files in a CIFS Service” on page 9-9 of the CLI Maintenance Guide).
Shadowing a Volume Configuring a Shadow-Copy Rule (Source Switch) For example, the following command sequence disables pruning for the shadow-copy rule, DRrule: bstnA6k(gbl)# namespace wwmed bstnA6k(gbl-ns[wwmed])# volume /acct bstnA6k(gbl-ns-vol[wwmed~/acct])# shadow-copy-rule DRrule bstnA6k(gbl-ns-vol-shdwcp[wwmed~/acct~DRrule])# no prune-target bstnA6k(gbl-ns-vol-shdwcp[wwmed~/acct~DRrule])# ...
Shadowing a Volume Configuring a Shadow-Copy Rule (Source Switch) bstnA6k(gbl-ns-vol-shdwcp[wwmed~/acct~DRrule])# ... Copying All Directories Some applications require the presence of empty directories at specific paths; for those cases, you can use the directory-copy full command. This reverts to the default behavior: directory-copy full This copies the full directory tree from the source volume, even if the source fileset resides in a small number of directories.
Shadowing a Volume Configuring a Shadow-Copy Rule (Source Switch) bstnA6k(gbl-ns-vol-shdwcp[ns3~/cad1~cpRemote])# ... Publishing Individual Files By default, a shadow-copy rule publishes all files that successfully transfer, whether or not some of the transfers fail. To return to this default, use the publish individual command: publish individual This is generally recommended for CIFS volumes, since the shadow-copy rule does not transfer files opened by clients.
Shadowing a Volume Configuring a Shadow-Copy Rule (Source Switch) For example, this command sequence limits transfers by the rule, “DRrule,” to five million BPS: bstnA6k(gbl)# namespace wwmed bstnA6k(gbl-ns[wwmed])# volume /acct bstnA6k(gbl-ns-vol[wwmed~/acct])# shadow-copy-rule DRrule bstnA6k(gbl-ns-vol-shdwcp[wwmed~/acct~DRrule])# bandwidth-limit 5M bstnA6k(gbl-ns-vol-shdwcp[wwmed~/acct~DRrule])# ...
Shadowing a Volume Configuring a Shadow-Copy Rule (Source Switch) From gbl-ns-vol-shdwcp mode, use the delta-threshold command to set a different threshold: delta-threshold minimum-size[k|M|G|T] where minimum-size (1-64) is the minimum size of a file that is eligible for delta transfer, and k|M|G|T (optional) sets the size units: kilobytes, Megabytes, Gigabytes, or Terabytes. The default is bytes if you omit this.
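For example, a sketch that makes files of 4 Megabytes or more eligible for delta transfer (the value is illustrative):

```
bstnA6k(gbl-ns-vol-shdwcp[wwmed~/acct~DRrule])# delta-threshold 4M
bstnA6k(gbl-ns-vol-shdwcp[wwmed~/acct~DRrule])# ...
```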
Shadowing a Volume Configuring a Shadow-Copy Rule (Source Switch) Enabling the Shadow-Copy Rule The final step in configuring the shadow-copy rule is to enable it. By default, the rule is disabled and ignored by policy software.
Shadowing a Volume Configuring a Shadow-Copy Rule (Source Switch) For every shadow-copy rule in every namespace, this shows the current progress of all shadow copies. Each rule has an overview section, a Target Status section listing all of the shadow volume targets, and two or more sections with more detail. The additional sections describe the most time-consuming parts of a shadow copy: 1. Copy Phase, where files are copied from the source volume to a staging area in the shadow volume. 2.
Shadowing a Volume Configuring a Shadow-Copy Rule (Source Switch) Copy Phase Information ---------------------Phase Started : Jan 24 04:13 Phase Completed : Jan 24 04:24 Elapsed Time : 00:11:31 Average Transmission Rate : 3.4 Mb/s (433.
Shadowing a Volume Configuring a Shadow-Copy Rule (Source Switch) Shadow Rule : insurDR Report File : insurDR_200701240417.rpt Fileset : worthSaving ============================================================================ Processing Started : Jan 24 04:17 Processing Completed : Jan 24 04:17 Elapsed Time : 00:00:07 Operating Mode : Inline notification Publishing Mode : Individual ... bstnA6k(gbl)# ...
Shadowing a Volume Configuring a Shadow-Copy Rule (Source Switch) For example: bstnA6k(gbl)# show shadow namespace wwmed volume /acct ... Focusing on One Rule Add the Rule clause to specify one shadow-copy rule: show shadow namespace name volume vol-name rule rule-name where name (1-30 characters) identifies the source namespace, vol-name (1-1024 characters) identifies the source volume, and rule-name (1-64 characters) is the name of the rule.
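For example, a sketch that focuses on the “DRrule” rule in the “wwmed~/acct” volume:

```
bstnA6k(gbl)# show shadow namespace wwmed volume /acct rule DRrule
```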
Shadowing a Volume Configuring a Shadow-Copy Rule (Source Switch) ExMp=Export Mapping, Imp=Import, Inc=Inconsistencies, MdO=Metadata Only, MdU=Metadata Upgrade, MgMd=Migrate Metadata, NIS=NIS Update, Plc=Place Rule, Rbld=Rebuild, Rm=Remove, RmNs=Remove Namespace, RmSh=Remove Share, RsD=Restore Data, SCp=Shadow Copy, Snapshot=Snapshot, SuEn=Enable Subshare Inconsistencies, SuIn=Export Subshare Inconsistencies, Sum=Summary, SuSh=Export Subshares, Sync=Sync Files/Dirs, SySh=Sync Shares adminSessions_2007012403
Shadowing a Volume Configuring a Shadow-Copy Rule (Source Switch) /planner/showRunning.fm 209,920 New(1) /planner/filesetPolicy.fm nemed:/acctShdw/: Full update (221,432 bytes sent) prtlndA1k: /planner/filesetPolicy.fm 221,184 New(1) /planner/securityMgmtSvcs.fm nemed:/acctShdw/: Full update (225,528 bytes sent) prtlndA1k: /planner/securityMgmtSvcs.fm 225,280 New(1) /planner/cliOperatorIX.fm nemed:/acctShdw/: Full update (285,944 bytes sent) prtlndA1k: ...
Target Information
------------------
prtlndA1k: nemed:/acctShdw/

Shadow Copy File Details
========================
Filename              Status    Size
------------------------------------
/layer3.fm.lck
nemed:/acctShdw/: Full update (212 bytes sent)
prtlndA1k: /layer3.fm.lck   New(1)   84
/rework_vpn.fm.
Namespace             : wwmed
Source Volume         : wwmed:/acct
Shadow Rule           : DRrule
Fileset               : worthSaving
Shared Access Allowed : Yes
Bandwidth Limit       : 5.0 Mb/s (625.
Elapsed Time                : 00:00:05
Database Records Scanned    : 4,460
Files/Directories Published : 4,907
New Files/Directories       : 4,460
Renamed Files/Directories   : 0
Updated Files/Directories   : 447
Removed Files/Directories   : 0

Total processed: 4,460
Elapsed time: 00:12:47

**** Shadow Copy Report: DONE at Thu Jan 24 04:23:55 2007 ****

Reformatting the Report

You can use the copy reports command to duplicate the report in a different format.
Truncating the Report

To conserve CPU cycles and/or internal-disk space, you may want to stop a shadow-copy report before it is finished; an oversized, CPU-intensive report can degrade namespace performance. From priv-exec mode, use the truncate-report command to stop all report processing and truncate the report file:

truncate-report name

where name (1-255 characters) specifies the report to truncate.
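For example, to stop the report generated by the insurDR rule shown earlier in this chapter (the pairing of this command with that particular report file is illustrative, and the prompt assumes priv-exec mode):

bstnA6k# truncate-report insurDR_200701240417.rpt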
Copying the Source Volume to Multiple Targets

To copy a source volume to more than one target, create a separate shadow-copy rule for each target. Each shadow-copy rule can operate on its own schedule, or they can all use the same schedule.
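Because each target has its own rule, each copy can also be monitored independently with the rule clause of the show shadow command. As a sketch, with two hypothetical rule names, drSiteA and drSiteB (invented for illustration), on the same source volume:

bstnA6k(gbl)# show shadow namespace wwmed volume /acct rule drSiteA
bstnA6k(gbl)# show shadow namespace wwmed volume /acct rule drSiteB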
Copyrights

Copyright (c) 1990, 1993, 1994, 1995 The Regents of the University of California. All rights reserved.

Copyright 2000 by the Massachusetts Institute of Technology. All Rights Reserved. Export of this software from the United States of America may require a specific license from the United States Government. It is the responsibility of any person or organization contemplating export to obtain such a license before exporting.

Copyright 1993 by OpenVision Technologies, Inc.
Index

A
ACLs
  support for CIFS share-level ACLs, 9-20, 11-15
active-directory forest-trust, 3-18
active-directory-forest, 3-10
Adaptive Resource Switch (ARX), 1-1
allow-shared-access, 15-17
anonymous-gid, 4-18
anonymous-uid, 4-18
ARX, 1-1
  Supported VPUs for each model, 8-19, 9-45
attach, 8-13
auto sync files, 9-13
auto-migrate, 12-18

B
balance, CIFS front-end CIFS service, 11-12
  disabling, 11-26
  enabling, 11-26
  identifying a WINS server for, 10-5
  stopping and removing, 11-21
  listing all front-end CIFS
DNS
  VIP to FQDN mapping required for Kerberos, 11-28
  See also Dynamic DNS.
filer, 9-27
filer-subshares,
removing, 13-22
removing a source fileset, 13-21
removing all source filesets, 13-21
using for a shadow copy, 15-7
forest-root, 3-10
setting the NIS domain, 4-13
setting the Windows Domain, 10-3
showing all, 10-11
showing all front-end services for, 11-42
showing details, 10-13
topology, 10-1
FQDN for a global server, setting, 10-2
freespace adjust, 8-17, 9-40
freespace calculation manual, 8-3, 9-16
freespace ignore, 8-16, 9-39
from (gbl-ns-vol-plc), 14-3
from (gbl-ns-vol-shdwcp), 15-8
from fileset (gb
L
last, 13-15
Last-accessed time, used to select files for migration, 13-15
Last-modified time, used to select files for migration, 13-15
limit-migrate, 12-39, 14-15

M
maintain-free-space, 12-21
Managed volume
  adding, 9-1
  assigning to a direct-volume share, 8-12
managed-volume, 8-12
Master directory, 14-1
  promoting a stripe to a master after file placement, 14-7
Matching files. See Filesets.

N
Network
  Acopia switch’s place, 1-10
NFS
  access list, 4-9
  disabling NFS service, 11-6
  disabling NLM, 11-5
  enabling NFS service, 11-6
  listing front-end NFS services, 11-7
  offering a namespace volume through NFS, 4-23, 11-2
  removing an instance of NFS service, 11-10
  setting the NFS version for a namespace, 7-9
  supported NFS versions for a direct volume, 8-2
nfs tcp timeout, 11-10
nfs-access-list, 4-9
NIS
  adding an NIS domain, 4-1
  identifying one NIS server, 4-2
  listing configured domains, 4-3
  listing net
for name-based filesets, 13-9
reimport-modify, 9-11
remove namespace, 7-27, 9-71
remove namespace ... policy-only, 12-47
remove namespace ... volume ...
show nis netgroup, 4-4
show ntlm-auth-server, 3-7
show ntlm-auth-server status, 3-8
show policy, 12-2
show policy filesets, 13-24
show policy schedule, 12-30
show proxy-user, 3-5
show server-mapping, 11-45
show shadow, 15-25
show sid-translation, 9-35
show virtual service, 11-43
show vpu, 8-25, 9-50
show windows-mgmt-auth, 3-31
sid-translation, 9-34
sid-translation (gbl-ns-vol-shdwcp), 15-16
source share, 12-36
source share-farm, 12-36
sparse-files, 8-4
start for a schedule, 12-29
start, for a simple-age fi
windows-domain (gbl-proxy-user), 3-3
windows-mgmt-auth, 3-28
windows-mgmt-auth (gbl-ns), 7-18
WINS
  recreating WINS name and aliases for Kerberos, 11-34
wins, 10-5
wins-alias, 10-8
wins-name, 10-7