Veritas Storage Foundation 5.1 SP1 Cluster File System Administrator's Guide HP-UX 11i v3 HP Part Number: 5900-1738 Published: April 2011 Edition: 1.
© Copyright 2011 Hewlett-Packard Development Company, L.P. Legal Notices Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation and Technical Data for Commercial Items are licensed to the U.S. Government under vendor’s standard commercial license. The information contained herein is subject to change without notice.
Contents

Technical Support ... 4

Chapter 1 Technical overview of Storage Foundation Cluster File System ... 13
Storage Foundation Cluster File System architecture
About the symmetric architecture
About Storage Foundation Cluster File System primary/secondary failover
Single network link and reliability
Configuring a low priority link
Split-brain and jeopardy handling
Jeopardy state
Jeopardy handling
About I/O fencing
Establishing CVM cluster membership manually ... 64
Changing the CVM master manually ... 64
Importing a shared disk group manually ... 67
Deporting a shared disk group manually ... 67
Starting shared volumes manually ... 67
Evaluating the state of CVM ports
Verifying that Oracle Disk Manager is configured
Disabling the Oracle Disk Manager feature
Using Cached ODM
Enabling Cached ODM for file systems
Modifying Cached ODM settings for individual files
Adding Cached ODM settings via the cachemap
Configuring Common Internet File System in user mode
Configuring Common Internet File System in domain mode
Configuring Common Internet File System in ads mode
Administering Common Internet File System
Sharing a CFS file system previously added to VCS
Unsharing the previous shared CFS file system
Sample main.cf
Appendix A Creating a starter database ... 197
Creating a database for Oracle 10g or 11g ... 197
Creating database tablespace on shared raw VxVM volumes (option 1) ... 197
Creating database tablespace on CFS (option 2) ... 199
Glossary
Chapter 1 Technical overview of Storage Foundation Cluster File System

This chapter includes the following topics:
■ Storage Foundation Cluster File System architecture
■ About Veritas File System features supported in cluster file systems
■ Storage Foundation Cluster File System benefits and applications

Storage Foundation Cluster File System architecture

The Veritas Storage Foundation Cluster File System (SFCFS) allows clustered servers to mount and use a file system simultaneously as if all applications using the file system were running on the same server.
SFCFS retains some remnants of the old master/slave or primary/secondary concept: the first server to mount each cluster file system becomes its primary; all other nodes in the cluster become secondaries. Applications access the user data in files directly from the server on which they are running. Each SFCFS node has its own intent log.
Technical overview of Storage Foundation Cluster File System About Veritas File System features supported in cluster file systems ■ Fast recovery from system crashes using the intent log to track recent file system metadata updates ■ Online administration that allows file systems to be extended and defragmented while they are in use Every VxFS manual page has a section on Storage Foundation Cluster File System Issues with information on whether the command functions on a cluster-mounted file system and
16 Technical overview of Storage Foundation Cluster File System About Veritas File System features supported in cluster file systems Table 1-1 Veritas File System features in cluster file systems (continued) Features Description Memory mapping Shared memory mapping established by the mmap() function is supported on SFCFS. See the mmap(2) manual page.
Technical overview of Storage Foundation Cluster File System Storage Foundation Cluster File System benefits and applications Table 1-2 Veritas File System features not in cluster file systems (continued) Unsupported features Comments Cached Quick I/O This Quick I/O for Databases feature that caches data in the file system cache is not supported.
18 Technical overview of Storage Foundation Cluster File System Storage Foundation Cluster File System benefits and applications failover becomes more flexible because it is not constrained by data accessibility. ■ Because each SFCFS file system can be on any node in the cluster, the file system recovery portion of failover time in an n-node cluster can be reduced by a factor of n by distributing the file systems uniformly across cluster nodes.
Technical overview of Storage Foundation Cluster File System Storage Foundation Cluster File System benefits and applications ■ For single-host applications that must be continuously available, SFCFS can reduce application failover time because it provides an already-running file system environment in which an application can restart after a server failure.
20 Technical overview of Storage Foundation Cluster File System Storage Foundation Cluster File System benefits and applications
Chapter 2 Storage Foundation Cluster File System architecture This chapter includes the following topics: ■ About Storage Foundation Cluster File System and the Group Lock Manager ■ Storage Foundation Cluster File System namespace ■ About asymmetric mounts ■ Primary and secondary ■ Determining or moving primaryship ■ Synchronize time on Cluster File Systems ■ File system tunables ■ Setting the number of parallel fsck threads ■ About Storage Checkpoints ■ Storage Foundation Cluster File
■ About Veritas Volume Manager cluster functionality
■ Storage Foundation Cluster File System and Veritas Volume Manager cluster functionality agents
■ Veritas Volume Manager cluster functionality

About Storage Foundation Cluster File System and the Group Lock Manager

SFCFS uses the Veritas Group Lock Manager (GLM) to reproduce UNIX single-host file system semantics in clusters.
Storage Foundation Cluster File System architecture Primary and secondary primary mounts "ro". Otherwise, the primary mounts either "rw" or "ro,crw", and the secondaries have the same choice. You can specify the cluster read-write (crw) option when you first mount the file system, or the options can be altered when doing a remount (mount -o remount). See the mount_vxfs(1M) manual page.
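For example, the cluster read-write mode can be requested at the initial cluster mount or changed later with a remount; the following is only a sketch, assuming a hypothetical shared device and mount point (the exact option string depends on your configuration and the mount_vxfs options supported on your system):

# mount -F vxfs -o cluster,ro,crw /dev/vx/dsk/cfsdg/vol1 /mnt1
# mount -F vxfs -o remount,ro,crw /dev/vx/dsk/cfsdg/vol1 /mnt1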
24 Storage Foundation Cluster File System architecture Determining or moving primaryship For CVM, a single cluster node is the master for all shared disk groups and shared volumes in the cluster. See “About Veritas Volume Manager cluster functionality” on page 46. Determining or moving primaryship The first node of a cluster file system to mount is called the primary node. Other nodes are called secondary nodes.
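For example, the fsclustadm command reports the current primary for a cluster mount and can move primaryship; a sketch, assuming a hypothetical mount point /mnt1 (setprimary typically makes the node on which it is run the new primary):

# fsclustadm -v showprimary /mnt1
# fsclustadm -v setprimary /mnt1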
Storage Foundation Cluster File System architecture About Storage Checkpoints The number of parallel fsck threads that could be active during recovery was set to 4. For example, if a node failed over 12 file systems, log replay for the 12 file systems will not complete at the same time. The number was set to 4 since parallel replay of a large number of file systems would put memory pressure on systems with less memory.
26 Storage Foundation Cluster File System architecture Storage Foundation Cluster File System backup strategies only the name space (directory hierarchy) of the file system, but also the user data as it existed at the moment the file system image was captured. You can use a Storage checkpoint in many ways. For example, you can use them to: ■ Create a stable image of the file system that can be backed up to tape.
Storage Foundation Cluster File System architecture Parallel I/O overall snapshot overhead. Therefore, running a backup application by mounting a snapshot from a relatively less loaded node is beneficial to overall cluster performance. The following are several characteristics of a cluster snapshot: ■ A snapshot for a cluster mounted file system can be mounted on any node in a cluster. The file system can be a primary, secondary, or secondary-only.
28 Storage Foundation Cluster File System architecture I/O error handling policy Traditionally, the entire file is locked to perform I/O to a small region. To support parallel I/O, SFCFS locks ranges in a file that correspond to I/O requests. Two I/O requests conflict if at least one is a write request, and the I/O range of the request overlaps the I/O range of the other. The parallel I/O feature enables I/O to a file by multiple threads concurrently, as long as the requests do not conflict.
Storage Foundation Cluster File System architecture Single network link and reliability Single network link and reliability Certain environments may prefer using a single private link or a public network for connecting nodes in a cluster, despite the loss of redundancy for dealing with network failures. The benefits of this approach include simpler hardware topology and lower costs; however, there is obviously a tradeoff with high availability.
Changes take effect immediately and are lost on the next reboot. For changes to span reboots, you must also update the /etc/llttab file.

Note: LLT clients are not notified of link failures until only one LLT link is left and GAB declares jeopardy.

Split-brain and jeopardy handling

A split-brain occurs when the cluster membership view differs among the cluster nodes, increasing the chance of data corruption.
Storage Foundation Cluster File System architecture About I/O fencing About I/O fencing I/O fencing protects the data on shared disks when nodes in a cluster detect a change in the cluster membership that indicates a split-brain condition. The fencing operation determines the following: ■ The nodes that must retain access to the shared storage ■ The nodes that must be ejected from the cluster This decision prevents possible data corruption.
32 Storage Foundation Cluster File System architecture About I/O fencing To provide high availability, the cluster must be capable of taking corrective action when a node fails. In this situation, SFCFS configures its components to reflect the altered membership. Problems arise when the mechanism that detects the failure breaks down because symptoms appear identical to those of a failed node.
Storage Foundation Cluster File System architecture About I/O fencing With SCSI-3 PR technology, blocking write access is as easy as removing a registration from a device. Only registered members can "eject" the registration of another member. A member wishing to eject another member issues a "preempt and abort" command. Ejecting a node is final and atomic; an ejected node cannot eject another node. In SFCFS, a node registers the same key for all paths to the device.
About coordination points

Coordination points provide a lock mechanism to determine which nodes get to fence off data drives from other nodes. A node must eject a peer from the coordination points before it can fence the peer from the data drives. Racing for control of the coordination points to fence data disks is the key to understanding how fencing prevents split-brain.
Storage Foundation Cluster File System architecture About I/O fencing Forcefully unregister other nodes (preempt) as members of this active SFCFS cluster In short, the CP server functions as another arbitration mechanism that integrates within the existing I/O fencing module. ■ Note: With the CP server, the fencing arbitration logic still remains on the SFCFS cluster. Multiple SFCFS clusters running different operating systems can simultaneously access the CP server.
36 Storage Foundation Cluster File System architecture About I/O fencing This default racing preference does not take into account the application groups that are online on any nodes or the system capacity in any subcluster. For example, consider a two-node cluster where you configured an application on one node and the other node is a standby-node.
Storage Foundation Cluster File System architecture About I/O fencing About I/O fencing configuration files Table 2-1 lists the I/O fencing configuration files. Table 2-1 I/O fencing configuration files File Description /etc/rc.config.d/ vxfenconf This file stores the start and stop environment variables for I/O fencing: VXFEN_START—Defines the startup behavior for the I/O fencing module after a system reboot. Valid values include: 1—Indicates that I/O fencing is enabled to start up.
38 Storage Foundation Cluster File System architecture About I/O fencing Table 2-1 I/O fencing configuration files (continued) File Description /etc/vxfenmode This file contains the following parameters: ■ vxfen_mode ■ scsi3—For disk-based fencing ■ customized—For server-based fencing ■ disabled—To run the I/O fencing driver but not do any fencing operations. vxfen_mechanism This parameter is applicable only for server-based fencing. Set the value as cps.
Storage Foundation Cluster File System architecture About I/O fencing Table 2-1 I/O fencing configuration files (continued) File Description /etc/vxfentab When I/O fencing starts, the vxfen startup script creates this /etc/vxfentab file on each node. The startup script uses the contents of the /etc/vxfendg and /etc/vxfenmode files. Any time a system is rebooted, the fencing driver reinitializes the vxfentab file with the current list of all the coordinator points.
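As an illustration of the vxfen_mode values listed in Table 2-1, a minimal disk-based /etc/vxfenmode might look like the following sketch; the scsi3_disk_policy parameter is not shown in the table above and is included here as an assumption (typically dmp or raw, depending on your setup):

vxfen_mode=scsi3
scsi3_disk_policy=dmp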
Table 2-2 I/O fencing scenarios (continued)

Event: Both private networks function again after the event above.
Node A: Node A continues to work.
Node B: Node B has crashed. It cannot start the database since it is unable to write to the data disks.
Operator action: Restart Node B after the private networks are restored.

Event: One private network fails.
Node A: Node A prints
Storage Foundation Cluster File System architecture About I/O fencing Table 2-2 I/O fencing scenarios (continued) Event Node A: What happens? Node B: What happens? Operator action Nodes A and B and private networks lose power. Coordinator and data disks retain power. Node A restarts and I/O fencing driver (vxfen) detects Node B is registered with coordinator disks. The driver does not see Node B listed as member of cluster because private networks are down.
Table 2-2 I/O fencing scenarios (continued)

Event: Node A crashes while Node B is down. Node B comes up and Node A is still down.
Node A: Node A is crashed.
Node B: Node B restarts and detects Node A is registered with the coordinator disks. The driver does not see Node A listed as member of the cluster.
Table 2-2 I/O fencing scenarios (continued)

Event: The disk array containing two of the three coordinator disks is powered off.
Node A: Node A continues to operate in the cluster.
Node B: Node B has left the cluster.
Operator action: Power on the failed disk array so that subsequent network partition does not cause cluster shutdown, or replace coordinator disks.
Moreover, both the /etc/vxfenmode and /etc/vxfentab files contain an additional parameter, "security", which indicates whether communication between the CP server and the SFCFS cluster nodes is secure. Figure 2-2 displays a schematic of the customized fencing options.
Storage Foundation Cluster File System architecture About I/O fencing ■ generate_snapshot.sh : Retrieves the SCSI ID’s of the coordinator disks and/or UUID ID's of the CP servers For information about the UUID (Universally Unique Identifier), see the Veritas Cluster Server Administrator's Guide. ■ join_local_node.sh: Registers the keys with the coordinator disks or CP servers ■ race_for_coordination_point.sh: Races to determine a winner after cluster reconfiguration ■ unjoin_local_node.
46 Storage Foundation Cluster File System architecture About Veritas Volume Manager cluster functionality Different access level privileges permit the user to issue different commands. If a user is neither a CP server admin nor a CP server operator user, then the user has guest status and can issue limited commands. The user types and their access level privileges are assigned to individual users during SFCFS cluster configuration for fencing.
Storage Foundation Cluster File System architecture About Veritas Volume Manager cluster functionality The private network allows the nodes to share information about system resources and about each other’s state. Using the private network, any node can recognize which nodes are currently active, which are joining or leaving the cluster, and which have failed. The private network requires at least two communication channels to provide redundancy against one of the channels failing.
48 Storage Foundation Cluster File System architecture About Veritas Volume Manager cluster functionality ■ Private and shared disk groups ■ Activation modes of shared disk groups ■ Connectivity policy of shared disk groups ■ Limitations of shared disk groups Private and shared disk groups Table 2-3 describes the disk group types. Table 2-3 Disk group types Disk group Description Private Belongs to only one node. A private disk group is only imported by one system.
Storage Foundation Cluster File System architecture About Veritas Volume Manager cluster functionality Reconfiguring a shared disk group is performed with the co-operation of all nodes. Configuration changes to the disk group happen simultaneously on all nodes and the changes are identical. Such changes are atomic in nature, which means that they either occur simultaneously on all nodes or not at all.
50 Storage Foundation Cluster File System architecture About Veritas Volume Manager cluster functionality Table 2-4 Activation modes for shared disk groups (continued) Activation mode Description readonly (ro) The node has read access to the disk group and denies write access for all other nodes in the cluster. The node has no write access to the disk group. Attempts to activate a disk group for either of the write modes on other nodes fail. sharedread (sr) The node has read access to the disk group.
Storage Foundation Cluster File System architecture About Veritas Volume Manager cluster functionality When a shared disk group is created or imported, it is activated in the specified mode. When a node joins the cluster, all shared disk groups accessible from the node are activated in the specified mode. The activation mode of a disk group controls volume I/O from different nodes in the cluster.
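For example, the activation mode of a shared disk group can be set with the vxdg command; a sketch, assuming a hypothetical shared disk group named oradatadg and the sharedwrite (sw) mode:

# vxdg -g oradatadg set activation=sw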
52 Storage Foundation Cluster File System architecture Storage Foundation Cluster File System and Veritas Volume Manager cluster functionality agents Table 2-6 Policies (continued) Policy Description Local In the event of disks failing, the failures are confined to the particular nodes that saw the failure. However, this policy is not highly available because it fails the node even if one of the mirrors is available.
Storage Foundation Cluster File System architecture Veritas Volume Manager cluster functionality Veritas Volume Manager cluster functionality The Veritas Volume Manager cluster functionality (CVM) makes logical volumes accessible throughout a cluster. CVM enables multiple hosts to concurrently access the logical volumes under its control. A VxVM cluster comprises nodes sharing a set of devices. The nodes are connected across a network. If one node fails, other nodes can access the devices.
54 Storage Foundation Cluster File System architecture Veritas Volume Manager cluster functionality
Chapter 3 Administering Storage Foundation Cluster File System and its components

This chapter includes the following topics:
■ About Storage Foundation Cluster File System administration
■ Administering CFS
■ Administering CVM
■ Administering ODM
■ Administering I/O Fencing
■ Administering SFCFS global clusters

About Storage Foundation Cluster File System administration

The Veritas Storage Foundation Cluster File System is a shared file system that enables multiple hosts to mount and perform file operations concurrently on the same file.
56 Administering Storage Foundation Cluster File System and its components Administering CFS are several other packages supplied by VCS that provide application failover support when installing SFCFS HA. See the Veritas Storage Foundation Cluster File System Installation Guide. SFCFS also requires the cluster functionality (CVM) of the Veritas Volume Manager (VxVM) to create the shared volumes necessary for mounting cluster file systems.
Administering Storage Foundation Cluster File System and its components Administering CFS Resizing CFS file systems If you see a message on the console indicating that a CFS file system is full, you may want to resize the file system. The vxresize command lets you resize a CFS file system. It extends the file system and the underlying volume. See the vxresize (1M) manual page for information on various options.
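For example, to grow a cluster file system and the underlying volume by 2 GB, a sketch with hypothetical disk group and volume names (for a shared volume, the command is typically run from the CVM master node):

# vxresize -g cfsdg vol1 +2g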
58 Administering Storage Foundation Cluster File System and its components Administering CFS # gabconfig -a | grep "Port f" CFS agents and AMF support The CFS agents (CFSMount and CFSfsckd) are AMF-aware. In this release the CFS agents use the V51 framework. CFS agent log files You can use the CFS agent log files that are located in the directory /var/VRTSvcs/log to debug CFS issues. # cd /var/VRTSvcs/log # ls CFSMount_A.log CFSfsckd_A.log engine_A.
Administering Storage Foundation Cluster File System and its components Administering CFS Table 3-1 SFCFS commands (continued) Commands Description cfsshare Clustered NFS (CNFS) and Common Internet File System (CIFS) configuration command See the cfsshare(1M) manual page for more information. mount, fsclustadm, and fsadm commands The mount and fsclustadm commands are important for configuring cluster file systems.
60 Administering Storage Foundation Cluster File System and its components Administering CFS as dd execute without any reservation, and can damage a file system mounted from another node. Before running this kind of command on a file system, be sure the file system is not mounted on a cluster. You can run the mount command to see if a file system is a shared or local mount.
Administering Storage Foundation Cluster File System and its components Administering CFS The /etc/fstab file In the /etc/fstab file, do not specify any cluster file systems to mount-at-boot because mounts initiated from fstab occur before cluster configuration begins. For cluster mounts, use the VCS configuration file to determine which file systems to enable following a reboot.
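For example, cluster mounts are typically added to the VCS configuration with cfsmntadm and then mounted with cfsmount; a sketch, assuming hypothetical disk group, volume, mount point, and mount option values:

# cfsmntadm add cfsdg vol1 /mnt1 all=rw
# cfsmount /mnt1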
62 Administering Storage Foundation Cluster File System and its components Administering CFS online backups of the file system. Snapshots implement copy-on-write semantics that incrementally copy data blocks when they are overwritten on the snapped file system. See the Veritas Storage Foundation Advanced Features Administrator’s Guide. Snapshots for cluster file systems extend the same copy-on-write mechanism for the I/O originating from any node in the cluster.
Administering Storage Foundation Cluster File System and its components Administering CFS 63 Creating a snapshot on a Storage Foundation Cluster File System To create and mount a snapshot on a two-node cluster using SFCFS administrative interface commands.
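As a rough sketch of the idea using plain VxFS syntax (the SFCFS administrative commands cfsmntadm and cfsmount can also manage the snapshot as a VCS resource), assuming hypothetical disk group, volume, and mount point names, with the snapshot mounted on one node:

# mount -F vxfs -o snapof=/mnt1 /dev/vx/dsk/cfsdg/snapvol /mnt1snap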
64 Administering Storage Foundation Cluster File System and its components Administering CVM Administering CVM Listing all the CVM shared disks You can use the following command to list all the CVM shared disks: # vxdisk -o alldgs list |grep shared Establishing CVM cluster membership manually In most cases you do not have to start CVM manually; it normally starts when VCS is started.
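If you do need to start CVM manually on a node (for example, when VCS is not managing CVM), the vxclustadm command is typically used; a sketch, followed by a membership check:

# vxclustadm -m vcs -t gab startnode
# vxclustadm nidmap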
Administering Storage Foundation Cluster File System and its components Administering CVM To change the CVM master manually 1 To view the current master, use one of the following commands: # vxclustadm nidmap Name galaxy nebula CVM Nid 0 1 CM Nid 0 1 State Joined: Slave Joined: Master # vxdctl -c mode mode: enabled: cluster active - MASTER master: nebula In this example, the CVM master is nebula.
66 Administering Storage Foundation Cluster File System and its components Administering CVM 3 To monitor the master switching, use the following command: # vxclustadm -v nodestate state: cluster member nodeId=0 masterId=0 neighborId=1 members[0]=0xf joiners[0]=0x0 leavers[0]=0x0 members[1]=0x0 joiners[1]=0x0 leavers[1]=0x0 reconfig_seqnum=0x9f9767 vxfen=off state: master switching in progress reconfig: vxconfigd in join In this example, the state indicates that master is being changed.
Administering Storage Foundation Cluster File System and its components Administering CVM a transaction is in progress. Try again In some cases, if the master switching operation is interrupted by another reconfiguration operation, the master change fails. In this case, the existing master remains the master of the cluster. After the reconfiguration is complete, reissue the vxclustadm setmaster command to change the master.
68 Administering Storage Foundation Cluster File System and its components Administering CVM Evaluating the state of CVM ports CVM kernel (vxio driver) uses port ‘v’ for kernel messaging and port ‘w’ for vxconfigd communication between the cluster nodes. The following command displays the state of CVM ports: # gabconfig -a | egrep "Port [vw]" Verifying if CVM is running in an SFCFS cluster You can use the following options to verify whether CVM is up or not in an SFCFS cluster.
Administering Storage Foundation Cluster File System and its components Administering CVM nodeId=0 masterId=0 neighborId=1 members=0x3 joiners=0x0 leavers=0x0 reconfig_seqnum=0x72a10b vxfen=on The state indicates that CVM has completed its kernel level join and is in the middle of vxconfigd level join. The vxdctl -c mode command indicates whether a node is a CVM master or CVM slave.
70 Administering Storage Foundation Cluster File System and its components Administering ODM CVMVxconfigd_A.log engine_A.log # CVM vxconfigd Agent log # VCS log You can use the cmdlog file to view the list of CVM commands that have been executed. The file is located at /var/adm/vx/cmdlog. See the Veritas Volume Manager Administrator's Guide for more information.
To stop ODM

1 On all nodes, issue the following command:
# hastop -local

2 Next, issue the following command:
# /sbin/init.d/odm stop

Note: The administrator does not usually need to stop or start ODM. Normally, ODM is stopped during shutdown -r, and ODM is started when the system reboots into multi-user mode.

Administering I/O Fencing

See the Veritas Cluster Server Administrator's Guide for more information.
72 Administering Storage Foundation Cluster File System and its components Administering I/O Fencing The I/O fencing commands reside in the /opt/VRTS/bin folder. Make sure you added this folder path to the PATH environment variable. Refer to the corresponding manual page for more information on the commands. About the vxfentsthdw utility You can use the vxfentsthdw utility to verify that shared storage arrays to be used for data support SCSI-3 persistent reservations and I/O fencing.
Administering Storage Foundation Cluster File System and its components Administering I/O Fencing The disk /dev/vx/rdmp/c1t1d0 is ready to be configured for I/O Fencing on node galaxy If the utility does not show a message stating a disk is ready, verification has failed. ■ If the disk you intend to test has existing SCSI-3 registration keys, the test issues a warning before proceeding.
Table 3-2 vxfentsthdw options (continued)

vxfentsthdw option: -c
Description: Utility tests the coordinator disk group prompting for systems and devices, and reporting success or failure.
When to use: For testing disks in the coordinator disk group. See "Testing the coordinator disk group using vxfentsthdw -c option" on page 74.
Administering Storage Foundation Cluster File System and its components Administering I/O Fencing Note: To test the coordinator disk group using the vxfentsthdw utility, the utility requires that the coordinator disk group, vxfencoorddg, be accessible from two nodes. To test the coordinator disk group using vxfentsthdw -c 1 Use the vxfentsthdw command with the -c option.
76 Administering Storage Foundation Cluster File System and its components Administering I/O Fencing To remove and replace a failed disk 1 Use the vxdiskadm utility to remove the failed disk from the disk group. Refer to the Veritas Volume Manager Administrator’s Guide. 2 Add a new disk to the node, initialize it, and add it to the coordinator disk group.
Administering Storage Foundation Cluster File System and its components Administering I/O Fencing If the utility does not show a message stating a disk is ready, verification has failed. Failure of verification can be the result of an improperly configured disk array. It can also be caused by a bad disk. If the failure is due to a bad disk, remove and replace it.
78 Administering Storage Foundation Cluster File System and its components Administering I/O Fencing 6 If a disk is ready for I/O fencing on each node, the utility reports success: ALL tests on the disk /dev/vx/rdmp/c1t1d0 have PASSED The disk is now ready to be configured for I/O Fencing on node galaxy ... Removing test keys and temporary files, if any ... . . 7 Run the vxfentsthdw utility for each disk you intend to verify.
Administering Storage Foundation Cluster File System and its components Administering I/O Fencing After testing, destroy the disk group and put the disks into disk groups as you need. To test all the disks in a diskgroup 1 Create a diskgroup for the disks that you want to test. 2 Enter the following command to test the diskgroup test_disks_dg: # vxfentsthdw -g test_disks_dg The utility reports the test results one disk at a time.
80 Administering Storage Foundation Cluster File System and its components Administering I/O Fencing -m register with disks -n make a reservation with disks -p remove registrations made by other systems -r read reservations -x remove registrations Refer to the vxfenadm(1m) manual page for a complete list of the command options. About the I/O fencing registration key format The keys that the vxfen driver registers on the data disks and the coordinator disks consist of eight bytes.
Administering Storage Foundation Cluster File System and its components Administering I/O Fencing Byte 0 1 2 3 4 5 6 Value A+nID P G R DGcount DGcount DGcount DGcount 81 7 where DGcount is the count of disk group in the configuration Displaying the I/O fencing registration keys You can display the keys that are currently assigned to the disks using the vxfenadm command. The variables such as disk_7, disk_8, and disk_9 in the following procedure represent the disk names in your setup.
82 Administering Storage Foundation Cluster File System and its components Administering I/O Fencing Device Name: /dev/vx/rdmp/disk_7 Total Number Of Keys: 1 key[0]: [Numeric Format]: 66,80,71,82,48,48,48,48 [Character Format]: BPGR0001 [Node Format]: Cluster ID: unknown Node ID: 1 Node Name: nebula ■ To display the keys on a VCS failover disk group: # vxfenadm -s /dev/vx/rdmp/disk_8 Reading SCSI Registration Keys...
Administering Storage Foundation Cluster File System and its components Administering I/O Fencing 83 # lltstat -C 57069 If the disk has keys which do not belong to a specific cluster, then the vxfenadm command cannot look up the node name for the node ID and hence prints the node name as unknown.
84 Administering Storage Foundation Cluster File System and its components Administering I/O Fencing To verify that the nodes see the same disks 1 Verify the connection of the shared storage for data to two of the nodes on which you installed SFCFS.
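One way to confirm that both nodes see the same LUN is to compare the serial number that each node reports for the device with vxfenadm; a sketch, assuming a hypothetical DMP path:

# vxfenadm -i /dev/vx/rdmp/c1t1d0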
To clear keys after split-brain

1 Stop VCS on all nodes.
# hastop -all

2 Make sure that the port h is closed on all the nodes. Run the following command on each node to verify that the port h is closed:
# gabconfig -a
Port h must not appear in the output.

3 Stop I/O fencing on all nodes. Enter the following command on each node:
# /sbin/init.d/vxfen stop
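The later steps of this procedure run the vxfenclearpre script to remove the stale registration keys; assuming the standard installation path, it is started as follows:

# /opt/VRTSvcs/vxfen/bin/vxfenclearpre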
86 Administering Storage Foundation Cluster File System and its components Administering I/O Fencing 6 Read the script’s introduction and warning. Then, you can choose to let the script run. Do you still want to continue: [y/n] (default : n) y In some cases, informational messages resembling the following may appear on the console of one of the nodes in the cluster when a node is ejected from a disk/LUN. You can ignore these informational messages.
About the vxfenswap utility

The vxfenswap utility allows you to replace coordinator disks in a cluster that is online. The utility verifies that the serial numbers of the new disks are identical on all the nodes and that the new disks can support I/O fencing. Refer to the vxfenswap(1M) manual page.
88 Administering Storage Foundation Cluster File System and its components Administering I/O Fencing ■ For server-based fencing, use the vxfenswap -a cancel command to cancel the vxfenswap operation. Replacing I/O fencing coordinator disks when the cluster is online Review the procedures to add, remove, or replace one or more coordinator disks in a cluster that is operational.
Administering Storage Foundation Cluster File System and its components Administering I/O Fencing 4 If your setup uses VRTSvxvm version 5.1 RP2 (or later) or 5.1 SP1 (or later), then skip to step 5. You need not set coordinator=off to add or remove disks. For other VxVM versions, perform this step: Turn off the coordinator attribute value for the coordinator disk group.
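For example, assuming the default coordinator disk group name vxfencoorddg used elsewhere in this guide, the attribute is turned off as follows:

# vxdg -g vxfencoorddg set coordinator=off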
90 Administering Storage Foundation Cluster File System and its components Administering I/O Fencing 9 Review the message that the utility displays and confirm that you want to commit the new set of coordinator disks. Else skip to step 10. Do you wish to commit this change? [y/n] (default: n) y If the utility successfully commits, the utility moves the /etc/vxfentab.test file to the /etc/vxfentab file. 10 If you do not want to commit the new set of coordinator disks, answer n.
Administering Storage Foundation Cluster File System and its components Administering I/O Fencing 4 Find the alternative disk groups available to replace the current coordinator diskgroup.
92 Administering Storage Foundation Cluster File System and its components Administering I/O Fencing ■ 9 Verifies that the new disk group can support I/O fencing on each node. If the disk verification passes, the utility reports success and asks if you want to replace the coordinator disk group. 10 Review the message that the utility displays and confirm that you want to replace the coordinator disk group. Else skip to step 13.
Administering Storage Foundation Cluster File System and its components Administering I/O Fencing To add new disks from a recovered site to the coordinator diskgroup 1 Make sure system-to-system communication is functioning properly. 2 Make sure that the cluster is online.
94 Administering Storage Foundation Cluster File System and its components Administering I/O Fencing 6 When the primary site comes online, start the vxfenswap utility on any node in the cluster: # vxfenswap -g vxfencoorddg [-n] 7 Verify the count of the coordinator disks.
Administering Storage Foundation Cluster File System and its components Administering I/O Fencing To refresh lost keys on coordinator disks 1 Make sure system-to-system communication is functioning properly. 2 Make sure that the cluster is online.
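A typical way to check the fencing state and the keys currently registered on the coordinator disks is sketched below; the exact options used in the remaining steps may differ in your environment:

# vxfenadm -d
# vxfenadm -s all -f /etc/vxfentab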
96 Administering Storage Foundation Cluster File System and its components Administering I/O Fencing About administering the coordination point server This section describes how to perform administrative and maintenance tasks on the coordination point server (CP server). For more information about the cpsadm command and the associated command options, see the cpsadm(1M) manual page.
Administering Storage Foundation Cluster File System and its components Administering I/O Fencing Where cp_server is the CP server's virtual IP address or virtual hostname.
98 Administering Storage Foundation Cluster File System and its components Administering I/O Fencing Adding or removing CP server users ■ To add a user Type the following command: # cpsadm -s cp_server -a add_user -e user_name -f user_role -g domain_type -u uuid ■ To remove a user Type the following command: # cpsadm -s cp_server -a rm_user -e user_name -g domain_type cp_server The CP server's virtual IP address or virtual hostname. user_name The user to be added to the CP server configuration.
Administering Storage Foundation Cluster File System and its components Administering I/O Fencing cluster_name The SFCFS cluster name. Preempting a node To preempt a node Type the following command: # cpsadm -s cp_server -a preempt_node -u uuid -n nodeid -v victim_node id cp_server The CP server's virtual IP address or virtual hostname. uuid The UUID (Universally Unique ID) of the SFCFS cluster. nodeid The node id of the SFCFS cluster node. victim_node id The victim node's node id.
100 Administering Storage Foundation Cluster File System and its components Administering I/O Fencing ■ To disable access for a user to a SFCFS cluster Type the following command: # cpsadm -s cp_server -a rm_clus_from_user -e user_name -f user_role -g domain_type -u uuid cp_server The CP server's virtual IP address or virtual hostname. user_name The user name to be added to the CP server. user_role The user role, either cps_admin or cps_operator.
Administering Storage Foundation Cluster File System and its components Administering I/O Fencing Note: If multiple clusters share the same CP server, you must perform this replacement procedure in each cluster. You can use the vxfenswap utility to replace coordination points when fencing is running in customized mode in an online cluster, with vxfen_mechanism=cps.
102 Administering Storage Foundation Cluster File System and its components Administering I/O Fencing To replace coordination points for an online cluster 1 Ensure that the SFCFS cluster nodes and users have been added to the new CP server(s). Run the following commands: # cpsadm -s cpserver -a list_nodes # cpsadm -s cpserver -a list_users If the SFCFS cluster nodes are not present here, prepare the new CP server(s) for use by the SFCFS cluster.
Administering Storage Foundation Cluster File System and its components Administering I/O Fencing 4 Use a text editor to access /etc/vxfenmode and update the values to the new CP server (coordination points). The values of the /etc/vxfenmode file have to be updated on all the nodes in the SFCFS cluster. Review and if necessary, update the vxfenmode parameters for security, the coordination points, and if applicable to your configuration, vxfendg.
104 Administering Storage Foundation Cluster File System and its components Administering I/O Fencing To refresh the registration keys on the coordination points for server-based fencing 1 Ensure that the SFCFS cluster nodes and users have been added to the new CP server(s). Run the following commands: # cpsadm -s cp_server -a list_nodes # cpsadm -s cp_server -a list_users If the SFCFS cluster nodes are not present here, prepare the new CP server(s) for use by the SFCFS cluster.
Administering Storage Foundation Cluster File System and its components Administering I/O Fencing 4 Run the vxfenswap utility from one of the nodes of the cluster. The vxfenswap utility requires secure ssh connection to all the cluster nodes. Use -n to use rsh instead of default ssh. For example: # vxfenswap [-n] The command returns: VERITAS vxfenswap version The logfile generated for vxfenswap is /var/VRTSvcs/log/vxfen/vxfenswap.log. 19156 Please Wait...
Table 3-4 CP server deployment and migration scenarios

Scenario: Setup of CP server for a SFCFS cluster for the first time
CP server: New CP server
SFCFS cluster: New SFCFS cluster using CP server as coordination point
Action required: On the designated CP server, perform the following tasks: 1 Prepare to configure the new CP server. 2 Configure the new CP server.

Scenario: Enabling fencing in a SFCFS cluster with a new CP server coordination point
CP server: New CP server
SFCFS cluster: Existing SFCFS cluster with fencing configured in disabled mode
Action required: Note: Migrating from fencing in disabled mode to customized mode incurs application downtime on the SFCFS cluster.

Scenario: Enabling fencing in a SFCFS cluster with an operational CP server coordination point
CP server: Operational CP server
SFCFS cluster: Existing SFCFS cluster with fencing configured in disabled mode
Action required: Note: Migrating from fencing in disabled mode to customized mode incurs application downtime.

Scenario: Enabling fencing in a SFCFS cluster with a new CP server coordination point
CP server: New CP server
SFCFS cluster: Existing SFCFS cluster with fencing configured in scsi3 mode
Action required: On the designated CP server, perform the following tasks: 1 Prepare to configure the new CP server.

Scenario: Enabling fencing in a SFCFS cluster with an operational CP server coordination point
CP server: Operational CP server
SFCFS cluster: Existing SFCFS cluster with fencing configured in disabled mode
Action required: On the designated CP server, prepare to configure the new CP server.
Administering Storage Foundation Cluster File System and its components Administering I/O Fencing Migrating from disk-based to server-based fencing in an online cluster You can migrate between disk-based fencing and server-based fencing without incurring application downtime in the Storage Foundation Cluster File System HA clusters. You can migrate from disk-based fencing to server-based fencing in the following cases: ■ You want to leverage the benefits of server-based fencing.
112 Administering Storage Foundation Cluster File System and its components Administering I/O Fencing See the Veritas Storage Foundation Cluster File System Installation Guide for the procedures. 4 Create a new /etc/vxfenmode.test file on each Storage Foundation Cluster File System HA cluster node with the fencing configuration changes such as the CP server information. Refer to the sample vxfenmode files in the /etc/vxfen.d folder.
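For server-based fencing, a /etc/vxfenmode.test file might look like the following sketch; the CP server address, port, and disk group name are placeholders for your own values, and the definitive parameter set should be taken from the samples in the /etc/vxfen.d folder:

vxfen_mode=customized
vxfen_mechanism=cps
security=1
cps1=[10.209.80.197]:14250
vxfendg=vxfencoorddg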
Administering Storage Foundation Cluster File System and its components Administering I/O Fencing 7 After the migration is complete, verify the change in the fencing mode.
114 Administering Storage Foundation Cluster File System and its components Administering I/O Fencing To migrate from server-based fencing to disk-based fencing 1 Make sure system-to-system communication is functioning properly. 2 Make sure that the Storage Foundation Cluster File System HA cluster is online.
Administering Storage Foundation Cluster File System and its components Administering I/O Fencing ■ If you do not want to commit the new fencing configuration changes, press Enter or answer n at the prompt. Do you wish to commit this change? [y/n] (default: n) n The vxfenswap utility rolls back the migration operation. ■ If you want to commit the new fencing configuration changes, answer y at the prompt.
116 Administering Storage Foundation Cluster File System and its components Administering I/O Fencing To migrate from non-secure to secure setup for CP server and SFCFS cluster 1 Stop fencing on all the SFCFS cluster nodes of all the clusters (which are using the CP servers). # /sbin/init.d/vxfen stop 2 Stop all the CP servers using the following command on each CP server: # hagrp -offline CPSSG -any 3 Ensure that security is configured for communication between CP servers and SFCFS cluster nodes.
Administering Storage Foundation Cluster File System and its components Administering I/O Fencing 7 Authorize the user to administer the cluster. For example, issue the following command on the CP server (mycps1.symantecexample.com): # cpsadm -s mycps1.symantecexample.com -a\ add_clus_to_user -c cpcluster\ -u {f0735332-1dd1-11b2-a3cb-e3709c1c73b9}\ -e _HA_VCS_galaxy@HA_SERVICES@galaxy.symantec.com\ -f cps_operator -g vx Cluster successfully added to user _HA_VCS_galaxy@HA_SERVICES@galaxy.symantec.
118 Administering Storage Foundation Cluster File System and its components Administering I/O Fencing ■ Make the VCS configuration writable. # haconf -makerw ■ Set the value of the cluster-level attribute PreferredFencingPolicy as System. # haclus -modify PreferredFencingPolicy System ■ Set the value of the system-level attribute FencingWeight for each node in the cluster.
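For example, assuming the node names galaxy and nebula used elsewhere in this guide, weights might be assigned as follows (the values are illustrative):

# hasys -modify galaxy FencingWeight 50
# hasys -modify nebula FencingWeight 10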
Administering Storage Foundation Cluster File System and its components Administering I/O Fencing # haconf -dump -makero 5 To view the fencing node weights that are currently set in the fencing driver, run the following command: # vxfenconfig -a To disable preferred fencing for the I/O fencing configuration 1 Make sure that the cluster is running with I/O fencing set up. # vxfenadm -d 2 Make sure that the cluster-level attribute UseFence has the value set to SCSI3.
Table 3-5 VXFEN tunable parameters (continued)

vxfen Parameter: vxfen_max_delay
Description and Values (Default, Minimum, and Maximum): Specifies the maximum number of seconds that the smaller sub-cluster waits before racing with larger sub-clusters for control of the coordinator disks when a split-brain occurs. This value must be greater than the vxfen_min_delay value.
Administering Storage Foundation Cluster File System and its components Administering SFCFS global clusters To configure the VxFEN parameters and reconfigure the VxFEN module 1 Configure the tunable parameter. # /usr/sbin/kctune tunable=value For example: # /usr/sbin/kctune vxfen_min_delay=100 2 Stop VCS. # hastop -local 3 Reboot the node.
122 Administering Storage Foundation Cluster File System and its components Administering SFCFS global clusters contains a point-in-time copy of the production data in the Replicated Volume Group (RVG). Bringing the fire drill service group online on the secondary site demonstrates the ability of the application service group to fail over and come online at the secondary site, should the need arise.
Administering Storage Foundation Cluster File System and its components Administering SFCFS global clusters You can schedule the fire drill for the service group using the fdsched script. See “Scheduling a fire drill” on page 124.
124 Administering Storage Foundation Cluster File System and its components Administering SFCFS global clusters Configuring local attributes in the fire drill service group The fire drill setup wizard does not recognize localized attribute values for resources. If the application service group has resources with local (per-system) attribute values, you must manually set these attributes after running the wizard.
Chapter 4 Using Veritas Extension for Oracle Disk Manager This chapter includes the following topics: ■ About Oracle Disk Manager ■ About Oracle Disk Manager and Storage Foundation Cluster File System ■ About Oracle Disk Manager and Oracle Managed Files ■ Setting up Veritas Extension for Oracle Disk Manager ■ Preparing existing database storage for Oracle Disk Manager ■ Converting Quick I/O files to Oracle Disk Manager files ■ Verifying that Oracle Disk Manager is configured ■ Disabling the
126 Using Veritas Extension for Oracle Disk Manager About Oracle Disk Manager as to which regions or blocks of a mirrored datafile to resync after a system crash. Oracle Resilvering avoids overhead from the VxVM DRL, which increases performance. Oracle Disk Manager reduces administrative overhead by providing enhanced support for Oracle Managed Files. Veritas Extension for Oracle Disk Manager has Quick I/O-like capabilities, but is transparent to the user.
Using Veritas Extension for Oracle Disk Manager About Oracle Disk Manager How Oracle Disk Manager improves database performance Oracle Disk Manager improves database I/O performance to VxFS file systems by: ■ Supporting kernel asynchronous I/O ■ Supporting direct I/O and avoiding double buffering ■ Avoiding kernel write locks on database files ■ Supporting many concurrent I/Os in one system call ■ Avoiding duplicate opening of files per Oracle instance ■ Allocating contiguous datafiles About ke
128 Using Veritas Extension for Oracle Disk Manager About Oracle Disk Manager and Storage Foundation Cluster File System About supporting many concurrent I/Os in one system call When performing asynchronous I/O, an Oracle process may try to issue additional I/O requests while collecting completed I/Os, or it may try to wait for particular I/O requests synchronously, as it can do no other work until the I/O is completed. The Oracle process may also try to issue requests to different files.
Using Veritas Extension for Oracle Disk Manager About Oracle Disk Manager and Oracle Managed Files About Oracle Disk Manager and Oracle Managed Files Oracle10g or later offers a feature known as Oracle Managed Files (OMF). OMF manages datafile attributes such as file names, file location, storage attributes, and whether or not the file is in use by the database. OMF is only supported for databases that reside in file systems. OMF functionality is greatly enhanced by Oracle Disk Manager.
130 Using Veritas Extension for Oracle Disk Manager About Oracle Disk Manager and Oracle Managed Files Note: Before building an OMF database, you need the appropriate init.ora default values. These values control the location of the SYSTEM tablespace, online redo logs, and control files after the CREATE DATABASE statement is executed. $ cat initPROD.
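A hypothetical init.ora for an OMF setup might contain entries along these lines; the values shown are illustrative and not taken from the original listing:

db_name = PROD
db_block_size = 8192
db_create_file_dest = '/PROD'
db_create_online_log_dest_1 = '/PROD'
control_files = ('/PROD/control01.ctl')
undo_management = AUTO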
Using Veritas Extension for Oracle Disk Manager Setting up Veritas Extension for Oracle Disk Manager 131 The system is altered. SQL> create tablespace EMP_TABLE DATAFILE AUTOEXTEND ON MAXSIZE \ 500M; A tablespace is created. SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/PROD/EMP_INDEX'; The system is altered. SQL> create tablespace EMP_INDEX DATAFILE AUTOEXTEND ON MAXSIZE \ 100M; A tablespace is created.
132 Using Veritas Extension for Oracle Disk Manager Preparing existing database storage for Oracle Disk Manager are installed. The Veritas Extension for Oracle Disk Manager library is linked to the library in the {ORACLE_HOME}/lib directory. If you are performing a local Oracle installation, not on the SFCFS file system, then ODM linking needs to be performed on all nodes in the cluster.
Note: If you are running an earlier version of Oracle (Oracle 8.x or lower), you should not convert your Quick I/O files because Oracle Disk Manager is for Oracle10g or later only. Because the Oracle Disk Manager uses the Quick I/O driver to perform asynchronous I/O, do not turn off the Quick I/O mount option, which is the default.
134 Using Veritas Extension for Oracle Disk Manager Verifying that Oracle Disk Manager is configured To verify that Oracle Disk Manager is configured 1 Verify that the ODM feature is included in the license: # /opt/VRTS/bin/vxlicrep | grep ODM The output verifies that ODM is enabled. Note: Verify that the license key containing the ODM feature is not expired. If the license key has expired, you will not be able to use the ODM feature.
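The procedure also checks that the ODM package is installed; on HP-UX this is typically done with swlist, for example:

# swlist VRTSodm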
Using Veritas Extension for Oracle Disk Manager Disabling the Oracle Disk Manager feature 135 To verify that Oracle Disk Manager is running 1 Start the Oracle database. 2 Check that the instance is using the Oracle Disk Manager function: # cat /dev/odm/stats # echo $? 0 3 Verify that the Oracle Disk Manager is loaded: # /usr/sbin/kcmodule -P state odm state loaded 4 In the alert log, verify the Oracle instance is running.
136 Using Veritas Extension for Oracle Disk Manager Disabling the Oracle Disk Manager feature To disable the Oracle Disk Manager feature in an Oracle instance 1 Shut down the database instance. 2 Use the rm and ln commands to remove the link to the Oracle Disk Manager Library. For Oracle 11g, enter: # rm ${ORACLE_HOME}/lib/libodm11.sl # ln -s ${ORACLE_HOME}/lib/libodmd11.sl \ ${ORACLE_HOME}/lib/libodm11.sl For Oracle 10g, enter: # rm ${ORACLE_HOME}/lib/libodm10.
Using Veritas Extension for Oracle Disk Manager Using Cached ODM 3 Restart the database instance. Using Cached ODM ODM I/O normally bypasses the file system cache and directly reads from and writes to disk. Cached ODM enables some I/O to use caching and read ahead, which can improve ODM I/O performance. Cached ODM performs a conditional form of caching that is based on per-I/O hints from Oracle. The hints indicate what Oracle does with the data.
138 Using Veritas Extension for Oracle Disk Manager Using Cached ODM To enable Cached ODM for a file system 1 Enable Cached ODM on the VxFS file system /database01: # vxtunefs -s -o odm_cache_enable=1 /database01 2 Optionally, you can make this setting persistent across mounts by adding a file system entry in the file /etc/vx/tunefstab: /dev/vx/dsk/datadg/database01 odm_cache_enable=1 See the tunefstab(4) manual page.
To check on the current cache advisory settings for a file

◆ Check the current cache advisory settings of the files /mnt1/file1 and /mnt2/file2:
# odmadm getcachefile /mnt1/file1 /mnt2/file2
/mnt1/file1,ON
/mnt2/file2,OFF

To reset all files to the default cache advisory

◆ Reset all files to the default cache advisory:
# odmadm resetcachefiles

Adding Cached ODM settings via the cachemap

You can use the odmadm setcachemap command to configure the cachemap.
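For example, a cachemap entry that enables caching and read ahead for sequential reads of datafiles might be added as shown below; the file-type/I/O-type names are an assumption for illustration and are not confirmed by the excerpt above:

# odmadm setcachemap data/data_read_seq=cache,readahead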
140 Using Veritas Extension for Oracle Disk Manager Using Cached ODM To make the caching setting persistent across mounts ◆ Create the /etc/vx/odmadm file to list files and their caching advisories.
Chapter 5 Clustered NFS This chapter includes the following topics: ■ About Clustered NFS ■ Requirements ■ Understanding how Clustered NFS works ■ cfsshare manual page ■ Configure and unconfigure Clustered NFS ■ Administering Clustered NFS ■ How to mount an NFS-exported file system on the NFS clients ■ Debugging Clustered NFS About Clustered NFS In previous releases, the SFCFS stack only allowed an active/passive setup for NFS serving due to the complexity of lock reclamation in an active/
142 Clustered NFS Understanding how Clustered NFS works Understanding how Clustered NFS works This Clustered NFS feature allows the same file system mounted across multiple nodes using CFS to be shared over NFS from any combination of those nodes without any loss of functionality during failover. The failover of NFS lock servers includes all the locks being released by the old node then reclaimed by clients talking to the new node during the grace period.
■ Stops lock and status services on all nodes to prevent granting locks.
■ Copies all the files from the /locks/sm/lastonline/sm/ directory to the /locks/sm/nextonline/sm/ directory.
where locks is the file system created for storing lock information.
where lastonline is the node on which the VIP resource was previously online.
where nextonline is the node on which the VIP resource will go online next.
144 Clustered NFS cfsshare manual page Actions ■ On each node, a /opt/VRTSvcs/bin/IP/actions/nfscfs file is installed. This file is used to start and stop the NFS locking daemons on a specified node. The action script is used instead of using rsh, ssh or hacli for remote command execution from the triggers. ■ On each node, a /opt/VRTSvcs/bin/ApplicationNone/actions/nfscfsapp file is installed.
Clustered NFS Configure and unconfigure Clustered NFS The local state tracking directory contains a file for each NFS client that has a transaction with the NFS server. The local state tracking directory is: /var/statmon/sm This creates a symlink to /locks/sm/nodename/sm on all the cluster nodes. This allows the lock state files for any cluster node to be accessed by other nodes in the cluster, even when the node is down.
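To confirm that this redirection is in place on a cluster node, list the symlink and compare its target with the layout described above (a hedged check using a standard HP-UX command; no product-specific options are involved):
# ls -l /var/statmon/sm
The link target should point to /locks/sm/<nodename>/sm for the local node.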
146 Clustered NFS Administering Clustered NFS There is a limit on the number of service groups that can be created in VCS. If this limit is reached, then cfsnfssg_dummy serves as the service group in which resources get created during cfsshare unshare and cfsshare delete operations. See the Veritas Cluster Server Administrator's Guide for information about the GroupLimit attribute. Unconfigure Clustered NFS cfsshare unconfig -p nfs This command is used to undo all the steps during the config phase.
Clustered NFS Administering Clustered NFS mount the shared file system at the mount_point. Once these commands have been executed, the CFSMount resource corresponding to the mount_point gets created in either a default service group (with a name similar to vrts_vea_cfs_int_cfsmountnumber) or in a separate service group, as specified by the user.
Adding an NFS shared CFS file system to VCS
cfsshare add shared_disk_group shared_volume mount_point [share_options] \
node_name=[mount_options]...
Clustered NFS Administering Clustered NFS # hares -modify vip1 Device lan2 -sys system02 # haconf -dump -makero where vip1 is the Virtual IP resource created by the cfsshare addvip command. where system01 and system02 are the cluster nodes. Deleting a Virtual IP address from VCS cfsshare deletevip address This command is used to delete the non-parallel/failover service group corresponding to the Virtual IP address.
Note: The cfsshare unshare operation can affect NFS clients that might have mounted the mount_point file system.
Sharing a file system checkpoint
This section describes how to share a file system checkpoint.
To share a file system checkpoint
1 To add the checkpoint to the VCS configuration, enter:
# cfsmntadm add ckpt ckptname mntpt_of_fs mntpt_of_checkpoint \
all=cluster,rw
where ckptname is the checkpoint name.
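For example, with placeholder names (ckpt_jan1 for the checkpoint, /mnt1 for the file system mount point, and /mnt1_ckpt for the checkpoint mount point; all three are illustrative):
# cfsmntadm add ckpt ckpt_jan1 /mnt1 /mnt1_ckpt \
all=cluster,rw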
Clustered NFS Administering Clustered NFS To configure a Clustered NFS (Sample 1) 1 Configure a VCS configuration for CFS/CVM, enter: # cfscluster config 2 Configure CNFS components, enter: # cfsshare config -p nfs shared_disk_group shared_volume \ mount_point For example: # cfsshare config -p nfs cfsdg vollocks /locks 3 Add and mount the CFS file system to the VCS configuration, enter: # cfsmntadm add shared_disk_group shared_volume mount_point \ [service_group] all=[mount_options] # cfsmount mount
6 Add the Virtual IP addresses for users to access the shared CFS file systems, enter:
# cfsshare addvip network_interface address netmask
For example:
# cfsshare addvip lan0 10.182.111.161 255.255.240.0
7 Delete a previously added Virtual IP address from the configuration, enter:
# cfsshare deletevip address
For example:
# cfsshare deletevip 10.182.111.161
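The commands shown in this sample can be strung together with the same example values (a minimal sketch; vol1 and /mnt1 are placeholder volume and mount point names, and error checking is omitted):
# cfscluster config
# cfsshare config -p nfs cfsdg vollocks /locks
# cfsmntadm add cfsdg vol1 /mnt1 all=rw
# cfsmount /mnt1
# cfsshare addvip lan0 10.182.111.161 255.255.240.0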
Clustered NFS Administering Clustered NFS 153 To configure Clustered NFS (Sample 2) 1 Configure a VCS configuration for CFS/CVM, enter: # cfscluster config 2 Configure the CNFS components, enter: # cfsshare config -p nfs shared_disk_group shared_volume mount_point For example: # cfsshare config -p nfs cfsdg vollocks /locks 3 Add and mount the NFS shared CFS file system to the VCS configuration, enter: # cfsshare add shared_disk_group shared_volume mount_point \ [share_options] all=[mount_options]
6 Unshare, unmount, and remove the CFS file system from the VCS configuration, enter:
# cfsshare delete mount_point
For example:
# cfsshare delete /mnt1
7 Unconfigure CNFS components, enter:
# cfsshare unconfig -p nfs
Sample main.cf file
This is a sample main.cf file.

include "OracleASMTypes.cf"
include "types.cf"
include "ApplicationNone.cf"
include "CFSTypes.cf"
include "CVMTypes.cf"
include "Db2udbTypes.cf"
include "OracleTypes.cf"
ApplicationNone app (
    MonitorProgram = "/opt/VRTSvcs/bin/ApplicationNone/lockdstatdmon"
    )

CFSMount cfsmount2 (
    Critical = 0
    MountPoint = "/fsqamnt2"
    BlockDevice = "/dev/vx/dsk/dg1/vol2"
    NodeList = { galaxy, nebula }
    )

CFSMount cfsnfs_locks (
    Critical = 0
    MountPoint = "/locks"
    BlockDevice = "/dev/vx/dsk/dg1/vollocks"
    NodeList = { galaxy, nebula }
    )

CVMVolDg cvmvoldg1 (
    Critical = 0
    CVMDiskGroup = dg1
    CVMActivation @galaxy = sw
    CVMActivation @nebula = sw
    CVMVolume =
//
// group cfsnfssg
// {
// ApplicationNone app
// CFSMount cfsnfs_locks
//     {
//     CVMVolDg cvmvoldg1
//     }
// Share share1
//     {
//     NFS nfs
//     CFSMount cfsmount2
//         {
//         CVMVolDg cvmvoldg1
//         }
//     }
// }

group cfsnfssg_dummy (
    SystemList = { galaxy = 0, nebula = 1 }
    AutoFailOver = 0
    Parallel = 1
    AutoStartList = { galaxy, nebula }
    )

requires group cvm online local firm

// resource dependency tree
//
// group cfsnfssg_dummy
// {
// }

group cvm (
    SystemList =
CFSfsckd vxfsckd (
    ActivationMode @galaxy = { dg1 = sw }
    ActivationMode @nebula = { dg1 = sw }
    )

CVMCluster cvm_clus (
    CVMClustName = cfs_cluster
    CVMNodeId = { galaxy = 0, nebula = 1 }
    CVMTransport = gab
    CVMTimeout = 200
    )

CVMVxconfigd cvm_vxconfigd (
    Critical = 0
    CVMVxconfigdArgs = { syslog }
    )

cvm_clus requires cvm_vxconfigd
vxfsckd requires cvm_clus

// resource dependency tree
//
// group cvm
// {
// CFSfsckd vxfsckd
//     {
//     CVMCluster cvm_clus
    Device = lan0
    Address = "10.182.111.161"
    NetMask = "255.255.252.0"
    )

NIC nic1 (
    Device = lan0
    )

requires group cfsnfssg online local firm
vip1 requires nic1

// resource dependency tree
//
// group vip1
// {
// IP vip1
//     {
//     NIC nic1
//     }
// }

How to mount an NFS-exported file system on the NFS clients
This section describes how to mount an NFS-exported file system on the NFS clients.
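A typical client-side mount uses the virtual IP address added with cfsshare addvip (a hedged example; the NFS version, the exported path /mnt1, and the local mount point are illustrative and depend on your client operating system):
# mount -F nfs -o vers=3 10.182.111.161:/mnt1 /mnt1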
Chapter 6 Common Internet File System This chapter includes the following topics: ■ About Common Internet File System ■ Requirements ■ Understanding how Samba works ■ Configuring Clustered NFS and Common Internet File System on CFS ■ cfsshare manual page ■ Configuring Common Internet File System in user mode ■ Configuring Common Internet File System in domain mode ■ Configuring Common Internet File System in ads mode ■ Administering Common Internet File System ■ Debugging Common Interne
160 Common Internet File System Requirements Requirements ■ CIFS user mode can run with the default Samba packages that are part of the OS. ■ CIFS ads mode can run with the default Samba packages that are part of the OS and Kerberos version KRB5CLIENT_E.1.6.2.08. ■ CIFS domain mode requires Samba version 3.2 or later. ■ Prior knowledge of Samba is a prerequisite. Understanding how Samba works Samba is a networking tool that enables a UNIX server to participate in Windows networks.
Configuring Common Internet File System in user mode
This section describes how to configure CIFS in user mode. In this mode, user authentication happens on the cluster nodes themselves. You must have NIS or some other mechanism configured on the cluster nodes to ensure that the same users and groups have the same user and group IDs on all cluster nodes.
A shared file system needs to be specified during the config operation.
To complete the CIFS configuration when using the -n option
1 Copy the following lines to your smb.conf file:
security = user
passdb backend = smbpasswd
smbpasswd file = pvtdir/smbpasswd
where pvtdir is the private directory of your Samba installation.
2 Run the following command to back up your existing smbpasswd file:
# cp -f pvtdir/smbpasswd pvtdir/smbpasswd.
Common Internet File System Configuring Common Internet File System in domain mode A shared file system needs to be specified during the config operation. This file system is used to replicate the secrets.tdb file (machine password file) across all cluster nodes. Only one of the cluster nodes joins the domain using the cluster name. Once you have copied this file to all the cluster nodes, the Domain controller sees all cluster nodes as one member server.
To complete the CIFS configuration when using the -n option
1 Copy the following lines to your smb.conf file:
security = domain
workgroup = domainname
password server = Domain_Controller_of_the_domain
2 Run the following command to back up your existing secrets.tdb file:
# mv -f pvtdir/secrets.tdb pvtdir/secrets.tdb.OLD
where pvtdir is the private directory of your Samba installation.
3 Copy the secrets.
You must have configured Kerberos on all cluster nodes. The time on all cluster nodes must be synchronized with the AD server/KDC.
The shared file system can also be used to store any tdb file that needs to be shared across all cluster nodes. Appropriate symlinks must be created on all cluster nodes.
You must back up your existing smb.
To complete the CIFS configuration when using the -n option
1 Copy the following lines to your smb.conf file:
security = ads
workgroup = domainname
password server = AD_server_of_the_domain
realm = realm_name
2 Run the following command to back up your existing secrets.tdb file:
# mv -f pvtdir/secrets.tdb pvtdir/secrets.tdb.OLD
where pvtdir is the private directory of your Samba installation.
3 Copy the secrets.
For example:
# cfsshare addvip lan0 10.182.79.216 \
255.255.240.0 10.182.79.215
The cfsshare addvip command lets you specify only one network interface, which is assumed to be present on all cluster nodes. If you want to specify different network interfaces for different cluster nodes, you need to run certain VCS commands.
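For example (a hedged sketch modeled on the equivalent NFS procedure shown earlier in this guide; vip1, lan1, lan2, node1, and node2 are placeholder resource, interface, and system names):
# haconf -makerw
# hares -modify vip1 Device lan1 -sys node1
# hares -modify vip1 Device lan2 -sys node2
# haconf -dump -makero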
# cfsshare delete -p cifs /mnt1
Deleting the VIP added previously:
cfsshare deletevip address
For example:
# cfsshare deletevip 10.182.79.216
Sharing a CFS file system previously added to VCS
Use one of the following commands:
cfsshare share -p cifs -v address -n cifs_share_name \
[-C cifs_share_options] mount_point
For example:
# cfsshare share -p cifs -v 10.182.79.
Common Internet File System Administering Common Internet File System share resource on top of the CFSMount resource in the same cfsnfssg service group. Note: VCS does not have the functionality to move resources across service groups. The cfsshare command creates new CFSMount and CVMVolDg resources in the cfsnfssg service group and deletes the corresponding resources from the original service group. The newly created resource names are different from the original resource names.
cluster node1node2 (
    HacliUserLevel = COMMANDROOT
    )

system node1 (
    )

system node2 (
    )

group cfsnfssg (
    SystemList = { node1 = 0, node2 = 1 }
    AutoFailOver = 0
    Parallel = 1
    AutoStartList = { node1, node2 }
    )

Application Samba_winbind (
    StartProgram = "/opt/VRTSvcs/bin/ApplicationNone/winbindmonitor.sh start"
    StopProgram = "/opt/VRTSvcs/bin/ApplicationNone/winbindmonitor.sh stop"
    PidFiles = { "/var/run/winbindmonitor.
    Critical = 0
    CVMDiskGroup = lockdg
    CVMVolume = { vollocks }
    CVMActivation @node1 = sw
    CVMActivation @node2 = sw
    )

CVMVolDg cvmvoldg2 (
    Critical = 0
    CVMDiskGroup = sharedg
    CVMVolume = { vol1 }
    CVMActivation @node1 = sw
    CVMActivation @node2 = sw
    )

NetBios Samba_netbios (
    SambaServerRes = SambaServerResource
    NetBiosName = node1node2
    )

SambaServer SambaServerResource (
    ConfFile = "/etc/samba/smb.
    Parallel = 1
    AutoStartList = { node1, node2 }
    )

requires group cvm online local firm

// resource dependency tree
//
// group cfsnfssg_dummy
// {
// }

group cvm (
    SystemList = { node1 = 0, node2 = 1 }
    AutoFailOver = 0
    Parallel = 1
    AutoStartList = { node1, node2 }
    )

CFSfsckd vxfsckd (
    ActivationMode @node1 = { lockdg = sw }
    ActivationMode @node2 = { lockdg = sw }
    )

CVMCluster cvm_clus (
    CVMClustName = node1node2
    CVMNodeId = { node1
    SystemList = { node1 = 0, node2 = 1 }
    AutoStartList = { node1, node2 }
    PreOnline @node1 = 1
    PreOnline @node2 = 1
    )

IP vip1 (
    Device = lan0
    Address = "10.182.79.216"
    NetMask = "255.255.248.
Chapter 7 Troubleshooting SFCFS This chapter includes the following topics: ■ About troubleshooting SFCFS ■ Troubleshooting fenced configurations ■ Troubleshooting I/O fencing ■ Troubleshooting CVM About troubleshooting SFCFS SFCFS contains several component products, and as a result can be affected by any issue with component products. The first step in case of trouble should be to identify the source of the problem.
Example of a pre-existing network partition (split-brain)
Figure 7-1 shows a two-node cluster in which the severed cluster interconnect poses a potential split-brain condition.
Figure 7-1 Pre-existing network partition (split-brain). The figure depicts Node 0 and Node 1 and the following sequence: first, the interconnect failure causes both nodes to race; second, Node 0 ejects key B for disk 1 and succeeds; third, Node 0 ejects key B for disk 2 and succeeds.
Example Scenario I
The scenario shown in Figure 7-2 could cause similar symptoms on a two-node cluster with one node shut down for maintenance. During the outage, the private interconnect cables are disconnected.
Figure 7-2 Example scenario I. The figure depicts Node 0 and Node 1 and the following sequence: first, the network interconnect is severed and Node 0 wins the coordinator race; second, Node 1 panics and reboots; finally, Node 1 boots up, finds keys registered for a non-member, prints an error message, and exits.
178 Troubleshooting SFCFS Troubleshooting I/O fencing before the private interconnect cables are fixed and Node 1 rejoins the cluster, Node 0 panics due to hardware failure and cannot come back up, Node 1 cannot rejoin. Suggested solution: Shut down Node 1, reconnect the cables, restart the node. You must then clear the registration of Node 0 from the coordinator disks. To fix scenario III 1 On Node 1, type the following command: # /opt/VRTSvcs/vxfen/bin/vxfenclearpre 2 Restart the node.
Troubleshooting SFCFS Troubleshooting I/O fencing Node is unable to join cluster while another node is being ejected A cluster that is currently fencing out (ejecting) a node from the cluster prevents a new node from joining the cluster until the fencing operation is completed. The following are example messages that appear on the console for the new node: ...VxFEN ERROR V-11-1-25 ... Unable to join running cluster since cluster is currently fencing a node out of the cluster.
180 Troubleshooting SFCFS Troubleshooting I/O fencing vxfenconfig: ERROR: There exists the potential for a preexisting split-brain. The coordinator disks list no nodes which are in the current membership. However, they also list nodes which are not in the current membership. I/O Fencing Disabled! Note: During the system boot, because the HP-UX rc sequencer redirects the stderr of all rc scripts to the file /etc/rc.log, the error messages will not be printed on the console.
Apparent potential split-brain condition—system 2 is down and system 1 is ejected
1 Physically verify that system 2 is down. Verify the systems currently registered with the coordinator disks. Use the following command:
# vxfenadm -s all -f /etc/vxfentab
The output of this command identifies the keys registered with the coordinator disks.
2 Clear the keys on the coordinator disks as well as the data disks using the vxfenclearpre command.
/dev/vx/rdmp/disk_9

galaxy> # vxfenadm -s /dev/vx/rdmp/disk_7
Reading SCSI Registration Keys...
Device Name: /dev/vx/rdmp/disk_7
Total Number Of Keys: 1
key[0]:
[Numeric Format]: 86,70,48,49,52,66,48,48
[Character Format]: VFBEAD00
[Node Format]: Cluster ID: 48813 Node ID: 0 Node Name: unknown

where disk_7, disk_8, and disk_9 represent the disk names in your setup.
Recommended action: You must use a unique set of coordinator disks for each cluster.
See "About the vxfenswap utility" on page 87.
Review the following information to replace a coordinator disk in the coordinator disk group, or to destroy a coordinator disk group. Note the following about the procedure:
■ When you add a disk, add the disk to the disk group vxfencoorddg and retest the group for support of SCSI-3 persistent reservations.
■ You can destroy the coordinator disk group such that no registration keys remain on the disks.
# vxdg list vxfencoorddg | grep flags: | grep coordinator
■ Destroy the coordinator disk group.
# vxdg -o coordinator destroy vxfencoorddg
6 Add the new disk to the node and initialize it as a VxVM disk. Then, add the new disk to the vxfencoorddg disk group:
■ If you destroyed the disk group in step 5, then create the disk group again and add the new disk to it.
■ If the disk group already exists, then add the new disk to it.
Troubleshooting SFCFS Troubleshooting I/O fencing Troubleshooting server-based I/O fencing All CP server operations and messages are logged in the /var/VRTScps/log directory in a detailed and easy to read format. The entries are sorted by date and time. The logs can be used for troubleshooting purposes or to review for any possible security issue on the system that hosts the CP server.
186 Troubleshooting SFCFS Troubleshooting I/O fencing ■ Check the VCS engine log (/var/VRTSvcs/log/engine_[ABC].log) to see if any of the CPSSG service group resources are FAULTED. ■ Review the sample dependency graphs to make sure the required resources are configured correctly. Troubleshooting server-based fencing on the SFCFS cluster nodes The file /var/VRTSvcs/log/vxfen/vxfend_[ABC].
Table 7-1 Fencing startup issues on SFCFS cluster (client cluster) nodes (continued)
Issue: Authentication failure
Description and resolution: If you had configured secure communication between the CP server and the SFCFS cluster (client cluster) nodes, authentication failure can occur due to the following causes:
■ Symantec Product Authentication Services (AT) is not properly configured on the CP server and/or the SFCFS cluster.
Table 7-1 Fencing startup issues on SFCFS cluster (client cluster) nodes (continued)
Issue: Preexisting split-brain
Description and resolution: Assume the following situations to understand preexisting split-brain in server-based fencing: There are three CP servers acting as coordination points. One of the three CP servers then becomes inaccessible. While in this state, one of the client nodes also leaves the cluster.
Troubleshooting SFCFS Troubleshooting I/O fencing To check the connectivity of CP server ◆ Run the following command to check whether a CP server is up and running at a process level: # cpsadm -s cp_server -a ping_cps where cp_server is the virtual IP address or virtual hostname on which the CP server is listening.
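If you configured more than one CP server, the same check can be looped over all of them (a minimal sketch; cps1, cps2, and cps3 are placeholders for your CP server virtual hostnames or IP addresses):
for cps in cps1 cps2 cps3
do
    echo "Checking $cps"
    cpsadm -s $cps -a ping_cps
done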
190 Troubleshooting SFCFS Troubleshooting I/O fencing Point agent continues monitoring the old set of coordination points it read from vxfenconfig -l output in every monitor cycle. The status of the Coordination Point agent (either ONLINE or FAULTED) depends upon the accessibility of the coordination points, the registrations on these coordination points, and the fault tolerance value.
Troubleshooting SFCFS Troubleshooting I/O fencing # export CPS_USERNAME=_HA_VCS_test-system@HA_SERVICES@test-system.symantec.com # export CPS_DOMAINTYPE=vx Once a pre-existing network partition is detected using the above commands, all spurious keys on the coordinator disks or CP server must be removed by the administrator. To troubleshoot server-based I/O fencing configuration in mixed mode 1 Review the current I/O fencing configuration by accessing and viewing the information in the vxfenmode file.
192 Troubleshooting SFCFS Troubleshooting I/O fencing 3 Review the SCSI registration keys for the coordinator disks used in the I/O fencing configuration. The variables disk_7 and disk_8 in the following commands represent the disk names in your setup. Enter the vxfenadm -s command on each of the SFCFS cluster nodes.
Troubleshooting SFCFS Troubleshooting I/O fencing 4 Review the CP server information about the cluster nodes. On the CP server, run the cpsadm list nodes command to review a list of nodes in the cluster. # cpsadm -s cp_server -a list_nodes where cp server is the virtual IP address or virtual hostname on which the CP server is listening. 5 Review the CP server list membership. On the CP server, run the following command to review the list membership.
194 Troubleshooting SFCFS Troubleshooting CVM Troubleshooting CVM This section discusses troubleshooting CVM problems. CVM group is not online after adding a node to the cluster The possible causes for the CVM group being offline after adding a node to the cluster are as follows: ■ The cssd resource is configured as a critical resource in the cvm group. ■ Other resources configured in the cvm group as critical resources are not online.
Please make sure that CVM and vxfen are configured and operating correctly.
First, make sure that CVM is running. You can see the CVM nodes in the cluster by running the vxclustadm nidmap command.
# vxclustadm nidmap
Name                CVM Nid    CM Nid    State
galaxy              1          0         Joined: Master
nebula              0          1         Joined: Slave
The above output shows that CVM is healthy, with system galaxy as the CVM master.
196 Troubleshooting SFCFS Troubleshooting CVM CVMVolDg not online even though CVMCluster is online When the CVMCluster resource goes online, then all shared disk groups that have the auto-import flag set are automatically imported. If the disk group import fails for some reason, the CVMVolDg resources fault. Clearing and taking the CVMVolDg type resources offline does not resolve the problem. To resolve the resource issue 1 Fix the problem causing the import of the shared disk group to fail.
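After the underlying import problem is corrected, the affected service groups typically have to be cycled with standard VCS commands (a hedged sketch, not the exact procedure from this guide; cvm_group and galaxy are placeholder group and system names):
# hagrp -offline cvm_group -sys galaxy
# hagrp -online cvm_group -sys galaxy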
Appendix A Creating a starter database This appendix includes the following topics: ■ Creating a database for Oracle 10g or 11g Creating a database for Oracle 10g or 11g Create a database tablespace for Oracle 10g or 11g using one of the two options: ■ Option 1: on shared raw VxVM volumes ■ Option 2: on cluster file system (CFS) Before you begin, take note of the following prerequisites: ■ CRS daemons must be running.
198 Creating a starter database Creating a database for Oracle 10g or 11g To create database tablespace on shared raw VxVM volumes (option 1) 1 On any cluster node, log in as root.
Creating a starter database Creating a database for Oracle 10g or 11g 4 Define the access mode and permissions for the volumes storing the Oracle data. For each volume listed in $ORACLE_HOME/raw_config, use the vxedit command: # vxedit -g disk_group set group=group user=user mode=660 volume See the vxedit(1M) manual page. For example, enter: # vxedit -g oradatadg set group=oinstall user=oracle mode=660 \ VRT_system01 In this example, VRT_system01 is the name of one of the volumes.
200 Creating a starter database Creating a database for Oracle 10g or 11g 3 Create a single shared volume that is large enough to contain a file system for all tablespaces. See the Oracle documentation specific to the Oracle database release for tablespace sizes. Assuming 6.
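As an illustration of this step and the file system creation that typically follows it (a hedged sketch; the oradatadg disk group, oradatavol volume, /oradata mount point, and 6800 MB size are placeholders rather than values from this guide):
# vxassist -g oradatadg make oradatavol 6800M
# mkfs -F vxfs /dev/vx/rdsk/oradatadg/oradatavol
# cfsmntadm add oradatadg oradatavol /oradata all=rw
# cfsmount /oradata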
Glossary ACL (access control list) The information that identifies specific users or groups and their access privileges for a particular file or directory. agent A process that manages predefined Veritas Cluster Server (VCS) resource types. Agents bring resources online, take resources offline, and monitor resources to report any state changes to VCS. When an agent is started, it obtains configuration information from VCS and periodically monitors the resources and updates VCS with the resource status.
202 Glossary shared device can be done concurrently from any host on which the cluster file system is mounted. To be a cluster mount, a file system must be mounted using the mount -o cluster option. Cluster Services The group atomic broadcast (GAB) module in the SFCFS stack provides cluster membership services to the file system. LLT provides kernel-to-kernel communications and monitors network communications. contiguous file A file in which data blocks are physically adjacent on the underlying media.
Glossary file system block The fundamental minimum size of allocation in a file system. This is equivalent to the fragment size on some UNIX file systems. fileset A collection of files within a file system. fixed extent size An extent attribute used to override the default allocation policy of the file system and set all allocations for a file to a specific fixed size.
system A file system mounted on a single host. The single host mediates all file system writes to storage from other clients. To be a local mount, a file system cannot be mounted using the mount -o cluster option.
metadata Structural data describing the attributes of files on a disk.
MB (megabyte) 2^20 bytes or 1024 kilobytes.
mirror A duplicate copy of a volume and the data therein (in the form of an ordered collection of subdisks).
Glossary shared disk group A disk group in which the disks are shared by multiple hosts (also referred to as a cluster-shareable disk group). shared volume A volume that belongs to a shared disk group and is open on more than one node at the same time. snapshot file system An exact copy of a mounted file system at a specific point in time. Used to do online backups. snapped file system A file system whose exact image has been used to create a snapshot file system.
206 Glossary VxVM The Veritas Volume Manager.
Index Symbols /etc/default/vxdg file 50 /etc/fstab file 61 A Actions 144 Administering Clustered NFS 146 Administration I/O fencing 71 SFCFS 55 Agents CVM 52 Applications SFCFS 17 Architecture SFCFS 13 Asymmetric mounts 22 mount_vxfs(1M) 23 B Backup strategies SFCFS 26 Basic design Clustered NFS 142 Benefits SFCFS 17 C CFS file system Sharing 168 CFS primaryship determining 24 moving 24 cfscluster command 58 cfsdgadm command 58 cfsmntadm command 58 cfsmount command 58 cfsnfssg_dummy service group 145 c
208 Index Configuring low priority link 29 Configuring a CNFS samples 150, 152 Connectivity policy shared disk groups 51 coordinator disks DMP devices 34 for I/O fencing 34 copy-on-write technique 25 CP server deployment scenarios 105 migration scenarios 105 CP server database 45 CP server user privileges 45 Creating database for Oracle 10g 197 database for Oracle 11g 197 snapshot SFCFS 63 CVM 46 agents 52 functionality 53 CVM master changing 64 CVM master node 47 D data corruption preventing 31 data dis
Index L Limitations shared disk groups 52 Locking 16 log files 185 Low priority link configuring 29 M main.
210 Index Setting parallel fsck threads 24 primaryship fsclustadm 61 SFCFS administration 55 applications 17 architecture 13 backup strategies 26 benefits 17 environments 29 features 18 GLM 22 growing file system 60 load distribution 61 performance 62 primary fails 61 snapshots 27, 61 synchronize time 24 usage 18 Shared CFS file system unsharing 169 Shared disk groups 47 allowed conflicting 50 connectivity policy 51 limitations 52 off default 49 shared disk groups activation modes 49 Sharing CFS file syst