Serviceguard NFS Toolkit A.11.11.06, A.11.23.05 and A.11.31.03
© Copyright 2001-2008 Hewlett-Packard Development Company, L.P.
Legal Notices
The information in this document is subject to change without notice. Hewlett-Packard makes no warranty of any kind with regard to this manual, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose.
Table of Contents
1 Overview of Serviceguard NFS
   Limitations of Serviceguard NFS
   Overview of Serviceguard NFS Toolkit A.11.31.03 with Serviceguard A.11.18 and Veritas Cluster File System Support
2 Installing and Configuring Serviceguard NFS
   Configuring a Serviceguard NFS failover package
   Starting a Serviceguard NFS failover package
3 Sample Configurations
   Example One - Three-Server Mutual Takeover
Index
List of Figures
1-1 CFS versus Non-CFS (VxFS) Implementation
1-2 SG NFS Servers over VxFS — High Availability
1-3 SG NFS Servers over CFS — High Availability, Scalability, Load Balancing
1-4 SG NFS Servers over CFS — High Availability, File Locking
1 Overview of Serviceguard NFS
Serviceguard NFS is a toolkit that enables you to use Serviceguard to set up highly available NFS servers. You must set up a Serviceguard cluster before you can set up highly available NFS. For instructions on setting up a Serviceguard cluster, see the Managing Serviceguard manual. Serviceguard NFS is a separately purchased set of configuration files and control scripts, which you customize for your specific needs.
• Wait at least 60 seconds after an HA-NFS package has started before mounting file systems exported from that package.
• Serviceguard supports cross-subnet failover. However, HA-NFS has a few limitations with cross-subnet configurations. For cross-subnet support details, refer to the Managing Serviceguard documentation available at: http://www.docs.hp.com/en/B3936-90122/ch02s02.
• filesystems that move between servers as part of the package. This holding directory is a configurable parameter and must be dedicated to holding the Status Monitor (SM) entries only.
• A new script, nfs.flm, periodically copies SM entries from the /var/statmon/sm directory into the package holding directory (the default interval is five seconds; you can change this value by modifying the PROPAGATE_INTERVAL parameter in the nfs.flm script). To edit the nfs.
• the new server's SM and v4_state directories with the entries from the primary server, respectively.
• After failover, nfsd is also killed and restarted when rpc.lockd and rpc.statd are killed and restarted on the adoptive node. The killing and restarting of nfsd forces NFSv4 servers to begin the grace period, during which time the NFSv4 clients reclaim their locks. After the nfs.
Limitations
The following is a list of limitations when using Serviceguard NFS Toolkit A.11.23.05 with Serviceguard A.11.17:
• Serviceguard A.11.17 introduces a new MULTI_NODE package type, which is not supported by Serviceguard NFS Toolkit. The only supported package type is FAILOVER.
• Serviceguard A.11.17 provides a new package configuration file template.
Figure 1-1 CFS versus Non-CFS (VxFS) Implementation In a Serviceguard CFS environment, files and filesystems are concurrently accessible on multiple nodes. When a package fails over, the adoptive systems do not have to mount the disks from the failed system because they are already mounted. There is a new multi-node package that runs on each server in the cluster and exports all the cluster filesystems. The exported filesystems do not have to be re-exported when a package fails over.
NOTE: The implementation of a load balancer or DNS round-robin scheme is optional and is beyond the scope of this publication. For more information about DNS round-robin addressing, refer to the BIND Name Service Overview section in the HP-UX IP Address and Client Administrator's Guide.
Figure 1-4 SG NFS Servers over CFS — High Availability, File Locking
Supported Configurations
Serviceguard NFS supports the following configurations:
• Simple failover from an active NFS server node to an idle NFS server node.
• Failover from one active NFS server node to another active NFS server node, where the adoptive node supports more than one NFS package after the failover.
• A host configured as an adoptive node for more than one NFS package.
• Cascading failover, in which a package has more than one adoptive node.
• Server-to-server cross mounting, in which NFS server nodes NFS-mount each other's file systems while acting as adoptive nodes for each other's packages.
Figure 1-5 Simple Failover to an Idle NFS Server Node_A is the primary node for NFS server package Pkg_1. When Node_A fails, Node_B adopts Pkg_1. This means that Node_B locally mounts the file systems associated with Pkg_1 and exports them. Both Node_A and Node_B must have access to the disks that hold the file systems for Pkg_1. Failover from One Active NFS Server to Another Figure 1-6 shows a failover from one active NFS server node to another active NFS server node.
Figure 1-6 Failover from One Active NFS Server to Another In Figure 1-6, Node_A is the primary node for Pkg_1, and Node_B is the primary node for Pkg_2. When Node_A fails, Node_B adopts Pkg_1 and becomes the server for both Pkg_1 and Pkg_2. A Host Configured as Adoptive Node for Multiple Packages Figure 1-7 shows a three-node configuration where one node is the adoptive node for packages on both of the other nodes. If either Node_A or Node_C fails, Node_B adopts the NFS server package from that node.
Figure 1-7 A Host Configured as Adoptive Node for Multiple Packages When Node_A fails, Node_B becomes the server for Pkg_1. If Node_C fails, Node_B will become the server for Pkg_2. Alternatively, you can set the package control option in the control script, nfs.cntl, to prevent Node_B from adopting more than one package at a time. With the package control option, Node_B may adopt the package of the first node that fails, but if the second node fails, Node_B will not adopt its package.
Figure 1-8 Cascading Failover with Three Adoptive Nodes Server-to-Server Cross Mounting Two NFS server nodes may NFS-mount each other's file systems and still act as adoptive nodes for each other's NFS server packages. Figure 1-9 illustrates this configuration.
Figure 1-9 Server-to-Server Cross Mounting
The advantage of server-to-server cross-mounting is that every server has an identical view of the file systems. The disadvantage is that, even on the node where a file system is locally mounted, the file system is accessed through an NFS mount, which has poorer performance than a local mount. Each node NFS-mounts the file systems for both packages. If Node_A fails, Node_B mounts the file system for Pkg_1, and the NFS mounts are not interrupted.
• Initiates the NFS monitor script to check periodically on the health of NFS services, if you have configured your NFS package to use the monitor script.
• Exports each file system associated with the package so that it can later be NFS-mounted by clients.
• Assigns a package IP address to the LAN card on the current node.
After this sequence, the NFS server is active, and clients can NFS-mount the exported file systems associated with the package.
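For example, once the package is up, a client can NFS-mount one of its exported file systems using the package's relocatable name rather than a physical host name, so the mount keeps working after a failover. The following is a minimal sketch; the nfs1 package name and the /hanfs/nfsu011 export path are taken from the samples later in this guide, and the client mount point is an assumption:
# On an NFS client (names and paths are illustrative)
mount -F nfs nfs1:/hanfs/nfsu011 /mnt/ha_data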
rpc.statd, rpc.lockd, nfsd, rpc.mountd, rpc.pcnfsd, and nfs.flm processes. You can monitor any or all of these processes as follows:
• To monitor the rpc.statd, rpc.lockd, and nfsd processes, you must set the NFS_SERVER variable to 1 in the /etc/rc.config.d/nfsconf file. If one nfsd process dies or is killed, the package fails over, even if other nfsd processes are running.
• To monitor the rpc.mountd process, you must set the START_MOUNTD variable to 1 in the /etc/rc.config.d/nfsconf file.
• To monitor the rpc.pcnfsd process, you must set the PCNFS_SERVER variable to 1 in the /etc/rc.config.d/nfsconf file.
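Pulling those prerequisites together, a minimal sketch of the relevant /etc/rc.config.d/nfsconf settings might look like the following; set only the variables that apply to your configuration:
# /etc/rc.config.d/nfsconf (excerpt)
NFS_SERVER=1       # needed to monitor rpc.statd, rpc.lockd, and nfsd
START_MOUNTD=1     # needed to monitor rpc.mountd
PCNFS_SERVER=1     # needed only if you serve PC clients and monitor rpc.pcnfsd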
2 Installing and Configuring Serviceguard NFS
This chapter explains how to configure Serviceguard NFS. You must set up your Serviceguard cluster before you can configure Serviceguard NFS. For instructions on setting up a Serviceguard cluster, see the Managing Serviceguard manual.
cmmakepkg -s /opt/cmcluster/nfs/nfs.cntl
3. Create a directory, /etc/cmcluster/nfs.
4. Run the following command to copy the Serviceguard NFS template files to the newly created /etc/cmcluster/nfs directory:
cp /opt/cmcluster/nfs/* /etc/cmcluster/nfs
Monitoring NFS/TCP Services with Serviceguard NFS Toolkit
In addition to monitoring NFS/UDP services, you can monitor NFS/TCP services with Serviceguard NFS Toolkit on HP-UX 11.x. For HP-UX 11.0, you need at least Serviceguard NFS Toolkit A.11.00.
are not currently present on the NFS server node, the node cannot boot properly. This happens if the server is an adoptive node for a file system, and the file system is available on the server only after failover of the primary node.
3. If your NFS servers must serve PC clients, set the PCNFS_SERVER variable to 1 in the /etc/rc.config.d/nfsconf file on the primary node and each adoptive node.
group files are the same on the primary node and all adoptive nodes, or use NIS to manage the passwd and group databases. For information on configuring NIS, see the NFS Services Administrator's Guide. 10. Create an entry for the name of the package in the DNS or NIS name resolution files, or in /etc/hosts, so that users will mount the exported file systems from the correct node. This entry maps the package name to the package's relocatable IP address. 11.
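As an illustration of step 10, the name service entry simply maps the package name to the package's relocatable IP address. The sketch below reuses the example address and package name that appear elsewhere in this guide:
# /etc/hosts (illustrative entry for an HA-NFS package)
15.13.114.243   nfs1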
NOTE: Serviceguard NFS Toolkit requires Serviceguard A.11.13 (or above). To enable the File Lock Migration feature (available with 11i v1 and 11i v2), you need Serviceguard A.11.15 or above. To ensure that the File Lock Migration feature functions properly, install HP-UX 11i v1 NFS General Release and Performance Patch, PHNE_26388 (or a superseding patch). For HP-UX 11i v2, the feature functions properly without a patch. There is an additional NFS-specific control script, hanfs.sh.
4. Specify the IP address for the package and the address of the subnet to which the IP address belongs: IP[0]=15.13.114.243 SUBNET[0]=15.13.112.0 The IP address you specify is the relocatable IP address for the package. NFS clients that mount the file systems in the package will use this IP address to identify the server. You should configure a name for this address in the DNS or NIS database, or in the /etc/hosts file. 5. Serviceguard NFS Toolkit A.11.23.
LV[0]=/dev/vg01/lvol1;FS[0]=/ha_root LV[1]=/dev/vg01/lvol2;FS[1]=/users/scaf LV[2]=/dev/vg02/lvol1;FS[2]=/ha_data 3. Create a separate XFS[n] variable for each NFS directory to be exported. Specify the directory name and any export options. XFS[0]="/ha_root" XFS[1]="/users/scaf" XFS[2]="-o ro /ha_data" Do not configure these exported directories in the /etc/exports file. When an NFS server boots up, it attempts to export all file systems in its /etc/exports file.
function customer_defined_run_cmds
{
cmmodpkg -d -n `hostname` pkg02 &
}
The package control option can prevent an adoptive node from becoming overloaded when multiple packages fail over. If an adoptive node becomes overloaded, it can fail. In this example, if a host is an adoptive node for both pkg01 and pkg02, the above cmmodpkg -d command, in the control script for pkg01, would prevent the host that is running pkg01 from adopting pkg02.
of a node or network failure. The NFS monitor script causes the package failover if any of the monitored NFS services fails. If you do not want to run the NFS monitor script, comment out the NFS_SERVICE_NAME and NFS_SERVICE_CMD variables: # NFS_SERVICE_NAME[0]=nfs.monitor # NFS_SERVICE_CMD[0]=/etc/cmcluster/nfs/nfs.mon By default, the NFS_SERVICE_NAME and NFS_SERVICE_CMD variables are commented out, and the NFS monitor script is not run. NOTE: The Serviceguard A.11.
NOTE: If you enable the File Lock Migration feature, an NFS client (or group of clients) may hit a corner case of requesting a file lock on the HA/NFS server and not receiving a crash recovery notification message when the HA/NFS package migrates to an adoptive node.
Editing the Package Configuration File (nfs.conf)
1. Serviceguard A.11.17 provides a new package configuration file template. The new package configuration file template introduces the following dependency variables:
• DEPENDENCY_NAME
• DEPENDENCY_CONDITION
• DEPENDENCY_LOCATION
The above parameters are not supported in Serviceguard NFS Toolkit A.11.23.05. By default, these variables are commented out in the nfs.conf file.
2. Set the PACKAGE_NAME variable.
Figure 2-1 Server-to-Server Cross-Mounting
The advantage of server-to-server cross-mounting is that every server has an identical view of the file systems. The disadvantage is that, even on the node where a file system is locally mounted, the file system is accessed through an NFS mount, which has poorer performance than a local mount. In order to make a Serviceguard file system available to all servers, all servers must NFS-mount the file system.
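A minimal sketch of the server and client mount-point settings follows; only the SNFS and CNFS variable names and the nfs1 package name come from this guide, and the directory names are assumptions:
# Illustrative cross-mount settings (directory names are examples)
SNFS[0]="nfs1:/hanfs/nfsu011"   # server location: package name and exported directory
CNFS[0]="/nfs/ha_data"          # client mount point on each server; must differ from SNFS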
In this example, nfs1 is the name that maps to the package's relocatable IP address. It must be configured in the name service used by the server (DNS, NIS, or the /etc/hosts file). If a server for the package will NFS-mount the package's file systems, the client mount point (CNFS) must be different from the server location (SNFS).
3. Copy the script you have just modified to all the servers that will NFS-mount the file systems in the package.
4.
vgchange -a n /dev/vg_nfsu01 7. Run the cluster using the following command: cmruncl -v -f Configuring Serviceguard NFS over CFS Packages This section describes how to configure and start Serviceguard NFS toolkit packages in a Serviceguard over CFS environment. It is assumed that you have already set up your Serviceguard cluster, performed the steps listed in the “Before Creating a Serviceguard NFS Package” section of the Serviceguard NFS Toolkit A.11.31.03, A.11.11.06 and A.11.23.
NOTE: Serviceguard A.11.18 introduces the concept of modular packages, which allows packages to be created using building blocks that comprise only the functions that the package needs. It also includes some changes to the variable names in the package control and configuration files created with the cmmakepkg command. Creation of modular packages is not supported in this version of Serviceguard NFS Toolkit, but will be supported in a future version.
3. Edit the nfs-export.cntl script and set the HA_NFS_SCRIPT_EXTENSION to export.sh. Note that when you edit the configuration scripts referred to throughout this document, you may have to uncomment the lines as you edit them.
# HA_NFS_SCRIPT_EXTENSION = "export.sh"
This will set the NFS-specific control script to be run by the package to hanfs.export.sh, as we have named it in the copy command above. No other changes are needed in this script.
4. Edit the hanfs.export.sh script:
XFS[0]="/cfs1"
XFS[1]="/cfs2"
CAUTION: Do not modify other variables or contents in this script since doing so is not supported.
5. Edit the nfs-export.conf file as follows:
a. Set the PACKAGE_NAME variable to SG-NFS-XP-1 (by default this variable is set to FAILOVER):
PACKAGE_NAME SG-NFS-XP-1
b. Change the PACKAGE_TYPE from FAILOVER to MULTI_NODE:
PACKAGE_TYPE MULTI_NODE
c.
# cmapplyconf -v -C /etc/cmcluster/cluster.conf -P /etc/cmcluster/nfs/nfs-export.conf 4. Run the export package on a single server with the following command # cmrunpkg -v SG-NFS-XP-1 5. You can verify the export package is running with the cmviewcl command.
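For example, a minimal sketch of the verification step (the exact output format depends on your Serviceguard release, so simply look for the SG-NFS-XP-1 package in the listing):
# cmviewcl -v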
4. Edit the hanfs.sh scripts (hanfs.1.sh and hanfs.2.sh) if you want to monitor NFS services (by running the NFS monitor script). To monitor NFS services, set the NFS_SERVICE_NAME and NFS_SERVICE_CMD variables:
NFS_SERVICE_NAME[0]=nfs1.monitor
NFS_SERVICE_CMD[0]=/etc/cmcluster/nfs/nfs1.mon
In hanfs.2.sh, set NFS_SERVICE_NAME[0] to nfs2.monitor and set NFS_SERVICE_CMD[0] to /etc/cmcluster/nfs/nfs2.mon. If you do not want to monitor NFS services, leave these variables commented out.
5. Edit the nfs.
3. Verify and apply the cluster package configuration files on a single server # cmapplyconf -v -C /etc/cmcluster/cluster.conf -P /etc/cmcluster/nfs/nfs1.conf -P /etc/cmcluster/nfs/nfs2.conf 4.
Figure 2-3 Serviceguard NFS over CFS with file locking Configuring a Serviceguard NFS failover package Configuring a Serviceguard NFS failover package for a CFS environment is similar to configuring the package for a non-CFS environment. The main difference is that you must configure one failover package for each server that exports CFS. Use the following procedure to configure a failover package. 1.
a. Set the exported directory in hanfs.1.sh:
XFS[0]="/cfs1"
b. Set XFS[0] to /cfs2 in hanfs.2.sh.
c. If you want to monitor NFS services (by running the NFS monitor script), set the NFS_SERVICE_NAME and NFS_SERVICE_CMD variables in hanfs.1.sh:
NFS_SERVICE_NAME[0]=nfs1.monitor
NFS_SERVICE_CMD[0]=/etc/cmcluster/nfs/nfs1.mon
d. In hanfs.2.sh, set NFS_SERVICE_NAME[0] to nfs2.monitor and set NFS_SERVICE_CMD[0] to /etc/cmcluster/nfs/nfs2.mon.
5.
6. Edit the nfs.conf scripts (nfs1.conf and nfs2.conf).
a. Specify the package name:
PACKAGE_NAME SG-NFS1
b. In nfs2.conf, set the PACKAGE_NAME to SG-NFS2.
c. Set the NODE_NAME variables for each node that can run the package. The first NODE_NAME should specify the primary node, followed by adoptive node(s) in the order in which they will be tried:
NODE_NAME thyme
NODE_NAME basil
d.
3 Sample Configurations
This chapter gives sample cluster configuration files, package configuration files, and control scripts for the following configurations:
• Example One - Three-Server Mutual Takeover: This configuration has three servers and three Serviceguard NFS packages. Each server is the primary node for one package and an adoptive node for the other two packages.
• Example Two - One Adoptive Node for Two Packages: This configuration uses the package control option and enables the File Lock Migration feature.
• Example Three - Three-Server Cascading Failover: Each package has more than one adoptive node.
• Example Four - Two-Server NFS Cross-Mount: Two servers NFS-mount each other's file systems and act as adoptive nodes for each other's packages.
Figure 3-1 Three-Server Mutual Takeover Figure 3-2 shows the three-server mutual takeover configuration after host basil has failed and host sage has adopted pkg02. Dotted lines indicate which servers are adoptive nodes for the packages.
Figure 3-2 Three-Server Mutual Takeover after One Server Fails Cluster Configuration File for Three-Server Mutual Takeover This section shows the cluster configuration file (cluster.conf) for this configuration example. The comments are not shown. CLUSTER_NAME MutTakOvr FIRST_CLUSTER_LOCK_VG /dev/nfsu01 NODE_NAME thyme NETWORK_INTERFACE lan0 HEARTBEAT_IP 15.13.119.146 NETWORK_INTERFACE lan1 FIRST_CLUSTER_LOCK_PV /dev/dsk/c0t1d0 NODE_NAME basil NETWORK_INTERFACE lan0 HEARTBEAT_IP 15.13.113.
AUTO_START_TIMEOUT 600000000 NETWORK_POLLING_INTERVAL 2000000 MAX_CONFIGURED_PACKAGES 3 VOLUME_GROUP VOLUME_GROUP VOLUME_GROUP /dev/nfsu01 /dev/nfsu02 /dev/nfsu03 Package Configuration File for pkg01 This section shows the package configuration file (nfs1.conf) for the package pkg01 in this sample configuration. The comments are not shown.
The hanfs.sh Control Script This section shows the NFS control script (hanfs1.sh) for the pkg01 package in this sample configuration. This example includes only the user-configured part of the script; the executable part of the script and most of the comments are omitted. This example does not enable the File Lock Migration feature. XFS[0]=/hanfs/nfsu011 NFS_SERVICE_NAME[0]="nfs1.monitor" NFS_SERVICE_CMD[0]="/etc/cmcluster/nfs/nfs.mon" NFS_FILE_LOCK_MIGRATION=0 NFS_FLM_SCRIPT="${0%/*}/nfs.flm"
IP[0]=15.13.112.244 SUBNET[0]=15.13.112.0 The hanfs.sh Control Script This section shows the NFS control script (hanfs2.sh) for the pkg02 package in this sample configuration. This example includes only the user-configured part of the script; the executable part of the script and most of the comments are omitted. This example does not enable the File Lock Migration feature. XFS[0]=/hanfs/nfsu021 NFS_SERVICE_NAME[0]="nfs2.monitor" NFS_SERVICE_CMD[0]="/etc/cmcluster/nfs/nfs.
VXVOL="vxvol -g \$DiskGroup startall" #Default FS_UMOUNT_COUNT=1 FS_MOUNT_RETRY_COUNT=0 IP[0]=15.13.114.245 SUBNET[0]=15.13.112.0 The hanfs.sh Control Script This section shows the NFS control script (hanfs3.sh) for the pkg03 package in this sample configuration. This example includes only the user-configured part of the script; the executable part of the script and most of the comments are omitted. This example does not enable the File Lock Migration feature.
Figure 3-4 One Adoptive Node for Two Packages after One Server Fails This sample configuration also enables the File Lock Migration feature. Cluster Configuration File for Adoptive Node for Two Packages with File Lock Migration This section shows the cluster configuration file (cluster.conf) for this configuration example. The comments are not shown. CLUSTER_NAME PkgCtrl FIRST_CLUSTER_LOCK_VG /dev/nfsu01 NODE_NAME thyme NETWORK_INTERFACE lan0 HEARTBEAT_IP 15.13.119.
VOLUME_GROUP VOLUME_GROUP /dev/nfsu01 /dev/nfsu02 Package Configuration File for pkg01 This section shows the package configuration file (nfs1.conf) for the package pkg01 in this sample configuration. The comments are not shown.
The function customer_defined_run_cmds calls the cmmodpkg command with the package control option (-d). This command prevents the host that is running pkg01 from adopting pkg02. The ampersand (&) causes the cmmodpkg command to run in the background. It must run in the background to allow the control script to complete. There is a short time, after one primary node has failed but before the cmmodpkg command has executed, when the other primary node can fail and the adoptive node will adopt its package.
AUTO_RUN YES LOCAL_LAN_FAILOVER_ALLOWED YES NODE_FAIL_FAST_ENABLED NO RUN_SCRIPT RUN_SCRIPT_TIMEOUT HALT_SCRIPT HALT_SCRIPT_TIMEOUT /etc/cmcluster/nfs/nfs2.cntl NO_TIMEOUT /etc/cmcluster/nfs/nfs2.cntl NO_TIMEOUT SERVICE_NAME nfs2.monitor SERVICE_FAIL_FAST_ENABLED NO SERVICE_HALT_TIMEOUT 300 SUBNET 15.13.112.0 NFS Control Scripts for pkg02 The nfs.cntl Control Script This section shows the NFS control script (nfs2.cntl) for the pkg02 package in this sample configuration.
part of the script and most of the comments are omitted. This example enables the File Lock Migration feature. XFS[0]=/hanfs/nfsu021 NFS_SERVICE_NAME[0]="nfs2.monitor" NFS_SERVICE_CMD[0]="/etc/cmcluster/nfs/nfs2.mon" NFS_FILE_LOCK_MIGRATION=1 NFS_FLM_SCRIPT="${0%/*}/nfs2.flm" NFS File Lock Migration and Monitor Scripts for pkg02 The nfs.flm Script This section shows the NFS File Lock Migration (nfs2.flm) script for the pkg02 package in this sample configuration.
Figure 3-5 Cascading Failover with Three Servers Figure 3-6 shows the cascading failover configuration after host thyme has failed. Host basil is the first adoptive node configured for pkg01, and host sage is the first adoptive node configured for pkg02. Figure 3-6 Cascading Failover with Three Servers after One Server Fails Cluster Configuration File for Three-Server Cascading Failover This section shows the cluster configuration file (cluster.conf) for this configuration example.
CLUSTER_NAME Cascading FIRST_CLUSTER_LOCK_VG /dev/nfsu01 NODE_NAME thyme NETWORK_INTERFACE lan0 HEARTBEAT_IP 15.13.119.146 NETWORK_INTERFACE lan1 FIRST_CLUSTER_LOCK_PV /dev/dsk/c0t1d0 NODE_NAME basil NETWORK_INTERFACE lan0 HEARTBEAT_IP 15.13.113.168 FIRST_CLUSTER_LOCK_PV /dev/dsk/c1t1d0 NODE_NAME sage NETWORK_INTERFACE lan0 HEARTBEAT_IP 15.13.115.
NFS Control Scripts for pkg01 The nfs.cntl Control Script This section shows the NFS control script (nfs1.cntl) for the pkg01 package in this sample configuration. Only the user-configured part of the script is shown; the executable part of the script and most of the comments are omitted.
HALT_SCRIPT_TIMEOUT NO_TIMEOUT SERVICE_NAME nfs2.monitor SERVICE_FAIL_FAST_ENABLED NO SERVICE_HALT_TIMEOUT 300 SUBNET 15.13.112.0 NFS Control Scripts for pkg02 The nfs.cntl Control Script This section shows the NFS control script (nfs2.cntl) for the pkg02 package in this sample configuration. Only the user-configured part of the script is shown; the executable part of the script and most of the comments are omitted.
Figure 3-7 Two Servers with NFS Cross-Mounts Figure 3-8 shows two servers with NFS cross-mounted file systems after server thyme has failed. The NFS mounts on server basil are not interrupted.
Figure 3-8 Two Servers with NFS Cross-Mounts after One Server Fails Cluster Configuration File for Two-Server NFS Cross-Mount This section shows the cluster configuration file (cluster.conf) for this configuration example. The comments are not shown. CLUSTER_NAME XMnt FIRST_CLUSTER_LOCK_VG /dev/nfsu01 NODE_NAME thyme NETWORK_INTERFACE lan0 HEARTBEAT_IP 15.13.119.146 NETWORK_INTERFACE lan1 FIRST_CLUSTER_LOCK_PV /dev/dsk/c0t1d0 NODE_NAME basil NETWORK_INTERFACE lan0 HEARTBEAT_IP 15.13.113.
Package Configuration File for pkg01 This section shows the package configuration file (nfs1.conf) for the package pkg01 in this sample configuration. The comments are not shown. PACKAGE_NAME PACKAGE_TYPE pkg01 FAILOVER FAILOVER_POLICY FAILBACK_POLICY CONFIGURED_NODE MANUAL NODE_NAME NODE_NAME thyme basil AUTO_RUN YES LOCAL_LAN_FAILOVER_ALLOWED YES NODE_FAIL_FAST_ENABLED NO RUN_SCRIPT RUN_SCRIPT_TIMEOUT HALT_SCRIPT HALT_SCRIPT_TIMEOUT /etc/cmcluster/nfs/nfs1.
in the /etc/fstab file, the package might not be active yet when the servers tried to mount the file system at system boot. By configuring the NFS control script to NFS-mount the file system, you ensure that the package is active before the mount command is invoked. The first line in the customer_defined_run_cmds function executes the nfs1_xmnt script locally on host thyme (the primary node for pkg01). The second line, beginning with remsh, executes the nfs1_xmnt script remotely on host basil.
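Based on that description, the customer_defined_run_cmds function in nfs1.cntl might look something like the following sketch; the script's full path is an assumption, while the nfs1_xmnt name, the remsh command, and the host names come from this example:
function customer_defined_run_cmds
{
# NFS-mount the package's file systems locally on the node running pkg01 (thyme)
/etc/cmcluster/nfs/nfs1_xmnt
# NFS-mount them on the other server (basil) via a remote shell
remsh basil /etc/cmcluster/nfs/nfs1_xmnt
}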
RUN_SCRIPT_TIMEOUT HALT_SCRIPT HALT_SCRIPT_TIMEOUT NO_TIMEOUT /etc/cmcluster/nfs/nfs2.cntl NO_TIMEOUT SERVICE_NAME nfs2.monitor SERVICE_FAIL_FAST_ENABLED NO SERVICE_HALT_TIMEOUT 300 SUBNET 15.13.112.0 NFS Control Scripts for pkg02 The nfs.cntl Control Script This section shows the NFS control script (nfs2.cntl) for the pkg02 package in this sample configuration. Only the user-configured part of the script is shown; the executable part of the script and most of the comments are omitted.
The hanfs.sh Control Script This section shows the NFS control script (hanfs2.sh) for the pkg02 package in this sample configuration. This example includes only the user-configured part of the script; the executable part of the script and most of the comments are omitted. This example does not enable the File Lock Migration feature. XFS[0]=/hanfs/nfsu021 NFS_SERVICE_NAME[0]="nfs2.monitor" NFS_SERVICE_CMD[0]="/etc/cmcluster/nfs/nfs.mon" NFS_FILE_LOCK_MIGRATION=0 NFS_FLM_SCRIPT="${0%/*}/nfs.flm"