Managing Serviceguard Extension for SAP on Linux (IA64 Integrity and x86_64) *T2392-90015* Printed in the US HP Part Number: T2392-90015 Published: March 2009
Legal Notices Copyright © 2000-2009 Hewlett-Packard Development Company, L.P. Serviceguard, Serviceguard Extension for SAP, Serviceguard Extension for RAC, Metrocluster and Serviceguard Manager are products of Hewlett-Packard Company, L.P., and all are protected by copyright. Confidential computer software. Valid license from HP required for possession, use, or copying. Consistent with FAR 12.211 and 12.
Table of Contents
Printing History 11
About this Manual 11
1 Understanding Serviceguard Extension for SAP on Linux 13
Designing Serviceguard Extension for SAP on Linux Cluster Scenarios
Why SAP Provides sapcpe 37
SHARED NFS: the NFS automounter 38
Summary of automount file systems 39
Overview Serviceguard packages
General Serviceguard Setup Changes 117
5 Serviceguard Extension for SAP on Linux Cluster Administration 119
Change Management 119
System Level Changes
List of Figures
1-1 Two-Package Failover with Mutual Backup Scenario 15
1-2 Two-Tier Client Configuration 17
1-3 Three-Tier Client Configuration 18
1-4 Dual-Stack Client Configuration
List of Tables
1 Editions and Releases 11
1-1 About the Two and Three Tier Layers 16
1-2 About the Two and Three Tier Layers 19
2-1 Understanding the Different SAP Environments and Components
Printing History
Table 1 Editions and Releases
Printing Date | Part Number | Serviceguard Extension for SAP on Linux | Operating System Releases
Oct/04 | T1227-90004 | x86_32 | RHEL 3, SLES 8
Dec/05 | T2392-90001 | IA64 Integrity | SLES 9
July/06 | T2392-90010 | IA64 Integrity and x86_64 | RHEL 4, SLES 9
Feb/08 | T2392-90013 | IA64 Integrity and x86_64 | RHEL 4, RHEL 5 and RHEL 5.1, SLES 9 and SLES 10
March/09 | T2392-90015 | IA64 Integrity and x86_64 | RHEL 4, RHEL 5 and RHEL 5.1, RHEL 5.
1 Understanding Serviceguard Extension for SAP on Linux HP Serviceguard Extension for SAP on Linux (IA64 Integrity and x86_64) hereafter documented as HP Serviceguard Extension for SAP on Linux or SGeSAP/LX, extends HP Serviceguard's failover cluster capabilities to SAP application environments. Serviceguard Extension for SAP on Linux continuously monitors the health of each SAP cluster node and automatically responds to failures or threshold violations.
the pointed brackets < > indicate that the characters enclosed within are to be replaced by an installation specific value, for example the SAP System Identifier (SID). The instance number placeholder contains the installation number of a dialog instance.
Figure 1-1 Two-Package Failover with Mutual Backup Scenario A Serviceguard package is a description of resources such as file systems, storage volumes or network addresses, and of commands for starting, stopping or monitoring an application within a Serviceguard cluster. Each SGeSAP/LX package name should include the SAP System Identifier (SID) of the system to which the package belongs. It is strongly recommended to base the package naming on the naming conventions for the SGeSAP/LX package type.
be restarted triggered by a failover. A sample configuration in Figure 1-5 shows node1 with a failure, which causes the package containing the database and central instance to fail over to node2. A Quality Assurance System and additional Dialog Instances get shut down, before the database and Central Instance are restarted. NOTE: A J2EE Engine might also be part of the Central Instance. The JAVA instance will then automatically be moved with the package.
About Two-Tier Configurations A two-tier client server configuration consists of the database (Database Layer = DB ) and all the SAP applications (SAP Application Layer = CI ) running on the same physical machine, as one Serviceguard package (= dbci). Figure 1-2 Two-Tier Client Configuration NOTE: The SAPGUI does not get clustered. The benefits of a two-tier configuration include: 1. Fewer physical machines to manage and maintain 2.
About Three-Tier Configurations A three-tier client server configuration consists of database (Database Layer) on a machine, and some or all of the SAP applications (SAP Application Layer) running on one or more different machines. Figure 1-3 Three-Tier Client Configuration Benefits of a Three-Tier Configuration:
1. Making the SAP system more efficient by off-loading CPU intensive SAP processes to one or more machines.
2. Reduce failover times.
About SAP Three-Tier Dual-Stack Client Configurations A SAP dual stack configuration combines the SAP ABAP stack and the SAP J2EE (JAVA) stack into one infrastructure. The following figure shows an example of a three-tier implementation.
The Standalone Enqueue Service has the ability to mirror its memory content to an Enqueue Replication Service instance, which must be running on a remote node in the cluster. Both Standalone Enqueue Service and Enqueue Replication Service are configured as Serviceguard packages. In case of a failure of the node running the Standalone Enqueue Service, the package configured for the Standalone Enqueue Service switches over to the node running the Enqueue Replication package.
NOTE: Multiple dialog instances can be configured to start from a single SGeSAP package, in contrast to all the other package types, which start exactly one component from the appropriate package; e.g. an SGeSAP database package can only start one database and an SCS package can only start one Enqueue and Message Server. Dialog Instance packages allow an uncomplicated approach to achieve abstraction from the hardware layer. It is possible to shift around Dialog Instance packages between servers at any given time.
Handling of Redundant Dialog Instances Non-critical SAP WAS ABAP Application Servers can be run on HP-UX, SUSE or RedHat Linux application server hosts. These hosts do not need to be part of the Serviceguard cluster. Even if the additional SAP WAS services are run on nodes in the Serviceguard cluster, they are not necessarily protected by Serviceguard packages.
The Serviceguard package running the Standalone Enqueue should be configured to fail over to the node running the Enqueue Replication Service. In the event of a failure, the Serviceguard package for the Standalone Enqueue will then fail over to that node, where it can reclaim the replicated SAP locks that the Enqueue Replication Service has collected.
Figure 1-6 Replicated Enqueue Clustering for ABAP or JAVA Instances The integrated version of the Enqueue Service is not able to utilize replication features. To be able to run the replicated enqueue feature, the DVEBMGS Instance needs to be manually split into a standard Central Instance (CI) and an ABAP System Central Services Instance (ASCS). NOTE: There are two SAP Enqueue variants possible with an SAP installation.
Serviceguard Extension for SAP on Linux File Structure The Linux distribution of Serviceguard uses a special file, /etc/cmcluster.config, to define the path locations for configuration and log files within the Linux file system. These paths differ depending on the Linux distribution. NOTE: In this document, references to ${SGCONF} can be replaced by the definition of the variable that is found in this file /etc/cmcluster.config.
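To check which value applies on a particular node, the definition can be sourced from that file and displayed; for example (the file name follows the convention stated in the NOTE above):
. /etc/cmcluster.config
echo ${SGCONF}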
Package Directory Content for the One-Package Model NOTE: In this document, references to ${SGCONF} can be replaced by the definition of the variable that is found in this file /etc/cmcluster.config. The default values are: • SGCONF=/opt/cmcluster/conf - for SUSE Linux • SGCONF=/usr/local/cmcluster/conf - for Redhat Linux In the one package model, the SAP functionality-database and central instance along with the highly available NFS server have all been placed in one Serviceguard package.
Package Directory Content for the Two-Package Model NOTE: In this document, references to ${SGCONF} can be replaced by the definition of the variable that is found in this file /etc/cmcluster.config. The default values are: • SGCONF=/opt/cmcluster/conf - for SUSE Linux • SGCONF=/usr/local/cmcluster/conf - for Redhat Linux In the two package model, the SAP functionality is separated into two Serviceguard packages. One for the database (DB) and the other for the SAP central instance (CI).
Figure 1-7 Configuration Files Needed to Build a Cluster At the top of this structure, SGeSAP/LX provides a generic function pool and a generic package runtime logic file for SAP applications. liveCache packages have a unique runtime logic file. All other package types are covered with the generic runtime logic.
2 Planning a File System Layout for SAP in a Serviceguard/LX Cluster Environment The goal of this chapter is to determine the correct storage layout for SAP file systems in a Serviceguard cluster configuration. Overview The following table outlines important concepts that you should understand before using Serviceguard with SAP. Each step is cumulative, that is, each step builds upon the previous one.
SAP Web AS Programming Option Description SAP Web AS JAVA • This system option consists of the J2EE engine only and auxiliary services. There is no ABAP engine installed. • The J2EE JAVA engine requires its own database schema. Therefore a database instance installation is mandatory. The following sections describe each step in greater detail. About SAP Components The following terms are used to describe the components of an SAP system.
Term Definition liveCache A liveCache instance. SAP liveCache technology is an enhancement of the MaxDB database system and was developed to manage logistical solutions such as SAP SCM/APO (Supply Chain Management / Advanced Planning Optimizer). In contrast to MaxDB and for performance reasons the database system is located in the main memory instead of being file system based.
About Storage Options For each of the above listed file system scenarios the following questions need to be answered:
1. Whether it needs to be kept as a LOCAL copy on internal disks of each node of the cluster. The file system requires a LOCAL mount point.
2. Which of the file systems have to be SHARED by all cluster nodes on a SAN storage, but have to be mounted in (SHARED EXCLUSIVE) mode by the cluster node that the SAP instance failed over to and will run that instance.
LOCAL mount There are several reasons a LOCAL mount point is required in an SAP configuration: • The first reason is that some files used in an SAP configuration are not cluster aware. As an example consider the case of SAP file /usr/sap/tmp/coll.put. This file contains system performance data collected by the SAP performance collector.
the access to the storage devices on the first cluster node; activating the access to the storage devices on the second node, mounting the file systems will cause the startup of the application to fail. NOTE: In prior SGeSAP/LX documentation this category was called "Instance specific." Configuration Scenarios In the following sections the file systems used for several different SAP configuration scenarios will be analyzed. Other combinations of scenarios are possible.
/oracle/C11/sapdatan /oracle/C11/origlogA /oracle/C11/origlogB /oracle/C11/mirrlogA /oracle/C11/mirrlogB One ABAP CI, one ABAP DI and one DB This configuration scenario consists of one SAP Central Instance (= CI), one SAP Dialog Instance (=DI) and one Database instance (=DB). So one additional dialog instance (= D01) will be added when compared to the previous example. The CI again is identified by the following name "DVEBMGS00", the dialog instance is identified by the name "D01" in the file system path.
Two ABAP DI's and one ABAP SCS and one DB Using the terminology of the previous example, the next configuration scenario consists of one ABAP Central instance (= ABAP CI), one ABAP Dialog instance (ABAP DI), one ABAP SAP Central Services (ABAP SCS) and one Database instance (=DB). So compared to the previous example one instance (ABAP SCS) will be added. This instance is identified by the name "ASCS02". By adding the ABAP SCS, the naming conventions of the SAP components in this scenario have changed.
Two ABAP DI's, one ABAP SCS, one JAVA CI, one JAVA SCS, one JAVA REP, and one DB Compared to the previous example one JAVA enqueue replication server instance (JAVA REP) will be added. The JAVA REP instance is identified by the name "ERS13". For this instance one new file system is required: /usr/sap/C11/ERS13. NOTE: In previous versions of SAP, the SAP instance number for SCS03 (03) had to be the same as for the replication instance ERS03 (03).
In clustered SAP environments it is recommended to use local copies of the SAP executables. Local executables limit the dependencies between the SAP package and the database package of a two package installation to a minimum. They allow SAP packages to shutdown even if HA NFS is not available. To automatically keep the local executables of any instance in sync with the latest installed patches, SAP developed the sapcpe mechanism.
To configure the NFS automounter the file system /usr/sap/trans will again be used as an example. The goal is to access /usr/sap/trans transparently from all cluster nodes. The initial approach for configuring the automounter on Linux: 1. As in the static NFS case from above: On the cluster node acting as the NFS server the /usr/sap/trans volumes are mounted under an /export directory and exported to the NFS clients.
Using the data from the SHARED NFS sections from above, the following lines are added to the automounter map file /etc/auto.import.
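For illustration only (the exact map key and mount options depend on the distribution and on the chosen layout; a verified example follows in Chapter 3), such an entry relates the transport directory to its exported counterpart on the relocatable NFS address:
trans -nosymlink <relocatable NFS address>:/export/usr/sap/trans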
Component Description liveCache liveCache instance (lc) SAPNFS NFS package (sapnfs) that provides NFS file systems to any of the above SAP instances. As described earlier this Serviceguard package provides the cluster wide SHARED NFS mount points required by any of the above SAP instance types. Some of the above listed Serviceguard packages can also be configured as a combination: (dbci, dbjci, dbcijci, ...).
Uppercase SAP instance name (RIS, NFS) • Communication with a Serviceguard package is based on virtual IP addresses dbris, ciris, d01ris and nfsris. Each package has its own IP address. • Each package has one or more file systems (storage volumes) that get mounted when a package is started and unmounted when a package is stopped. • Normally any of the packages can be run or relocated on any of the cluster nodes in any combination.
installed on the same host. The files in /sapdb/programs have to be of the newest version that any MaxDB on the cluster nodes uses. Files in /sapdb/programs are downwards compatible. • /sapdb/data/config: This directory is also shared between instances, though you can find lots of files that are Instance specific in here, e.g. /sapdb/data/config/.* According to SAP this path setting is static. • /sapdb/data/wrk: The working directory for MaxDB.
Mount Point | Access Type | SGeSAP/LX Package
link /sapdb/data -> /import/data; nfsreloc:/export/sapdb/data mounted on /import/data | NFS | sapnfs or db or dbci
link /sapdb/programs -> /import/programs; nfsreloc:/export/sapdb/programs mounted on /import/programs | NFS | sapnfs or db or dbci
link /var/spool/sql/ini -> /import/ini; nfsreloc:/export/var/spool/sql/ini mounted on /import/ini | NFS | sapnfs or db or dbci
Oracle Database Instance Storage Considerations Oracle server directories reside below /oracle/<SID>.
or dbci). The setup for these directories follows the "on top" mount approach, i.e.
3 Step-by-Step Cluster Conversion This chapter describes in detail how to implement a SAP cluster using Serviceguard and Serviceguard Extension for SAP (Serviceguard Extension for SAP on Linux). It is written in the format of a step-by-step guide. It gives examples for each task in great detail. Actual implementations might require a slightly different approach. Many steps synchronize cluster host configurations or virtualized SAP instances manually.
• Database Configuration • SAP WAS System Configuration The tasks are presented as a sequence of steps.
For example, for a Serviceguard package configuration file: dbc11.config, when modifying the RUN_SCRIPT variable, you would substitute: RUN_SCRIPT/dbC11.name with the following: • For SUSE - RUN_SCRIPT /opt/cmcluster/conf/dbC11.name • For Redhat - RUN_SCRIPT /usr/local/cmcluster/conf/dbC11.name You can use the ${SGCONF} variable, when modifying the Serviceguard package control script file. For example for a Serviceguard package control script file, dbC11.control.
The Central System and Distributed System installations build a traditional SAP landscape. They will install a database and a monolithic Central Instance. Exceptions are JAVA-only based installations. NOTE: For JAVA-only based installations the only possible installation option is a High Availability System installation.
PACKAGE_NAME, NODE_NAME, RUN_SCRIPT, HALT_SCRIPT, SUBNET Specify NODE_NAME entries for all hosts on which the package should be able to run. Specify the control scripts that were created earlier as run and halt scripts:
RUN_SCRIPT ${SGCONF}/<SID>/<pkg>.control.script
HALT_SCRIPT ${SGCONF}/<SID>/<pkg>.control.script
Specify subnets to be monitored in the SUBNET section. In the ${SGCONF}/<SID>/<pkg>.control.script file(s), there is a section that defines a virtual IP address array.
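As an illustrative sketch (the package name, node names and subnet are examples only, based on the SID C11 and the SUSE path convention used elsewhere in this document), the relevant entries of a package configuration file could look as follows:
PACKAGE_NAME dbciC11
NODE_NAME node1
NODE_NAME node2
RUN_SCRIPT /opt/cmcluster/conf/C11/dbciC11.control.script
HALT_SCRIPT /opt/cmcluster/conf/C11/dbciC11.control.script
SUBNET 16.41.101.0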
SAP Netweaver 2004s High Availability System Installer Installation Step: NW04S1330 The installation is done using the virtual IP provided by the Serviceguard package. SAPINST can be invoked with a special parameter called SAPINST_USE_HOSTNAME. This prevents the installer routines from comparing the physical hostname with the virtual address and drawing wrong conclusions. The installation of the entire SAP WAS 7.
cd <installation master DVD>/IM_<OS>/SAPINST/UNIX/<OS>
./sapinst SAPINST_USE_HOSTNAME=<relocatable hostname>
Follow the instructions of the installation process and provide the required details when asked by SAPINST. Provide the virtual hostname to any instance related query. The instances are now able to run on the installation host, once the corresponding Serviceguard packages have been started.
Create standard package configuration and control files for each package. The SAP JAVA instances are mapped to Serviceguard Extension for SAP on Linux package types as follows: The SCS Instance requires a (jci) package type. The database requires a (db) package type. The package files are recommended to be named <pkg>.control.script for a control file and <pkg>.config for a configuration file. The packages are created with the following commands:
cmmakepkg -s ${SGCONF}/<SID>/<pkg>.control.script
cmmakepkg -p ${SGCONF}/<SID>/<pkg>.config
Be sure to propagate changes to all nodes in the cluster. Preparation Step: NW04J1310 The SAP J2EE engine needs a supported and tested JAVA SDK on all cluster nodes. Check the installed JAVA SDK with the requirement as of the NW04 Master Guide Part 1. Be sure to install the required SDK on all nodes in the cluster.
The SAPINST_USE_HOSTNAME option can be set as an environment variable, using export or setenv commands. It can also be passed to SAPINST as an argument.
cd <installation DVD>/<component>_OS/SAPINST/UNIX/<OS>
./sapinst <installation DVD>/<component>_OS/SAPINST/UNIX/<OS>/product_ha.catalog SAPINST_USE_HOSTNAME=<reloc_jci>
ASCS installation Follow the instructions of the ASCS installation process and provide the required details when asked by SAPINST.
The SAP installer should now start. Follow the instructions of the JAVA Central Instance installation process. JAVA dialog instances get installed in the same fashion. Afterwards, the virtualized installation for SAP J2EE Engine 6.40 should have completed, but the cluster still needs to be configured. The instances are now able to run on the installation host, provided the corresponding Serviceguard packages got started up front.
su - <sid>adm
mkdir /usr/sap/<SID>/ASCS<INSTNR>
Replicated Enqueue Conversion: RE020 A volume group needs to be created for the ASCS instance. The physical device(s) should be created as LUN(s) on shared storage. Storage connectivity is required from all nodes of the cluster that should run the ASCS. For the volume group, one logical volume should get configured. For the required size, refer to the capacity consumption of /usr/sap/<SID>/DVEBMGS<INSTNR>.
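The following command sequence is one possible way to do this with LVM; the device name, volume group name and size are examples only and have to be adapted to the local storage layout (shown for the SID SX1 and instance ASCS02 used later in this chapter):
pvcreate /dev/sdd1
vgcreate vgSX1ascs /dev/sdd1
lvcreate -L 2048M -n lvascs vgSX1ascs
mkfs.ext3 /dev/vgSX1ascs/lvascs
mount /dev/vgSX1ascs/lvascs /usr/sap/SX1/ASCS02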
# prevent shmem pool creation
#----------------------------------
ipc/shm_psize_16 = 0
ipc/shm_psize_24 = 0
ipc/shm_psize_34 = 0
ipc/shm_psize_66 = 0
This template shows the minimum settings. Scan the old <SID>_DVEBMGS<INSTNR>_<hostname> profile to see whether there are additional parameters that apply to either the Enqueue Service or the Message Service. Individual decisions need to be made whether they should be moved to the new profile.
INSTANCE_NAME = D<INSTNR>
SAPSYSTEM = <INSTNR>
rdisp/vbname = <hostname>_<SID>_<INSTNR>
SAPLOCALHOST = <hostname>
SAPLOCALHOSTFULL = <hostname>.domain
The exact changes depend on the individual appearance of the file for each installation. The startup profile is also individual, but usually can be created similar to the default startup profile of any Dialog Instance.
Replicated Enqueue Conversion: RE095 Copy SAP executables into the ERS exe directory. This is described in detail in the SAP NetWeaver Library under: http://help.sap.com/saphelp_nw2004s/helpdata/en/de/cf853f11ed0617e10000000a114084/content.htm For convenience the following example script is provided which copies the required files: ################################################################## echo "usage for ABAP ASCS - example: echo " setup_ers_files.
# NOTE: the libxxx.30 is SAP version dependent # change as required ################################################################### for i in enqt \ enrepserver \ ensmon \ libicudata.so.30 \ libicui18n.so.30 \ libicuuc.so.30 \ libsapu16_mt.so \ libsapu16.so \ librfcum.so \ sapcpe \ sapstart \ sapstartsrv \ sapcontrol do echo "cp $S/$i ${D}/exe" cp $S/$i ${D}/exe echo $i >> ${ERSLST} done echo "servicehttp >> ${ERSLST}" echo servicehttp >> ${ERSLST} echo "ers.lst >> ${ERSLST}" echo ers.
#-------------------------------------------------------------------
enque/process_location = REMOTESA
rdisp/enqname = $(rdisp/myname)
enque/serverinst = $(SCSID)
enque/serverhost = <[J]CIRELOC>
Here is an example template for the startup profile START_ERS<INSTNR>_<relocatable name>:
#-----------------------------------------------------------------------
SAPSYSTEM = <ERS instance number>
SAPSYSTEMNAME = <SID>
INSTANCE_NAME = ERS<ERS instance number>
#-------------------------------------------------------------------
# Special settings for this
2. a machine different from this, the database vendor dependent steps have to be done on the host on which the database was installed.
3. Cluster Node Synchronization - This section consists of steps performed on the backup nodes. This ensures that the primary node and the backup nodes have a similar environment.
4. Cluster Node Configuration - This section consists of steps performed on all the cluster nodes, regardless of whether the node is a primary node or a backup node.
/usr/sap/<SID>.new
cd /
umount /usr/sap/<SID>
MaxDB Database Step: MD040 This step can be skipped for MaxDB instances starting with versions 7.6 and higher. NOTE: Make sure you have mounted a sharable logical volume on /sapdb/<DBSID>/wrk as discussed in section MaxDB Storage Considerations in Chapter 2. Change the path of the runtime directory of the MaxDB and move the files to the new logical volume accordingly.
cd /sapdb/data/wrk/<DBSID>
find . -depth -print | cpio -pd /sapdb/<DBSID>/wrk
cd ..
Figure 3-1 sapcpe Mechanism for Executables To create local executables, the SAP file system layout needs to be changed. The original link /usr/sap/<SID>/SYS/run needs to be renamed to /usr/sap/<SID>/SYS/ctrun. A new local directory /usr/sap/<SID>/SYS/run will then be required to store the local copy. It needs to be initialized by copying the files sapstart and saposcol from the central executable directory /sapmnt/<SID>/exe. Make sure to match owner, group and permission settings.
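A sketch of the corresponding commands, shown for an example SID C11 (ownership and permissions of the copied files must afterwards be compared with the originals):
su - c11adm
cd /usr/sap/C11/SYS
# rename the original link to the central executable directory
mv run ctrun
# create the local directory and seed it with sapstart and saposcol
mkdir run
cp /sapmnt/C11/exe/sapstart run
cp /sapmnt/C11/exe/saposcol run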
Cluster Node Synchronization NOTE: Repeat the steps in this section for each node of the cluster that is different than the primary host. Logon as root to the system where the SAP Central Instance is installed (primary host) and prepare a logon for each of its backup hosts - if not already available. Installation Step: IS070 Open the groupfile file, /etc/group, on the primary side.
The following statement should automate this activity for standard directory contents. Do not use a line break within the awk statement:
su - <sid>adm
ls -a | awk '/<primary hostname>/ { system( sprintf( "mv %s %s\n", $0, gensub("<primary hostname>", "<secondary hostname>", 1 ))) }'
exit
Never use the relocatable address in these filenames. If an application server was already installed, do not overwrite any files which will start the application server.
Use ftp(1) to copy the file over to the secondary node. On the secondary node:
su - <sid>adm
mkdir -p /usr/sap/<SID>/SYS
cd /usr/sap/<SID>/SYS
cpio -id < <archive file containing DVEBMGS<INSTNR>>
Installation Step: IS210 Create a local directory for the saposcol temporary data on the possible backup nodes.
Cluster Node Configuration NOTE: Repeat the steps in this section for each node of the cluster. Installation Step: IS241 Logon as root. Serviceguard Extension for SAP on Linux needs remote login to be enabled on all cluster hosts. The traditional way to achieve this is via remote shell commands. If security concerns prohibit this, it is also possible to use secure shell access instead. If you are planning to use the traditional remote shell access adjust the security settings in /etc/pam.
rpm -q serviceguard
rpm -q nfs-toolkit
rpm -q sgesap-toolkit
rpm -q sgcmon
rpm -q pidentd
If the rpm packages are missing install them with the following commands:
rpm -Uhv serviceguard-<version>.product.<distribution>.<arch>.rpm
rpm -Uhv nfs-toolkit-<version>.product.<distribution>.<arch>.rpm
rpm -Uhv sgesap-toolkit-<version>.product.<distribution>.<arch>.rpm
rpm -Uhv sgcmon-<version>.product.<distribution>.<arch>.rpm
rpm -Uhv pidentd-<version>.product.<distribution>.<arch>.rpm
After your volume groups have been created on a node, back them up using vgcfgbackup, then comment out (there are two places in this file that you need to comment out) the following lines in the /etc/rc.d/rc.sysinit file: if [ -e /proc/lvm -a -x /sbin/vgchange -a -f /etc/lvmtab ]; then action $"Setting up LVM:" /sbin/vgscan && /sbin/vgchange -a -y fi Installation Step: IS392 Check chapter 2 for more automounter details.
1. Unmount any file systems that will be part of the automounter configuration. Make sure that these file systems are also commented out of /etc/fstab on all cluster nodes. Some typical automount file systems in an SAP environment are:
SAP instance:
umount /usr/sap/trans
umount /sapmnt/<SID>
MaxDB instance:
umount /sapdb/programs
umount /sapdb/data
umount /var/spool/sql/ini
2. Create new directories directly below /import.
<relocatable NFS address>:/export/usr/sap/trans
It is important that the nosymlink option is specified. The nosymlink option does not create symbolic links to local directories if the NFS server file system is local. Instead it always mounts from the virtual IP address <relocatable NFS address>. Therefore a cluster node can be an NFS server and client at the same time and the NFS server file systems can be dynamically relocated to the other cluster nodes. 5. After completing the automounter changes restart the automounter with /etc/rc.
cp ${SAPSTAGE}/sap.functions ${SGCONF}/sap.functions
cp ${SAPSTAGE}/SID/sap.config ${SGCONF}/<SID>/sap.config
cp ${SAPSTAGE}/SID/customer.functions ${SGCONF}/<SID>/customer.functions
cp ${SAPSTAGE}/SID/sapwas.sh ${SGCONF}/<SID>/sapwas.sh
Enter the package directory ${SGCONF}/<SID>. For each Serviceguard package that will be configured create: • a Serviceguard configuration template (xxx.config) • a Serviceguard control script template (xxx.control.script)
RUN_SCRIPT ${SGCONF}/<SID>/ci.control.script
HALT_SCRIPT ${SGCONF}/<SID>/ci.control.script
d01.config:
RUN_SCRIPT ${SGCONF}/<SID>/d01.control.script
HALT_SCRIPT ${SGCONF}/<SID>/d01.control.script
Optional Step: OS440 Serviceguard also allows the monitoring of resources and processes that belong to a Serviceguard package. The terminology used to describe this capability is called a "Serviceguard monitoring service".
First, the monitoring scripts need to be copied from the staging directory ${SAPSTAGE} to the current SGeSAP/LX package directory below ${SGCONF}.
cp ${SAPSTAGE}/SID/sapms.mon ${SGCONF}/<SID>/sapms.mon
Example entries in ciC11.control.script:
SERVICE_NAME[0]="ciC11mon"
SERVICE_CMD[0]="${SGCONF}/C11/sapms.mon"
SERVICE_RESTART[0]=""
Optional Step: OS460 It is recommended to set AUTOSTART_CMCLD=1 in ${SGCONF}/cmcluster.rc. This variable controls the automatic cluster start.
LV[0]="/dev/vgC11_oracleC11/lvol0"; \ FS[0]="/oracle/XI7"; \ FS_TYPE[0]="ext3"; \ FS_MOUNT_OPT[0]=""; \ FS_UMOUNT_OPT[0]=""; \ FS_FSCK_OPT[0]="" # # # # # # # # # # IP ADDRESSES Specify the IP and Subnet address pairs which are used by this package. Uncomment IP[0]="" and SUBNET[0]="" and fill in the name of your first IP and subnet address. You must begin with IP[0] and SUBNET[0] and increment the list in sequence. For example, if this package uses an IP of 192.10.25.12 and a subnet of 192.10.25.
# FILESYSTEMS LV[0]="/dev/vgsapdbLXM/lvsapdbLXM"; \ FS[0]="/sapdb/LXM"; \ FS_TYPE[0]="reiserfs"; \ FS_MOUNT_OPT[0]="-o rw"; \ FS_UMOUNT_OPT[0]=""; \ FS_FSCK_OPT[0]="" # IP ADDRESSES IP[0]="16.41.101.163" SUBNET[0]="16.41.101.0" # START OF CUSTOMER DEFINED FUNCTIONS function customer_defined_run_cmds { . /usr/local/cmcluster/conf/LXM/sapwas.sh start test_return 51 } function customer_defined_halt_cmds { . /usr/local/cmcluster/conf/LXM/sapwas.
package is the last one stopped. This prevents global directories from disappearing before all SAP components in the cluster have completed their shutdown. Installation Step: IS515 Verify that the setup works correctly to this point. Use the following commands to run or halt a SGeSAP/LX package: cmrunpkg dbC11 cmhaltpkg dbC11 NOTE: Normally the (sapnfs) Serviceguard package is required to be configured and running before any of the other SGeSAP/LX packages can be started.
FS[0]="/usr/sap/SX1/ASCS02"; \ FS_TYPE[0]="ext3"; \ FS_MOUNT_OPT[0]="-o rw" IP[0]="16.41.100.32" SUBNET[0]="16.41.100.0" SERVICE_NAME[0]="ascsSX1_enqmon" SERVICE_CMD[0]="/opt/cmcluster/conf/SX1/sapenq.mon monitor ascsSX1" SERVICE_RESTART[0]="" function customer_defined_run_cmds { . /opt/cmcluster/conf/SX1/sapwas.sh start test_return 51 } function customer_defined_halt_cmds { . /opt/cmcluster/conf/SX1/sapwas.sh stop test_return 52 } vi ers12SX1.
Start the packages with the following commands.
cmapplyconf -P ascsSX1.config
cmrunpkg -n clunode1 ascsSX1
cmmodpkg -e ascsSX1
cmapplyconf -P ers12SX1.config
cmrunpkg -n clunode2 ers12SX1
Repeat the above steps for the Java Standalone Enqueue and Java Enqueue Replication packages scsSX1 and ers13SX1. Serviceguard NFS Toolkit Configuration The cross-mounted file systems need to be added to a package that provides Highly Available NFS services.
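For orientation only, the directories such a package has to serve are the /export paths introduced in Chapter 2. They are shown below in plain exportfs syntax for the example SID C11; the actual entries belong into the configuration files of the Serviceguard NFS toolkit, and write access for root (no_root_squash) should only be granted with the caution noted below:
exportfs -o rw *:/export/sapmnt/C11
exportfs -o rw *:/export/usr/sap/trans
exportfs -o rw *:/export/sapdb/programs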
NOTE: It sometimes can be convenient for testing to allow root write access via NFS. This can be enabled with the flag -o rw, no_root_squash.... Use with caution. Serviceguard Extension for SAP on Linux Configuration - sap.config This section deals with the configuration of the SAP specifics of the Serviceguard packages in the SGeSAP/LX SAP specific configuration file ${SGCONF}/<SID>/sap.config.
NOTE: All xxxRELOC parameters listed above have to use the same syntax as the IP[]-array in the package control file. When using the automounter the NFSRELOC does not need to be specified. Subsection for the DB component: OS610 Serviceguard Extension for SAP on Linux performs activities specific to the database you use. Specify the underlying database vendor using the DB parameter. Possible options are: ORACLE and MaxDB.
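As an illustration (the IP address is an example, and the DBRELOC name is assumed to follow the xxxRELOC naming pattern described above), the database related entries could look like:
DB=ORACLE
DBRELOC=16.41.101.160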
Specify the instance name in AREPNAME, the instance ID number in AREPNR and the relocatable IP address of the SAP instance for the replication service in AREPRELOC. The variable <package name>_SGESAP_COMP links the Serviceguard package name (=ers) to the SGeSAP component type "AREP". For example the variable ers12SX1_SGESAP_COMP=AREP links the Serviceguard package "ers12SX1" with the SGeSAP component type "AREP". Example: AREPNAME=AREP AREPNR=00 AREPRELOC=1.1.1.
The file sap.config contains SGeSAP configuration data for all SGeSAP packages of a <SID>. It could for example contain data to start additional remote SAP dialog instances which are not part of the cluster. Now every time any SGeSAP package (DB, CI, D01...) starts, it scans through the sap.config file, finds the entries for starting remote SAP dialog instances and starts them.
STOP_WITH_PKG, RESTART_DURING_FAILOVER, STOP_IF_LOCAL_AFTER_FAILOVER, STOP_DEPENDENT_INSTANCES.
• ASTREAT[*]=0 means that the Application Server is not affected by any changes that happen to the package status. This value makes sense to (temporarily) unconfigure the instance.
• ${START_WITH_PKG}: Add 1 to ASTREAT[*] if the Application Server should automatically be started during startup of the package.
Table 3-1 Overview of reasonable ASTREAT values
ASTREAT value | STOP_DEP | STOP_LOCAL | RESTART | STOP | START | Restrictions
0 | 0 | 0 | 0 | 0 | 0 | Should only be configured for AS that belong to the same SID
1 | 0 | 0 | 0 | 0 | 1 (1) |
2 | 0 | 0 | 0 | 1 (2) | 0 |
3 | 0 | 0 | 0 | 1 (2) | 1 |
4 | 0 | 0 | 1 (4) | 0 | 0 |
5 | 0 | 0 | 1 (4) | 0 | 1 (1) |
6 | 0 | 0 | 1 (4) | 1 (2) | 0 |
7 | 0 | 0 | 1 (4) | 1 (2) | 1 (1) |
8 | 0 | 1 (8) | 0 | 0 | 0 | Can be configured for AS belonging to same SID or AS part of other SID
9 | 0 | 1 (8) | 0 | 0 | 1 (1) | Should only be config
ASSID[0]=QAS; ASHOST[0]=node2; ASNAME[0]=DVEBMGS; ASNR[0]=10; ASTREAT[0]=24; ASPLATFORM[0]="HP-UX" ASSID[1]=QAS; ASHOST[1]=node3; ASNAME[1]=D; ASNR[1]=11; ASTREAT[1]=8; ASPLATFORM[1]="HP-UX" ASSID[2]=QAS; ASHOST[2]=extern1; ASNAME[2]=D; ASNR[2]=12; ASTREAT[2]=8; ASPLATFORM[2]="HP-UX" The Central Instance is treated the same way as any of the additional packaged or unpackaged instances. Use the ASTREAT[*]-array to configure the treatment of a Central Instance.
The collector will only be stopped if there is no instance of an SAP system running on the host. Specify SAPOSCOL_START=1 to start the saposcol even with packages that don't use the implicit startup mechanism that comes with SAP instance startup, e.g. database-only packages or packages that only have a SCS, ASCS or AREP instance. Optional Step: OS720 If several packages start on a single node after a failover, it is likely that some packages start up faster than others on which they might depend.
Command: Description: start_addons_postci additional startup steps on Central Instance host after start of the Central Instance before start of any Application Server Instance start_addons_postciapp additional startup steps that are performed on the Central Instance host after startup of all Application Server Instances Equally, there are hooks for the stop procedure of packages: stop_addons_preciapp usually this function contains actions that relate to what has been added to start_addons_postciapp
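A hedged sketch of how such a hook can be filled in within customer.functions; the script path and the su call are purely illustrative and stand for any site specific command:
function start_addons_postci
{
    # illustrative example: start a site specific interface program
    # after the Central Instance has become available
    su - c11adm -c "/usr/sap/C11/custom/start_interface.sh"
}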
Global Defaults The fourth section of sap.config is rarely needed. It mainly provides various variables that allow overriding commonly used default parameters. Optional Step: OS770 If there is a special demand to use values different from the default, it is possible to redefine some global parameters. Depending on your need for special high availability setups and configurations have a look at those parameters and their description in the SAP specific configuration file sap.config.
Database Configuration This section deals with additional database specific installation steps and contains the following: • Additional Steps for Oracle • Additional Steps for MaxDB Additional Steps for Oracle The Oracle RDBMS includes a two-phase instance and crash recovery mechanism that enables a faster and predictable recovery time after a crash.
# backup echo "connect internal;" > $SRVMGRDBA_CMD_FILE echo "startup mount;" >> $SRVMGRDBA_CMD_FILE echo "spool endbackup.log" >> $SRVMGRDBA_CMD_FILE echo "select 'alter database datafile '''||f.name||''' end backup;'" >> $SRVMGRDBA_CMD_FILE echo "from v\$datafile f, v\$backup b" >> $SRVMGRDBA_CMD_FILE echo "where b.file# = f.file# and b.status = 'ACTIVE'" >> $SRVMGRDBA_CMD_FILE echo "/" >> $SRVMGRDBA_CMD_FILE echo "spool off" >> $SRVMGRDBA_CMD_FILE echo "!grep '^alter' endbackup.log >endbackup.
Table 3-3 Working with the two parts of the file Part of the file: Instruction: first part Replace each occurrence of the word LISTENER by a new listener name. You can choose what suits your needs, but it is recommended to use the syntax LISTENER<SID>: ( host = ) Change nothing. second part Replace each occurrence of the word LISTENER by a new listener name different from the one chosen above. For example, use LISTENER<SID2> if <SID2> is the SID of the second SAP system.
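For orientation, a strongly simplified sketch of two renamed listener entries follows; the nesting, port numbers and the second SID C12 are illustrative, and the structure generated by the SAP installation should be preserved:
LISTENERC11 =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <relocatable DB address of C11>)(PORT = 1527)))
LISTENERC12 =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <relocatable DB address of C12>)(PORT = 1528)))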
If this parameter is set too low, incoming tcp connections from starting SAP Application servers that want connect to the DB via the Oracle Listener may halt. This will hang the starting process of the SAP Application server.
Make sure that /usr/spool exists as a symbolic link to /var/spool on all cluster nodes on which the database can run. MaxDB Database Step: MD950 Configure the XUSER file in the <sid>adm user home directory. The XUSER file in the home directory of the SAP Administrator keeps the connection information and grant information for a client connecting to the MaxDB database. The XUSER content needs to be adapted to the relocatable IP the MaxDB RDBMS is running on.
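A hedged illustration, modeled on the dbmcli calls shown in the liveCache chapter of this document; the user names, the user key and the exact option usage have to be verified against the MaxDB documentation for the installed release:
dbmcli -n <relocatable DB address> -d <DBSID> -us <sap schema user>,<password> -uk DEFAULT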
Installation Step: IS1100 For ci SAP computing change into the profile directory by typing the alias: cdpro
In the DEFAULT.PFL change the following entries and replace the hostname with the relocatable name if you cluster a (ci) component.
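The entries concerned are typically the profile parameters that carry a hostname. A hedged sketch follows; the exact list depends on the installation and release:
SAPDBHOST = <relocatable db name>
rdisp/mshost = <relocatable ci name>
rdisp/vbname = <relocatable ci name>_<SID>_<INSTNR>
rdisp/enqname = <relocatable ci name>_<SID>_<INSTNR>
rdisp/btcname = <relocatable ci name>_<SID>_<INSTNR>
SAPLOCALHOST = <relocatable ci name>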
This parameter represents the communication path inside an SAP system and between different SAP systems. SAPLOCALHOSTFULL is used for rfc-connections. Set it to the fully qualified hostname. The application server name appears in the server list held by the Message Server, which contains all instances, hosts and services of the instances. The application server name or the hostname is also stored in some system tables on the database.
If an instance is running on the standby node in normal operation (and is stopped when switching over), the control console shows this instance to be down (for example, you will get a red node on a graphical display) after the switchover. Installation Step: IS1190 A SAP Internet Communication Manager (ICM) may run as part of any Application Server. It is started as a separate multi-threaded process and can be restarted independently from the Application Server. E.g.
These settings have to be adjusted for the switchover of the J2EE part of the SAP WEB AS; the following configuration has to be performed in the Offline Configuration Editor: ▲ Log on to the Offline Configuration Editor.
Table 3-4 IS1130 Installation Step
Choose... | Change the following values
cluster_data -> dispatcher -> cfg -> kernel -> Propertysheet LockingManager | enqu.host = <relocatable jci name>
cluster_data -> dispatcher -> cfg -> kernel -> Propertysheet ClusterManager | ms.host = <relocatable jci name>
Add IPv6 addresses to /etc/hosts on all cluster nodes. The following virtual and physical addresses will be used for this IPv6 configuration.
vi /etc/hosts
2001::101:51 sap51 # vip ascsSX1 - ABAP SCS
2001::101:53 sap53 # vip D00SX1 - CI
2001::101:54 sap54 # vip D01SX1 - DIALOG
2001::101:146 sap146 # phys node1 - CLUSTER
2001::101:147 sap147 # phys node2 - CLUSTER
16.41.101.51 sap51 # vip ascsSX1
16.41.101.53 sap53 # vip D00SX1
16.41.101.54 sap54 # vip D01SX1
16.41.101.146 sap146 # phys node1 - CLUSTER
16.
Bringing up interface eth0.01: [ OK ] The IPv6 address and network interface can be checked with the following command: ifconfig eth0 Link encap:Ethernet HWaddr 00:12:79:94:D4:C8 inet addr:16.41.101.147 Bcast:16.41.101.255 Mask:255.255.255.0 inet6 addr: 2001::101:147/24 Scope:Global inet6 addr: fe80::212:79ff:fe94:d4c8/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Repeat these steps for all RHEL nodes in the cluster.
Edit the Serviceguard cluster configuration file and add the IPv6 STATIONARY_IP addresses for the cluster nodes. vi $SGCONF/cmclconfig.ascii NODE_NAME node146 NETWORK_INTERFACE eth0 HEARTBEAT_IP 16.41.101.146 STATIONARY_IP 2001::101:146 NODE_NAME node147 NETWORK_INTERFACE eth0 HEARTBEAT_IP 16.41.101.147 STATIONARY_IP 2001::101:147 Update the Serviceguard cluster configuration database with the following command. cmapplyconf -C cmclconfig.ascii ...
vi /sapmnt/SX1/profile/START_ASCS02_sap51
vi /sapmnt/SX1/profile/START_D01_sap54
vi /sapmnt/SX1/profile/START_DVEBMGS00_sap53
NI_USEIPv6 = true
Add the same parameter to the SAP administrator's startup environment:
vi ~/.cshrc
setenv NI_USEIPv6 true
Installation Step: IP1280 Start the Serviceguard packages and test IPv6 connectivity.
cmrunpkg ascsSX1
cmrunpkg d01SX1
cmrunpkg d00SX1
Installation Step: IPV6xxx Test IPv6 connectivity with SAP niping and lgtest commands.
------------------------------------[saptux3_SX1_00] [sap53] [2001::101:53] [tick-port] [3200] [DIA UPD BTC SPO UP2 ICM ] [saptux4_SX1_01] [sap54] [2001::101:54] [cpq-tasksmart] [3201] [DIA BTC ICM ] 106 Step-by-Step Cluster Conversion
4 SAP Supply Chain Management Within SAP Supply Chain Management (SCM) scenarios two main technical components have to be distinguished: the APO System and the liveCache. The first technical component, the APO System, is based on SAP WAS technology. Therefore, (ci), (db), (dbci), (d) and (sapnfs) packages may be implemented for APO. These APO packages are set up the same way as the Netweaver packages.
NOTE: <SID> is the system ID of the particular implementation of a liveCache system. <LCSID> denotes the three-letter database name of the liveCache instance in uppercase; <lcsid> is the same name in lowercase. On the terms MaxDB based components, liveCache, MaxDB and SAPDB: SAPDB is the original name of a database from SAP. The name SAPDB was recently replaced by the name MaxDB. liveCache is a memory based variant of the MaxDB database.
this directory to a backup location. This information is then used to determine the reason of the crash. In HA scenarios, for liveCache versions lower than 7.6, this directory should move with the package. Therefore, SAP provided a way to redefine this path for each liveCache/MaxDB individually. Serviceguard Extension for SAP on Linux expects the work directory to be part of the lc package. The mount point moves from /sapdb/data/wrk to /sapdb/data/<LCSID>/wrk.
.SAPDBLC=/sapdb/LC1/db LC1=/sapdb/LC1/db _SAPDBAP=/sapdb/AP1/db AP1=/sapdb/AP1/db [Runtime] /sapdb/programs/runtime/7240=7.2.4.0, /sapdb/programs/runtime/7250=7.2.5.0, /sapdb/programs/runtime/7300=7.3.0.0, /sapdb/programs/runtime/7301=7.3.1.0, /sapdb/programs/runtime/7401=7.4.1.0, /sapdb/programs/runtime/7402=7.4.2.0, For MaxDB and liveCache Version 7.5 (or higher) the SAP_DBTech.ini file does not contain sections [Installations], [Databases] and [Runtime].
--dbmcli on >param_directput RUNDIRECTORY /sapdb//wrk OK --dbmcli on > Linux Setup This section describes how to setup Serviceguard extension for SAP with Linux. Clustered Node Synchronization 1. 2. Repeat the steps in this section for each node of the cluster that is different from the primary. Logon as root to the primary host and prepare a logon for each of its backup hosts. liveCache installation Step: LC030 Synchronize the /etc/group and /etc/passwd files.
pid -> /sapdb/data/pid
pipe -> /sapdb/data/pipe
ppid -> /sapdb/data/ppid
liveCache installation Step: LC070 Make sure /var/spool/sql exists as a directory on the backup node. /usr/spool must be a symbolic link to /var/spool. liveCache installation Step: LC080 On the backup node, create a directory as future mountpoint for all relevant directories from the table in the section that refers to the layout option you chose.
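For example, assuming the layout in which the instance specific directories reside below /sapdb/<LCSID> (shown for the example liveCache name LC1 used elsewhere in this chapter):
mkdir -p /sapdb/LC1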
export SAPSTAGE
Variables ${SGCONF} and ${SGROOT} can also be set with the following command (note the leading "dot"):
. /etc/cmcluster.conf
The command:
rpm -Uhv <path>/sgesap-toolkit-<version>.product.<distribution>.<arch>.rpm
copies all relevant SGeSAP/LX files into the ${SAPSTAGE} staging area.
In file ${SGCONF}/<LCSID>/lc.control.script add the following lines to the customer defined functions customer_defined_run_cmds and customer_defined_halt_cmds. These commands will actually call the SGeSAP/LX relevant functions to start and stop the liveCache instance. So the customer defined functions are the glue that connects the generic Serviceguard components with the SGeSAP/LX components.
function customer_defined_run_cmds {
### Add following line
${SGCONF}/<LCSID>/saplc.sh start
NOTE: After a failover, the liveCache instance will only be recovered to the state referenced in LCSTARTMODE. The behavior of the liveCache service monitoring script is independent from the setting of LCSTARTMODE. The monitoring script will always monitor the vserver process. It will start to monitor if liveCache is in WARM state, as soon as it reaches WARM state for the first time. The LCMONITORINTERVAL variable specifies how often the monitoring polling occurs (in seconds).
Create a symbolic link that acts as a hook that informs SAP software where to find the liveCache monitoring software to allow the prescribed interaction with it.
ln -s ${SGCONF}/<LCSID>/saplc.mon /sapdb/<LCSID>/db/sap/lccluster
liveCache installation Step: LC215 For the following steps the SAPGUI is required. Logon to the APO central instance as user SAP*. Start transaction /nlc10 and enter LCA for the logical connection.
dbmcli -n -d -u control,control -uk c -us control,control\ WARM key dbmcli -n -d -u superdba,admin -uk w -us superdba,admin\ LCA key dbmcli -n -d -us control,control -uk 1LCA -us\ control,control NOTE: Refer to the SAP documentation to learn for more information about the dbmcli syntax. After recreation of the .
5 Serviceguard Extension for SAP on Linux Cluster Administration A SAP application within a Serviceguard Extension for SAP on Linux cluster is no longer treated as though it runs on a dedicated host. It is wrapped up inside one or more Serviceguard packages and packages can be moved to any of the hosts that are nodes of the Serviceguard cluster. The Serviceguard packages provide a SAP adoptive computing layer that keeps the application independent of specific server hardware.
as needed. Servers outside of the cluster that have External Dialog Instances installed are set up in a similar way. Refer to /etc/auto.direct for a full list of automounter file systems of Serviceguard Extension for SAP on Linux. It enhances the security of the installation if the directories below /export are exported without root permissions. The effect is that the root user cannot modify these directories or their contents. With standard permissions set, the root user cannot even see the files.
Relocatable IP addresses can be used as of SAP kernel 6.40. Older releases use local hostnames in profile names and startup script names. Renamed copies of the files or symbolic links had to be created to overcome this issue. The SAP Spool Work Process uses the SAP Application Server name as the destination for print formatting. Use the relocatable name if you plan to use Spool Work processes with your Central Instance. In the case of a failover the SAP print system will continue to work.
• changing the SAP System ID • changing the name of the SAP System Administrator • migrating to another database vendor • adding/deleting Dialog Instances • changing an Instance Name • changing an Instance Number • changing the network name belonging to a relocatable address • changing the name of a Serviceguard Extension for SAP on Linux package • changing hostnames of hosts inside the cluster • changing hostnames or IP addresses of hosts that run additional application servers • chang
Sometimes, SAP upgrades come with additional configuration options. Such an option could for example be ASCS (ABAP SAP Central Services) or the Replicated Enqueue. Refer to Chapter Three on how to configure packages with additional SAP components. Switching Serviceguard Extension for SAP on Linux Off and On This section provides a brief description of how to switch off Serviceguard Extension for SAP on Linux. Your individual configuration may require additional steps that are not included in this document.
• Check that batch jobs in SAP are not scheduled to run on the relocatable IP address. Transaction code: SM37 • Relocate printers to the real hostname. Transaction code: SPAD • Check operational modes within SAP. You must setup new operation modes for the new hostname. Transaction code: RZ04 • Do all testing as described in the document SAP BC High Availability.