Using Global Workload Manager with SAP

Abstract .......... 3
Goal .......... 3
Target Audience .......... 3
Terms ..........
Example 4: SAP separating batch and dialog processes .......... 22
  Prerequisites .......... 22
  Walkthrough .......... 23
  Confirming SGeSAP is categorizing processes ..........
Abstract

HP’s Global Workload Manager (gWLM) can help you isolate, manage, and prioritize SAP workloads that have variable resource needs. This paper demonstrates various management techniques.
Requirements for each example

The paper covers several example use cases, from simple to more complex. The more complex use cases have more tool requirements, as shown in the following table. The table also lists the versions used on the test systems during the development of this paper.

Use case demonstrated                              Required management components    Versions demonstrated
Example 1: SAP instances or systems in partitions  gWLM                              VSEMgmt 3.0.1 (includes gWLM 3.0.
Prerequisites

All of the steps in this example use basic features included in VSEMgmt and gWLM, so there are no other prerequisites.

Walkthrough

In this first screen, Virtualization Manager is showing the two target vpars, cutst125 and cutst128. We’ve checked the box for the npar surrounding the two vpars to indicate we want to form an SRD with all the vpars in that npar and have them collectively share resources.
Creating the SRD

We then select Create->Shared Resource Domain in the VSE menu bar and enter step 1 of the SRD creation wizard with the vpars selected. Click Next to continue. In step 2, leave the default SRD name and interval, but change the mode from Advisory to Managed. Then click Next. The next screen, wizard step 3 of 4, lists the default compartment type, with the vpars as the workloads. We have 8 cores on our example system.
min/own/max values, select New... in the policy selection; this opens a dialog box where you can create a new policy and then returns you to this screen to apply it to your workload. Select a policy for each vpar, then select Next and then Finish. We get a warning that neither vpar can achieve its policy max of 8: because each vpar must get at least one core, at most seven cores remain for the other vpar. That is okay, so we ignore this warning.
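The arithmetic behind that warning can be sketched in a few lines of shell. This is illustrative only; TOTAL, NVPARS, and MIN are sketch variables, not gWLM settings:

```shell
#!/bin/sh
# Illustrative arithmetic only, not gWLM configuration: with TOTAL
# cores in the SRD and a 1-core floor reserved for each vpar, the
# most any single vpar can reach is TOTAL minus the floors that the
# other vpars must keep.
TOTAL=8
NVPARS=2
MIN=1
MAX_ONE_VPAR=$((TOTAL - (NVPARS - 1) * MIN))
echo "$MAX_ONE_VPAR"
```

With 8 cores and two vpars, this prints 7, which is why the policy max of 8 cannot be reached.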
Chances are the two vpars are fairly idle, as in the example screenshot; because neither is busy, each vpar holds its owned value of 4 cores. To see lending and borrowing of CPU resources, force some activity onto one of the virtual partitions, for example cutst128. In the next screenshot, you see the SRD view again, but cutst128 has borrowed cores from cutst125.
That completes the first use case example. The vpars containing the SAP instances are now under gWLM control, and the Owns_4-Max_8 policies are in effect.

Ideas for extending this use case

You may wish to configure gWLM events to notify you of resource shortfalls and pass those along to OpenView. For information on generating events, select Tools->Global Workload Manager->Events in the VSE menu bar.
Creating the SRD

First, on the System tab, select the vpar cutst128. Then select Create->Shared Resource Domain in the VSE menu bar. (Starting with gWLM 4.0, the System tab is known as the Visualization tab.) We are then placed in the SRD creation wizard. We deselect the box Automatically join with any compatible SRDs, and then move on to step 2.
In step 2, we rename the SRD slightly from the default to cutst128.sap.vs.other.srd, and we change the mode from Advisory to Managed. In step 3 (not shown), we switch ‘Compartment Type’ from the original vpar to fss. You could choose pset here instead if you wish to assign whole cores to the workloads. We also use the Add button to add a second fss group. If the system gets very busy, we’d like to split the 8 cores evenly between the workloads, so we use the gWLM-provided policy Owns_4-Max_8.
This definition places all the processes run by user c03adm into the workload we created and named c03.sap.instance. When we click OK, we’re returned to step 3 of the SRD creation wizard. Click OK again, and in step 4 confirm the sizes and click OK. You may receive a warning about compartment and policy max values not matching. You can ignore this: our gWLM-provided policy has a max of 8, but we cannot achieve quite that amount while reserving 0.05 cores for the default fss workload.
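As a rough cross-check of what a “by user” rule will capture, you can list the PIDs running under the instance user from a shell. This sketch is not gWLM’s own mechanism; c03adm is the example user from this paper, and the script takes your own <sid>adm user as an optional argument:

```shell
#!/bin/sh
# Hedged sketch: show which PIDs a gWLM "by user" workload rule would
# sweep into the workload. c03adm is the example instance user; pass
# your own <sid>adm account instead.
pids_for_user() {
    # POSIX ps: -u selects by user, -o pid= prints bare PIDs.
    ps -u "$1" -o pid= 2>/dev/null
}
pids_for_user "${1:-c03adm}"
```

Running it for a user with active processes prints one PID per line; an empty result means the rule would place nothing.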
You can view the individual workload allocation and consumption in real time by clicking the CPU Utilization bar graphs next to the workload names.

Ideas for extending this use case

This example separated a single SAP instance from other work. If you have multiple instances or systems running, you could simply create more workloads and assign one or more instances to each.
Figure 2 shows the workload and policy setup when the package has failed over to cutst128.

Figure 2: Package on cutst128

Prerequisites

As with the previous example, we will use only gWLM built-in features for process placement in this example. However, we’ll also illustrate gWLM adapting to Serviceguard failovers of packages, so you will need Serviceguard and Serviceguard Extensions for SAP installed and configured.
Creating the policy

In this example, we need a policy that owns 7 cores. None of the gWLM-provided policies meets that requirement, so we will create it. We’ll also create a policy for the OTHER workload. Then, in step 3 of the SRD creation wizard, we’ll be able to simply choose our policies from the list. First, select Policy->Create gWLM Policy and create a new Owns_7-Max_14 policy with the settings shown in the next figure.
Now we create a conditional policy to attach to the workload. This conditional policy is really a container holding two (or more) policies and a description of the condition that should cause gWLM to switch between them. You can see we’ve chosen Owns_1-Max_Remaining (a new policy, named for its owned setting) as the default; but if the package is present, gWLM switches to the policy Owns_7-Max_14 (which, as its name suggests, has an owned setting of 7 cores).
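The decision this conditional policy encodes can be sketched as a tiny shell function. This is a conceptual model only, not gWLM code; the "yes"/"no" argument stands in for gWLM’s own Serviceguard package check:

```shell
#!/bin/sh
# Conceptual sketch of the conditional policy's switch logic.
# Argument: "yes" if the Serviceguard package is present on this node.
pick_policy() {
    if [ "$1" = "yes" ]; then
        echo "Owns_7-Max_14"        # package here: own 7 cores
    else
        echo "Owns_1-Max_Remaining" # package elsewhere: shrink to 1
    fi
}
pick_policy "${1:-no}"
```

gWLM evaluates the condition each interval, so the active policy follows the package automatically as it fails over and back.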
Note this one has Owns_7-Max_14 as the default, but if the package is active it shrinks to Owns_1-Max_Remaining.

Creating the SRD

Now let’s create our SRD to use those policies. Select the target system on the System tab, and then select Create->Shared Resource Domain. (Starting with gWLM 4.0, the System tab is known as the Visualization tab.)
In step 2 of the wizard, change the mode to Managed. In step 3 of the wizard, select fss and use Add until you have two fss workloads. Assign the policies you created previously as shown in the next figure. Click OK to continue.
Confirm your sizes in step 4. Click Finish. You’ll then be placed in the SRD monitoring screen. The package is not failed over to cutst128, so the Owns_7-Max_14 policy is active for the cutst128.OTHER workload.
When we use the Serviceguard manager to move the package off this system, we see the policy assignments change to match the new condition. Now the DBCI_IA package is active on the cutst128 system, and the policy Owns_1-Max_Remaining is active for the cutst128.OTHER workload, as shown in the next figure. Viewing the real-time graph (by clicking the CPU Utilization bar), you can see in this figure that gWLM marks the graph showing which time periods had which policy active.
Ideas for extending this use case

Other Serviceguard conditions

The trigger for our example here is the presence of a given Serviceguard package. An alternative would be to trigger such a policy change when the Serviceguard cluster has reduced capacity. For applications that run on multiple nodes at once, triggering off reduced capacity may make more sense than package presence.
The next figure shows the conditional policy. Such an arrangement allows automatic consumption of TiCAP during failover, but disables it when the package is moved off this system, presumably back onto its primary system.

Example 4: SAP separating batch and dialog processes

SAP instances can run different types of work, such as batch or dialog, with a given set of processes from the instance assigned to handle each type.
Walkthrough

This example uses a procmap to identify which SAP processes are batch and which are dialog, and to place them in separate gWLM workloads. To do that, there is some additional setup before we can define our SRD and workloads.

Confirming SGeSAP is categorizing processes

To confirm that the SGeSAP tools are producing a list of PIDs, check for the existence of a map file for your instance. For our example instance C03, this means:

root@cutst125:/tmp ls /etc/cmcluster/C03/wlmprocmap.
Now that we have the two scripts with correct permissions, let’s test them:

root@cutst125:/etc/opt/vse/scripts /etc/opt/vse/scripts/batch.C03_DVEBMGS00
7956 7961
root@cutst125:/etc/opt/vse/scripts echo $?
0
root@cutst125:/etc/opt/vse/scripts grep BTC /etc/cmcluster/C03/wlmprocmap.C03_DVEBMGS00_cutst125
BTC:7956
BTC:7961

and dialog:

root@cutst125:/etc/opt/vse/scripts /etc/opt/vse/scripts/dialog.
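The behavior these tests expect can be sketched as a small script: parse the SGeSAP-generated map file, print the PIDs for one work type, and exit 0 as gWLM requires. This is a hedged sketch, not the shipped SGeSAP script; the default map-file path is the one from the example system:

```shell
#!/bin/sh
# Hedged procmap sketch: emit the PIDs of SAP batch (BTC) work
# processes by parsing the SGeSAP-generated wlmprocmap file.
# A real procmap script must exit 0 for gWLM to accept its output.
batch_pids() {
    # Map lines look like "BTC:7956"; keep only the PID, joined on
    # one line (xargs with no command runs echo on its input).
    grep '^BTC:' "$1" | cut -d: -f2 | xargs
}
MAPFILE=${1:-/etc/cmcluster/C03/wlmprocmap.C03_DVEBMGS00_cutst125}
batch_pids "$MAPFILE" 2>/dev/null
```

Given the map entries above, this prints "7956 7961", matching the output of the batch.C03_DVEBMGS00 test.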
We are placed in step 1 of the wizard. Click Next. We then choose a name and change the mode to Managed.
Now we select fss as workload type and use Add to get two named workloads plus the default OTHER workload. For the second workload, choose New... and create a new workload definition for batch processes like the one in the following figure. gWLM expects the process map to be in the /etc/opt/vse/scripts directory, so you only have to enter the file name; no path is needed. When you return to step 3 of the wizard, choose New… again to create a new workload definition for dialog processes.
After creating that workload, we are back in step 3, which will look similar to the next screen. Once the two new workload definitions are built, assigned, and we’ve chosen policies to match our desired 4/3/1 split, we continue, completing step 4 of the wizard.
Select Finish, and you’ll be placed in the SRD monitoring screen. If you see any ‘Uncleared gWLM Events’ in the SRD monitoring screen, check the event contents and the gwlmagent log file to confirm you have no typos in your procmap script, the permissions are correct, and the script is exiting with a 0 exit code.
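Those checks can be scripted. The sketch below verifies that a procmap script is executable and exits 0; the default path is the example system’s, so pass your own script’s path:

```shell
#!/bin/sh
# Hedged sanity check for a procmap script: it must be executable and
# must exit 0, or gWLM raises events instead of placing processes.
check_procmap() {
    [ -x "$1" ] || { echo "FAIL: $1 not executable"; return 1; }
    "$1" > /dev/null 2>&1 || { echo "FAIL: $1 exited non-zero"; return 1; }
    echo "OK: $1"
}
check_procmap "${1:-/etc/opt/vse/scripts/batch.C03_DVEBMGS00}"
```

Run it after any edit to a procmap script, before expecting gWLM to pick up the new process placement.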
Repeat for the other procmap:

root@cutst125:/tmp /etc/opt/vse/scripts/batch.C03_DVEBMGS00
7956 7961
root@cutst125:/tmp ps -fR c03.batch
UID     PID   PPID  C STIME    TTY  TIME COMMAND
c03adm  7961  7919  0 20:45:47 ?    0:00 dw.sapC03_DVEBMGS00 pf=/usr/sap/C03/SYS/profile/C03_DVEBMGS00_cutst125
c03adm  7956  7919  0 20:45:47 ?    0:00 dw.
For example, here is a view of system cutst125 in Application Discovery, showing the workloads that Application Discovery has matched using factory-supplied templates. Note that at the bottom of the listing, processes that Application Discovery could not match are listed to help you confirm your process placement and template definitions are what you expected. Here is a list of the various factory templates supplied with Application Discovery 3.0.1.
However, before employing these templates, test them to make sure the list of processes they match is correct for your system. If you have binaries running from a different path or start script, you may need to clone a factory template and edit your copy to tweak paths or names slightly. To see what is being matched, below are the template definitions for a few of the factory-supplied items, first for Oracle-related processes. Here are the items for the Oracle listener process.
The next screen shows example SAP related templates. This screen shows one that matches a particular SAP system and instance. In each of these you can see the Rules section that defines what will be matched. See the Application Discovery User’s Guide or online help for a full description of how rules are composed and executed. You can use the factory-supplied templates or define your own templates.
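Conceptually, a “by executable” rule reduces to pattern-matching the process table. The sketch below is not Application Discovery’s rule engine; it just counts processes whose command name matches a pattern, with tnslsnr (the Oracle listener name) as an illustrative default:

```shell
#!/bin/sh
# Conceptual sketch only: count running processes whose command name
# matches a pattern, roughly as a "by executable" template rule would.
match_count() {
    ps -e -o comm= | grep -c "$1"
}
match_count "${1:-tnslsnr}"
```

A count of 0 on a system where you expect the application to be running is the signal to clone and adjust the template, as described above.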
Here’s an example of using one of the templates in a gWLM workload definition, to assign an Oracle instance and listener to a particular workload. This would be in place of, or perhaps in addition to, “by user” and “by executable” rules for the workload.

Reporting

gWLM Historical and Advanced Reports

The gWLM examples above described setting up, and then monitoring in real time, the behavior immediately after configuration.
Conclusion

The VSE tool set provides a variety of benefits to SAP workloads:
• Ability to match resources with SAP workloads that need them, while keeping resource allocations in line with business objectives
• Ability to quickly visualize and manage resources across many SAP systems and instances
• Ability to adapt resource allocations to high availability events such as Serviceguard failovers
• Ability to do advanced pattern matching of processes to monitor or control the multitude of SAP processes
For more information

High-level product information:
• VSE – www.hp.com/go/vse
• Serviceguard – www.hp.com/go/serviceguard

Technical documentation:
• VSE – www.docs.hp.com/en/vse.html
• Serviceguard – www.docs.hp.com/en/ha.html

© 2008 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services.