HP CLX Migration Whitepaper
Comparison of important terminologies
The following table compares the important RHCS and SLE HA terminologies with their SG/LX equivalents.
Table 1. Important terminologies

RHCS terminology | SLE HA terminology | SG/LX terminology | Description
System | Node | Node | Member of the cluster.
Cluster Service | Cluster Resources | Package | A way to group hardware and software resources together as a single unit.
Resource | Resource | Modules | A way to manage (start/stop/monitor) hardware and software resources.
Resource Agent | Resource Agent | Toolkit | A framework to manage (start/stop/monitor) a specific application.
Quorum Disk (Qdisk) | SCSI reserve | Lock LUN | One of the cluster membership arbitration mechanisms.
N/A | Quorum Daemon | Quorum Server | One of the cluster membership arbitration mechanisms.
system-config-cluster (Cluster Administration GUI)/Conga | HA Web Konsole (Hawk) | Serviceguard Manager | Graphical user interface.
Failover Cluster Service | Failover Cluster Resource | Failover Package | A packaged single unit containing software and hardware resources that runs on one node/system at a time.
Multi-site cluster | Multi-site Cluster/Overlay clusters | Continentalclusters for Linux | Disaster recovery solution in which multiple clusters are used to provide application recovery over a local or wide area network.
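To make the Cluster Service / Cluster Resource / Package mapping in Table 1 concrete, the sketch below shows roughly equivalent day-to-day administrative operations in each stack. The service, resource, and package names are placeholders, and options vary by release; this is an illustration, not a complete command reference.

```shell
# RHCS: start a cluster service and check cluster status
clusvcadm -e my_service        # enable (start) the service "my_service"
clustat                        # show cluster member and service status

# SLE HA: start a cluster resource and check cluster status
crm resource start my_resource # start the resource "my_resource"
crm status                     # show cluster and resource status

# SG/LX: start a package and check cluster status
cmrunpkg my_pkg                # run the package "my_pkg"
cmviewcl -v                    # show cluster, node, and package status
```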
Mapping of RHCS and SLE HA cluster attributes to SG/LX cluster attributes
The following table maps the RHCS and SLE HA cluster attributes to the closest SG/LX cluster attributes.
Table 2. RHCS and SLE HA cluster attributes and corresponding SG/LX cluster attributes

RHCS cluster attributes (user-configurable) | SLE HA cluster attributes (user-configurable) | Corresponding SG/LX cluster attributes (equivalent to some extent, user-configurable) | Description
<cluster alias="RHCS_cluster" name="RHCS_cluster"> | - | CLUSTER_NAME | Name of the cluster.
fence_daemon, fence_device | fence_daemon, fence_device | Deadman driver in the kernel | When SG/LX is installed, the deadman driver is statically compiled into the kernel. When the cluster is started, the deadman driver is activated; it fences the server when one or more nodes are isolated from the majority of cluster nodes, or when a split-brain situation occurs. No manual configuration of the fencing mechanism is required. However, when the Linux kernel is updated, the kernel must be recompiled with the deadman driver.
N/A | N/A | QS_HOST, QS_ADDR, QS_POLLING_INTERVAL 120000000, QS_TIMEOUT_EXTENSION 2000000 | In SG/LX, the quorum server and the Lock LUN are the arbitration mechanisms; a cluster may have either a Lock LUN or a quorum server configured. A Lock LUN is typically used for clusters of up to 4 nodes; beyond 4 nodes, a quorum server is used.
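To show how the quorum-server attributes in Table 2 fit together, the fragment below sketches how they might appear in an SG/LX cluster configuration (ASCII) file. The cluster name, quorum-server host, IP address, and node names are placeholders; the interval and timeout values are the ones quoted in the table (Serviceguard expresses these in microseconds, so 120000000 is a 2-minute polling interval).

```
CLUSTER_NAME            sglx_cluster
QS_HOST                 qs-host.example.com
QS_ADDR                 192.0.2.10
QS_POLLING_INTERVAL     120000000
QS_TIMEOUT_EXTENSION    2000000
NODE_NAME               node1
NODE_NAME               node2
```

Such a file is typically generated with cmquerycl and applied with cmapplyconf. Note that a cluster is configured with either these QS_* attributes or the Lock LUN parameters, not both.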