HP-UX System Administrator's Guide: Logical Volume Management
HP-UX 11i Version 3

Abstract

This document describes how to configure, administer, and troubleshoot the Logical Volume Manager (LVM) product on the HP-UX 11i Version 3 platform. The HP-UX System Administrator's Guide is written for administrators of all skill levels who need to administer HP-UX systems beginning with HP-UX Release 11i Version 3.
© Copyright 2011 Hewlett-Packard Development Company, L.P.

Legal Notices

Confidential computer software. Valid license from HP required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents

1 Introduction .......................................................... 8
    LVM Features ........................................................ 8
    LVM Architecture .................................................... 9
    Physical versus Logical Extents
    Physical Volume Groups ............................................. 30
    Snapshots and Performance .......................................... 30
    Increasing Performance Through Disk Striping ....................... 31
    Determining Optimum Stripe Size
    Moving and Reconfiguring Your Disks ................................ 70
    Moving Disks Within a System ....................................... 71
    Moving Disks Between Systems ....................................... 72
    Moving Data to a Different Physical Volume
    Information Collection ............................................ 110
    Consistency Checks ................................................ 111
    Maintenance Mode Boot ............................................. 111
    I/O Errors
    HP Encourages Your Comments ....................................... 148
A LVM Specifications and Limitations .................................. 149
    Determining LVM's Maximum Limits on a System ...................... 152
B LVM Command Summary ................................................. 154
C Volume Group Provisioning Tips
1 Introduction

This chapter addresses the following topics:
• "LVM Features" (page 8)
• "LVM Architecture" (page 9)
• "Physical versus Logical Extents" (page 10)
• "LVM Volume Group Versions" (page 11)
• "LVM Device File Usage" (page 12)
• "LVM Disk Layout" (page 16)
• "LVM Limitations" (page 18)
• "Shared LVM" (page 18)

LVM Features

Logical Volume Manager (LVM) is a storage management system that lets you allocate and manage disk space for file systems or raw data.
LVM Architecture An LVM system starts by initializing disks for LVM usage. An LVM disk is known as a physical volume (PV). A disk is marked as an LVM physical volume using either the HP System Management Homepage (HP SMH) or the pvcreate command. Physical volumes use the same device special files as traditional HP-UX disk devices. LVM divides each physical volume into addressable units called physical extents (PEs).
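By default, each physical extent in a Version 1.0 volume group is 4 MB (the size is selectable at volume group creation). The number of extents a disk contributes is therefore simple integer division; the sketch below uses an illustrative helper name, not an LVM command:

```shell
# Number of physical extents a disk can hold: usable size divided by
# the extent size, rounded down (integer division discards the remainder).
pe_count() {
    disk_mb=$1 pe_mb=$2
    echo $(( disk_mb / pe_mb ))
}

pe_count 10240 4     # 10 GB disk, default 4 MB extents; prints 2560
```

Any space left over after the last whole extent is unusable by LVM, which is why extent size is a trade-off between granularity and per-extent bookkeeping.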
Figure 1 Disk Space Partitioned Into Logical Volumes Physical versus Logical Extents When LVM allocates disk space to a logical volume, it automatically creates a mapping of the logical extents to physical extents. This mapping depends on the policy chosen when creating the logical volume. Logical extents are allocated sequentially, starting at zero, for each logical volume. LVM uses this mapping to access the data, regardless of where it physically resides.
Figure 2 Physical Extents and Logical Extents As shown in Figure 2, the contents of the first logical volume are contained on all three physical volumes in the volume group. Because the second logical volume is mirrored, each logical extent is mapped to more than one physical extent. In this case, there are two physical extents containing the data, each on both the second and third disks within the volume group.
Versions 2.0, 2.1, and 2.2 enable the configuration of larger volume groups, logical volumes, physical volumes, and other parameters. Version 2.1 is identical to Version 2.0 but allows a greater number of volume groups, physical volumes, and logical volumes. Version 2.x volume groups are managed exactly like Version 1.0 volume groups, with the following exceptions: • Version 2.x volume groups have simpler options to the vgcreate command. When creating a Version 2.
Legacy Device Files versus Persistent Device Files As of HP-UX 11i Version 3, disk devices can be represented by two different types of device files in the /dev directory, legacy and persistent. Legacy device files were the only type of mass storage device files in releases prior to HP-UX 11i Version 3. They have hardware path information such as SCSI bus, target, and LUN encoded in the device file name and minor number.
Physical Volume Names

Physical volumes are identified by their device file names, as follows:

Table 1 Physical Volume Naming Conventions

Device File Name        Type of Device
/dev/disk/diskn         Persistent block device file
/dev/disk/diskn_p2      Persistent block device file, partition 2
/dev/rdisk/diskn        Persistent character device file
/dev/rdisk/diskn_p2     Persistent character device file, partition 2
/dev/dsk/cntndn         Legacy block device file
/dev/dsk/cntndns2       Legacy block device file, partition 2
/dev/rdsk/cntndn        Legacy character device file
/dev/rdsk/cntndns2      Legacy character device file, partition 2
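The naming conventions above are regular enough to classify mechanically; the following is an illustrative shell helper (not an HP-UX command) that identifies the type of a disk device special file from its name alone:

```shell
# Classify an HP-UX disk device special file by its naming pattern.
# Partition patterns must be tested before the generic patterns.
classify_dsf() {
    case $1 in
        /dev/disk/disk*_p[0-9]*)  echo "persistent block, partition" ;;
        /dev/disk/disk*)          echo "persistent block" ;;
        /dev/rdisk/disk*_p[0-9]*) echo "persistent character, partition" ;;
        /dev/rdisk/disk*)         echo "persistent character" ;;
        /dev/dsk/c*t*d*s[0-9]*)   echo "legacy block, partition" ;;
        /dev/dsk/c*t*d*)          echo "legacy block" ;;
        /dev/rdsk/c*t*d*s[0-9]*)  echo "legacy character, partition" ;;
        /dev/rdsk/c*t*d*)         echo "legacy character" ;;
        *)                        echo "not a disk device file" ;;
    esac
}

classify_dsf /dev/disk/disk14     # prints: persistent block
classify_dsf /dev/rdsk/c2t0d0s2   # prints: legacy character, partition
```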
Logical Volume Names

Logical volumes are identified by their device file names, which can either be assigned by you or assigned by default when you create a logical volume using the lvcreate command. If you assign the names, you can choose any name of up to 255 characters. When assigned by default, these names take the form /dev/vgnn/lvolN (the block device file form) and /dev/vgnn/rlvolN (the character device file form).
Physical volumes use the device files associated with their disk. LVM does not create device files for physical volumes.

Version 1.0 Device Number Format

Table 2 lists the format of the device file number for Version 1.0 volume groups.

Table 2 Version 1.0 Device Number Format

Major Number    Volume Group Number    Reserved    Logical Volume Number
64              0–0xff                 0           0–0xff (0 = group file)

For Version 1.0 volume groups, the major number for LVM device files is 64.
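The minor-number packing in Table 2 can be illustrated with a little shell arithmetic; lvm_v1_minor is a hypothetical helper (lvcreate and vgcreate assign these numbers for you):

```shell
# Version 1.0 minor number layout: volume group number in the high byte,
# a reserved middle byte of zero, and the logical volume number in the
# low byte (logical volume number 0 denotes the group file itself).
lvm_v1_minor() {
    vg=$1 lv=$2
    printf '0x%02x%02x%02x\n' "$vg" 0 "$lv"
}

lvm_v1_minor 1 3    # /dev/vg01/lvol3; prints 0x010003
lvm_v1_minor 1 0    # /dev/vg01/group; prints 0x010000
```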
/dev/dsk/c3t0d0 -- Boot Disk
/dev/dsk/c4t0d0 -- Boot Disk
/dev/dsk/c5t0d0
/dev/dsk/c12t0d0
Root: lvol1   on: /dev/dsk/c3t0d0
                  /dev/dsk/c4t0d0
Swap: lvol2   on: /dev/dsk/c3t0d0
                  /dev/dsk/c4t0d0
Dump: lvol2   on: /dev/dsk/c3t0d0
The physical volumes designated "Boot Disk" are bootable, having been initialized with mkboot and pvcreate -B. Multiple lines for lvol1 and lvol2 indicate that the root and swap logical volumes are being mirrored.
LVM Limitations

LVM is a sophisticated subsystem. It takes time to learn, requires maintenance, and in rare cases things can go wrong. HP recommends logical volumes as the preferred method for managing disks. Use LVM on file and application servers. On servers that have only a single disk, used only to store the operating system and swap space, a "whole-disk" approach is simpler and easier to manage; LVM is not necessary on such systems.
Volume groups of Version 2.2 and higher with snapshot logical volumes configured cannot be activated in shared mode. Also, snapshots cannot be created from logical volumes belonging to shared volume groups. Synchronization of mirrored logical volumes is slower on shared volume groups.
2 Configuring LVM

By default, the LVM commands are already installed on your system. This chapter discusses issues to consider when setting up your logical volumes. It addresses the following topics:
• "Planning Your LVM Configuration" (page 20)
• "Setting Up Different Types of Logical Volumes" (page 20)
• "Planning for Availability" (page 24)
• "Planning for Performance" (page 28)
• "Planning for Recovery" (page 34)

Planning Your LVM Configuration

Using logical volumes requires some planning.
By default, LVM allocates disk space in the order the physical volumes are listed in /etc/lvmtab and /etc/lvmtab_p, which means that the data of a logical volume might not be evenly distributed over all the physical volumes within your volume group. As a result, when I/O access to the logical volumes occurs, one or more disks within the volume group might be heavily used, while the others might be lightly used, or not used at all. This arrangement does not provide optimum I/O performance.
TIP: Because increasing the size of a file system is usually easier than reducing its size, be conservative in estimating how large to create a file system. An exception is the root file system. As a contiguous logical volume, the root file system is difficult to extend.
Swap Logical Volume Guidelines Use the following guidelines when configuring swap logical volumes: • Interleave device swap areas for better performance. Two swap areas on different disks perform better than one swap area with the equivalent amount of space. This configuration allows interleaved swapping, which means the swap areas are written to concurrently, thus enhancing performance. When using LVM, set up secondary swap areas within logical volumes that are on different disks using lvextend.
unlike the lvsplit approach, does not require the user to reduce a mirror copy from the original logical volume. Refer to the LVM Snapshot Logical Volumes white paper for more details on backups using snapshot logical volumes. Planning for Availability This section describes LVM features that can improve the availability and redundancy of your data.
Strict and Nonstrict Allocation Strict allocation requires logical extents to be mirrored to physical extents on different physical volumes. Nonstrict allocation allows logical extents to be mirrored to physical extents that may be on the same physical volume. The -s y and -s n options to the lvcreate or lvchange commands set strict or nonstrict allocation.
The frequency of extra disk writes is small for sequentially accessed logical volumes (such as database logs), but increases when access is more random. Therefore, logical volumes containing database data or file systems with few or infrequently written large files (greater than 256K) must not use the MWC when runtime performance is more important than crash recovery time. The -M option to the lvcreate or lvchange command controls the MWC.
option, lvsync spawns multiple threads to simultaneously synchronize all logical volumes belonging to the same volume group, often reducing the total synchronization time. TIP: The vgchange, lvmerge, and lvextend commands support the -s option to suppress the automatic synchronization of stale extents.
link fails, an automatic switch to an alternate link occurs. Using this type of multipathing (also called pvlinks) increases availability. NOTE: As of HP-UX 11i Version 3, the mass storage stack supports native multipathing without using LVM pvlinks. Native multipathing provides more load balancing algorithms and path management options than LVM. HP recommends using native multipathing to manage multipathed devices instead of using LVM's alternate links.
• “Increasing Performance Through Disk Striping” (page 31) • “Increasing Performance Through I/O Channel Separation” (page 33) General Performance Factors The following factors affect overall system performance, but not necessarily the performance of LVM. Memory Usage The amount of memory used by LVM is based on the values used at volume group creation time and on the number of open logical volumes. The largest portion of LVM memory is used for extent maps.
by I/O in progress, a given request might have to wait in a queue of requests until an entry becomes available. Another performance consideration for mirrored logical volumes is the method of reconciling inconsistencies between mirror copies after a system crash. Two methods of resynchronization are available: Mirror Consistency Recovery (MCR) and none. Whether you use the MWC depends on which aspect of system performance is more important to your environment, run time or recovery time.
the following consecutive I/Os that fall within the same unshare unit do not incur the overhead of an unshare operation. Refer to the LVM Snapshot Logical Volumes white paper for more information about snapshots and performance.
Figure 4 Interleaving Disks Among Buses • Increasing the number of disks might not improve performance because the maximum efficiency that can be achieved by combining disks in a striped logical volume is limited by the maximum throughput of the file system itself and by the buses to which the disks are attached. • Disk striping is highly beneficial for applications with few users and large, sequential transfers.
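The mechanics of striping are easy to visualize: chunk i of a striped logical volume lands on disk i mod n, so a large sequential transfer touches all n disks in turn. A small sketch with an illustrative helper (not an LVM command):

```shell
# Round-robin stripe placement: chunk i of a striped logical volume
# lands on disk (i mod number_of_disks).
stripe_map() {
    disks=$1 chunks=$2
    i=0
    while [ "$i" -lt "$chunks" ]; do
        echo "chunk $i -> disk $(( i % disks ))"
        i=$(( i + 1 ))
    done
}

# Six consecutive chunks across a three-disk stripe set:
stripe_map 3 6
```

With three disks, chunks 0, 3 land on disk 0, chunks 1, 4 on disk 1, and chunks 2, 5 on disk 2, which is why sequential throughput scales with the number of spindles until the bus or file system becomes the bottleneck.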
Interactions Between Mirroring and Striping

Mirroring a striped logical volume improves read I/O performance in the same way that it does for a nonstriped logical volume. Simultaneous read I/O requests targeting a single logical extent are served by two or three different physical volumes instead of one. A striped and mirrored logical volume follows a strict allocation policy; that is, the data is always mirrored on different physical volumes.
Planning for Recovery

Flexibility in configuration, one of the major benefits of LVM, can also be a source of problems in recovery. The following are guidelines to help create a configuration that minimizes recovery time:
• Keep the number of disks in the root volume group to a minimum; HP recommends using three disks, even if the root volume group is mirrored.
Preparing for LVM System Recovery To ensure that the system data and configuration are recoverable in the event of a system failure, follow these steps: 1. Load any patches for LVM. 2. Use Ignite-UX to create a recovery image of your root volume group. Although Ignite-UX is not intended to be used to back up all system data, you can use it with other data recovery applications to create a method of total system recovery. 3. Perform regular backups of the other important data on your system.
NOTE: For volume groups of Version 2.2 and higher that have snapshots on which data unsharing is occurring, the LVM configuration backup file might not always be in sync with the LVM metadata on disk. LVM ensures that the configuration for volume group Version 2.2 and higher is the latest by automatically backing it up during deactivation, unless backup has been disabled with the -A n option.
Example Script for LVM Configuration Recording The following example script captures the current LVM and I/O configurations. If they differ from the previously captured configuration, the script prints the updated configuration files and notifies the system administrator.
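The script itself can take many forms; what follows is a minimal sketch of the approach, not HP's original script. The record directory, label names, and mail notification are assumptions, and the HP-UX-specific commands (vgdisplay, ioscan, mailx) appear only in comments because they must run as root on a live system:

```shell
#!/bin/sh
# Sketch: capture the output of a command, compare it with the previously
# saved copy, and report when the configuration has changed.
RECORD_DIR=${RECORD_DIR:-/var/adm/lvm_records}   # assumed location
ADMIN=${ADMIN:-root}                             # assumed recipient

record() {
    # $1 = label for the saved file; remaining args = command to capture.
    # Returns 0 if the output is new or changed, 1 if unchanged.
    label=$1; shift
    new="$RECORD_DIR/$label.new" old="$RECORD_DIR/$label.prev"
    mkdir -p "$RECORD_DIR"
    "$@" > "$new" 2>&1
    if [ -f "$old" ] && cmp -s "$old" "$new"; then
        rm -f "$new"                     # unchanged; keep the previous record
        return 1
    fi
    [ -f "$old" ] && diff "$old" "$new"  # print what changed
    mv "$new" "$old"                     # save the updated record
    return 0
}

# On an HP-UX system (run as root, for example from cron):
#   if record vgdisplay vgdisplay -v; then
#       mailx -s "LVM configuration changed" "$ADMIN" < "$RECORD_DIR/vgdisplay.prev"
#   fi
#   record ioscan ioscan -funC disk
```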
3 Administering LVM

This section contains information on the day-to-day operation of LVM.
For help using HP SMH, see the HP SMH online help. • LVM command-line interface: LVM has a number of low-level user commands to perform LVM tasks, described in “Physical Volume Management Commands” (page 39), “Volume Group Management Commands” (page 39), and “Logical Volume Management Commands” (page 40). The following tables provide an overview of which commands perform a given task. For more information, see the LVM individual manpages.
Table 5 Volume Group Management Commands (continued) Task Command Handling online shared LVM reconfiguration, and pre-allocation of extents lvmpud for space-efficient snapshots Migrating a volume group to a different volume group version vgversion Migrating a volume group to new disks vgmove 1 To convert the cDSFs of a volume group in a particular node back to their corresponding persistent DSFs, use the vgscan -f command. For example: # vgscan -f vgtest *** LVMTAB has been updated successfully.
Displaying LVM Information

To display information about volume groups, logical volumes, or physical volumes, use one of three commands. Each command supports the -v option to display detailed output and the -F option to help with scripting. NOTE: For volume group Version 2.2 or higher, when snapshots are involved, additional fields are displayed by these commands. See the individual command manpages for a full description of the fields displayed.
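Because vgdisplay, lvdisplay, and pvdisplay produce line-oriented output, they post-process easily with awk; for example, a small illustrative helper that extracts the Free PE count from vgdisplay output (the sample output below is abridged):

```shell
# Extract the number of free physical extents from vgdisplay output.
free_pe() {
    awk '/^Free PE/ { print $3 }'
}

# With captured sample output (on a live system: vgdisplay vg00 | free_pe):
free_pe <<'EOF'
--- Volume groups ---
VG Name                     /dev/vg00
PE Size (Mbytes)            32
Total PE                    3119
Alloc PE                    1200
Free PE                     1919
EOF
```

The example prints 1919. The -F option produces colon-separated output designed for this kind of scripting and is the more robust choice when field labels might change.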
Information on Physical Volumes Use the pvdisplay command to show information about physical volumes.
0001 /dev/disk/disk42 0001 current
0002 /dev/disk/disk42 0002 current

Common LVM Tasks

This section addresses the following topics:
• "Initializing a Disk for LVM Use" (page 43)
• "Creating a Volume Group" (page 44)
• "Migrating a Volume Group to a Different Version: vgversion" (page 46)
• "Adding a Disk to a Volume Group" (page 51)
• "Removing a Disk from a Volume Group" (page 51)
• "Creating a Logical Volume" (page 52)
• "Extending a Logical Volume" (page 53)
• "Reducing a Logical Volume"
6. Initialize the disk as a physical volume using the pvcreate command. For example: # pvcreate /dev/rdisk/disk3 Use the character device file for the disk. If you are initializing a disk for use as a boot device, add the -B option to pvcreate to reserve an area on the disk for a LIF volume and boot utilities. If you are creating a boot disk on an HP Integrity server, make sure the device file specifies the HP-UX partition number (2). For example: # pvcreate -B /dev/rdisk/disk3_p2 NOTE: Version 2.
Use the block device file to include each disk in your volume group. You can assign all the physical volumes to the volume group with one command, or create the volume group with a single physical volume. No physical volume can already be part of an existing volume group. You can set volume group attributes using the following options: -V 1.0 Version 1.
specified, it cannot be changed. The default unshare unit size is 1024 KB if the -U option is not used. Below is an example of vgcreate with the -U option: # vgcreate -V 2.2 -S 4t -s 8 -U 2048 /dev/vg01 /dev/disk/disk20 Migrating a Volume Group to a Different Version: vgversion Beginning with the HP-UX 11i v3 March 2009 Update, LVM offers the new command vgversion, which enables you to migrate the current version of an existing volume group to any other version, except for migrating to Version 1.0.
Command Syntax

The vgversion syntax is:

vgversion [-r] [-v] [-U unshare_unit] -V vg_version_new vg_name

where
-r                Review mode. This allows you to review the operation before performing the actual volume group version migration.
-v                Verbose mode.
-U unshare_unit   Sets the unit at which data is unshared between a logical volume and its snapshots in the new volume group. This is applicable only for migration to volume group Version 2.2 or higher.
Version 2.1 requires more metadata, so it is possible that there is not enough space in the LUN for the increase in metadata. In this example, vgversion -r should display the following:

# vgversion -V 2.1 -r -v vg01
Performing "vgchange -a r -l -p -s vg01" to collect data
Activated volume group
Volume group "vg01" has been successfully activated.
The space required for Volume Group version 2.1 metadata on Physical Volume /dev/disk/disk12 is 8448 KB, but available free space is 1024 KB.
relocation. The bad block relocation policy of all logical volumes will be set to NONE. Volume Group version can be successfully changed to 2.1 Review complete. Volume group not modified 3. After messages from the review indicate a successful migration, you can begin the actual migration: a. Unlike in Review mode, the target volume group must meet certain conditions at execution time, including being de-activated.
CAUTION: The recovery script should be run only in cases where the migration unexpectedly fails, such as an interruption during migration execution. • The recovery script should not be used to “undo” a successful migration. For a successful vgversion migration, you should use only a subsequent vgversion execution (and not the recovery script) to reach the newly desired volume group version. Note that a migration to 1.0 is not supported, so no return path is available once a migration from Version 1.
Adding a Disk to a Volume Group

Often, as new disks are added to a system, they must be added to an existing volume group rather than creating a whole new volume group. If new disks are being added for user data, such as file systems or databases, do not add them to the root volume group. Instead, leave the root volume group with only the disks containing the root file system and system file systems such as /usr, /tmp, and so on. To add a disk to a volume group, follow these steps: 1.
2. After the disk no longer holds any physical extents, use the vgreduce command to remove it from the volume group. For example: # vgreduce /dev/vgnn /dev/disk/disk3 IMPORTANT: If you are using LVM pvlinks, as described in “Increasing Hardware Path Redundancy Through Multipathing” (page 27), you must run the vgreduce command for each link to the disk. Creating a Logical Volume To create a logical volume, follow these steps: 1. Decide how much disk space the logical volume needs.
-C y        Contiguous allocation
-C n        Noncontiguous allocation (default)

Mirror Scheduling Policy
-d p        Parallel scheduling (default)
-d s        Sequential scheduling

Mirror Consistency Policy
-M y        MWC enabled (default; optimal mirror resynchronization during crash recovery)
-M n -c y   MCR enabled (full mirror resynchronization during crash recovery)
-M n -c n   MCR disabled (no mirror resynchronization during crash recovery)

For example, to create a 240 MB mirrored logical volume with one mirror copy, nonstrict
3. Extend the logical volume. For example: # lvextend -L 332 /dev/vg00/lvol7 This increases the size of this volume to 332 MB. NOTE: On the HP-UX 11i v3 March 2010 Update, the size of a logical volume cannot be extended if it has snapshots associated with it. With the HP-UX 11i v3 September 2010 Update, this limitation is removed, and logical volumes with snapshots can be extended. For information about snapshot logical volumes, see "Creating and Administering Snapshot Logical Volumes" (page 104).
2. Decide on the new size of the logical volume. For example, if the logical volume holds a mounted file system, the new size must be greater than the space the data in the file system currently occupies. The bdf command shows the size of all mounted volumes. The first column shows the space allocated to the volume; the second shows how much is actually being used. The new size of the logical volume must be larger than the size shown in the second column of the bdf output.
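This size check can be scripted; the helper below is illustrative only. It reads one bdf data line on standard input and succeeds when the target size (in MB) exceeds the used space, on the assumption that bdf reports the used kilobytes in its third column:

```shell
# Succeed (exit 0) if a mounted file system can shrink to the target size.
# $1 = target size in MB; stdin = one bdf data line of the form:
#   Filesystem  kbytes  used  avail  %used  Mounted on
can_shrink_to() {
    awk -v target_kb=$(( $1 * 1024 )) '{ exit !(target_kb > $3) }'
}

echo "/dev/vg01/lvol4 1048576 202136 846440 19% /data" | can_shrink_to 300 \
    && echo "safe to reduce" || echo "too small: data would be lost"
```

The example prints "safe to reduce" because 300 MB (307200 KB) exceeds the 202136 KB in use. This is only a sanity check; always back up the data before running lvreduce, since extents removed from a logical volume are not recoverable.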
Removing a Mirror from a Logical Volume To remove a mirror copy, use the lvreduce command, specifying the number of mirror copies you want to leave. For example, to remove all mirrors of a logical volume, enter the following command: # lvreduce -m 0 /dev/vg00/lvol1 This reduces the number of mirror copies to 0, so only the original copy is left. To remove the mirror copy from a specific disk, use lvreduce and specify the disk from which to remove the mirror copy.
# lvremove /dev/vg01/lvol5 3. You can now use this space to extend an existing logical volume or build a new logical volume. For volume group Version 2.2 and higher, a snapshot and all its predecessors can be removed using a single lvremove command, with the new -F option. See lvremove(1M) for more information. NOTE: A logical volume with associated snapshots cannot be removed. First, all of its snapshots have to be deleted, then the original logical volume can be deleted.
4. Activate the volume group as follows: # vgchange -a y vgnn NOTE: If the volume group contains any multipathed disks, HP recommends using HP-UX's native multipathing that is a superset of LVM's alternate links. See “Increasing Hardware Path Redundancy Through Multipathing” (page 27) for more information. If you want to use LVM's alternate link features, importing the volume group has several implications: • You must omit the -N option to the vgimport command.
1. Run vgmodify to collect information about the volume group. Save the output from these three commands:
# vgmodify -o -r vgnn
# vgmodify -v -t vgnn
# vgmodify -v -n -t vgnn
The -o option attempts to optimize the values by making full use of the existing LVM metadata space. The -t option reports the optimized range of settings without renumbering physical extents; the -n option enables renumbering of physical extents.
/dev/rdisk/disk6     896   32768
/dev/rdisk/disk5     896   32768
Summary              896   32768

Volume Group optimized settings (no PEs renumbered):
max_pv(-p)  max_pe(-e)  Disk size (Mb)
         2       53756         1720193
         3       35836         1146753
...
       213         296            9473
       255         252            8065

# vgmodify -v -n -t vg32
Volume Group configuration for /dev/vg32 has been saved in /etc/lvmconf/vg32.
New Volume Group settings:
Max LV                 255
Max PV                 255
Max PE per PV          15868
PE Size (Mbytes)       32
VGRA Size (Kbytes)     32640
Review complete. Volume group not modified

5. Deactivate the volume group:
# vgchange -a n vg32
Volume group "vg32" has been successfully changed.

6. Commit the new values:
# vgmodify -p 255 -e 15868 -n vg32
Current Volume Group settings:
Max LV
Max PV
Max PE per PV
PE Size (Mbytes)
VGRA Size (Kbytes)
The current and new Volume Group parameters differ.
Total Spare PVs in use   0
VG Version               1.0

vgmodify for a Version 2.x Volume Group

If the maximum volume group size (chosen when the Version 2.
1. Use the review mode of vgmodify to verify that the volume group maximum size can be decreased to the desired lower value:
# vgmodify -r -a -S 32t vg1
2. If review mode indicates that the maximum VG size can be decreased, perform the actual reprovisioning reconfiguration. The vgmodify command reconfigures every PV in the VG to reduce the amount of space being used for LVM configuration data. The unused space is made available as new extents for user data:
# vgmodify -a -S 32t vg1
Physical volume "/dev/disk/disk46" was not changed.
Physical volume "/dev/disk/disk4" requires reconfiguration to be provisioned to the requested maximum volume group size 8388608 MB.
Current number of extents: 25602
Number of extents after reconfiguration: 25602
...
Physical volume "/dev/disk/disk47" was not changed.
In this example, all physical volumes in the volume group can be reconfigured, so the maximum Volume Group size of /dev/vg1 can be increased from 500 GB to 8 TB.
--- Logical volumes --LV Name LV Status available/syncd LV Size (Mbytes) Current LE Allocated PE Used PV --- Physical volumes --PV Name PV Status Total PE Free PE ...
Renaming a Volume Group To change the name of a volume group, export it, then import it using the new name. For more detailed information on how to export and import a volume group, see “Exporting a Volume Group” (page 57) and “Importing a Volume Group” (page 57). To rename the volume group vg01 to vgdb, follow these steps: 1. Deactivate the volume group as follows: # vgchange -a n vg01 2.
To keep /dev/disk/disk0 and /dev/disk/disk1 in vgold and split the remaining physical volumes into a new volume group named vgnew, follow these steps: 1. Deactivate the volume group as follows: # vgchange -a n vgold 2. Export the volume group as follows: # vgexport vgold 3. Change the VGID on the physical volumes to be assigned to the new volume group as follows: # vgchgid -f /dev/rdisk/disk2 /dev/rdisk/disk3 \ /dev/rdisk/disk4 /dev/rdisk/disk5 4. 5. 6.
2. Find the names of all logical volumes and physical volumes in the volume group. Enter the following command: # vgdisplay -v /dev/vgnn 3. Make sure that none of those logical volumes are in use. This may require stopping applications using any logical volumes in the volume group, and unmounting file systems contained in the volume group. Use the fuser command on each logical volume: # fuser -cu /dev/vgnn/lvoln 4. Remove each of the logical volumes as follows: # lvremove /dev/vgnn/lvoln 5.
Backing Up and Restoring Volume Group Configuration It is important that volume group configuration information be saved whenever you make any change to the configuration such as: • Adding or removing disks to a volume group • Changing the disks in a root volume group • Creating or removing logical volumes • Extending or reducing logical volumes Unlike fixed disk partitions or nonpartitioned disks that begin and end at known locations on a given disk, each volume group configuration is unique, chang
Make sure backups of the root volume group are on the root file system, in case these are required during recovery. To run vgcfgrestore, the physical volume must be detached. If all the data on the physical volume is mirrored and the mirror copies are current and available, you can temporarily detach the physical volume using pvchange, perform the vgcfgrestore, and reattach the physical volume.
Moving Disks Within a System

There are two procedures for moving the disks in a volume group to different hardware locations on a system. Choose the procedure depending on whether you use persistent or legacy device files for your physical volumes; the types of device files are described in "Legacy Device Files versus Persistent Device Files" (page 13).

LVM Configuration with Persistent Device Files

If your LVM configuration uses persistent device files, follow these steps: 1.
10. Back up the volume group configuration as follows: # vgcfgbackup /dev/vgnn Moving Disks Between Systems To move the disks in a volume group to different hardware locations on a different system, export the volume group from one system, physically move the disks to the other system, and import the volume group there. The procedures for exporting and importing a volume are described in “Exporting a Volume Group” (page 57) and “Importing a Volume Group” (page 57).
/dev/vg01/markets from the disk /dev/disk/disk4 to the disk /dev/disk/disk7, enter the following: # pvmove -n /dev/vg01/markets /dev/disk/disk4 /dev/disk/disk7 On the other hand, you can move all the data contained on one disk, regardless of which logical volume it is associated with, to another disk within the same volume group. For example, do this to remove a disk from a volume group.
de Specifies the starting location of the destination physical extents within a destination physical volume. se1 [-se2] Defines the source physical extent range, provided along with source physical volume. Beginning with the September 2009 Update, pvmove additionally provides these options: -a Moves data to achieve auto-rebalance of disk space usage within a volume group. Supported for Version 2.x volume groups only. See “Moving Data for Disk Space Balancing: Auto Re-balancing” (page 74).
Usage

There are three different ways to use the -a option:
• pvmove -a vg_name: all logical volumes within the volume group vg_name are auto re-balanced.
• pvmove -a lv_path [pv_path | pvg_name]: only the logical volumes specified by lv_path are auto re-balanced.
Volume Group configuration for /dev/vg_01 has been saved in /etc/lvmconf/vg_01.conf
NOTE: In the automatic re-balance mode, the pvmove command tries to achieve an optimal rebalance but does not guarantee one; in some scenarios a manual rebalance can produce a better result than the pvmove auto rebalance operation.
# pvchange -x y /dev/disk/disk1 5. Use the pvmove command to move the data from the spare to the replaced physical volume. For example: # pvmove /dev/disk/disk3 /dev/disk/disk1 The data from the spare disk is now back on the original disk or its replacement, and the spare disk is returned to its role as a standby empty disk. For more information on disk sparing, see “Increasing Disk Redundancy Through Disk Sparing” (page 27).
NOTE: Beginning with the September 2009 Update, the vgmodify command also supports Version 2.x volume groups, but the volume groups must be in active mode to run vgmodify. For more information, see vgmodify(1M). If the LUN for a physical volume was dynamically expanded using an array utility, you can use the vgmodify -E option to make the new space available for user data. After the physical volume is reconfigured, you can expand logical volumes with lvextend, or you can create new logical volumes with lvcreate.
Max PV                      16
Max PE per PV               1016
PE Size (Mbytes)            32
VGRA Size (Kbytes)          176
Review complete. Volume group not modified
The expanded physical volume requires 3051 physical extents to use all its space, but the current max_pe value limits this to 1016. 3.
The table shows that if physical extents are renumbered, all values of max_pv permit a max_pe large enough to accommodate the increased physical volume size. For this example, select a max_pv of 10, which permits a max_pe value of 10748. 4. Preview the changes by using the -r option to vgmodify as follows: # vgmodify -p 10 -e 10748 -r vg32
Current Volume Group settings:
Max LV                      255
Max PV                      16
Max PE per PV               1016
PE Size (Mbytes)            32
VGRA Size (Kbytes)          176
The current and new Volume Group parameters differ.
7. Activate the volume group and verify the changes by entering the following commands: # vgchange -a y vg32
Activated volume group
Volume group "vg32" has been successfully changed.
# vgdisplay vg32
--- Volume groups ---
VG Name                     /dev/vg32
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      0
Open LV                     0
Max PV                      10
Cur PV                      2
Act PV                      2
Max PE per PV               10748
VGDA                        4
PE Size (Mbytes)            32
Total PE                    3119
Alloc PE                    0
Free PE                     3119
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0
VG Version                  1.
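Two pieces of arithmetic underlie the vgmodify figures in this example and can be checked with shell arithmetic: the number of extents a disk needs is its size divided by the extent size, rounded up, and the capacity a given max_pe permits is max_pe times the extent size. The 97632 MB disk size below is a hypothetical value consistent with the 3051-extent figure reported earlier in this example; vgmodify reports the real numbers for your disk.

```shell
# Ceiling division: extents needed to cover a disk of size_mb
# megabytes with pe_size_mb-megabyte extents.
extents_needed() {
    size_mb=$1; pe_size_mb=$2
    echo $(( (size_mb + pe_size_mb - 1) / pe_size_mb ))
}

# Capacity (MB) addressable when a PV is limited to max_pe extents.
capacity_mb() {
    max_pe=$1; pe_size_mb=$2
    echo $((max_pe * pe_size_mb))
}

extents_needed 97632 32   # -> 3051 (hypothetical disk size)
capacity_mb 1016 32       # -> 32512  (the old max_pe limit)
capacity_mb 10748 32      # -> 343936 (the new max_pe value)
```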
3. Verify that the physical volumes were reconfigured and that there are new extents available with the vgdisplay –v command: # vgdisplay -v vg1 --- Volume groups --VG Name ... PE Size (Mbytes) Total PE Alloc PE Free PE … VG Version VG Max Size VG Max Extents … --- Logical volumes --LV Name LV Status LV Size (Mbytes) Current LE Allocated PE Used PV --- Physical volumes --PV Name PV Status Total PE Free PE … PV Name PV Status Total PE Free PE /dev/vg1 8 51180 25580 25600 2.
Handling Size Decreases CAUTION: A similar procedure can also be used when the size of a physical volume is decreased. However, there are limitations: • Sequence: The sequence must be reversed to avoid data corruption.
For an increase in size, the sequence is:
1. Increase the LUN size from the array side.
2. Then, increase the volume group size from the LVM side.
For a decrease in size, the sequence is:
1. Decrease the volume group size from the LVM side.
information. If a physical volume was accidentally initialized as bootable, you can convert the disk to a nonbootable disk and reclaim LVM metadata space. CAUTION: The boot volume group requires at least one bootable physical volume. Do not convert all of the physical volumes in the boot volume group to nonbootable, or your system will not boot. To change a disk type from bootable to nonbootable, follow these steps: 1. Use vgcfgrestore to determine if the volume group contains any bootable disks. 2.
max_pv   max_pe   Disk size (MB)
1        65535    2097120
2        45820    1466240
...      ...      ...
255      252      8064
If you change the disk type, the VGRA space available increases from 768 KB to 2784 KB (if physical extents are not renumbered) or 32768 KB (if physical extents are renumbered). Changing the disk type also permits a larger range of max_pv and max_pe. For example, if max_pv is 255, the bootable disk can only accommodate a disk size of 8064 MB, but after conversion to nonbootable, it can accommodate a disk size of 40834 MB. 3.
"/etc/lvmconf/vg01.conf.old" Volume group "vg01" has been successfully changed. 6. Activate the volume group and verify the changes as follows: # vgchange -a y vg01 Activated volume group Volume group "vg01" has been successfully changed. # vgcfgbackup vg01 Volume Group configuration for /dev/vg01 has been saved in /etc/lvmconf/vg01.conf # vgcfgrestore -l -v -n vg01 Volume Group Configuration information in "/etc/lvmconf/vg01.
Detaching a link does not disable sparing. That is, if all links to a physical volume are detached and a suitable spare physical volume is available in the volume group, LVM uses it to reconstruct the detached disk. For more information on sparing, see “Increasing Disk Redundancy Through Disk Sparing” (page 27). You can view the LVM status of all links to a physical volume using vgdisplay with the -v option.
1. Create a bootable physical volume. a. On an HP Integrity server, partition the disk using the idisk command and a partition description file, then run insf as described in “Mirroring the Boot Disk on HP Integrity Servers” (page 91). b. Run pvcreate with the -B option. On an HP Integrity server, use the device file denoting the HP-UX partition: # pvcreate -B /dev/rdisk/disk6_p2 On an HP 9000 server, use the device file for the entire disk: # pvcreate -B /dev/rdisk/disk6 2.
/dev/disk/disk6 -- Boot Disk
Boot: bootlv on: /dev/disk/disk6
Root: rootlv on: /dev/disk/disk6
Swap: swaplv on: /dev/disk/disk6
Dump: swaplv on: /dev/disk/disk6, 0
15. Once the boot and root logical volumes are created, create file systems for them. For example: # mkfs -F hfs /dev/vgroot/rbootlv # mkfs -F vxfs /dev/vgroot/rrootlv NOTE: On HP Integrity servers, the boot file system can be VxFS.
2. Create a bootable physical volume as follows: # pvcreate -B /dev/rdisk/disk4 3. Add the physical volume to your existing root volume group as follows: # vgextend /dev/vg00 /dev/disk/disk4 4. Place boot utilities in the boot area as follows: # mkboot /dev/rdisk/disk4 5. Add an autoboot file to the disk boot area as follows: # mkboot -a "hpux" /dev/rdisk/disk4 NOTE: If you expect to boot from this disk only when you lose quorum, you can use the alternate string hpux -lq to disable quorum checking.
TIP: To shorten the time required to synchronize the mirror copies, use the lvextend and lvsync command options introduced in the September 2007 release of HP-UX 11i Version 3. These options enable you to resynchronize logical volumes in parallel rather than serially. For example:
# lvextend -m 1 -s /dev/vg00/lvol1 /dev/disk/disk4
# lvextend -m 1 -s /dev/vg00/lvol2 /dev/disk/disk4
# lvextend -m 1 -s /dev/vg00/lvol3 /dev/disk/disk4
# lvextend -m 1 -s /dev/vg00/lvol4 /dev/disk/disk4
# lvextend -m 1 -s /dev/vg00/lvol5 /dev/disk/disk4
# lvextend -m 1 -s /dev/vg00/lvol6 /dev/disk/disk4
# lvextend -m 1 -s /dev/vg00/lvol7 /dev/disk/disk4
# lvextend -m 1 -s /dev/vg00/lvol8 /dev/disk/disk4
# lvsync -T /dev/vg00/lvol*
8.
For this example, the disk to be added is at hardware path 0/1/1/0.0x1.0x0, with device special files named /dev/disk/disk2 and /dev/rdisk/disk2. Follow these steps: 1. Partition the disk using the idisk command and a partition description file. a. Create a partition description file.
00271 current /dev/vg00/lvol7 00000 00408 current /dev/vg00/lvol8 00000 8. Mirror each logical volume in vg00 (the root volume group) onto the specified physical volume. For example: # lvextend -m 1 /dev/vg00/lvol1 /dev/disk/disk2_p2 The newly allocated mirrors are now being synchronized. This operation will take some time. Please wait .... # lvextend -m 1 /dev/vg00/lvol2 /dev/disk/disk2_p2 The newly allocated mirrors are now being synchronized. This operation will take some time. Please wait ....
12. Add a line to /stand/bootconf for the new boot disk using vi or another text editor as follows: # vi /stand/bootconf l /dev/disk/disk2_p2 Where the literal “l” (lowercase L) represents LVM. Migrating a Volume Group to New Disks: vgmove Beginning with the September 2009 Update, LVM provides a new vgmove command to migrate data in a volume group from an old set of disks to a new set of disks.
1. Instead of manually creating a diskmap file that maps the old source disks to new destination disks, use the -i option to generate a diskmap file for the migration. The user provides a list of destination disks, called newdiskfile in this example. # cat newdiskfile /dev/disk/disk10 /dev/disk/disk11 # vgmove -i newdiskfile -f diskmap.txt /dev/vg00 2. The resulting diskmap.txt file contains the mapping of old source disks to new destination disks: # cat diskmap.
Administering File System Logical Volumes This section describes special actions you must take when working with file systems inside logical volumes. It addresses the following topics: • “Creating a File System” (page 96) • “Extending a File System” (page 97) • “Reducing the Size of a File System” (page 98) • “Backing Up a VxFS Snapshot File System” (page 100) TIP: When dealing with file systems, you can use HP SMH or a sequence of HP-UX commands. For most tasks, using HP SMH is quicker and simpler.
Extending a File System Extending a file system inside a logical volume is a two-step task: extending the logical volume, then extending the file system. The first step is described in “Extending a Logical Volume” (page 53). The second step, extending the file system itself, depends on the following factors: • What type of file system is involved? Is it HFS or VxFS? HFS requires the file system to be unmounted to be extended. Check the type of file system.
# /sbin/lvextend -L 332 /dev/vg01/lvol2 This increases the size of this volume to 332 MB. 3. Extend the file system size to the logical volume size. If the file system is unmounted, use the extendfs command as follows: # /sbin/extendfs /dev/vg01/rlvol2 If you did not have to unmount the file system, use the fsadm command instead. The new size is specified in terms of the block size of the file system. In this example, the block size of the file system /work/project5 is 1 KB.
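The size argument for fsadm is expressed in file system blocks; with the 1 KB block size of this example, the 332 MB logical volume converts as sketched below. The fsadm invocation in the trailing comment is illustrative only and assumes a mounted VxFS file system.

```shell
# Convert a size in MB to file system blocks for fsadm:
# blocks = (size_mb * 1024 KB) / block_size_kb
fs_blocks() {
    size_mb=$1; block_size_kb=$2
    echo $((size_mb * 1024 / block_size_kb))
}

fs_blocks 332 1   # -> 339968
# The result would then be used as, for example:
#   /usr/sbin/fsadm -F vxfs -b 339968 /work/project5
```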
Reducing a File System Created with OnlineJFS Using the fsadm command shrinks the file system, provided the blocks it attempts to deallocate are not currently in use; otherwise, it fails. If sufficient free space is currently unavailable, file system defragmentation of both directories and extents might consolidate free space toward the end of the file system, allowing the contraction process to succeed when retried. For example, suppose your VxFS file system is currently 6 GB.
Backing Up a VxFS Snapshot File System NOTE: Creating and backing up a VxFS snapshot file system requires that you have the optional HP OnlineJFS product installed on your system. For more information, see HP-UX System Administrator's Guide: Configuration Management. VxFS enables you to perform backups without taking the file system offline by making a snapshot of the file system, a read-only image of the file system at a moment in time. The primary file system remains online and continues to change.
Administering Primary Swap Logical Volumes NOTE: Version 2.0 and 2.1 volume groups do not support configuring a primary swap logical volume through the lvlnboot(1M) command. However, they do support configuring a swap logical volume through the swapon(1M) command. See “Administering Secondary Swap Logical Volumes” (page 102). When you enable a swap area within a logical volume, HP-UX determines how large the area is, and it uses no more space than that.
Reducing the Size of a Swap Device If you are using a logical volume for swap, you must reduce the swap size before reducing the size of the logical volume. You can reduce the size of the logical volume using lvreduce or HP SMH. NOTE: Changes to the primary swap configuration, such as configuring another logical volume as swap or changing its size, take effect in the swap subsystem only after a reboot.
For more information, see lvcreate(1M). After creating a logical volume to be used as a dump device, use the lvlnboot command with the -d option to update the dump information used by LVM. If you created a logical volume /dev/vg00/lvol2 for use as a dump area, update the boot information by entering the following: # lvlnboot -d /dev/vg00/lvol2 Removing a Dump Logical Volume To discontinue the use of a currently configured logical volume as a dump device, use the lvrmboot command with the -d option.
When you configure a non-root logical volume as swap or dump, ensure that the AUTO_VG_ACTIVATE setting in /etc/lvmrc is turned on:
# grep "AUTO_VG_ACTIVATE=" /etc/lvmrc
AUTO_VG_ACTIVATE=1
Without this setting, logical volumes from a non-root volume group are not configured as swap/dump devices after a reboot, because the corresponding volume group stays deactivated. NOTE: The root volume group is activated on every reboot regardless of the AUTO_VG_ACTIVATE setting.
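The AUTO_VG_ACTIVATE check can be scripted. In this sketch the file path is a parameter rather than hard-coded, so the function can be exercised against a scratch copy; on a real system you would pass /etc/lvmrc. The /tmp path below is hypothetical.

```shell
# Report whether AUTO_VG_ACTIVATE is enabled in an lvmrc-style file.
check_auto_activate() {
    lvmrc=$1
    if grep -q '^AUTO_VG_ACTIVATE=1' "$lvmrc"; then
        echo enabled
    else
        echo disabled
    fi
}

# Example against a scratch copy (hypothetical path):
printf 'AUTO_VG_ACTIVATE=1\n' > /tmp/lvmrc.sample
check_auto_activate /tmp/lvmrc.sample   # -> enabled
```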
Types of Snapshots Snapshots can be of two types: fully allocated and space-efficient. • When a fully allocated snapshot is created, the number of extents required for the snapshot is allocated immediately, just as for a normal logical volume. However, the data contained in the original logical volume is not copied over to these extents. The copying of data occurs through the data unsharing process.
Refer to the lvremove(1M) manpage for more details and the full syntax for deleting logical volumes on the snapshot tree. Displaying Snapshot Information The vgdisplay, lvdisplay, and pvdisplay commands display additional information when snapshots are involved. A summary of the additional fields displayed by these commands is listed here. See the respective manpages and the LVM Snapshot Logical Volumes white paper for more detailed information.
NOTE: The value of “Pre-allocated LE” should be the sum of “Current pre-allocated LE” and “Unshared LE.” However, while an operation that changes the logical volume size is in progress, or while extents are being unshared, the displayed values might not reflect this. The correct information is displayed once the operation is complete. For more information about the LVM snapshot feature and limitations when snapshots are involved, see the lvm(7) manpage and the LVM Snapshot Logical Volumes white paper.
Administration of boot disks of size greater than 2 TB There is no change in LVM command interfaces when administering boot disks of size greater than 2 TB. Administration of boot disks greater than 2 TB is done in the same way as for smaller disks. However, there are a few compatibility constraints, which are discussed in the next section. Starting with the March 2013 release, LVM allows you to configure system logical volumes (root, boot, swap, and dump logical volumes) up to 16 TB in size using the lvlnboot(1M) command.
Hardware Requirements Currently, the feature (support for boot disk size greater than 2 TB) is supported with the following I/O cards: • SAS HBAs: 51378-B21(P711m), AM311A(P411), AM312A(P812), Internal HBA P410i • Fibre Channel HBAs: 403619-B21, 403621-B21, 451871-B21, 456972-B21, AD193A, AD194A, AD221A, AD222A, AD393A, AH400A, AH401A, AH402A, AH403A, AT094A NOTE: For an updated list of cards that support this feature, see the support matrixes at http://www.hp.
4 Troubleshooting LVM This chapter provides conceptual troubleshooting information as well as detailed procedures to help you plan for LVM problems, troubleshoot LVM, and recover from LVM failures.
Max PV Size (Tbytes)        16
Max VGs                     2048
Max LVs                     2047
Max PVs                     2048
Max Mirrors                 5
Max Stripes                 511
Max Stripe Size (Kbytes)    262144
Max LXs per LV              33554432
Max PXs per PV              16777216
Max Extent Size (Mbytes)    256
If your release does not support Version 2.1 volume groups, it displays the following: # lvmadm -t -V 2.1 Error: 2.1 is an invalid volume group version. • To display the contents of the /etc/lvmtab and /etc/lvmtab_p files in a human-readable fashion.
A maintenance mode boot differs from a standard boot as follows: • The system is booted in single-user mode. • No volume groups are activated. • Primary swap and dump are not available. • Only the root file system and boot file system are available. • If the root file system is mirrored, only one copy is used. Changes to the root file system are not propagated to the mirror copies, but those mirror copies are marked stale and will be synchronized when the system boots normally.
Temporarily Unavailable Device By default, LVM retries I/O requests with recoverable errors until they succeed or the system is rebooted. Therefore, if an application or file system stalls, your troubleshooting must include checking the console log for problems with your disk drives and taking action to restore the failing devices to service.
Media Errors If an I/O request fails because of a media error, LVM typically prints a message to the console log file (/var/adm/syslog/syslog.log) when the error occurs. In the event of a media error, you must replace the disk (see “Disk Troubleshooting and Recovery Procedures” (page 119)). If your disk hardware supports automatic bad block relocation (usually known as hardware sparing), enable it, because it minimizes media errors seen by LVM. NOTE: LVM does not perform software relocation of bad blocks.
or is not configured into the kernel. vgchange: Couldn't activate volume group "/dev/vg01": Either no physical volumes are attached or no valid VGDAs were found on the physical volumes. If a nonroot volume group does not activate because of a failure to meet quorum, follow these steps: 1. Check the power and data connections (including Fibre Channel zoning and security) of all the disks that are part of the volume group that you cannot activate.
# vgchange -a y /dev/vgtest vgchange: Error: The "lvmp" driver is not loaded. Here is another possible error message: # vgchange -a y /dev/vgtest vgchange: Warning: Couldn't attach to the volume group physical volume "/dev/disk/disk1": Illegal byte sequence vgchange: Couldn't activate volume group "/dev/vgtest": Quorum not present, or some physical volume(s) are missing.
Max VG Size (Tbytes)        2048
Max LV Size (Tbytes)        256
Max PV Size (Tbytes)        16
Max VGs                     512
Max LVs                     511
Max PVs                     511
Max Mirrors                 6
Max Stripes                 511
Max Stripe Size (Kbytes)    262144
Max LXs per LV              33554432
Max PXs per PV              16777216
Max Extent Size (Mbytes)    256
TIP: If your system has no Version 2.x volume groups, you can free up system resources associated with lvmp by unloading it from the kernel.
LVM Boot Failures There are several reasons why an LVM configuration cannot boot. In addition to the problems associated with boots from non-LVM disks, the following problems can cause an LVM-based system not to boot. Insufficient Quorum In this scenario, not enough disks are present in the root volume group to meet the quorum requirements.
successfully backed up in this step will be recoverable, but some or all of your data might not be successfully backed up because of file corruption.
3. Immediately unmount the corrupted file system if it is mounted.
4. Use the logical volume for swap space or raw data storage, or use HP SMH or the newfs command to create a new file system in the logical volume. This new file system now matches the current reduced size of the logical volume.
5.
The LVM OLR feature uses a new option (-a) in the pvchange command. The -a option disables or re-enables a specified path to an LVM disk, as used to halt LVM access to the disk under “Step 6: Replacing a Bad Disk (Persistent DSFs)” (page 132) or “Step 7: Replacing a Bad Disk (Legacy DSFs)” (page 140). For more information, see the pvchange(1M) manpage.
Step 2: Recognizing a Failing Disk This section explains how to look for signs that one of your disks is having problems, and how to determine which disk it is. I/O Errors in the System Log Often an error message in the system log file, /var/adm/syslog/syslog.log, is your first indication of a disk problem.
this volume group vgdisplay: Warning: couldn't query all of the physical volumes. # vgchange -a y /dev/vg01 vgchange: Warning: Couldn't attach to the volume group physical volume "/dev/dsk/c0t3d0": A component of the path of the physical volume does not exist. Volume group "/dev/vg01" has been successfully changed. Another sign of a disk problem is stale extents in the lvdisplay command output.
Step 3: Confirming Disk Failure Once you suspect a disk has failed or is failing, make certain that the suspect disk is indeed failing. Replacing or removing the incorrect disk makes the recovery process take longer. It can even cause data loss. For example, in a mirrored configuration, if you were to replace the wrong disk—the one holding the current good copy rather than the failing disk—the mirrored data on the good disk is lost. It is also possible that the suspect disk is not failing.
# dd if=/dev/rdsk/c0t5d0 of=/dev/null bs=1024k count=64 64+0 records in 64+0 records out NOTE: If the dd command hangs or takes a long time, Ctrl+C stops the read on the disk. To run dd in the background, add & at the end of the command. The following command shows an unsuccessful read of the whole disk: # dd if=/dev/rdsk/c1t3d0 of=/dev/null bs=1024k dd read error: I/O error 0+0 records in 0+0 records out 4.
Note that the calculated value is used in the skip argument. The count is obtained by multiplying the PE size by 1024.
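As a sketch of that arithmetic, assuming bs=1024 for dd: the count is the extent size in MB times 1024, and the skip is the offset (in KB) of the first physical extent plus the extent number times the count. All three input values below are hypothetical; take the real extent size and disk layout from your own LVM configuration (for example, from a vgcfgrestore -l -v listing).

```shell
# Hypothetical inputs -- substitute values from your own configuration.
pe_start_kb=1024   # assumed KB offset of physical extent 0 on the disk
pe_size_mb=32      # volume group extent size
extent=10          # extent number to read

count=$((pe_size_mb * 1024))            # dd count, in 1 KB blocks
skip=$((pe_start_kb + extent * count))  # dd skip, in 1 KB blocks

echo "skip=$skip count=$count"   # -> skip=328704 count=32768
# Illustrative use (hypothetical device file):
#   dd if=/dev/rdsk/c0t5d0 of=/dev/null bs=1024 skip=328704 count=32768
```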
Step 4: Determining Action for Disk Removal or Replacement Once you know which disk is failing, you can decide how to deal with it. You can choose to remove the disk if your system does not need it, or you can choose to replace it. Before deciding on your course of action, you must gather some information to help guide you through the recovery process.
command shows '???' for the physical volume if it is unavailable. The issue with this approach is that it does not show precisely how many disks are unavailable. To ensure that multiple simultaneous disk failures have not occurred, run vgdisplay to check the difference between the number of active and the number of current physical volumes. For example, a difference of one means only one disk is failing.
Based on the gathered information, choose the appropriate disk removal or disk replacement procedure, detailed on the following pages.
Step 5: Removing a Bad Disk You can elect to remove the failing disk from the system instead of replacing it if you are certain that another valid copy of the data exists or the data can be moved to another disk. Removing a Mirror Copy from a Disk If you have a mirror copy of the data already, you can stop LVM from using the copy on the failing disk by reducing the number of mirrors. To remove the mirror copy from a specific disk, use lvreduce, and specify the disk from which to remove the mirror copy.
The physical volume key of a disk indicates its order in the volume group. The first physical volume has the key 0, the second has the key 1, and so on. This need not be the order of appearance in the /etc/lvmtab file, although it is usually the case, at least when a volume group is initially created. You can use the physical volume key to address a physical volume that is not attached to the volume group.
Total PE                1023
Free PE                 1023
Allocated PE            0
Stale PE                0
IO Timeout (Seconds)    default
Autoswitch              On
In this example, there are two entries for PV Name. Use the vgreduce command to reduce each path as follows: # vgreduce vgname /dev/dsk/c0t5d0 # vgreduce vgname /dev/dsk/c1t6d0 If the disk is unavailable, the vgreduce command fails. You can still forcibly reduce it, but you must then rebuild the lvmtab, which has two side effects.
Step 6: Replacing a Bad Disk (Persistent DSFs) If, instead of removing the disk, you need to replace it, this section provides a step-by-step guide to replacing a faulty LVM disk on systems configured with persistent DSFs. For systems using legacy DSFs, see “Step 7: Replacing a Bad Disk (Legacy DSFs)” (page 140). If you have any questions about the recovery process, contact your local HP Customer Response Center for assistance.
If the disk is hot-swappable, replace it. If the disk is not hot-swappable, shut down the system, turn off the power, and replace the disk. Reboot the system. 4. Notify the mass storage subsystem that the disk has been replaced. If the system was not rebooted to replace the failed disk, then run scsimgr before using the new disk as a replacement for the old disk.
8. Restore LVM access to the disk. If you did not reboot the system in Step 2, “Halt LVM access to the disk,” reattach the disk as follows: # pvchange –a y /dev/disk/disk14 If you did reboot the system, reattach the disk by reactivating the volume group as follows: # vgchange -a y /dev/vgnn NOTE: The vgchange command with the -a y option can be run on a volume group that is deactivated or already activated.
b. If fuser reports process IDs using the logical volume, use the ps command to map the list of process IDs to processes, and then determine whether you can halt those processes. For example, look up processes 27815 and 27184 as follows:
# ps -fp27815 -p27184
UID     PID   PPID  C STIME    TTY    TIME COMMAND
root  27815  27184  0 09:04:05 pts/0  0:00 vi test.c
root  27184  27182  0 08:26:24 pts/0  0:00 -sh
c. If so, use fuser with the -k option to kill all processes accessing the logical volume.
6. Assign the old instance number to the replacement disk. For example: # io_redirect_dsf -d /dev/disk/disk14 -n /dev/disk/disk28 This assigns the old LUN instance number (14) to the replacement disk. In addition, the device special files for the new disk are renamed to be consistent with the old LUN instance number.
Replacing a Mirrored Boot Disk There are two additional operations you must perform when replacing a mirrored boot disk: 1. You must initialize boot information on the replacement disk. 2. If the replacement requires rebooting the system, and the primary boot disk is being replaced, you must boot from the alternate boot disk. In this example, the disk to be replaced is at lunpath hardware path 0/1/1/1.0x3.0x0, with device special files named /dev/disk/disk14 and /dev/rdisk/disk14.
For information on the boot process and how to select boot options, see HP-UX System Administrator's Guide: Configuration Management. 4. Notify the mass storage subsystem that the disk has been replaced. If the system was not rebooted to replace the failed disk, then run scsimgr before using the new disk as a replacement for the old disk.
8. Restore LVM configuration information to the new disk. For example: # vgcfgrestore -n /dev/vg00 /dev/rdisk/disk14_p2 NOTE: On an HP 9000 server, the boot disk is not partitioned, so the physical volume refers to the entire disk, not the HP-UX partition. Use the following command: # vgcfgrestore -n /dev/vg00 /dev/rdisk/disk14 9. Restore LVM access to the disk.
Step 7: Replacing a Bad Disk (Legacy DSFs) Follow these steps to replace a bad disk if your system is configured with only legacy DSFs. NOTE: LVM recommends the use of persistent device special files, because they support a greater variety of load balancing options. For replacing a disk with persistent device special files, see “Step 6: Replacing a Bad Disk (Persistent DSFs)” (page 132). To replace a bad disk, follow these steps. 1. Halt LVM Access to the Disk.
3. Initialize the Disk for LVM This step copies LVM configuration information onto the disk, and marks it as owned by LVM so it can subsequently be attached to the volume group. If you replaced a mirror of the root disk on an Integrity server, run the idisk and insf commands as described in “Mirroring the Boot Disk on HP Integrity Servers” (page 91). For PA-RISC servers or non-root disks, this step is unnecessary.
NOTE: The ITRC resource forums at http://www.itrc.hp.com offer peer-to-peer support to solve problems and are free to users after registration. If this is a new problem or if you need additional help, log your problem with the HP Response Center, either online through the support case manager at http://www.itrc.hp.com, or by calling HP Support.
5 Support and Other Resources New and Changed Information in This Edition The eighth edition of HP-UX System Administrator's Guide: Logical Volume Management addresses the following new topics: • Added information about converting cDSFs back to their corresponding persistent DSFs, in Table 5 (page 39). • Provided new information on LVM I/O timeout parameters, see “Configuring LVM I/O Timeout Parameters” (page 33) and “LVM I/O Timeout Parameters” (page 163).
Key The name of a keyboard key. Return and Enter both refer to the same key. Term The defined use of an important word or phrase. User input Commands and other text that you type. Variable or Replaceable The name of a placeholder in a command, function, or other syntax display that you replace with an actual value. -chars One or more grouped command options, such as -ikx. The chars are usually a string of literal characters that each represent a specific option.
Related Information HP-UX technical documentation can be found on HP's documentation website at http://www.hp.com/go/hpux-core-docs. In particular, LVM documentation is provided on this web page: http://www.hp.com/go/hpux-LVM-VxVM-docs. See the HP-UX Logical Volume Manager and Mirror Disk/UX Release Notes for information about new features and defect fixes on each release.
HP-UX 11i Release Names and Operating System Version Identifiers With HP-UX 11i, HP delivers a highly available, secure, and manageable operating system that meets the demands of end-to-end Internet-critical computing. HP-UX 11i supports enterprise, mission-critical, and technical computing environments. HP-UX 11i is available on both HP 9000 systems and HP Integrity systems. Each HP-UX 11i release has an associated release name and release identifier.
Document updates can be issued between editions to correct errors or document product changes. To ensure that you receive the updated or new editions, subscribe to the appropriate product support service. See your HP sales representative for details. You can find the latest version of this document online at: http://www.hp.com/go/hpux-LVM-VxVM-docs .
HP Encourages Your Comments HP encourages your comments concerning this document. We are committed to providing documentation that meets your needs. Send any errors found, suggestions for improvement, or compliments to: http://www.hp.com/bizsupport/feedback/ww/webfeedback.html Include the document title, manufacturing part number, and any comment, error found, or suggestion for improvement you have concerning this document.
A LVM Specifications and Limitations This appendix discusses LVM product specifications. NOTE: Do not infer that a system configured to these limits is automatically usable. Table 14 Volume Group Version Maximums Version 1.0 Volume Groups Version 2.0 Volume Groups Version 2.1 Volume Groups Version 2.
1 2 The limit of 2048 volume groups is shared among Version 2.x volume groups. Volume groups of Versions 2.x can be created with volume group numbers ranging from 0-2047. However, the maximum number of Version 2.0 volume groups that can be created is 512. For volume group Version 2.2 or higher, the total number of logical volumes includes normal logical volumes as well as snapshot logical volumes. Table 15 Version 1.
Table 16 Version 2.x Volume Group Limits Parameter Command to Set/Change Parameter Minimum Value Default Value Maximum Value 0 n/a 20481 Number of physical volumes n/a in a volume group 511 511 (2.0) 511 (2.0) 2048 (2.1, 2.2) 2048 (2.1, 2.2) Number of logical volumes in a volume group n/a 511 511 (2.0) 511 (2.0) 2047 (2.1, 2.2) 2047 (2.1, 2.
Determining LVM’s Maximum Limits on a System The March 2008 update to HP-UX 11i v3 (11.31) introduced a new command that enables the system administrator to determine the maximum LVM limits supported on the target system for a given volume group version. The lvmadm command displays the implemented limits for Version 1.0 and Version 2.x volume groups. It is impossible to create a volume group that exceeds these limits.
Max PXs per PV              16777216
Max Extent Size (Mbytes)    256
Min Unshare unit (Kbytes)   512
Max Unshare unit (Kbytes)   4096
Max Snapshots per LV        255
B LVM Command Summary This appendix contains a summary of the LVM commands and descriptions of their use. Table 17 LVM Command Summary Command Description and Example extendfs Extends a file system: # extendfs /dev/vg00/rlvol3 lvmadm Displays the limits associated with a volume group version: # lvmadm -t -V 2.
Table 17 LVM Command Summary (continued) Command Description and Example pvchange Changes the characteristics of a physical volume: # pvchange -a n /dev/disk/disk2 pvck Performs a consistency check on a physical volume: # pvck /dev/disk/disk47_p2 pvcreate Creates a physical volume to be used as part of a volume group: # pvcreate /dev/rdisk/disk2 pvdisplay Displays information about a physical volume: # pvdisplay -v /dev/disk/disk2 pvmove Moves extents from one physical volume to another: # pvmove
Table 17 LVM Command Summary (continued) Command Description and Example vgscan Scans the system disks for volume groups: # vgscan -v vgreduce Reduces a volume group by removing one or more physical volumes from it: # vgreduce /dev/vg00 /dev/disk/disk2 vgremove Removes the definition of a volume group from the system and the disks: # vgremove /dev/vg00 /dev/disk/disk2 vgsync Synchronizes all mirrored logical volumes in the volume group: # vgsync vg00 vgversion Migrates a volume group to a different volume group version
C Volume Group Provisioning Tips This appendix contains recommendations for parameters to use when creating your volume groups. Choosing an Optimal Extent Size for a Version 1.0 Volume Group When creating a Version 1.0 volume group, the vgcreate command may fail and display a message that the extent size is too small or that the VGRA is too big. In this situation, you must choose a larger extent size and run vgcreate again.
length = roundup((roundup(16 * lvs, BS) + roundup(16 + 4 * pxs, BS) * pvs) / BS, 8);
if (length > 768) {
    printf("Warning: A bootable PV cannot be added to a VG \n"
           "created with the specified argument values. \n"
           "The metadata size %d Kbytes, must be less \n"
           "than 768 Kbytes. \n"
           "If the intention is not to have a boot disk in this \n"
           "VG then do not use '-B' option during pvcreate(1M) \n"
           "for the PVs to be part of this VG. \n", length);
}
D Striped and Mirrored Logical Volumes This appendix provides more details on striped and mirrored logical volumes. It describes the difference between standard hardware-based RAID and the LVM implementation of RAID. Summary of Hardware RAID Configuration RAID 0, commonly referred to as striping, refers to the segmentation of logical sequences of data across disks. RAID 1, commonly referred to as mirroring, refers to creating exact copies of logical sequences of data.
set, the logical extents are striped and mirrored to obtain the data layout displayed in Figure 6 (page 159). Striping and mirroring in LVM combines the advantages of the hardware implementations of RAID 1+0 and RAID 0+1, and provides the following benefits: • Better write performance. Write operations take place in parallel, and each physical write operation is directed to a different physical volume. • Excellent read performance.
NOTE: Striping with mirroring always uses strict allocation policies where copies of data do not exist on the same physical disk. This results in a configuration similar to RAID 0+1, as illustrated in Figure 7 (page 160).
Compatibility Note Releases prior to HP-UX 11i v3 support only striped or only mirrored logical volumes and do not support a combination of striping and mirroring. If a logical volume using simultaneous mirroring and striping is created on HP-UX 11i v3, attempts to import or activate its associated volume group fail on a previous HP-UX release.
E LVM I/O Timeout Parameters When LVM receives an I/O to a logical volume, it converts this logical I/O to physical I/Os to one or more physical volumes from which the logical volume is allocated. There are two LVM timeout values which affect this operation: • Logical volume timeout (LV timeout). • Physical volume timeout (PV timeout). Logical Volume Timeout (LV timeout) LV timeout controls how long LVM retries a logical I/O after a recoverable physical I/O error.
Timeout Differences: 11i v2 and 11i v3 Because native multi-pathing is included in the 11i v3 mass storage stack and enabled by default, the LVM timeout concepts may vary between 11i v2 and 11i v3 in certain cases. • Meaning of PV timeout. In 11i v2, LVM utilizes the configured PV timeout fully for the particular PV link to which it is set. If there is any I/O failure, LVM retries the I/O on the next available PV link to the same physical volume with a new PV timeout budget.
F Warning and Error Messages This appendix lists some of the warning and error messages reported by LVM. For each message, the cause is described and an action is recommended. Matching Error Messages to Physical Disks and Volume Groups Often an error message contains the device number for a device, rather than the device file name. For example, you might see the following message in /var/adm/syslog/syslog.
The example error message refers to the Version 2.x volume group vgtest2. Messages For All LVM Commands Message Text vgcfgbackup: /etc/lvmtab is out of date with the running kernel: Kernel indicates # disks for "/dev/vgname"; /etc/lvmtab has # disks. Cannot proceed with backup. Cause The number of current physical volumes (Cur PV) does not match the number of active physical volumes (Act PV). Cur PV and Act PV must always agree for the volume group.
Max PE per PV          4350
VGDA                   2
PE Size (Mbytes)       4
Total PE               4340
Alloc PE               3740
Free PE                600
Total PVG              0
Total Spare PVs        0
Total Spare PVs in use 0
VG Version             1.0
VG Max Size            1082g
VG Max Extents         69248
In this example, the total free space is 600 physical extents of 4 MB, or 2400 MB.
3. The logical volume is mirrored with a strict allocation policy, and there are not enough extents on a separate disk to comply with the allocation policy.
pvchange(1M) Message Text Unable to detach the path or physical volume via the pathname provided. Either use pvchange(1M) -a N to detach the PV using an attached path or detach each path to the PV individually using pvchange(1M) -a n Cause The specified path is not part of any volume group, because the path has not been successfully attached to the otherwise active volume group it belongs to. Recommended Action Check the specified path name to make sure it is correct.
Cause The vgcfgrestore command was used to initialize a disk that already belongs to an active volume group. Recommended Action Detach the physical volume or deactivate the volume group before attempting to restore the physical volume. If the disk may be corrupted, detach the disk and mark it using vgcfgrestore, then attach it again without replacing the disk. This causes LVM to reinitialize the disk and synchronize any mirrored user data mapped there.
1. The disk was missing when the volume group was activated, but was later restored. This typically occurs when a system is rebooted or the volume group is activated with a disk missing, uncabled, or powered down.
2. The disk LVM header was overwritten with the wrong volume group information. If the disk is shared between two systems, one system might not be aware that the disk was already in a volume group.
# mkdir /dev/vgname
# mknod /dev/vgname/group c 64 unique_minor_number
# vgimport -m vgname.map -v -f vgname.file /dev/vgname
vgcreate(1M) Message Text vgcreate: "/dev/vgname/group": not a character device. Cause The volume group device file does not exist, and this version of the vgcreate command does not automatically create it. Recommended Action Create the directory for the volume group and create a group file, as described in “Creating the Volume Group Device File” (page 44).
vgdisplay(1M) Message Text vgdisplay: Couldn't query volume group "/dev/vgname". Possible error in the Volume Group minor number; Please check and make sure the group minor number is unique. vgdisplay: Cannot display volume group "/dev/vgname". Cause This error has the following possible causes: 1. There are multiple LVM group files with the same minor number. 2. Serviceguard was previously installed on the system and the /dev/slvmvg device file still exists. Recommended Action 1.
Recommended Action See the recommended actions under the “vgchange(1M)” (page 169) error messages. vgextend(1M) Message Text vgextend: Not enough physical extents per physical volume. Need: #, Have: #. Cause The disk size exceeds the volume group maximum disk size. This limitation is defined when the volume group is created, as a product of the extent size specified with the -s option of vgcreate and the maximum number of physical extents per disk specified with the -e option.
vgmodify(1M) Message Text Error: Cannot reduce max_pv below n+1 when the volume group is activated because the highest pvkey in use is n. Cause The command is trying to reduce max_pv below the highest pvkey in use. This is disallowed while a Version 1.0 volume group is activated, since it requires compacting the pvkeys. Recommended Action Try executing the vgmodify operation with n+1 PVs. The permissible max_pv values can be obtained using the vgmodify -t option (used with and without the -a option).
Cause The pvkey of a physical volume can range from 0 to one less than the maximum supported number of physical volumes for a volume group version. If the pvkey of the physical volume is not in this range for the target Version 2.x volume group, vgversion fails the migration. Recommended Action 1. Use lvmadm to determine the maximum supported number of physical volumes for the target volume group version. 2.
Message Text LVM: Begin: Contiguous LV (VG mmm 0x00n000, LV Number: p) movement: LVM: End: Contiguous LV (VG mmm 0x00n000, LV Number: p) movement: Cause This message is advisory. It is generated whenever the extents of a contiguous logical volume belonging to a Version 2.x volume group are moved using pvmove. This message is generated beginning with the September 2009 Update. Recommended Action None, if both the Begin and End messages appear for a particular contiguous LV.
Message Text LVM: vg[nn] pv[nn] No valid MCR, resyncing all mirrored MWC LVs on the PV Cause This message appears when you import a volume group from a previous release of HP-UX. The format of the MWC changed at HP-UX 11i Version 3, so if the volume group contains mirrored logical volumes using MWC, LVM converts the MWC at import time. It also performs a complete resynchronization of all mirrored logical volumes, which can take substantial time. Recommended Action None.
Message Text vmunix: LVM:ERROR: The task to increase the pre-allocated extents could not be posted for this snapshot LV (VG 128 0x000000, LVM Number 3). Please check if lvmpud is running. Cause If the automatic increase of pre-allocated extents is enabled for a space-efficient snapshot, the number of pre-allocated extents is automatically increased when the threshold value is reached. The lvmpud daemon must be running for this to succeed. When lvmpud is not running, the above message is logged.
Recommended Action Make sure that the disk devices being used by the entire snapshot tree (the original logical volume and all of its snapshots) are available and healthy before retrying the delete operation. Log Files and Trace Files: /var/adm/syslog/syslog.
Glossary Agile Addressing The ability to address a LUN with the same device special file regardless of the physical location of the LUN or the number of paths leading to it. In other words, the device special file for a LUN remains the same even if the LUN is moved from one Host Bus Adaptor (HBA) to another, moved from one switch/hub port to another, presented via a different target port to the host, or configured with multiple hardware paths. Also referred to as persistent binding.
Mirroring Simultaneous replication of data, ensuring a greater degree of data availability. LVM can map identical logical volumes to multiple LVM disks, thus providing the means to recover easily from the loss of one copy (or multiple copies in the case of multi-way mirroring) of data. Mirroring can provide faster access to data for applications using more data reads than writes. Mirroring requires the MirrorDisk/UX product.
Index Symbols /etc/default/fs, 96 /etc/fstab, 35, 56, 66, 96 /etc/lvmconf/ directory, 18, 35, 69 /etc/lvmpvg, 33 /etc/lvmtab, 11, 21, 39, 57, 70, 72, 111, 118, 165, 166 /stand/bootconf, 91, 94 /stand/rootconf, 111 /stand/system, 23 /var/adm/syslog/syslog.
backing up via mirroring, 68 boot file system see boot logical volume creating, 96 determining who is using, 56, 57, 68, 97, 99, 134 extending, 97 guidelines, 22 in /etc/fstab, 96 initial size, 21 OnlineJFS, 97 overhead, 21 performance considerations, 22 reducing, 98, 118 HFS or VxFS, 99 OnlineJFS, 99 resizing, 22 root file system see root logical volume short or long file names, 96 stripe size for HFS, 32 stripe size for VxFS, 32 unresponsive, 112 finding logical volumes using a disk, 126 fsadm command, 98
for root logical volume, 88 for swap logical volume, 88, 101 updating boot information, 36, 91, 93, 118 lvmadm command, 12, 39, 110, 111, 115, 152, 154, 165 lvmchk command, 39 lvmerge command, 40, 68, 154 synchronization, 26 lvmove command, 40, 154 lvmpud, 107 lvmpud command, 40, 154 lvreduce command, 40 and pvmove failure, 74 reducing a file system, 99, 118 reducing a logical volume, 55, 154 reducing a swap device, 102 removing a mirror, 56, 154 removing a mirror from a specific disk, 56 lvremove command,
policies for allocating, 24 policies for writing, 25 size, 9, 17 synchronizing, 26 physical volume groups, 30, 33 naming convention, 15 Physical Volume Reserved Area see PVRA physical volumes adding, 51 auto re-balancing, 74 commands for, 39 converting from bootable to nonbootable, 83, 171 creating, 43 defined, 9 device file, 14, 15, 165 disabling a path, 86 disk layout, 16 displaying information, 42 moving, 71, 72 moving data between, 72 naming convention, 14 removing, 51 resizing, 77 pre-allocated extents
detaching links, 86 reinstating a spare disk, 76 requirements, 27 splitting a mirrored logical volume, 68 splitting a volume group, 66 stale data, 26 strict allocation policy, 25 stripe size, 32 striping, 31 and mirroring, 33 benefits, 31 creating a striped logical volume, 52 defined, 8 interleaved disks, 31 performance considerations, 31 selecting stripe size, 32 setting up, 31 swap logical volume, 22, 23, 101 see also primary swap logical volume creating, 88, 101 extending, 101 guidelines, 23 information
changing physical volume type, 83, 171 collecting information, 59, 62 errors, 174 modifying volume group parameters, 58, 173 resizing physical volumes, 77, 167 vgmove command, 94, 155 VGRA and vgmodify, 59, 62 area on disk, 17 size dependency on extent size, 17, 171 vgreduce command, 39, 51, 156 with multipathed disks, 58 vgremove command, 39, 68, 156 vgscan command, 39, 156 moving disks, 71 recreating /etc/lvmtab, 118 vgsync command, 26, 39, 156 vgversion command, 40, 46, 156 errors, 174 volume group confi