When Good Disks Go Bad: Dealing with Disk Failures Under LVM
Abstract

This white paper discusses how to deal with disk failures under the HP-UX Logical Volume Manager (LVM). It is intended for system administrators or operators who have experience with LVM. It includes strategies to prepare for disk failure, ways to recognize that a disk has failed, and steps to remove or replace a failed disk.

Background

Whether managing a workstation or server, your goals include minimizing system downtime and maximizing data availability.
1. Preparing for Disk Recovery

Forewarned is forearmed. Knowing that hard disks will fail eventually, you can take some precautionary measures to minimize your downtime, maximize your data availability, and simplify the recovery process. Consider the following guidelines before you experience a disk failure.

Defining a Recovery Strategy

As you create logical volumes, choose one of the following recovery strategies. Each choice strikes a balance between cost, data availability, and speed of data recovery.
Starting with the HP-UX 11i v3 release, HP no longer requires or recommends that you configure LVM with alternate links. However, it is possible to maintain the traditional LVM behavior. To do so, both of the following criteria must be met:

o Only the legacy device special file naming convention is used in the LVM volume group configuration.
o The scsimgr command is used to disable the Mass Storage Subsystem multipath behavior.
# swlist -l fileset | grep -i mirror
  LVM.LVM-MIRROR-RUN    B.11.23    LVM Mirror

The process of mirroring is usually straightforward, and can be easily accomplished using the System Administration Manager (SAM) or with a single lvextend command. These processes are documented in Managing Systems and Workgroups (11i v1 and v2) and System Administrator's Guide: Logical Volume Management (11i v3). The only mirroring setup task that takes several steps is mirroring the root disk.
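For example, to add a mirror copy of an existing logical volume onto a specific disk, a command like the following can be used (a minimal sketch; the volume and disk names are illustrative, and MirrorDisk/UX must be installed):

# lvextend -m 1 /dev/vg01/lvol1 /dev/dsk/c0t3d0

If no target disk is named, LVM chooses one that satisfies the logical volume's allocation policy.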
disks in any volume group leads to a more complex LVM configuration, which will be more difficult to recreate after a catastrophic failure. Finally, a small root volume group is quickly recovered. In some cases, you can reinstall a minimal system, restore a backup, and be back online within three hours of diagnosis and replacement of hardware. Three disks in the root volume group are better than two due to quorum restrictions.
While this list of preparatory actions does not keep a disk from failing, it makes it easier for you to deal with failures when they occur.
2. Recognizing a Failing Disk

The guidelines in the previous section will not prevent disk failures on your system. Assuming you follow all the recommendations, how can you tell when a disk has failed? This section explains how to look for signs that one of your disks is having problems, and how to determine which disk it is.

I/O Errors in the System Log

Often an error message in the system log file is your first indication of a disk problem. In /var/adm/syslog/syslog.log, look for I/O error messages that identify the hardware path of the failing disk.
Disk Failure Notification Messages from Diagnostics

If you have Event Monitoring Service (EMS) hardware monitors installed on your system, and you enabled the disk monitor disk_em, a failing disk can trigger an EMS event. Depending on how you configured EMS, you might get an email message, information in /var/adm/syslog/syslog.log, or messages in another log file. EMS error messages identify a hardware problem, what caused it, and what must be done to correct it.
# vgdisplay -v vg01
vgdisplay: Warning: couldn't query physical volume "/dev/dsk/c0t3d0":
The specified path does not correspond to physical volume attached to
this volume group
vgdisplay: Warning: couldn't query all of the physical volumes.

# vgchange -a y /dev/vg01
vgchange: Warning: Couldn't attach to the volume group physical volume "/dev/dsk/c0t3d0":
A component of the path of the physical volume does not exist.
Volume group "/dev/vg01" has been successfully changed.
3. Confirming Disk Failure Once you suspect a disk has failed or is failing, make certain that the suspect disk is indeed failing. Replacing or removing the incorrect disk makes the recovery process take longer. It can even cause data loss. For example, in a mirrored configuration, if you were to replace the wrong disk—the one holding the current good copy rather than the failing disk—the mirrored data on the good disk is lost. It is also possible that the suspect disk is not failing.
5. If both ioscan and diskinfo succeed, the disk might still be failing. As a final test, try to read from the disk using the dd command. Depending on the size of the disk, a comprehensive read can be time-consuming, so you might want to read only a portion of the disk. If the disk is functioning properly, no I/O errors are reported.
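For example, the following command reads the first gigabyte of the disk and discards it (a sketch; the device file and transfer sizes are illustrative):

# dd if=/dev/rdsk/c0t3d0 of=/dev/null bs=1024k count=1024

If dd reports I/O errors, the disk is failing.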
For a version 2.x VG, enter:

# xd -j 0x21a4 -t uI -N 4 /dev/dsk/c0t3d0

In this example, this is a version 1.0 VG:

# xd -j 0x2048 -t uI -N 4 /dev/dsk/c0t3d0
0000000        1024
0000004
# xd -j 0x2048 -t uI -N 4 /dev/dsk/c1t3d0
0000000        1024
0000004

4. Calculate the location of the physical extent for each PV. Multiply the PE number by the PE size and then by 1024 to convert to KB:

2 * 32 * 1024 = 65536

Add the offset to PE zero: 65536 + 1024 = 66560

5.
4. Gathering Information About a Failing Disk

Once you know which disk is failing, you can decide how to deal with it. You can choose to remove the disk if your system does not need it, or you can choose to replace it. Before deciding on your course of action, you must gather some information to help guide you through the recovery process.

Is the questionable disk hot-swappable? This determines whether you must power down your system to replace the disk.
/dev/vg00/lvol2     512    512
/dev/vg00/lvol3      50     50
/dev/vg00/lvol4      50     50
/dev/vg00/lvol5     250    250
/dev/vg00/lvol6     450    450
/dev/vg00/lvol7     350    350
/dev/vg00/lvol8    1000   1000
/dev/vg00/lvol9    1000   1000
/dev/vg00/lvol10      3      3
…

If pvdisplay fails, you have several options. You can refer to any configuration documentation you created in advance. Alternately, you can run lvdisplay -v on all the logical volumes in the volume group and see if any extents are mapped to an unavailable physical volume.
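Extents on an unavailable physical volume are shown with question marks in the lvdisplay output, so a quick check looks like this (the logical volume name is illustrative):

# lvdisplay -v /dev/vg00/lvol3 | grep '???'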
There might be an instance where you see that only the failed physical volume holds the current copy of a given extent (and all other mirror copies of the logical volume hold the stale data for that given extent), and LVM does not permit you to remove that physical volume from the volume group. In this case, use the lvunstale command (available from your HP support representative) to mark one of the mirror copies as “nonstale” for that given extent. HP recommends you use the lvunstale tool with caution.
5. Removing the Disk If you have a copy of the data on the failing disk, or you can move the data to another disk, you can choose to remove the disk from the system instead of replacing it. Removing a Mirror Copy from a Disk If you have a mirror copy of the data already, you can stop LVM from using the copy on the failing disk by reducing the number of mirrors. To remove the mirror copy from a specific disk, use lvreduce, and specify the disk from which to remove the mirror copy.
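For example, to remove the mirror copy of lvol1 that resides on the failing disk, a command like the following can be used (a sketch with illustrative names; -m 0 leaves a single unmirrored copy):

# lvreduce -m 0 /dev/vg01/lvol1 /dev/dsk/c0t3d0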
In these situations where the disk was not available at boot time, or the disk has failed before volume group activation (pvdisplay failed), the lvreduce command fails with an error that it could not query the physical volume. You can still remove the mirror copy, but you must specify the physical volume key rather than the name. The physical volume key of a disk indicates its order in the volume group. The first physical volume has the key 0, the second has the key 1, and so on.
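For example, assuming the failed disk is the third physical volume in the volume group (key 2), a sketch of the procedure first confirms the key, then removes the mirror copy by key (names illustrative):

# lvdisplay -v -k /dev/vg01/lvol1 | more     (the PV column shows keys instead of names)
# lvreduce -m 0 -k /dev/vg01/lvol1 2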
You can select a particular target disk or disks, if desired. For example, to move all the physical extents from c0t5d0 to the physical volume c0t2d0, enter the following command: # pvmove /dev/dsk/c0t5d0 /dev/dsk/c0t2d0 The pvmove command succeeds only if there is enough space on the destination physical volumes to hold all the allocated extents of the source physical volume.
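Once pvmove has emptied the disk, you can remove it from the volume group with vgreduce, for example (volume group name illustrative):

# vgreduce /dev/vg01 /dev/dsk/c0t5d0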
groups were configured with only persistent device special files, there is no need to arrange them again. On releases prior to HP-UX 11i v3, you must rebuild the lvmtab file as follows:

# vgreduce -f vgname
# mv /etc/lvmtab /etc/lvmtab.
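The rebuild is then completed by letting vgscan regenerate /etc/lvmtab from the disks on the system (vgscan recovers the lvmtab file, as described in Appendix B); a minimal sketch:

# vgscan -v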
6. Replacing the Disk (Releases Prior to 11i v3, or When the LVM Volume Group Is Configured with Only Legacy DSFs on 11i v3 or Later)

If you decide to replace the disk, you must perform a five-step procedure. How you perform each step depends on the information you gathered earlier (hot-swap information, logical volume names, and recovery strategy), so this procedure varies. This section also includes several common scenarios for disk replacement, and a flowchart summarizing the disk replacement procedure.
# fuser -ku /dev/vgname/lvname

4. Then try to unmount the file system again as follows:

# umount /dev/vgname/lvname

o If the logical volume is being accessed as a raw device, you can use fuser to find out which applications are using it. Then you can halt those applications.

If for some reason you cannot disable access to the logical volume—for example, you cannot halt an application or you cannot unmount the file system—you must shut down the system.
If the disk is hot-swappable, you can replace it without powering down the system. Otherwise, power down the system before replacing the disk. For the hardware details on how to replace the disk, see the hardware administrator’s guide for the system or disk array. If you powered down the system, reboot it normally. The only exception is if you replaced a disk in the root volume group.
The vgchange command attaches all paths for all disks in the volume group, and automatically resumes recovering any unattached failed disks in the volume group. Therefore, only run vgchange after all work has been completed on all disks and paths in the volume group, and it is desirable to attach them all.

Step 5: Restoring Lost Data to the Disk

This final step can be a straightforward resynchronization for mirrored configurations, or a recovery of data from backup media.
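For mirrored configurations, LVM resynchronizes stale extents automatically when the disk is attached; you can also trigger the resynchronization explicitly, for example (volume group name illustrative):

# vgsync /dev/vg01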
Swap: lvol2     on:    /dev/dsk/c0t5d0
Dump: lvol2     on:    /dev/dsk/c0t5d0, 0

# pvdisplay -v /dev/dsk/c2t15d0 | more
…
--- Distribution of physical volume ---
LV Name            LE of LV    PE for LV
/dev/vg01/lvol1    4340        4340
…

# lvdisplay -v /dev/vg01/lvol1 | grep "Mirror copies"
Mirror copies      1

# lvdisplay -v /dev/vg01/lvol1 | grep -e /dev/dsk/c2t15d0 -e '???' | more
00000 /dev/dsk/c2t15d0 00000 current   /dev/dsk/c5t15d0 00000 current
00001 /dev/dsk/c2t15d0 00001 current   /dev/dsk/c5t15d0 00001 current
00002 /dev/dsk/c2t15d0 00002 current   /dev/dsk/c5t15d0 00002 current
00003 /dev/dsk/c2t15d0 00003 current   /dev/dsk/c5t15d0 00003 current
…

For this example, it is assumed that you are permitted to halt access to the entire volume group while you recover the disk.
# umount /dev/vg01/lvol1
umount: cannot unmount /dump : Device busy
# fuser -u /dev/vg01/lvol1
/dev/vg01/lvol1: 27815c(root) 27184c(root)
# ps -fp27815 -p27184
UID     PID    PPID   C  STIME     TTY    TIME  COMMAND
root    27815  27184  0  09:04:05  pts/0  0:00  vi test.c
root    27184  27182  0  08:26:24  pts/0  0:00  -sh
# fuser -ku /dev/vg01/lvol1
/dev/vg01/lvol1: 27815c(root) 27184c(root)
# umount /dev/vg01/lvol1
# newfs [options] /dev/vg01/rlvol1
# mount /dev/vg01/lvol1 /app

Disk Replacement Process Flowchart

The following flowchart summarizes the disk replacement process.
[Flowchart: disk replacement procedure (releases prior to 11i v3, or legacy DSFs). Summarized steps: gather all required information (which PV is to be replaced, whether the PV is hot-swappable, which LVs are affected and their layout, whether they are mirrored, whether the PV is the root disk or part of the root VG); close any unmirrored logical volumes (halt applications, fuser -u /mnt, ps -f ppids, fuser -ku /mnt, umount /mnt); if LVM OLR is installed, detach the PV with pvchange -a N PV; replace the disk, rebooting if it is not hot-swappable (boot normally for a non-root disk, boot from the mirror with BCH> boot alt and ISL> hpux -lq if the primary root disk is mirrored, or recover from an Ignite-UX recovery tape or server otherwise, partitioning the boot disk on Integrity servers and rebuilding the LIF/BDRA configuration); restore the LVM header and attach the PV with vgcfgrestore -n vg PV and vgchange -a y VG; then recover data from the mirror or from backup.]
7. Replacing the Disk (11i v3 Onwards, When the LVM Volume Group Is Configured with Persistent DSFs)

After you isolate a failed disk, the replacement process depends on answers to the following questions:

o Is the disk hot-swappable?
o Is the disk the root disk or part of the root volume group?
o What logical volumes are on the disk, and are they mirrored?

Based on the gathered information, choose the appropriate procedure.
# scsimgr replace_wwid -D /dev/rdisk/disk14

This command lets the storage subsystem replace the old disk's LUN World-Wide-Identifier (WWID) with the new disk's LUN WWID. The storage subsystem creates a new LUN instance and new device special files for the replacement disk.

5. Determine the new LUN instance number for the replacement disk.
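For example, you can display the lunpath-to-LUN instance mapping with ioscan (on 11i v3, the -m lun option reports this mapping):

# ioscan -m lun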
# vgchange -a y /dev/vgnn Note: The vgchange command with the -a y option can be run on a volume group that is deactivated or already activated. It attaches all paths for all disks in the volume group and resumes automatically recovering any disks in the volume group that had been offline or any disks in the volume group that were replaced. Therefore, run vgchange only after all work has been completed on all disks and paths in the volume group, and it is necessary to attach them all.
b. If fuser reports process IDs using the logical volume, use the ps command to map the list of process IDs to processes, and determine whether you can halt those processes. For example, look up processes 27815 and 27184 as follows:

# ps -fp27815 -p27184
UID     PID    PPID   C  STIME     TTY    TIME  COMMAND
root    27815  27184  0  09:04:05  pts/0  0:00  vi test.c
root    27184  27182  0  08:26:24  pts/0  0:00  -sh

c. If so, use fuser with the -k option to kill all processes accessing the logical volume.
In this example, LUN instance 28 was created for the new disk, with LUN hardware path 64000/0xfa00/0x1c, device special files /dev/disk/disk28 and /dev/rdisk/disk28, at the same lunpath hardware path as the old disk, 0/1/1/1.0x3.0x0. The old LUN instance 14 for the old disk now has no lunpath associated with it.

Note: If the system was rebooted to replace the failed disk, ioscan -m lun does not display the old disk.

5. Assign the old instance number to the replacement disk.
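One way to do this is with the io_redirect_dsf command, which reassigns the old device special files to the new LUN instance; a sketch using this example's instance numbers:

# io_redirect_dsf -d /dev/disk/disk14 -n /dev/disk/disk28

After this, the replacement disk is accessible through the original device special files, /dev/disk/disk14 and /dev/rdisk/disk14.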
o For raw volumes, restore the full raw volume using the utility that was used to create your backup. Then restart the application.
o For file systems, you must recreate the file systems first. For example:

# newfs -F fstype /dev/vgnn/rlvolnn

Use the logical volume's character device file for the newfs command. For file systems that had nondefault configurations, see newfs(1M) for the correct options. After creating the file system, mount it under the mount point that it previously occupied.
Note: On an HP 9000 server, the boot disk is not partitioned so the physical volume refers to the entire disk, not the HP-UX partition. Enter the following command: # pvchange -a N /dev/disk/disk14 3. Replace the disk. For the hardware details on how to replace the disk, see the hardware administrator’s guide for the system or disk array. If the disk is hot-swappable, replace it. If the disk is not hot-swappable, shut down the system, turn off the power, and replace the disk. Reboot the system.
In this example, LUN instance 28 was created for the new disk, with LUN hardware path 64000/0xfa00/0x1c, device special files /dev/disk/disk28 and /dev/rdisk/disk28, at the same lunpath hardware path as the old disk, 0/1/1/1.0x3.0x0. The old LUN instance 14 for the old disk now has no lunpath associated with it.

Note: If the system was rebooted to replace the failed disk, then ioscan -m lun does not display the old disk.

6. (HP Integrity servers only) Partition the replacement disk.
the volume group that were replaced. Therefore, run vgchange only after all work has been completed on all disks and paths in the volume group, and it is necessary to attach them all. 10. Initialize boot information on the disk. For an HP Integrity server, set up the boot area and update the autoboot file in the disk's EFI partition as described in step 5 and step 6 of Mirroring the Root Volume on Integrity Servers listed in Appendix D.
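A condensed sketch of those steps (the device file is illustrative; mkboot -e -l copies the EFI utilities to the boot area, and lvlnboot -R restores the boot, root, swap, and dump information in the BDRA):

# mkboot -e -l /dev/rdisk/disk14
# lvlnboot -R /dev/vg00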
[Flowchart: disk replacement procedure (11i v3 onwards, persistent DSFs). Summarized steps: gather all required information (which PV is to be replaced, whether the PV is hot-swappable, which LVs are affected and their layout, whether they are mirrored, whether the PV is the root disk or part of the root VG, and the ioscan -m lun output); close any unmirrored logical volumes (halt applications, fuser -cu /mnt, ps -f ppids, fuser -ku /mnt, umount /mnt); detach the PV with pvchange -a N PV; replace the disk and verify it is okay, rebooting if it is not hot-swappable (boot normally for a non-root disk, boot from the mirror with BCH> boot alt and ISL> hpux -lq if the primary root disk is mirrored, or recover from an Ignite-UX recovery tape or server otherwise, partitioning the boot disk on Integrity servers and rebuilding the LIF/BDRA configuration); restore the LVM header and attach the PV with vgcfgrestore -n vg PV and vgchange -a y VG; then recover data from the mirror or from backup.]
Conclusion In your role as system manager, you will encounter disk failures. LVM can lessen the impact of those disk failures, enabling you to configure your data storage to make a disk failure transparent to users, and to keep your system and data available during the recovery process. By making use of hardware features, such as hot-swappable disks, and software features, such as mirroring and online disk replacement, you can maximize your system availability and minimize data loss due to disk failure.
Appendix A: Using Device File Types

Prior to the HP-UX 11i v3 release, there were only legacy device special files. Starting with the HP-UX 11i v3 release, mass storage devices, such as disk devices and tape devices, have two types of device files: persistent device special files and legacy device special files. Either type can be used to access a mass storage device, and both types can coexist on the same system.
Appendix B: Device Special File Naming Model

HP-UX 11i v3 introduces a new representation of mass storage devices called the agile view. In this representation, the device special file (DSF) name for each disk no longer contains path (or link) information. There are two DSF types:

o A multipathed disk has a single persistent DSF, regardless of the number of physical paths to it.
o The legacy view, represented by the legacy DSF, continues to exist.
vgimport -N
Configure the volume group using persistent DSFs. You can only use this option together with the scan option, -s. In the absence of the -N option, legacy DSFs are used.

vgscan
Recover the volume group information by using LVM data structures in kernel memory, and by probing all LVM disks on the system. In the case of volume groups activated at least once since the last system boot, the /etc/lvmtab file is recovered with legacy and persistent DSFs, as was the configuration during activation.
Appendix C: Volume Group Versions and LVM Configuration Files

Volume Group Version

With the March 2008 release of HP-UX 11i v3, LVM supports a new version of volume group: Version 2.0. Version 1.0 is the version supported on all current and previous versions of HP-UX 11i. A Version 2.0 volume group is a volume group whose metadata layout is different from the one used for Version 1.0 volume groups. Version 2.
1.0 Minor Number Encoding:

brw-r-----  1 root  sys  64 0x120001 Jun 23

In the minor number 0x120001, the first 8 bits (0x12, or 18) encode the volume group number, which is picked by the customer, and the last 8 bits (0x01) encode the logical volume number, which is managed by LVM. Bits 11 and 23 are reserved; if either is nonzero, no assumptions can be made about the logical volume or volume group number.

2.
Appendix D: Procedures

This section contains details on some of the procedures described in earlier sections of this document.

Mirroring the Root Volume on PA-RISC Servers

To set up a mirrored root configuration, you must add a disk to the root volume group, mirror all the root logical volumes onto it, and make it bootable. For this example, the disk is at path 2/0/7.15.0 and has device special files named /dev/rdsk/c2t15d0 and /dev/dsk/c2t15d0.

1.
contains two physical volumes, and one of them is not accessible, the system refuses to boot unless you disable the quorum check using the -lq option.

6. Use the lvextend command to mirror each logical volume in vg00 (the root volume group) onto the specified physical volume. You must extend the logical volumes in the same order that they are configured on the original boot disk. Use the pvdisplay command with the -v option to determine the list of logical volumes and their order.
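For example, assuming the standard root logical volume layout, the mirroring commands look like this (repeat for every logical volume in vg00, in the order reported by pvdisplay):

# lvextend -m 1 /dev/vg00/lvol1 /dev/dsk/c2t15d0
# lvextend -m 1 /dev/vg00/lvol2 /dev/dsk/c2t15d0
# lvextend -m 1 /dev/vg00/lvol3 /dev/dsk/c2t15d0
…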
Mirroring the Root Volume on Integrity Servers

The procedure to mirror the root disk on Integrity servers is similar to the procedure for PA-RISC servers. The difference is that Integrity server boot disks are partitioned; you must set up the partitions, copy utilities to the EFI partition, and use the HP-UX partition device files for LVM commands. For this example, the disk is at hardware path 0/1/1/0.1.0, with a device special file named /dev/rdsk/c2t1d0.

1.
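A sketch of the partitioning step, assuming idisk(1M) and an illustrative partition description file (the file name and partition sizes shown are typical examples, not mandatory values):

# vi /tmp/idf
3
EFI 500MB
HPUX 100%
HPSP 400MB

# idisk -wf /tmp/idf /dev/rdsk/c2t1d0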
You now have the following device files for this disk:

/dev/disk/disk75       The entire disk (block access)
/dev/rdisk/disk75      The entire disk (character access)
/dev/disk/disk75_p1    The EFI partition (block access)
/dev/rdisk/disk75_p1   The EFI partition (character access)
/dev/disk/disk75_p2    The HP-UX partition (block access)
/dev/rdisk/disk75_p2   The HP-UX partition (character access)
/dev/disk/disk75_p3    The Service partition (block access)
/dev/rdisk/disk75_p3   The Service partition (character access)
# echo "boot vmunix" > ./AUTO

b. Copy the file from the current directory into the new disk EFI partition. Be sure to use the device file with the s1 suffix:

# efi_cp -d /dev/rdsk/c2t1d0s1 ./AUTO /efi/hpux/auto

c. To check the contents of the AUTO file on the EFI partition of the mirror disk:

# efi_cp -d /dev/rdsk/c2t1d0s1 -u /EFI/HPUX/AUTO /tmp/mir
# cat /tmp/mir
boot vmunix

7. Use the lvextend command to mirror each logical volume in vg00 (the root volume group) onto the specified physical volume.
10. Specify the mirror disk as the alternate boot path in nonvolatile memory:

# setboot -a 0/1/1/0.1.0

11. Add a line to /stand/bootconf for the new boot disk, using vi or another text editor:

# vi /stand/bootconf
l /dev/dsk/c2t1d0s2

where l denotes LVM.

12.
Appendix E: LVM Error Messages

This appendix lists some of the warning and error messages reported by LVM. For each message, the cause is listed, and an administrator action is recommended. The appendix is divided into two sections: one for LVM command errors, and one for error messages in the system log file /var/adm/syslog/syslog.log.
Max PV                    16
Cur PV                    1
Act PV                    1
Max PE per PV             4350
VGDA                      2
PE Size (Mbytes)          4
Total PE                  4340
Alloc PE                  3740
Free PE                   600
Total PVG                 0
Total Spare PVs           0
Total Spare PVs in use    0

In this example, the total free space is 600 * 4 MB, or 2400 MB.

b. The logical volume is mirrored with a strict allocation policy, and there are not enough extents on a separate disk to comply with the allocation policy.
pvchange "a": Illegal option. Cause: LVM OLR is not installed. Recommended Action: Install the patches enabling LVM OLR, or use an alternate replacement procedure. The HP-UX kernel running on this system does not provide this feature. Install the appropriate kernel patch to enable it. Cause: LVM OLR is not completely installed. Both the LVM command and kernel components are required to enable LVM OLR. In this case, the command patch is installed and the kernel patch is not.
overwritten with a command like dd or pvcreate. If the disk is shared between two systems, it is likely that one of the systems was not aware that the disk was already in a volume group. The corruption can also be caused by running vgchgid incorrectly when using BCV split volumes. Recommended Action: Restore a known good configuration to the disk using vgcfgrestore. Be sure to use a valid copy dated before the first occurrence of the problem.
/dev/dsk/c40t0d4 (02) !VGID:35c8cf58 3f8df316 PVID:065f303f 3e63f003
/dev/dsk/c40t1d0
…

In this example, note that the volume group IDs (VGID) for the disks in /dev/vg01 are not consistent; inconsistencies are marked !VGID.

Recommended Action:
a. Use ioscan and diskinfo to confirm that the disk is functioning properly. Reactivate the volume group using the following command:

# vgchange -a y vgname

b. There are several methods of recovery from this error.
vgdisplay

vgdisplay: Couldn't query volume group "/dev/vg00".
Possible error in the Volume Group minor number; Please check and make sure the group minor number is unique.
vgdisplay: Cannot display volume group "/dev/vg00".

Cause: This error has several possible causes:
a. There are multiple LVM group files with the same minor number.
b. Serviceguard was previously installed on the system, and the /dev/slvmvg device file still exists.

Recommended Action:
a. List the LVM group files.
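For example, you can list all group files and compare their minor numbers as follows:

# ll /dev/*/group

Each group file must have a unique minor number; if Serviceguard is no longer installed, a leftover /dev/slvmvg file is a likely culprit.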
Recommended Action: The volume group extent size and number of physical extents per disk are not dynamic. The only way to use the entire disk is to re-create the volume group with new values for the –s and –e options. Alternatively, you can work with an HP support representative to adjust the volume group characteristics using the vgmodify command; note that this utility is currently unsupported and available only from your HP support representative.
Appendix F: Moving a Root Disk to a New Disk or Another Disk

Follow these steps to move root disk /dev/dsk/c1t1d1 (source disk) to disk /dev/dsk/c2t2d2 (destination disk), staying within the same volume group:

1. To make the destination disk a bootable LVM disk, enter:

# pvcreate -f -B /dev/rdsk/c2t2d2

2. To make the disk bootable, enter:

# mkboot /dev/rdsk/c2t2d2
# mkboot -a "hpux -a (;0)/stand/vmunix" /dev/rdsk/c2t2d2

3.
Appendix G: Recreating Volume Group Information

There might be situations when the volume group directory, for example vgtest under /dev, is accidentally removed. In such a situation, use the following steps to re-create the vgtest volume group:

1. To manually create the directory, enter:

# mkdir /dev/vgtest

2. To create the character special file group under the /dev/vgtest directory, enter:

# mknod /dev/vgtest/group c 64 0xXX0000

(XX = the minor number for the group file.)
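The configuration can then presumably be repopulated from the volume group's disks; a sketch using vgimport (the disk path is illustrative, and the minor number must be unique across group files):

# vgimport -v /dev/vgtest /dev/dsk/c1t2d0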
Appendix H: Disk Relocation and Recovery Using vgexport and vgimport

Follow these steps to move a volume group and its disks from one system (for example, A) to another system (for example, B):

1. Unmount all of the mounted lvols within the volume group, vgtest, that you are exporting, and close all of the logical volumes:

# umount /dev/vgtest/lvolX /mount_point_name     (repeat for all lvols)

2. To deactivate the volume group vgtest, enter:

# vgchange -a n /dev/vgtest

3.
using the vgimport command, followed by configuring LVM with legacy device special files as well, using the vgextend command. If you choose to use the traditional LVM behavior on HP-UX 11i v3 as well, follow these steps:

a. Use the scsimgr command to disable the Mass Storage Subsystem multipath behavior.
b. Use only legacy device special files on the vgimport command line when configuring the LVM volume group.

3.
Appendix I: Splitting Mirrors to Perform Backups

Make sure the database is not active before you split it. The following example uses /dev/vg02/lvol1 to show this process. Use lvdisplay -v /dev/vg02/lvol1 to verify that /dev/vg02/lvol1 is mirrored and current.

1. To split the logical volume, enter:

# sync
# lvsplit -s backup /dev/vg02/lvol1

The system console displays the following message:

Logical volume "/dev/vg02/lvol1backup" has been successfully created with character device "/dev/vg02/rlvol1backup".
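After the split, the backup copy can be checked, mounted, backed up, and merged back into the original; a sketch (the file system type and mount point are illustrative):

# fsck -F vxfs /dev/vg02/rlvol1backup     (check the split-off copy before mounting)
# mount /dev/vg02/lvol1backup /backup_mnt
  ... perform the backup ...
# umount /backup_mnt
# lvmerge /dev/vg02/lvol1backup /dev/vg02/lvol1     (merge the backup copy back into lvol1)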
Appendix J: Moving an Existing Root Disk to a New Hardware Path

Before you shut down the system, note the /etc/lvmtab contents, and note which disks are in vg00. Shut down the system and connect the existing root drive to the new path. Boot the system, escape the boot sequence, and then boot off the root drive at the new hardware path to ISL.

1. To boot into LVM maintenance mode, enter:

ISL> hpux -lm (;0)/stand/vmunix

2. To find the new hardware path of the root disk, enter:

# ioscan -fnC disk

3.
For more information

To learn more about some of the LVM features, see the following documents on the HP documentation website:

http://docs.hp.com (use search with the given name of the whitepaper)
http://www.docs.hp.com/en/oshpux11iv3#LVM%20Volume%20Manager

LVM Version 2.