VERITAS Volume Manager 4.1 Administrator’s Guide HP-UX 11i v2 Manufacturing Part Number : 5991-1838 September 2005 Edition 5 Printed in the United States © Copyright 2005 - 2006 Hewlett-Packard Development Company L.P.
Legal Notices Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation and Technical Data for Commercial Items are licensed to the U.S. Government under vendor’s standard commercial license. The information contained herein is subject to change without notice.
Contents

1. Understanding VERITAS Volume Manager
2. Administering Disks
3. Administering Dynamic Multipathing (DMP)
4. Creating and Administering Disk Groups
5. Creating and Administering Subdisks
6. Creating and Administering Plexes
7. Creating Volumes
8. Administering Volumes
Administering Hot-Relocation
Administering Cluster Functionality
A. Commands Summary
Glossary
Index
Preface
The VERITAS Volume Manager Administrator’s Guide provides information on how to use VERITAS Volume Manager (VxVM) from the command line. It is intended for system administrators responsible for installing, configuring, and maintaining systems under the control of VERITAS Volume Manager, and it aims to give the system administrator a thorough knowledge of the procedures and concepts involved in volume management and system administration using VERITAS Volume Manager.
Organization

This guide is organized as follows:

• Chapter 1, “Understanding VERITAS Volume Manager,” on page 1
• Chapter 2, “Administering Disks,” on page 63
• Chapter 3, “Administering Dynamic Multipathing (DMP),” on page 109
• Chapter 4, “Creating and Administering Disk Groups,” on page 151
• Chapter 5, “Creating and Administering Subdisks,” on page 197
• Chapter 6, “Creating and Administering Plexes,” on page 211
• Chapter 7, “Creating Volumes,” on page 225
• Chapter 8, “Administering Volumes”
Related documents:

• VERITAS Storage Foundation 4.1 Cross-Platform Data Sharing Administrator’s Guide
• VERITAS FlashSnap Point-In-Time Copy Solutions Administrator’s Guide

Typographic Conventions

Table 1 describes the typographic conventions used in this document.

Table 1 Typographic Conventions

Typeface: computer (monospace)
Usage: Computer output, files, directories, and software elements such as command options, function names, and parameters
Example: Read tunables from the /etc/vx/tunefstab file.
Technical Support

For license information (U.S. and Canadian Customers), contact:

• Phone: 650-960-5111
• Email: hplicense@mayfield.hp.com

For license information (Europe), contact:

• Phone: +33.(0)4.76.14.15.29
• Email: codeword_europe@hp-france-gen1.om.hp.com

For the latest information on available patches, visit:

• http://itrc.hp.com

For technical support, visit:

• http://welcome.hp.com/country/us/en/support.html

HP Encourages Your Comments

HP encourages your comments concerning this document.
1 Understanding VERITAS Volume Manager

The VERITAS Volume Manager (VxVM) is a storage management subsystem that allows you to manage physical disks as logical devices called volumes. A volume is a logical device that appears to data management systems as a physical disk. VxVM provides easy-to-use online disk storage management for computing environments and Storage Area Network (SAN) environments.
VxVM and the Operating System

VxVM operates as a subsystem between your operating system and your data management systems, such as file systems and database management systems. VxVM is tightly coupled with the operating system. Before a disk can be brought under VxVM control, the disk must be accessible through the operating system device interface.
How Data is Stored

There are several methods used to store data on physical disks. These methods organize data on the disk so the data can be stored and retrieved efficiently. The basic method of disk organization is called formatting. Formatting prepares the hard disk so that files can be written to and retrieved from the disk by using a prearranged storage pattern.
How VxVM Handles Storage Management

VxVM uses two types of objects to handle storage management: physical objects and virtual objects.

• Physical objects—Physical disks or other hardware with block and raw operating system device interfaces that are used to store data.
• Virtual objects—When one or more physical disks are brought under the control of VxVM, it creates virtual objects called volumes on those physical disks.
The figure, “Physical Disk Example,” shows how a physical disk and device name (devname) are illustrated in this document. For example, device name c0t0d0 is the entire hard disk connected to controller number 0 in the system, with a target ID of 0, and physical disk number 0.

Figure 1-1 Physical Disk Example

VxVM writes identification information on physical disks under VxVM control (VM disks).
Data can be spread across several disks within an array to distribute or balance I/O operations across the disks. Using parallel I/O across multiple disks in this way improves I/O performance by increasing data transfer speed and overall throughput for the array.
Device Discovery

Device Discovery is the term used to describe the process of discovering the disks that are attached to a host. This feature is important for DMP because it needs to support a growing number of disk arrays from a number of vendors. In conjunction with the ability to discover the devices attached to a host, the Device Discovery service enables you to add support dynamically for new disk arrays.
In a typical SAN environment, host controllers are connected to multiple enclosures in a daisy chain or through a Fibre Channel hub or fabric switch, as illustrated in Figure 1-3.

Figure 1-3 Example Configuration for Disk Enclosures Connected via a Fibre Channel Hub/Switch

In such a configuration, enclosure-based naming can be used to refer to each disk within an enclosure.
If a volume were configured only on the disks in enclosure enc1, the failure of the cable between the hub and the enclosure would make the entire volume unavailable. If required, you can replace the default name that VxVM assigns to an enclosure with one that is more meaningful to your configuration. See “Renaming an Enclosure” on page 146 for details.
See “Disk Device Naming in VxVM” on page 64 and “Changing the Disk-Naming Scheme” on page 78 for details of the standard and the enclosure-based naming schemes, and how to switch between them.

Virtual Objects

Virtual objects in VxVM include the following:

• VM Disks
• Disk Groups
• Subdisks
• Plexes
• Volumes

The connection between physical objects and VxVM objects is made when you place a physical disk under VxVM control.
NOTE The vxprint command displays detailed information on existing VxVM objects. For additional information on the vxprint command, see “Displaying Volume Information” on page 263 and the vxprint(1M) manual page.

Combining Virtual Objects in VxVM

VxVM virtual objects are combined to build volumes. The virtual objects contained in volumes are VM disks, disk groups, subdisks, and plexes.
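For example, to see how subdisks, plexes, and volumes are related on a running system, you can list the configuration hierarchically. The disk group name mydg below is illustrative only:

# vxprint -g mydg -ht

The -h option lists the subdisks and plexes beneath each volume, and -t selects the tabular record format.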
The figure, “Connection Between Objects in VxVM,” shows the connections between Volume Manager virtual objects and how they relate to physical disks.
NOTE Even though rootdg is the default disk group, it does not necessarily contain the root disk. In the current release, the root disk may be under VxVM or LVM control. You can create additional disk groups as necessary.

Disk groups allow you to group disks into logical collections. A disk group and its components can be moved as a unit from one host machine to another.
Subdisks

A subdisk is a set of contiguous disk blocks. A block is a unit of space on the disk. VxVM allocates disk space using subdisks. A VM disk can be divided into one or more subdisks. Each subdisk represents a specific portion of a VM disk, which is mapped to a specific region of a physical disk. The default name for a VM disk is disk## (such as disk01) and the default name for a subdisk is disk##-##.
Plexes

VxVM uses subdisks to build virtual objects called plexes. A plex consists of one or more subdisks located on one or more physical disks.
NOTE VxVM uses the default naming conventions of vol## for volumes and vol##-## for plexes in a volume. For ease of administration, you can choose to select more meaningful names for the volumes that you create.

A volume may be created under the following constraints:

• Its name can contain up to 31 characters.
• It can consist of up to 32 plexes, each of which contains one or more subdisks.
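As a simple illustration, a volume that satisfies these constraints can be created with a single vxassist command (the disk group and volume names here are examples only):

# vxassist -g mydg make vol01 10g

VxVM chooses the disks, and creates the required subdisks and plex, on your behalf. See Chapter 7, “Creating Volumes,” for the full range of options.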
A volume with two or more data plexes is “mirrored” and contains mirror images of the data. See Figure 1-11, “Example of a Volume with Two Plexes.”

Figure 1-11 Example of a Volume with Two Plexes

Each plex contains an identical copy of the volume data. For more information, see “Mirroring (RAID-1)” on page 25.
Volume Layouts in VxVM

A VxVM virtual device is defined by a volume. A volume has a layout defined by the association of a volume to one or more plexes, each of which maps to subdisks. The volume presents a virtual device interface that is exposed to other applications for data access. These logical building blocks re-map the volume address space, through which I/O is re-directed at run time.
To achieve the desired storage service from a set of virtual devices, it may be necessary to include an appropriate set of VM disks into a disk group, and to execute multiple configuration commands. To the extent that it can, VxVM handles initial configuration and on-line re-configuration with its set of layouts and administration interface to make this job easier and more deterministic.
The figure, “Example of Concatenation,” shows concatenation with one subdisk.

Figure 1-12 Example of Concatenation

You can use concatenation with multiple subdisks when there is insufficient contiguous space for the plex on any one disk.
subdisk disk01-01 on disk01. However, the last two blocks of data, B7 and B8, use only a portion of the space on the disk to which VM disk disk02 is assigned. The remaining free space on VM disk disk02 can be put to other uses. In this example, subdisks disk02-02 and disk02-03 are available for other disk management tasks.
Striping (RAID-0)

NOTE You may need an additional license to use this feature.

Striping (RAID-0) is useful if you need large amounts of data written to or read from physical disks, and performance is important. Striping is also helpful in balancing the I/O load from multi-user applications across multiple disks. By using parallel data transfer to and from multiple disks, striping significantly improves data-access performance.
For example, if there are three columns in a striped plex and six stripe units, data is striped over the three columns, as illustrated in Figure 1-15, “Striping Across Three Columns.”

Figure 1-15 Striping Across Three Columns

A stripe consists of the set of stripe units at the same positions across all columns.
Striping continues for the length of the columns (if all columns are the same length), or until the end of the shortest column is reached. Any space remaining at the end of subdisks in longer columns becomes unused space. Figure 1-16, “Example of a Striped Plex with One Subdisk per Column,” shows a striped plex with three equal-sized, single-subdisk columns. There is one column per physical disk.
of the same disk or from another disk (for example, if the size of the plex is increased). Columns can also contain subdisks from different VM disks.

Figure 1-17 Example of a Striped Plex with Concatenated Subdisks per Column
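As a sketch of how such a layout is requested from the command line (the names and sizes are illustrative; see Chapter 7, “Creating Volumes,” for the supported attributes):

# vxassist -g mydg make stripevol 5g layout=stripe ncol=3 stripeunit=64k

This creates a three-column striped volume with a 64-kilobyte stripe unit size.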
NOTE Although a volume can have a single plex, at least two plexes are required to provide redundancy of data. Each of these plexes must contain disk space from different disks to achieve redundancy. When striping or spanning across a large number of disks, failure of any one of those disks can make the entire plex unusable.
The figure, “Mirrored-Stripe Volume Laid out on Six Disks,” shows an example where two plexes, each striped across three disks, are attached as mirrors to the same volume to create a mirrored-stripe volume.
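Assuming the disk group contains enough suitable disks, a mirrored-stripe volume of this kind might be created as follows (the names are illustrative):

# vxassist -g mydg make mirstrvol 10g layout=mirror-stripe ncol=3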
NOTE A striped-mirror volume is an example of a layered volume. See “Layered Volumes” on page 36 for more information.

As for a mirrored-stripe volume, a striped-mirror volume offers the dual benefits of striping to spread data across multiple disks, while mirroring provides redundancy of data. In addition, it enhances redundancy, and reduces recovery time after disk failure.
vulnerable to being put out of use altogether should a second disk fail before the first failed disk has been replaced, either manually or by hot-relocation.
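The corresponding sketch for a striped-mirror volume (again with illustrative names) is:

# vxassist -g mydg make strmirvol 10g layout=stripe-mirror ncol=3

VxVM creates and manages the underlying mirrored volumes for each column automatically.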
NOTE The VERITAS Enterprise Administrator (VEA) terms a striped-mirror as Striped-Pro, and a concatenated-mirror as Concatenated-Pro.

RAID-5 (Striping with Parity)

NOTE VxVM supports RAID-5 for private disk groups, but not for shareable disk groups in a cluster environment.

NOTE You may need an additional license to use this feature.

Although both mirroring (RAID-1) and RAID-5 provide redundancy of data, they use different methods.
all of the disks in the array, reducing the write time for large independent writes because the writes do not have to wait until a single parity disk can accept the data.

Figure 1-21 Parity Locations in a RAID-5 Model

RAID-5 and how it is implemented by VxVM is described in “Volume Manager RAID-5 Arrays” on page 32.
support the full width of a parity stripe. The figure, “Traditional RAID-5 Array,” shows the row and column arrangement of a traditional RAID-5 array.

Figure 1-22 Traditional RAID-5 Array

This traditional array structure supports growth by adding more rows per column.
units are used for each column. For RAID-5, the default stripe unit size is 16 kilobytes. See “Striping (RAID-0)” on page 22 for further information about stripe units.

Figure 1-23 Volume Manager RAID-5 Array

NOTE Mirroring of RAID-5 volumes is not currently supported.
Left-symmetric layout stripes both data and parity across columns, placing the parity in a different column for every stripe of data. The first parity stripe unit is located in the rightmost column of the first stripe. Each successive parity stripe unit is located in the next stripe, shifted left one column from the previous parity stripe unit location.
failure, the data for each stripe can be restored by XORing the contents of the remaining columns’ data stripe units against their respective parity stripe units. For example, if a disk corresponding to the whole or part of the far left column fails, the volume is placed in a degraded mode.
complete. However, only the data write to disk A is complete. The parity write to disk C is incomplete, which would cause the data on disk B to be reconstructed incorrectly.

Figure 1-25 Incomplete Write

This failure can be avoided by logging all data and parity writes before committing them to the array.
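When a RAID-5 volume is created with vxassist, a RAID-5 log is normally added at the same time. A minimal sketch, with illustrative names:

# vxassist -g mydg make raidvol 10g layout=raid5 nlog=1

The nlog attribute sets the number of RAID-5 logs to create.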
Underlying volumes in the “Managed by VxVM” area are used exclusively by VxVM and are not designed for user manipulation. You cannot detach a layered volume or perform any other operation on the underlying volumes by manipulating the internal structure.
Online Relayout

NOTE You may need an additional license to use this feature.

Online relayout allows you to convert between storage layouts in VxVM, with uninterrupted data access. Typically, you would do this to change the redundancy or performance characteristics of a volume. VxVM adds redundancy to storage either by duplicating the data (mirroring) or by adding parity (RAID-5).
The transformation is done by moving one portion of data at a time in the source layout to the destination layout. Data is copied from the source volume to the temporary area, and data is removed from the source volume storage area in portions. The source volume storage area is then transformed to the new layout, and the data saved in the temporary area is written back to the new layout.
The following are examples of operations that you can perform using online relayout:

• Change a RAID-5 volume to a concatenated, striped, or layered volume (remove parity). See Figure 1-28, “Example of Relayout of a RAID-5 Volume to a Striped Volume,” below. Note that removing parity (shown by the shaded area) decreases the overall storage space that the volume requires.

Figure 1-28 Example of Relayout of a RAID-5 Volume to a Striped Volume
• Change the column stripe width in a volume. See Figure 1-31, “Example of Increasing the Stripe Width for the Columns in a Volume,” below.

Figure 1-31 Example of Increasing the Stripe Width for the Columns in a Volume

For details of how to perform online relayout operations, see “Performing Online Relayout” on page 318; a brief command sketch also follows the list of limitations below.

Limitations of Online Relayout

Note the following limitations of online relayout:

• Log plexes cannot be transformed.
• Online relayout involving RAID-5 volumes is not supported for shareable disk groups in a cluster environment.
• Online relayout cannot transform sparse plexes, nor can it make any plex sparse. (A sparse plex is not the same size as the volume, or has regions that are not mapped to any subdisk.)
• The number of mirrors in a mirrored volume cannot be changed using relayout.
• Only one relayout may be applied to a volume at a time.
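As an illustration of the command-line interface to this feature, a relayout request takes the following general form (the volume name and attributes are examples only):

# vxassist -g mydg relayout vol01 layout=stripe ncol=3

This converts the volume vol01 to a three-column striped layout while it remains online.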
Volume Resynchronization

When storing data redundantly and using mirrored or RAID-5 volumes, VxVM ensures that all copies of the data match exactly. However, under certain conditions (usually due to complete system failures), some redundant data on a volume can become inconsistent or unsynchronized. The mirrored data is not exactly the same as the original data.
Resynchronization Process

The process of resynchronization depends on the type of volume. RAID-5 volumes that contain RAID-5 logs can “replay” those logs. If no logs are available, the volume is placed in reconstruct-recovery mode and all parity is regenerated. For mirrored volumes, resynchronization is done by placing the volume in recovery mode (also called read-writeback recovery mode).
Dirty Region Logging (DRL)

NOTE You may need an additional license to use this feature.

Dirty region logging (DRL), if enabled, speeds recovery of mirrored volumes after a system crash. DRL keeps track of the regions that have changed due to I/O writes to a mirrored volume. DRL uses this information to recover only those portions of the volume that need to be recovered.
subdisk is associated with one plex of the volume. Only one log subdisk can exist per plex. If the plex contains only a log subdisk and no data subdisks, that plex is referred to as a log plex. The log subdisk can also be associated with a regular plex that contains data subdisks. In that case, the log subdisk risks becoming unavailable if the plex must be detached due to the failure of one of its data subdisks.
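For a mirrored volume, a dirty region log can be added after the volume has been created. A minimal sketch, with illustrative names:

# vxassist -g mydg addlog vol01 logtype=drl

This associates a DRL log plex with the volume vol01.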
SmartSync Recovery Accelerator

The SmartSync feature of Volume Manager increases the availability of mirrored volumes by only resynchronizing changed data. (The process of resynchronizing mirrored databases is also sometimes referred to as resilvering.) SmartSync reduces the time required to restore consistency, freeing more I/O bandwidth for business-critical applications.
Because the database keeps its own logs, it is not necessary for VxVM to do logging. Data volumes should be configured as mirrored volumes without dirty region logs. In addition to improving recovery time, this avoids any run-time I/O overhead due to DRL, which improves normal database write access.

Redo Log Volume Configuration

A redo log is a log of changes to the database data.
Volume Snapshots

VERITAS Volume Manager provides the capability for taking an image of a volume at a given point in time. Such an image is referred to as a volume snapshot. Such snapshots should not be confused with file system snapshots, which are point-in-time images of a VERITAS File System.
Alternatively, you can use the vxassist snapclear command to break the association between the original volume and the snapshot volume. The snapshot volume then has an existence that is independent of the original volume. This is useful for applications that do not require the snapshot to be resynchronized with the original volume.
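The typical snapshot life cycle is sketched below with illustrative names; see the detailed procedures in “Administering Volumes” before using these commands:

# vxassist -g mydg snapstart vol01
# vxassist -g mydg snapshot vol01 snapvol01

The snapstart task adds and synchronizes a snapshot plex, and the snapshot task breaks that plex off as the new volume snapvol01. After the backup completes, snapvol01 can be reattached and resynchronized with vxassist snapback, or made independent with vxassist snapclear.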
FastResync

NOTE You may need an additional license to use this feature.

The FastResync feature (previously called fast mirror resynchronization or FMR) performs quick and efficient resynchronization of stale mirrors (a mirror that is not synchronized). This increases the efficiency of the VxVM snapshot mechanism, and improves the performance of operations such as backup and decision support applications.
Understanding VERITAS Volume Manager FastResync Once FastResync has been enabled on a volume, it does not alter how you administer mirrors. The only visible effect is that repair operations conclude more quickly. • FastResync allows you to refresh and re-use snapshots rather than discard them. You can quickly re-associate (snapback) snapshot plexes with their original volumes.
Understanding VERITAS Volume Manager FastResync Persistent FastResync can also track the association between volumes and their snapshot volumes after they are moved into different disk groups. When the disk groups are rejoined, this allows the snapshot plexes to be quickly resynchronized. This ability is not supported by Non-Persistent FastResync. See “Reorganizing the Contents of Disk Groups” on page 172 for details.
Understanding VERITAS Volume Manager FastResync Figure 1-32, “Mirrored Volume with Persistent FastResync Enabled,” shows an example of a mirrored volume with two plexes on which Persistent FastResync is enabled. Associated with the volume are a DCO object and a DCO volume with two plexes.
Understanding VERITAS Volume Manager FastResync Multiple snapshot plexes and associated DCO plexes may be created in the volume by re-running the snapstart operation. You can create up to a total of 32 plexes (data and log) in a volume. A snapshot volume can now be created from a snapshot plex by running the snapshot operation on the volume.
See “Merging a Snapshot Volume (snapback)” on page 314, “Dissociating a Snapshot Volume (snapclear)” on page 315, and the vxassist(1M) manual page for more information.
Understanding VERITAS Volume Manager FastResync volume is the name of the volume being snapshotted. This default can be overridden by using the option -o name=pattern, as described on the vxassist(1M) manual page. To snapshot all the volumes in a single disk group, specify the option -o allvols to vxassist. However, this fails if any of the volumes in the disk group do not have a complete snapshot plex. It is also possible to take several snapshots of the same volume.
Understanding VERITAS Volume Manager FastResync area of the volume is marked as “dirty” so that this area is resynchronized. The snapback operation fails if it attempts to create an incomplete snapshot plex. In such cases, you must grow the replica volume, or the original volume, before invoking snapback. Growing the two volumes separately can lead to a snapshot that shares physical disks with another mirror in the volume. To prevent this, grow the volume after the snapback command is complete.
replica. It is safe to perform these operations after the snapshot is completed. For more information, see the vxvol(1M), vxassist(1M), and vxplex(1M) manual pages.
Hot-Relocation

NOTE You may need an additional license to use this feature.

Hot-relocation is a feature that allows a system to react automatically to I/O failures on redundant objects (mirrored or RAID-5 volumes) in VxVM and restore redundancy and access to those objects. VxVM detects I/O failures on objects and relocates the affected subdisks. The subdisks are relocated to disks designated as spare disks and/or free space within the disk group.
2 Administering Disks

Introduction

This chapter describes the operations for managing disks used by the Volume Manager (VxVM). This includes placing disks under VxVM control, initializing disks, mirroring the root disk, and removing and replacing disks.

NOTE Most VxVM commands require superuser or equivalent privileges.

Rootability, which puts the root disk under VxVM control and allows it to be mirrored, is supported for this release of VxVM for HP-UX. See “Rootability” on page 88 for more information.
Disk Devices

When performing disk administration, it is important to understand the difference between a disk name and a device name. When a disk is placed under VxVM control, a VM disk is assigned to it. You can define a symbolic disk name (also known as a disk media name) to refer to a VM disk for the purposes of administration. A disk name can be up to 31 characters long.
Administering Disks Disk Devices The syntax of a device name is c#t#d#, where c# represents a controller on a host bus adapter, t# is the target controller ID, and d# identifies a disk on the target controller. Fabric mode disk devices are named as follows: • Disk in supported disk arrays are named using the enclosure name_# format. For example, disks in the supported disk array name FirstFloor are named FirstFloor_0, FirstFloor_1, FirstFloor_2 and so on.
Administering Disks Disk Devices Private and Public Disk Regions A VM disk usually has two regions: private region A small area where configuration information is stored. A disk header label, configuration records for VxVM objects (such as volumes, plexes and subdisks), and an intent log for the configuration database are stored here. The default private region size is 2048 blocks (2048 kilobytes), which is large enough to record the details of about 4000 VxVM objects in a disk group.
cdsdisk The disk is formatted as a Cross-platform Data Sharing (CDS) disk that is suitable for moving between different operating systems. This is the default format for disks that are not used to boot the system. Typically, most disks on a system are configured as this disk type. However, it is not a suitable format for boot, root or swap disks, for mirrors or hot-relocation spares of such disks, or for EFI disks.

hpdisk The disk is formatted as a simple disk.
Configuring Newly Added Disk Devices

When you physically connect new disks to a host or when you zone new fibre channel devices to a host, you can use the vxdctl command to rebuild the volume device node directories and to update the DMP internal database to reflect the new state of the system.
NOTE The items in a list of physical controllers are separated by + characters. You can use the command vxdmpadm getctlr all to obtain a list of physical controllers.

You can specify only one selection argument to the vxdisk scandisks command. Specifying multiple options results in an error. For more information, see the vxdisk(1M) manual page.
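For example (the device names are illustrative):

# vxdisk scandisks new
# vxdisk scandisks device=c1t1d0,c2t2d0

The first command scans only for devices that are new to VxVM; the second restricts the scan to the listed devices.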
Device Discovery Function To have VxVM discover a new disk array, use the following command: # vxdctl enable This command scans all of the disk devices and their attributes, updates the VxVM device list, and reconfigures DMP with the new device database. There is no need to reboot the host. NOTE This command ensures that dynamic multipathing is set up correctly on the array. Otherwise, VxVM treats the independent paths to the disks as separate devices, which can result in data corruption.
See “Changing Device Naming for TPD-Controlled Enclosures” on page 78 for details of how to find out the TPD configuration information that is known to DMP.

Autodiscovery of EMC Symmetrix Arrays

In VxVM 4.0, there were two possible ways to configure EMC Symmetrix arrays:

• With EMC PowerPath installed, such devices could be configured as foreign devices as described in “Adding Foreign Devices” on page 75.
• Without EMC PowerPath installed, DMP could be used to perform multipathing.
# vxddladm listsupport NOTE Use this command to obtain values for the vid and pid attributes that are used with other forms of the vxddladm command. To display more detailed information about a particular array library, use this form of the command: # vxddladm listsupport libname=libvxenc.sl This command displays the vendor ID (VID), product IDs (PIDs) for the arrays, array types (for example, A/A or A/P), and array names. The following is sample output. # vxddladm listsupport libname=libvxfujitsu.
This command adds the array library to the database so that the library can once again be used in device discovery. If vxconfigd is running, you can use the vxdisk scandisks command to discover the array and add its details to the database.
# vxddladm addjbod vid=SEAGATE pid=ST318404LSUN18G 3. Use the vxdctl enable command to bring the array under VxVM control as described in “Enabling Discovery of New Devices” on page 70: # vxdctl enable 4.
Removing Disks from the DISKS Category To remove disks from the DISKS (JBOD) category, use the vxddladm command with the rmjbod keyword. The following example illustrates the command for removing disks supplied by the vendor, Seagate: # vxddladm rmjbod vid=SEAGATE Adding Foreign Devices DDL cannot discover some devices that are controlled by third-party drivers, such as for EMC PowerPath and RAM disks.
Placing Disks Under VxVM Control When you add a disk to a system that is running VxVM, you need to put the disk under VxVM control so that VxVM can control the space allocation on the disk. Unless you specify a disk group, VxVM places new disks in a default disk group according to the rules given in “Rules for Determining the Default Disk Group” on page 137.
c0 You can exclude all disks in specific enclosures from initialization by listing those enclosures in the file /etc/vx/enclr.exclude. The following is an example of an entry in a enclr.exclude file: enc1 NOTE Chapter 2 Only the vxinstall and vxdiskadm commands use the contents of the /etc/vx/disks.exclude, /etc/vx/cntrls.exclude and /etc/vx/enclr.exclude files. You may need to create these files if they do not already exist on the system.
Changing the Disk-Naming Scheme NOTE Devices with very long device names (for example, Fibre Channel devices that include worldwide name (WWN) identifiers) are always represented by enclosure-based names. The operation in this section has no effect on such devices. You can either use enclosure-based naming for disks or the traditional naming scheme (such as c#t#d#). Select menu item 20 from the vxdiskadm main menu to change the disk-naming scheme that you want VxVM to use.
emcpower13 auto:hpdisk disk4  mydg online
emcpower14 auto:hpdisk disk5  mydg online
emcpower15 auto:hpdisk disk6  mydg online
emcpower16 auto:hpdisk disk7  mydg online
emcpower17 auto:hpdisk disk8  mydg online
emcpower18 auto:hpdisk disk9  mydg online
emcpower19 auto:hpdisk disk10 mydg online

# vxdmpadm setattr enclosure EMC0 tpdmode=native
# vxdisk list

DEVICE  TYPE        DISK  GROUP STATUS
c6t0d10 auto:hpdisk disk1 mydg  online
c6t0d11 auto:hpdisk disk2 mydg  online
c6t0d12 auto:hpdisk disk3 mydg  online
Issues Regarding Persistent Simple/Nopriv Disks with Enclosure-Based Naming If you change from the c#t#d# based naming scheme to the enclosure-based naming scheme, persistent simple or nopriv disks may be put in the “error” state and cause VxVM objects on those disks to fail.
Step 3. If you want to use the enclosure-based naming scheme, use vxdiskadm to add a non-persistent simple disk to the rootdg disk group, change back to the enclosure-based naming scheme, and then run the following command: # /usr/bin/vxvm/bin/vxdarestore NOTE If not all the disks in rootdg go into the error state, you need only run vxdarestore to restore the disks that are in the error state and the objects that they contain.
Displaying and Changing Default Disk Layout Attributes

To display or change the default values for initializing disks, select menu item 21 (Change/display the default disk layout) from the vxdiskadm main menu. For disk initialization, you can change the default format and the default length of the private region. The attribute settings for initializing disks are stored in the file /etc/default/vxdisk. See the vxdisk(1M) manual page for more information.
Adding a Disk to VxVM Formatted disks being placed under VxVM control may be new or previously used outside VxVM. The set of disks can consist of all disks on the system, all disks on a controller, selected disks, or a combination of these. Depending on the circumstances, all of the disks may not be processed in the same way. CAUTION Initialization does not preserve data on disks. When initializing multiple disks at one time, it is possible to exclude certain disks or certain controllers.
can be a single disk, or a series of disks and/or controllers (with optional targets). If the list consists of multiple items, separate them using white space, for example:

c3t0d0 c3t1d0 c3t2d0 c3t3d0

specifies four disks at separate target IDs on controller 3.
Step 5. If you specified the name of a disk group that does not already exist, vxdiskadm prompts for confirmation that you really want to create this new disk group: There is no active disk group named disk group name.
Step 11. You can now choose whether the disk is to be formatted as a CDS disk that is portable between different operating systems, or as a non-portable hpdisk-format disk: Enter the desired format [cdsdisk,hpdisk,q,?] (default: cdsdisk) Enter the format that is appropriate for your needs. In most cases, this is the default format, cdsdisk. Step 12. At the following prompt, vxdiskadm asks if you want to use the default private region size of 2048 blocks.
If the disk you want to add has previously been under LVM control, you can preserve the data it contains on a VxVM disk by the process of conversion (refer to the VERITAS Volume Manager Migration Guide for more details).

Using vxdiskadd to Place a Disk Under Control of VxVM

As an alternative to vxdiskadm, you can use the vxdiskadd command to put a disk under VxVM control.
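For example, to bring the disk at device address c1t0d0 under VxVM control (the device name is illustrative):

# vxdiskadd c1t0d0

The vxdiskadd command then prompts for the disk group, disk name, and format in the same way as vxdiskadm.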
Rootability Rootability indicates that the volumes containing the root file system and the system swap area are under VxVM control. Without rootability, VxVM is usually started after the operating system kernel has passed control to the initial user mode process at boot time. However, if the volume containing the root file system is under VxVM control, the kernel starts portions of VxVM before starting the first user mode process.
specify are to be used as VxVM root disks and mirrors. • The volumes on the root disk cannot use dirty region logging (DRL). Root Disk Mirrors All the volumes on a VxVM root disk may be mirrored. The simplest way to achieve this is to mirror the VxVM root disk onto an identically sized or larger physical disk. If a mirror of the root disk must also be bootable, the restrictions listed in “Booting Root Volumes” on page 89 also apply to the mirror disk.
Setting up a VxVM Root Disk and Mirror

NOTE These procedures should be carried out at init level 1.

To set up a VxVM root disk and a bootable mirror of this disk, use the vxcp_lvmroot utility.
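A minimal sketch, assuming that the target disk c0t1d0 is at least as large as the existing LVM root disk:

# /etc/vx/bin/vxcp_lvmroot -v c0t1d0

The -v option reports progress as the LVM root volumes are copied to VxVM volumes on the target disk; see the vxcp_lvmroot(1M) manual page for the supported options.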
NOTE The target disk for a mirror that is added using the vxrootmir command must be large enough to accommodate the volumes from the VxVM root disk.
# /etc/vx/bin/vxdestroy_lvmroot -v c0t1d0
# /etc/vx/bin/vxres_lvmroot -v -b c0t1d0

The -b option to vxres_lvmroot sets c0t1d0 as the primary boot device. As these operations can take some time, the verbose option, -v, is specified to indicate how far the operation has progressed. For more information, refer to the vxres_lvmroot(1M) manual page.

Adding Swap Disks to a VxVM Rootable System

On occasion, you may need to increase the amount of swap space for an HP-UX system.
Administering Disks Removing Disks Removing Disks NOTE You must disable a disk group as described in “Disabling a Disk Group” on page 172 before you can remove the last disk in that group. Alternatively, you can destroy the disk group as described in “Destroying a Disk Group” on page 172. You can remove a disk from a system and move it to another system if the disk is failing or has failed. Before removing the disk from the current system, you must: Step 1.
Administering Disks Removing Disks VxVM NOTICE V-5-2-284 Requested operation is to remove disk mydg01 from group mydg. Continue with operation? [y,n,q,?] (default: y) The vxdiskadm utility removes the disk from the disk group and displays the following success message: VxVM INFO V-5-2-268 Removal of disk mydg01 is complete. You can now remove the disk or leave it on your system as a replacement. Step 5.
Enter disk name [,list,q,?] mydg02
Requested operation is to remove disk mydg02 from group mydg.
Continue with operation? [y,n,q,?] (default: y) y
VxVM INFO V-5-2-268 Removal of disk mydg02 is complete.
Clobber disk headers? [y,n,q,?] (default: n) y

Enter y to remove the disk completely from VxVM control. If you do not want to remove the disk completely from VxVM control, press Return or enter n.
Removing a Disk From VxVM Control

After removing a disk from a disk group, you can permanently remove it from VERITAS Volume Manager control by running the vxdiskunsetup command:

# /usr/lib/vxvm/bin/vxdiskunsetup c#t#d#

CAUTION The vxdiskunsetup command removes a disk from VERITAS Volume Manager control by erasing the VxVM metadata on the disk. To prevent data loss, any data on the disk should first be evacuated from the disk.
Administering Disks Removing and Replacing Disks Removing and Replacing Disks NOTE A replacement disk should have the same disk geometry as the disk that failed. That is, the replacement disk should have the same bytes per sector, sectors per track, tracks per cylinder and sectors per cylinder, same number of cylinders, and the same number of accessible cylinders. If failures are starting to occur on a disk, but the disk has not yet failed completely, you can replace the disk.
Administering Disks Removing and Replacing Disks Any applications using these volumes will fail future accesses. These volumes will require restoration from backup. Are you sure you want do this? [y,n,q,?] (default: n) To remove the disk, causing the named volumes to be disabled and data to be lost when the disk is replaced, enter y or press Return. To abandon removal of the disk, and back up or move the data associated with the volumes that would otherwise be disabled, enter n or q and press Return.
Administering Disks Removing and Replacing Disks successfully. VxVM NOTICE V-5-2-260 Proceeding to replace mydg02 with device c0t1d0. Step 6. You can now choose whether the disk is to be formatted as a CDS disk that is portable between different operating systems, or as a non-portable hpdisk-format disk: Enter the desired format [cdsdisk,hpdisk,q,?] (default: cdsdisk) Enter the format that is appropriate for your needs. In most cases, this is the default format, cdsdisk. Step 7.
Administering Disks Removing and Replacing Disks Replacing a Failed or Removed Disk NOTE You may need to run commands that are specific to the operating system or disk array when replacing a physical disk. Use the following procedure to replace a failed or removed disk with a new disk: Step 1. Select menu item 4 (Replace a failed or removed disk) from the vxdiskadm main menu. Step 2.
Administering Disks Removing and Replacing Disks device c0t1d0 and to then use that device to replace the removed or failed disk mydg02 in disk group mydg. Continue with operation? [y,n,q,?] (default: y) • If the disk has already been initialized, press Return at the following prompt to replace the disk: VxVM INFO V-5-2-382 The requested operation is to use the initialized device c0t1d0 to replace the removed or failed disk mydg02 in disk group mydg.
Enabling a Physical Disk

If you move a disk from one system to another during normal system operation, VxVM does not recognize the disk automatically. The enable disk task enables VxVM to identify the disk and to determine if this disk is part of a disk group. Also, this task re-enables access to a disk that was disabled by either the disk group deport task or the disk device disable (offline) task. To enable a disk, use the following procedure:

Step 1.
Taking a Disk Offline

There are instances when you must take a disk offline. If a disk is corrupted, you must disable the disk before removing it. You must also disable a disk before moving the physical disk device to another location to be connected to another system.

NOTE Taking a disk offline is only useful on systems that support hot-swap removal and insertion of disks without needing to shut down and reboot the system.
Renaming a Disk

If you do not specify a VM disk name, VxVM gives the disk a default name when you add the disk to VxVM control. The VM disk name is used by VxVM to identify the location of the disk or the disk type.
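Renaming is performed with the vxedit command. A sketch with illustrative names:

# vxedit -g mydg rename mydg01 mydg03

This changes the VM disk name from mydg01 to mydg03 without affecting the data on the disk.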
Reserving Disks

By default, the vxassist command allocates space from any disk that has free space. You can reserve a set of disks for special purposes, such as to avoid general use of a particularly slow or a particularly fast disk.
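A reservation is set or cleared with the vxedit command (the names are illustrative):

# vxedit -g mydg set reserve=on mydg03
# vxedit -g mydg set reserve=off mydg03

While reserve=on is set, vxassist does not allocate space from mydg03 unless that disk is named explicitly on the vxassist command line.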
Displaying Disk Information

Before you use a disk, you need to know if it has been initialized and placed under VxVM control. You also need to know if the disk is part of a disk group because you cannot create volumes on a disk that is not part of a disk group. The vxdisk list command displays device names for all recognized disks, the disk names, the disk group names associated with each disk, and the status of each disk.
Administering Disks Displaying Disk Information Step 2. At the following display, enter the address of the disk you want to see, or enter all for a list of all disks: List disk information Menu: VolumeManager/Disk/ListDisk VxVM INFO V-5-2-475 Use this menu operation to display a list of disks. You can also choose to list detailed information about the disk at a specific disk device address.
3 Administering Dynamic Multipathing (DMP)
NOTE You may need an additional license to use this feature.

The Dynamic Multipathing (DMP) feature of VERITAS Volume Manager (VxVM) provides greater reliability and performance by using path failover and load balancing. This feature is available for multiported disk arrays from various vendors. See the VERITAS Volume Manager Hardware Notes for information about supported disk arrays.
How DMP Works

Multiported disk arrays can be connected to host systems through multiple paths. To detect the various paths to a disk, DMP uses a mechanism that is specific to each supported array type. DMP can also differentiate between different enclosures of a supported array type that are connected to the same host system.
Administering Dynamic Multipathing (DMP) How DMP Works the disk array with the node. For disks in an unsupported array, DMP maps a separate metanode to each path that is connected to a disk. The raw and block devices for the nodes are created in the directories /dev/vx/rdmp and /dev/vx/dmp respectively. See the figure “How DMP Represents Multiple Physical Paths to a Disk as one Metanode,” for an illustration of how DMP sets up a metanode for a disk in a supported disk array.
Figure 3-2 Example of Multipathing for a Disk Enclosure in a SAN Environment (DMP maps the two paths, c1t99d0 and c2t99d0, to the single device enc0_0)

See “Changing the Disk-Naming Scheme” on page 78 for details of how to change the naming scheme that VxVM uses for disk devices.

NOTE The operation of DMP relies on the vxdmp device driver. Unlike prior releases, from VxVM 3.1.
Path Failover Mechanism

The DMP feature of VxVM enhances system reliability when used with multiported disk arrays. In the event of the loss of one connection to the disk array, DMP automatically selects the next available I/O path for I/O requests dynamically without action from the administrator.
Load Balancing

DMP uses the balanced path mechanism to provide load balancing across paths for active/active disk arrays. Load balancing maximizes I/O throughput by using the total bandwidth of all available paths. Sequential I/O starting within a certain range is sent down the same path in order to benefit from disk track caching.
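Depending on the VxVM version and array type, the balancing behavior for an enclosure can be tuned through its I/O policy attribute. The following is a sketch only; the enclosure name and policy value are examples:

# vxdmpadm setattr enclosure enc0 iopolicy=round-robin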
DMP in a Clustered Environment

NOTE You need an additional license to use the cluster feature of VxVM.

In a clustered environment where Active/Passive type disk arrays are shared by multiple hosts, all nodes in the cluster must access the disk via the same physical path. Accessing a disk via multiple paths simultaneously can severely degrade I/O performance (sometimes referred to as the ping-pong effect).
VxVM vxio ERROR V-5-1-3490 Operation not supported for shared disk arrays.
Disabling and Enabling Multipathing for Specific Devices

You can use vxdiskadm menu options 17 and 18 to disable or enable multipathing. These menu options also allow you to exclude devices from or include devices in the view of VxVM. For more information, see “Disabling Multipathing and Making Devices Invisible to VxVM” on page 118 and “Enabling Multipathing and Making Devices Visible to VxVM” on page 123.
Administering Dynamic Multipathing (DMP) Disabling and Enabling Multipathing for Specific Devices q Exit from menus Select an operation to perform: • Select option 1 to exclude all paths through the specified controller from the view of VxVM. These paths remain in the disabled state until the next reboot, or until the paths are re-included.
NOTE This option requires a reboot of the system.

Exclude VID:PID from VxVM
Menu: VolumeManager/Disk/ExcludeDevices/VIDPID-VXVM

Use this operation to exclude disks returning a specified VendorID:ProductID combination from VxVM. As a result of this operation, all disks that return a VendorID:ProductID matching the specified combination will be excluded from the view of VxVM.
Exclude all but one paths to a disk
Menu: VolumeManager/Disk/ExcludeDevices/PATHGROUP-VXVM

Use this operation to exclude all but one paths to a disk. In case of disks which are not multipathed by vxdmp, VxVM will see each path as a disk. In such cases, creating a pathgroup of all paths to the disk will ensure that only one of the paths from the group is made visible to VxVM.
specified paths will not be multipathed by DMP. This operation can be reversed using the vxdiskadm command.

VxVM INFO V-5-2-1266 You can specify a pathname or a pattern at the prompt.
characters respectively.
8  List currently suppressed/non-multipathed devices
?  Display help about menu
?? Display help about the menuing system
q  Exit from menus

Select an operation to perform:

• Select option 1 to make all paths through a specified controller visible to VxVM.
Administering Dynamic Multipathing (DMP) Disabling and Enabling Multipathing for Specific Devices As a result of this operation, disks that return VendorID:ProductID matching the specified combination will be made visible to VxVM again. VxVM INFO V-5-2-1407 You can specify a VID:PID combination at the prompt. The specification can be as follows: VID:PID where VID stands for Vendor ID PID stands for Product ID (The command vxdmpinq in /etc/vx/diag.
Administering Dynamic Multipathing (DMP) Disabling and Enabling Multipathing for Specific Devices Re-include controllers in DMP Menu: VolumeManager/Disk/IncludeDevices/CTLR-DMP Use this operation to make vxdmp multipath all disks on a controller again. As a result of this operation, all disks having a path through the specified controller will be multipathed by vxdmp again. VxVM INFO V-5-2-1264 You can specify a controller name at the prompt. A controller name is of the form c#, example c3, c11 etc.
Administering Dynamic Multipathing (DMP) Disabling and Enabling Multipathing for Specific Devices As a result of this operation, all disks that return VID:PID matching the specified combination will be multipathed by vxdmp again. VxVM INFO V-5-2-1407 You can specify a VID:PID combination at the prompt. The specification can be as follows: VID:PID where VID stands for Vendor ID PID stands for Product ID (The command vxdmpinq in /etc/vx/diag.
Administering Dynamic Multipathing (DMP) Enabling and Disabling Input/Output (I/O) Controllers Enabling and Disabling Input/Output (I/O) Controllers DMP allows you to turn off I/O to a host I/O controller so that you can perform administrative operations. This feature can be used for maintenance of controllers attached to the host or of disk arrays supported by VxVM. I/O operations to the host I/O controller can be turned back on after the maintenance task is completed.
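For example, a controller might be taken offline for maintenance and then re-enabled with a sequence such as the following (a sketch, assuming a controller named c2; the commands are described under “Administering DMP Using vxdmpadm”):
# vxdmpadm disable ctlr=c2
# vxdmpadm enable ctlr=c2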
Administering Dynamic Multipathing (DMP) Displaying DMP Database Information Displaying DMP Database Information You can use the vxdmpadm command to list DMP database information and perform other administrative tasks. This command allows you to list all controllers that are connected to disks, and other related information that is stored in the DMP database. You can use this information to locate system hardware, and to help you decide which controllers need to be enabled or disabled.
Administering Dynamic Multipathing (DMP) Displaying Multipaths to a VM Disk Displaying Multipaths to a VM Disk The vxdisk command is used to display the multipathing information for a particular metadevice. The metadevice is a device representation of a particular physical disk having multiple physical paths from the I/O controller of the system. In VxVM, all the physical disks in the system are represented as metadevices with one or more physical paths.
Administering Dynamic Multipathing (DMP) Displaying Multipaths to a VM Disk iosize: min=1024 (bytes) max=64 (blocks) public: slice=0 offset=1152 len=4101723 private: slice=0 offset=128 len=1024 update: time=962923719 seqno=0.
Administering Dynamic Multipathing (DMP) Administering DMP Using vxdmpadm Administering DMP Using vxdmpadm The vxdmpadm utility is a command line administrative interface to the DMP feature of VxVM. You can use the vxdmpadm utility to perform the following tasks.
Administering Dynamic Multipathing (DMP) Administering DMP Using vxdmpadm
NAME      STATE    ENCLR-TYPE  PATHS  ENBL  DSBL  ENCLR-NAME
=========================================================================
c11t0d8   ENABLED  ACME        2      2     0     enc1
c11t0d9   ENABLED  ACME        2      2     0     enc1
c11t0d10  ENABLED  ACME        2      2     0     enc1
c11t0d11  ENABLED  ACME        2      2     0     enc1
Displaying All Paths Controlled by a DMP Node The following command displays the paths controlled by the specified DMP node:
# vxdmpadm getsubpaths dmpnodename=dmpnodename
Administering Dynamic Multipathing (DMP) Administering DMP Using vxdmpadm
c2t3d0  ENABLED  SECONDARY  c2t3d0  ACME  enc0
c2t4d0  ENABLED  SECONDARY  c2t4d0  ACME  enc0
Listing Information About Host I/O Controllers The following command lists attributes of all host I/O controllers on the system:
# vxdmpadm listctlr all
CTLR-NAME  ENCLR-TYPE  STATE    ENCLR-NAME
=====================================================
c0         OTHER       ENABLED  others0
c1         X1          ENABLED  jbod0
c2         ACME        ENABLED  enc0
c3         ACME        ENABLED  enc0
Administering Dynamic Multipathing (DMP) Administering DMP Using vxdmpadm
ENCLR_NAME  ENCLR_TYPE  ENCLR_SNO             STATUS     ARRAY_TYPE
==========================================================================
Disk        Disk        DISKS                 CONNECTED  Disk
ANA0        ACME        508002000001d660      CONNECTED  A/A
enc0        A3          60020f20000001a90000  CONNECTED  A/P
Displaying Information About TPD-Controlled Devices The third-party driver (TPD) coexistence feature allows I/O that is controlled by third-party multipathing drivers to bypass DMP.
Administering Dynamic Multipathing (DMP) Administering DMP Using vxdmpadm
NAME     TPDNODENAME   PATH-TYPE[-]  DMP-NODENAME  ENCLR-TYPE  ENCLR-NAME
====================================================================================
c7t0d10  emcpower10s2  -             emcpower10s2  EMC         EMC0
c6t0d10  emcpower10s2  -             emcpower10s2  EMC         EMC0
Conversely, the next command displays information about the PowerPath node that corresponds to the path, c7t0d10, discovered by DMP:
# vxdmpadm gettpdnode nodename=c7t0d10
NAME STAT
Administering Dynamic Multipathing (DMP) Administering DMP Using vxdmpadm Examples of Using the vxdmpadm iostat Command The following is an example session using the vxdmpadm iostat command.
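Such a session might begin by enabling the gathering of statistics and then displaying them (a sketch; the options shown are illustrative):
# vxdmpadm iostat start memory=4096
# vxdmpadm iostat show all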
Administering Dynamic Multipathing (DMP) Administering DMP Using vxdmpadm c3t119d0 0 0 0 0 0.000000 0.000000
Administering Dynamic Multipathing (DMP) Administering DMP Using vxdmpadm c3t115d0 0 0 0 0 0.000000 0.000000 Setting the Attributes of the Paths to an Enclosure You can use the vxdmpadm setattr command to set the following attributes of the paths to an enclosure or disk array: • active Changes a standby (failover) path to an active path.
Administering Dynamic Multipathing (DMP) Administering DMP Using vxdmpadm • primary Defines a path as being the primary path for an Active/Passive disk array. The following example specifies a primary path for an A/P disk array: # vxdmpadm setattr path c3t10d0 pathtype=primary • secondary Defines a path as being the secondary path for an Active/Passive disk array.
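A corresponding example for a secondary path might be (the path name is illustrative):
# vxdmpadm setattr path c4t10d0 pathtype=secondary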
Administering Dynamic Multipathing (DMP) Administering DMP Using vxdmpadm NOTE Starting with release 4.1 of VxVM, I/O policies are recorded in the file /etc/vx/dmppolicy.info, and are persistent across reboots of the system. Do not edit this file yourself. The following policies may be set: • adaptive This policy attempts to maximize overall I/O throughput from/to the disks by dynamically scheduling I/O on the paths. It is suggested for use where I/O loads can vary over time.
Administering Dynamic Multipathing (DMP) Administering DMP Using vxdmpadm
Table 3-1 Partition Size in Blocks and Bytes
Partition Size in Blocks    Equivalent Size in Bytes
64                          65,536
128                         131,072
256                         262,144
512                         524,288
1024 (default)              1,048,576
2048                        2,097,152
4096                        4,194,304
The default value for the partition size is 1024 blocks (1MB). A value that is not a power of 2 is silently rounded down to the nearest acceptable value.
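For example, to set the balanced policy with a partition size of 4096 blocks (a sketch, assuming an enclosure named enc0):
# vxdmpadm setattr enclosure enc0 iopolicy=balanced partitionsize=4096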
Administering Dynamic Multipathing (DMP) Administering DMP Using vxdmpadm • priority This policy is useful when the paths in a SAN have unequal performance, and you want to enforce load balancing manually. You can assign priorities to each path based on your knowledge of the configuration and performance characteristics of the available paths, and of other aspects of your system.
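For example, to select the priority policy for an enclosure (a sketch, assuming an enclosure named enc0):
# vxdmpadm setattr enclosure enc0 iopolicy=priority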
Administering Dynamic Multipathing (DMP) Administering DMP Using vxdmpadm
c2t0d15 state=enabled type=primary
c2t1d15 state=enabled type=primary
c3t1d15 state=enabled type=primary
c3t2d15 state=enabled type=primary
c4t2d15 state=enabled type=primary
c4t3d15 state=enabled type=primary
c5t3d15 state=enabled type=primary
c5t4d15 state=enabled type=primary
In addition, the device is in the enclosure ENC0, belongs to the disk group mydg, and contains a simple concatenated volume myvol1.
Administering Dynamic Multipathing (DMP) Administering DMP Using vxdmpadm
ENCLR_NAME  DEFAULT        CURRENT
============================================
ENC0        Single-Active  Single-Active
This shows that the policy for the enclosure is set to singleactive, which explains why all the I/O is taking place on one path.
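Balancing the I/O across the paths could then be requested by changing the policy, for example (a sketch; balanced is one of the policies described earlier):
# vxdmpadm setattr enclosure ENC0 iopolicy=balanced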
Administering Dynamic Multipathing (DMP) Administering DMP Using vxdmpadm Disabling a Controller Disabling I/O to a host disk controller prevents DMP from issuing I/O through the specified controller. The command blocks until all pending I/O issued through the specified disk controller is completed. To disable a controller, use the following command: # vxdmpadm disable ctlr=ctlr Enabling a Controller Enabling a controller allows a previously disabled host disk controller to accept I/O.
Administering Dynamic Multipathing (DMP) Administering DMP Using vxdmpadm Starting the DMP Restore Daemon The DMP restore daemon re-examines the condition of paths at a specified interval. The type of analysis it performs on the paths depends on the specified checking policy. NOTE The DMP restore daemon does not change the disabled state of the path through a controller that you have disabled using vxdmpadm disable.
Administering Dynamic Multipathing (DMP) Administering DMP Using vxdmpadm The interval attribute specifies how often the restore daemon examines the paths. For example, after stopping the restore daemon, the polling interval can be set to 400 seconds using the following command: # vxdmpadm start restore interval=400 The default interval is 300 seconds. Decreasing this interval can adversely affect system performance.
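For reference, the restore daemon is stopped as follows (as the example above assumes):
# vxdmpadm stop restore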
Administering Dynamic Multipathing (DMP) Administering DMP Using vxdmpadm Configuring Array Policy Modules An array policy module (APM) is a dynamically loadable kernel module that may be provided by some vendors for use in conjunction with an array. An APM defines procedures to: • Select an I/O path when multiple paths to a disk within the array are available. • Select the path failover mechanism. • Select the alternate path in the case of a path failure. • Put a path change into effect.
4 Creating and Administering Disk Groups Introduction This chapter describes how to create and manage disk groups. Disk groups are named collections of disks that share a common configuration. Volumes are created within a disk group and are restricted to using disks within that disk group. A system with Volume Manager (VxVM) installed has a default disk group configured, rootdg. By default, operations are directed to the rootdg disk group.
The copy size in blocks can be obtained from the output of the command vxdg list diskgroup as the value of the permlen parameter on the line starting with the string “config:”. This value is the smallest of the len values for all copies of the configuration database in the disk group. The amount of remaining free space in the configuration database is shown as the value of the free parameter. An example is shown in “Displaying Disk Group Information” on page 154.
Specifying a Disk Group to Commands Many VxVM commands allow you to specify a disk group using the –g option. For example, to create a volume in disk group mktdg, use the following command: # vxassist -g mktdg make mktvol 50m The block special device for this volume is: /dev/vx/dsk/mktdg/mktvol The disk group does not have to be specified if the object names are unique. Most VxVM commands use object names specified on the command line to determine the disk group for the operation.
Displaying Disk Group Information To display information on existing disk groups, enter the following command:
# vxdg list
VxVM returns the following listing of current disk groups:
NAME    STATE    ID
rootdg  enabled  730344554.1025.tweety
newdg   enabled  731118794.1213.tweety
To display more detailed information on a specific disk group (such as rootdg), use the following command:
# vxdg list rootdg
The output from this command is similar to the following:
Group: rootdg
dgid: 962910960.1025.bass
import-id: 0.
Displaying Free Space in a Disk Group Before you add volumes and file systems to your system, make sure you have enough free disk space to meet your needs.
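One way to check is with the vxdg free command, which lists the free extents on the disks in a disk group (a sketch; the disk group name is illustrative):
# vxdg -g mktdg free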
Creating a Disk Group Data related to a particular set of applications or a particular group of users may need to be made accessible on another system. Examples of this are: • A system has failed and its data needs to be moved to other systems. • The work load must be balanced across a number of systems. It is important that you locate data related to particular applications or users on an identifiable set of disks.
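For example, a disk group named mktdg might be created on device c1t0d0 as follows (a sketch; the disk media name mktdg01 is illustrative):
# vxdg init mktdg mktdg01=c1t0d0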
The disk specified by the device name, c1t0d0, must have been previously initialized with vxdiskadd or vxdiskadm, and must not currently belong to a disk group.
Adding a Disk to a Disk Group To add a disk to an existing disk group, use menu item 1 (Add or initialize one or more disks) of the vxdiskadm command, as described in “Adding a Disk to VxVM” on page 83. You can also use the vxdiskadd command to add a disk to a disk group, for example: # vxdiskadd c1t2d0 where c1t2d0 is the device name of a disk that is not currently assigned to a disk group.
Removing a Disk from a Disk Group A disk that contains no subdisks can be removed from its disk group with this command: # vxdg [-g groupname] rmdisk diskname where the disk group name is only specified for a disk group other than the default, rootdg.
If you choose y, then all subdisks are moved off the disk, if possible. Some subdisks may not be movable. The most common reasons why a subdisk may not be movable are as follows: • There is not enough space on the remaining disks. • Plexes or striped subdisks cannot be allocated on different disks from existing plexes or striped subdisks in the volume.
Deporting a Disk Group Deporting a disk group disables access to a disk group that is currently enabled (imported) by the system. Deport a disk group if you intend to move the disks in a disk group to another system. Also, deport a disk group if you want to use all of the disks remaining in a disk group for a new purpose. To deport a disk group, use the following procedure: Step 1. Stop all activity by applications to volumes that are configured in the disk group that is to be deported.
the system. Disable (offline) the indicated disks? [y,n,q,?] (default: n) y Step 6. At the following prompt, press Return to continue with the operation: Continue with operation? [y,n,q,?] (default: y) Once the disk group is deported, the vxdiskadm utility displays the following message: Removal of disk group newdg was successful. Step 7.
Importing a Disk Group Importing a disk group enables access by the system to a disk group. To move a disk group from one system to another, first disable (deport) the disk group on the original system, and then move the disk between systems and enable (import) the disk group. To import a disk group, use the following procedure: Step 1. Use the following command to ensure that the disks in the deported disk group are online: # vxdisk -s list Step 2.
Select another disk group? [y,n,q,?] (default: n) Alternatively, you can use the vxdg command to import a disk group: # vxdg import diskgroup
Renaming a Disk Group Only one disk group of a given name can exist per system. It is not possible to import or deport a disk group when the target system already has a disk group of the same name. To avoid this problem, VxVM allows you to rename a disk group during import or deport. For example, because every system running VxVM must have a single rootdg default disk group, importing or deporting rootdg across systems is a problem. There cannot be two rootdg disk groups on the same system.
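The rename is requested with the -n option to vxdg, for example (a sketch):
# vxdg -t -n newdg import diskgroup
# vxdg -n newdg deport diskgroup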
# vxdg -tC -n newdg import diskgroup The -t option indicates a temporary import name, and the -C option clears import locks. The -n option specifies an alternate name for the rootdg being imported so that it does not conflict with the existing rootdg. diskgroup is the disk group ID of the disk group being imported (for example, 774226267.1025.tweety). If a reboot or crash occurs at this point, the temporarily imported disk group becomes unimported and requires a reimport. Step 3.
Moving Disks between Disk Groups To move a disk between disk groups, remove the disk from one disk group and add it to the other. For example, to move the physical disk c0t3d0 (attached with the disk name disk04) from disk group rootdg and add it to disk group mktdg, use the following commands: # vxdg rmdisk disk04 # vxdg -g mktdg adddisk mktdg02=c0t3d0 CAUTION This procedure does not save the configurations or data on the disks. You can also move a disk by using the vxdiskadm command.
Moving Disk Groups Between Systems An important feature of disk groups is that they can be moved between systems. If all disks in a disk group are moved from one system to another, then the disk group can be used by the second system. You do not have to re-specify the configuration. To move a disk group between systems, use the following procedure: Step 1.
CAUTION The purpose of the lock is to ensure that dual-ported disks (disks that can be accessed simultaneously by two systems) are not used by both systems at the same time. If two systems try to manage the same disks at the same time, configuration information stored on the disk is corrupted. The disk and its data become unusable. When you move disks from a system that has crashed or failed to detect the group before the disk is moved, the locks stored on the disks remain and must be cleared.
# vxdg -f import diskgroup CAUTION Be careful when using the -f option. It can cause the same disk group to be imported twice from different sets of disks, causing the disk group to become inconsistent. These operations can also be performed using the vxdiskadm utility. To import a disk group using vxdiskadm, select menu item 8 (Enable access to (import) a disk group). The vxdiskadm import operation checks for host import locks and prompts to see if you want to clear any that are found.
If you do not specify the base of the minor number range for a disk group, VxVM chooses one at random. The number chosen is at least 1000, is a multiple of 1000, and yields a usable range of 1000 device numbers. The chosen number also does not overlap within a range of 1000 of any currently imported disk groups, and it does not overlap any currently allocated volume device numbers. NOTE The default policy ensures that a small number of disk groups can be merged successfully between a set of machines.
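The base minor number of an existing disk group can be changed with the vxdg reminor operation, for example (a sketch with an arbitrary base number):
# vxdg reminor diskgroup 30000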
Reorganizing the Contents of Disk Groups NOTE You may need an additional license to use this feature. There are several circumstances under which you might want to reorganize the contents of your existing disk groups: • To group volumes or disks differently as the needs of your organization change. For example, you might want to split disk groups to match the boundaries of separate departments, or to join disk groups when departments are merged.
• move—moves a self-contained set of VxVM objects between imported disk groups. This operation fails if it would remove all the disks from the source disk group. Volume states are preserved across the move. The move operation is illustrated in Figure 4-1, “Disk Group Move Operation,” below.
destroyed if it has the same name as the target disk group (as is the case for the vxdg init command). The split operation is illustrated in Figure 4-2, “Disk Group Split Operation,” below.
• join—removes all VxVM objects from an imported disk group and moves them to an imported target disk group. The source disk group is removed when the join is complete. The join operation is illustrated in Figure 4-3, “Disk Group Join Operation,” below.
Figure 4-3 Disk Group Join Operation
If the system crashes or a hardware subsystem fails, VxVM attempts to complete or reverse an incomplete disk group reconfiguration when the system is restarted or the hardware subsystem is repaired, depending on how far the reconfiguration had progressed.
Creating and Administering Disk Groups Reorganizing the Contents of Disk Groups • In a cluster environment, disk groups involved in a move or join must both be private or must both be shared. The following sections describe how to use the vxdg command to reorganize disk groups. For more information about the vxdg command, see the vxdg(1M) manual page.
Creating and Administering Disk Groups The plexes of a DCO volume accompany their parent volume during the move. Use the vxprint command on a volume to examine the configuration of its associated DCO volume. Figure 4-4, “Examples of Disk Groups That Can and Cannot be Split,” illustrates some instances in which it is not possible to split a disk group because of the location of the DCO plexes.
Creating and Administering Disk Groups Reorganizing the Contents of Disk Groups For more information about relocating DCO plexes, see “Specifying Storage for DCO Plexes” on page 279.
Creating and Administering Disk Groups
Figure 4-4 Examples of Disk Groups That Can and Cannot be Split
[Figure: two configurations of volume data plexes, snapshot plexes, and their DCO plexes. In the first, the disk group can be split because the DCO plexes are on the same disks as the data plexes and can therefore accompany their volumes. In the second, the disk group cannot be split because the DCO plexes have been separated from their data plexes and so cannot accompany their volumes; one solution is to relocate the DCO plexes.]
Creating and Administering Disk Groups Reorganizing the Contents of Disk Groups Moving Objects Between Disk Groups To move a self-contained set of VxVM objects from an imported source disk group to an imported target disk group, use the following command: # vxdg [-o expand] [-o override|verify] move sourcedg targetdg object ...
Creating and Administering Disk Groups
dg dg1        dg1       -        -         -  -       -  -
dm disk01     c0t1d0    -        17678493  -  -       -  -
dm disk05     c1t96d0   -        17678493  -  -       -  -
dm disk07     c1t99d0   -        17678493  -  -       -  -
dm disk08     c1t100d0  -        17678493  -  -       -  -
v  vol1       fsgen     ENABLED  2048      -  ACTIVE  -  -
pl vol1-01    vol1      ENABLED  3591      -  ACTIVE  -  -
sd disk01-01  vol1-01   ENABLED  3591      0  -       -  -
pl vol1-02    vol1      ENABLED  3591      -  ACTIVE  -  -
sd disk05-01  vol1-02   ENABLED  3591      0  -       -  -
The following command moves the self-contained set of objects implied by specifying disk disk01 from disk group dg1 to rootdg:
# vxdg -o expand move dg1 rootdg disk01
Creating and Administering Disk Groups Reorganizing the Contents of Disk Groups
dg dg1     dg1       -  -         -  -  -  -
dm disk07  c1t99d0   -  17678493  -  -  -  -
dm disk08  c1t100d0  -  17678493  -  -  -  -
The following commands would also achieve the same result:
# vxdg move dg1 rootdg disk01 disk05
# vxdg move dg1 rootdg vol1
Splitting Disk Groups To remove a self-contained set of VxVM objects from an imported source disk group to a new target disk group, use the following command:
# vxdg [-o expand] [-o override|verify] split sourcedg targetdg object ...
Creating and Administering Disk Groups
dm disk08     c1t100d0  -        17678493  -  -       -  -
v  vol1       fsgen     ENABLED  2048      -  ACTIVE  -  -
pl vol1-01    vol1      ENABLED  3591      -  ACTIVE  -  -
sd disk01-01  vol1-01   ENABLED  3591      0  -       -  -
pl vol1-02    vol1      ENABLED  3591      -  ACTIVE  -  -
sd disk05-01  vol1-02   ENABLED  3591      0  -       -  -
The following command removes disks disk07 and disk08 from rootdg to form a new disk group, dg1:
# vxdg -o expand split rootdg dg1 disk07 disk08
The moved volumes are initially disabled following the split.
Creating and Administering Disk Groups Reorganizing the Contents of Disk Groups
v  vol1       fsgen    ENABLED  2048  -  ACTIVE  -  -
pl vol1-01    vol1     ENABLED  3591  -  ACTIVE  -  -
sd disk01-01  vol1-01  ENABLED  3591  0  -       -  -
pl vol1-02    vol1     ENABLED  3591  -  ACTIVE  -  -
sd disk05-01  vol1-02  ENABLED  3591  0  -       -  -
Disk group: dg1
TY NAME    ASSOC     KSTATE  LENGTH    PLOFFS  STATE  TUTIL0  PUTIL0
dg dg1     dg1       -       -         -       -      -       -
dm disk07  c1t99d0   -       17678493  -       -      -       -
dm disk08  c1t100d0  -       17678493  -       -      -       -
Joining Disk Groups
Creating and Administering Disk Groups
dg rootdg     rootdg    -        -         -  -       -  -
dm disk01     c0t1d0    -        17678493  -  -       -  -
dm disk02     c1t97d0   -        17678493  -  -       -  -
dm disk03     c1t112d0  -        17678493  -  -       -  -
dm disk04     c1t114d0  -        17678493  -  -       -  -
dm disk07     c1t99d0   -        17678493  -  -       -  -
dm disk08     c1t100d0  -        17678493  -  -       -  -
v  vol1       fsgen     ENABLED  2048      -  ACTIVE  -  -
pl vol1-01    vol1      ENABLED  3591      -  ACTIVE  -  -
sd disk01-01  vol1-01   ENABLED  3591      0  -       -  -
Creating and Administering Disk Groups Reorganizing the Contents of Disk Groups The output from vxprint after the join shows that disk group dg1 has been removed:
# vxprint
Disk group: rootdg
TY NAME       ASSOC     KSTATE   LENGTH    PLOFFS  STATE   TUTIL0  PUTIL0
dg rootdg     rootdg    -        -         -       -       -       -
dm disk01     c0t1d0    -        17678493  -       -       -       -
dm disk02     c1t97d0   -        17678493  -       -       -       -
dm disk03     c1t112d0  -        17678493  -       -       -       -
dm disk04     c1t114d0  -        17678493  -       -       -       -
dm disk05     c1t96d0   -        17678493  -       -       -       -
dm disk06     c1t98d0   -        17678493  -       -       -       -
dm disk07     c1t99d0   -        17678493  -       -       -       -
dm disk08     c1t100d0  -        17678493  -       -       -       -
v  vol1       fsgen     ENABLED  2048      -       ACTIVE  -       -
pl vol1-01    vol1      ENABLED  3591      -       ACTIVE  -       -
sd disk01-01  vol1-01   ENABLED  3591      0       -       -       -
pl vol1-02    vol1      ENABLED  3591      -       ACTIVE  -       -
sd disk05-01  vol1-02   ENABLED  3591      0       -       -       -
Creating and Administering Disk Groups Disabling a Disk Group To disable a disk group, unmount and stop any volumes in the disk group, and then use the following command to deport it: # vxdg deport diskgroup Deporting a disk group does not actually remove the disk group. It disables use of the disk group by the system. Disks in a deported disk group can be reused, reinitialized, added to other disk groups, or imported for use on other systems.
Creating and Administering Disk Groups Destroying a Disk Group Destroying a Disk Group The vxdg command provides a destroy option that removes a disk group from the system and frees the disks in that disk group for reinitialization: # vxdg destroy diskgroup CAUTION This command destroys all data on the disks. When a disk group is destroyed, the disks that are released can be re-used in other disk groups.
Creating and Administering Disk Groups Upgrading a Disk Group NOTE This information is not applicable for platforms whose first release was Volume Manager 3.0. However, it is applicable for subsequent releases. Prior to the release of Volume Manager 3.0, the disk group version was automatically upgraded (if needed) when the disk group was imported. From release 3.0 of Volume Manager, the two operations of importing a disk group and upgrading its version are separate.
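For reference, the current version of a disk group is shown by vxdg list, and the upgrade itself is performed with vxdg upgrade (a sketch):
# vxdg list diskgroup
# vxdg upgrade diskgroup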
Creating and Administering Disk Groups Upgrading a Disk Group Table 4-1 summarizes the Volume Manager releases that introduce and support specific disk group versions:
Table 4-1 Disk Group Version Assignments
VxVM Release  Introduces Disk Group Version  Supports Disk Group Versions
1.2           10                             10
1.3           15                             15
2.0           20                             20
2.2           30                             30
2.3           40                             40
2.5           50                             50
3.0           60                             20-40, 60
3.1           70                             20-70
3.1.1         80                             20-80
3.2, 3.5      90                             20-90
4.1           120                            20-120
Creating and Administering Disk Groups Importing the disk group of a previous version on a Volume Manager 4.1 system prevents the use of features introduced since that version was released.
Creating and Administering Disk Groups Upgrading a Disk Group
Table 4-2 Features Supported by Disk Group Versions (Continued)
Disk Group Version  New Features Supported                                  Previous Version Features Supported
50                  • SRVM (now known as VERITAS Volume Replicator or VVR)  20, 30, 40
40                  • Hot-Relocation                                        20, 30
30                  • VxSmartSync Recovery Accelerator                      20
20                  • Dirty Region Logging
                    • Disk Group Configuration Copy Limiting
                    • Mirrored Volumes Logging
                    • New-Style Stripes
                    • RAID-5 Volumes
                    • Recovery Checkpointing
Creating and Administering Disk Groups It may sometimes be necessary to create a disk group for an older version. The default disk group version for a disk group created on a system running Volume Manager 4.1 is 120. Such a disk group would not be importable on a system running Volume Manager 3.5, which only supports up to version 90. Therefore, to create a disk group on a system running Volume Manager 4.1 that can be imported by a system running Volume Manager 3.5, you must create the disk group with a version of 90 or less.
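For example (a sketch; the device and disk names are illustrative):
# vxdg -T 90 init newdg newdg01=c0t3d0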
Creating and Administering Disk Groups Managing the Configuration Daemon in VxVM Managing the Configuration Daemon in VxVM The VxVM configuration daemon (vxconfigd) provides the interface between VxVM commands and the kernel device drivers. vxconfigd handles configuration change requests from VxVM utilities, communicates the change requests to the VxVM kernel, and modifies configuration information stored on disk. vxconfigd also initializes VxVM when the system is booted.
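The daemon can be inspected and re-enabled with the vxdctl utility, for example (a sketch; see the vxdctl(1M) manual page):
# vxdctl mode
# vxdctl enable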
5 Creating and Administering Subdisks
Creating and Administering Subdisks Introduction Introduction This chapter describes how to create and maintain subdisks. Subdisks are the low-level building blocks in a Volume Manager (VxVM) configuration that are required to create plexes and volumes. NOTE Most VxVM commands require superuser or equivalent privileges.
Creating and Administering Subdisks Creating Subdisks Creating Subdisks NOTE Subdisks are created automatically if you use the vxassist command or the VERITAS Enterprise Administrator (VEA) to create volumes. For more information, see “Creating a Volume” on page 228.
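For example, a subdisk named disk02-01 occupying the first 8000 sectors of disk disk02 might be created as follows (a sketch; the name, offset, and length are illustrative):
# vxmake sd disk02-01 disk02,0,8000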
Creating and Administering Subdisks Displaying Subdisk Information Displaying Subdisk Information The vxprint command displays information about VxVM objects. To display general information for all subdisks, use this command: # vxprint -st The -s option specifies information about subdisks. The -t option prints a single-line output record that depends on the type of object being listed.
Creating and Administering Subdisks Moving Subdisks Moving Subdisks Moving a subdisk copies the disk space contents of a subdisk onto one or more other subdisks. If the subdisk being moved is associated with a plex, then the data stored on the original subdisk is copied to the new subdisks. The old subdisk is dissociated from the plex, and the new subdisks are associated with the plex. The association is at the same offset within the plex as the source subdisk.
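For example, to move the contents of subdisk disk03-01 onto the new subdisk disk22-01 (a sketch):
# vxsd mv disk03-01 disk22-01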
Creating and Administering Subdisks Splitting Subdisks Splitting Subdisks Splitting a subdisk divides an existing subdisk into two separate subdisks. To split a subdisk, use the following command: # vxsd -s size split subdisk newsd1 newsd2 where subdisk is the name of the original subdisk, newsd1 is the name of the first of the two subdisks to be created and newsd2 is the name of the second subdisk to be created. The -s option is required to specify the size of the first of the two subdisks to be created.
Creating and Administering Subdisks Joining Subdisks Joining Subdisks Joining subdisks combines two or more existing subdisks into one subdisk. To join subdisks, the subdisks must be contiguous on the same disk. If the selected subdisks are associated, they must be associated with the same plex, and be contiguous in that plex. To join several subdisks, use the following command: # vxsd join subdisk1 subdisk2 ...
Creating and Administering Subdisks Associating Subdisks with Plexes Associating Subdisks with Plexes Associating a subdisk with a plex places the amount of disk space defined by the subdisk at a specific offset within the plex. The entire area that the subdisk fills must not be occupied by any portion of another subdisk. There are several ways that subdisks can be associated with plexes, depending on the overall state of the configuration.
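Typical forms are shown below (a sketch; the plex and subdisk names are illustrative):
# vxmake plex home-1 sd=disk02-01,disk02-00,disk02-02
# vxsd assoc home-1 disk02-01
To fill a hole in a sparse plex,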
Creating and Administering Subdisks Associating Subdisks with Plexes create a subdisk of a size that fits the hole in the sparse plex exactly. Then, associate the subdisk with the plex by specifying the offset of the beginning of the hole in the plex, using the following command: # vxsd -l offset assoc sparse_plex exact_size_subdisk NOTE The subdisk must be exactly the right size. VxVM does not allow the space defined for two subdisks to overlap within a plex.
Creating and Administering Subdisks Associating Log Subdisks Associating Log Subdisks Log subdisks are defined and added to a plex that is to become part of a volume on which dirty region logging (DRL) is enabled. DRL is enabled for a volume when the volume is mirrored and has at least one log subdisk. For a description of DRL, see “Dirty Region Logging (DRL)” on page 46. Log subdisks are ignored as far as the usual plex policies are concerned, and are only used to hold the dirty region log.
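A log subdisk is added to a plex with a command of this form (a sketch; the plex and subdisk names are illustrative):
# vxsd aslog vol01-01 disk02-01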
Creating and Administering Subdisks Dissociating Subdisks from Plexes Dissociating Subdisks from Plexes To break an established connection between a subdisk and the plex to which it belongs, the subdisk is dissociated from the plex. A subdisk is dissociated when the subdisk is removed or used in another plex.
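For example, to dissociate the subdisk disk02-01 from its plex (a sketch):
# vxsd dis disk02-01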
Creating and Administering Subdisks Removing Subdisks Removing Subdisks To remove a subdisk, use the following command: # vxedit rm subdisk For example, to remove a subdisk named disk02-01, use the following command: # vxedit rm disk02-01
Creating and Administering Subdisks Changing Subdisk Attributes Changing Subdisk Attributes CAUTION Change subdisk attributes with extreme care. The vxedit command changes attributes of subdisks and other VxVM objects. To change subdisk attributes, use the following command: # vxedit set attribute=value ... subdisk ...
6 Creating and Administering Plexes Introduction This chapter describes how to create and maintain plexes. Plexes are logical groupings of subdisks that create an area of disk space independent of physical disk size or other restrictions. Replication (mirroring) of disk data is set up by creating multiple data plexes for a single volume. Each data plex in a mirrored volume contains an identical copy of the volume data.
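For example, a concatenated plex can be built from existing subdisks with vxmake (a sketch; the plex and subdisk names are illustrative):
# vxmake plex vol01-02 sd=disk02-01,disk02-02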
Creating and Administering Plexes Creating a Striped Plex Creating a Striped Plex To create a striped plex, you must specify additional attributes. For example, to create a striped plex named pl-01 with a stripe width of 32 sectors and 2 columns, use the following command: # vxmake plex pl-01 layout=stripe stwidth=32 ncolumn=2 \ sd=disk01-01,disk02-01 To use a plex to build a volume, you must associate the plex with the volume.
Creating and Administering Plexes Displaying Plex Information Plex States Plex states reflect whether or not plexes are complete and are consistent copies (mirrors) of the volume contents. VxVM utilities automatically maintain the plex state. However, if a volume should not be written to because there are changes to that volume and if a plex is associated with that volume, you can modify the state of the plex.
Creating and Administering Plexes Displaying Plex Information • when the volume is stopped as a result of a system crash and the plex is ACTIVE at the moment of the crash In the latter case, a system failure can leave plex contents in an inconsistent state. When a volume is started, VxVM does the recovery action to guarantee that the contents of the plexes marked as ACTIVE are made identical. NOTE On a system running well, ACTIVE should be the most common state you see for any volume plexes.
Creating and Administering Plexes Displaying Plex Information LOG Plex State The state of a dirty region logging (DRL) or RAID-5 log plex is always set to LOG. OFFLINE Plex State The vxmend off task indefinitely detaches a plex from a volume by setting the plex state to OFFLINE. Although the detached plex maintains its association with the volume, changes to the volume do not update the OFFLINE plex. The plex is not updated until the plex is put online and reattached with the vxplex att task.
Creating and Administering Plexes Displaying Plex Information STALE Plex State If there is a possibility that a plex does not have the complete and current volume contents, that plex is placed in the STALE state. Also, if an I/O error occurs on a plex, the kernel stops using and updating the contents of that plex, and the plex state is set to STALE. A vxplex att operation recovers the contents of a STALE plex from an ACTIVE plex. Atomic copy operations copy the contents of the volume to the STALE plexes.
Creating and Administering Plexes Displaying Plex Information Plex Condition Flags vxprint may also display one of the following condition flags in the STATE field: IOFAIL Plex Condition The plex was detached as a result of an I/O failure detected during normal volume I/O. The plex is out-of-date with respect to the volume, and in need of complete recovery. However, this condition also indicates a likelihood that one of the disks in the system should be replaced.
Creating and Administering Plexes Attaching and Associating Plexes NOTE No user intervention is required to set these states; they are maintained internally. On a system that is operating properly, all plexes are enabled. The following plex kernel states are defined: DETACHED Plex Kernel State Maintenance is being performed on the plex. Any write request to the volume is not reflected in the plex. A read request from the volume is not satisfied from the plex.
Creating and Administering Plexes Taking Plexes Offline # vxmake [-g diskgroup] -U usetype vol volume plex=plex1[,plex2...] For example, to create a mirrored, fsgen-type volume named home, and to associate two existing plexes named home-1 and home-2 with home, use the following command: # vxmake -U fsgen vol home plex=home-1,home-2 NOTE You can also use the command vxassist mirror volume to add a data plex as a mirror to an existing volume.
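The vxmend off operation mentioned under “OFFLINE Plex State” performs this task, for example (a sketch):
# vxmend off vol01-02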
Creating and Administering Plexes Detaching Plexes Detaching Plexes To temporarily detach one data plex in a mirrored volume, use the following command: # vxplex det plex For example, to temporarily detach a plex named vol01-02 and place it in maintenance mode, use the following command: # vxplex det vol01-02 This command temporarily detaches the plex, but maintains the association between the plex and its volume. However, the plex is not used for I/O.
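When the plex is ready to return to use, it is reattached with a command of this form (a sketch; its recovery behavior is described next):
# vxplex att vol01 vol01-02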
Creating and Administering Plexes Moving Plexes As when returning an OFFLINE plex to ACTIVE, this command starts to recover the contents of the plex and, after the revive is complete, sets the plex utility state to ACTIVE. • If the volume is not in use (not ENABLED), use the following command to re-enable the plex for use: # vxmend on plex For example, to re-enable a plex named vol01-02, enter: # vxmend on vol01-02 In this case, the state of vol01-02 is set to STALE.
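The move operation has this general form (a sketch; the conditions on the new plex are described below):
# vxplex mv original_plex new_plex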
Creating and Administering Plexes Copying Plexes • If the new plex is smaller or more sparse than the original plex, an incomplete copy is made of the data on the original plex. If an incomplete copy is desired, use the -o force option to vxplex. • If the new plex is longer or less sparse than the original plex, the data that exists on the original plex is copied onto the new plex.
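The copy itself has this general form (a sketch; the new plex must not be associated with another volume):
# vxplex cp volume new_plex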
Creating and Administering Plexes Dissociating and Removing Plexes A plex is dissociated from a volume and removed in the following circumstances: • to reduce the number of mirrors in a volume so you can increase the length of another mirror and its associated volume. When the plexes and subdisks are removed, the resulting space can be added to other volumes • to remove a temporary mirror that was created to back up a volume and is no longer needed • to change the layout of a plex CAUTION To save the data on a plex to be removed, the configuration of that plex must be known.
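The dissociation and removal are typically performed in one operation (a sketch; the plex name is illustrative):
# vxplex -o rm dis vol01-02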
Creating and Administering Plexes Changing Plex Attributes Changing Plex Attributes CAUTION Change plex attributes with extreme care. The vxedit command changes the attributes of plexes and other volume Manager objects. To change plex attributes, use the following command: # vxedit set attribute=value ...
7 Creating Volumes Introduction This chapter describes how to create volumes in Volume Manager (VxVM). Volumes are logical devices that appear as physical disk partition devices to data management systems. Volumes enhance recovery from hardware failure, data availability, performance, and storage configuration. Volumes are created to take advantage of the VxVM concept of virtual disks. A file system can be placed on the volume to organize the disk space with files and directories.
Creating Volumes Types of Volume Layouts Types of Volume Layouts VxVM allows you to create volumes with the following layout types: • Concatenated—A volume whose subdisks are arranged both sequentially and contiguously within a plex. Concatenation allows a volume to be created from multiple regions of one or more disks if there is not enough space for an entire volume on a single region of a disk. For more information, see “Concatenation and Spanning” on page 19.
Creating Volumes Types of Volume Layouts of this layout are increased performance by spreading data across multiple disks and redundancy of data. “Striping Plus Mirroring (Mirrored-Stripe or RAID-0+1)” on page 26. • Layered Volume—A volume constructed from other volumes. Non-layered volumes are constructed by mapping their subdisks to VM disks.
Creating Volumes Creating a Volume Creating a Volume You can create volumes using either an advanced approach or an assisted approach. Each method uses different tools although you may switch from one set to another at will. NOTE Most VxVM commands require superuser or equivalent privileges. Advanced Approach The advanced approach consists of a number of commands that typically require you to specify detailed input.
Creating Volumes Creating a Volume Assisted Approach The assisted approach takes information about what you want to accomplish and then performs the necessary underlying tasks. This approach requires only minimal input from you, but also permits more detailed specifications. Assisted operations are performed primarily through the vxassist command or the VERITAS Enterprise Administrator (VEA).
Creating Volumes Using vxassist Using vxassist You can use the vxassist command to create and modify volumes. Specify the basic requirements for volume creation or modification, and vxassist performs the necessary tasks. The advantages of using vxassist rather than the advanced approach include: • Most actions require that you enter only one command rather than several. • You are required to specify only minimal information to vxassist.
Creating Volumes Using vxassist where keyword selects the task to perform. The first argument after a vxassist keyword, volume, is a volume name, which is followed by a set of desired volume attributes. For example, the keyword make allows you to create a new volume: # vxassist [options] make volume length [attributes] The length of the volume can be specified in sectors, kilobytes, megabytes, or gigabytes using a suffix character of s, k, m, or g.
Creating Volumes Using vxassist NOTE You must create the /etc/default directory and the vxassist default file if these do not already exist on your system. The format of entries in a defaults file is a list of attribute-value pairs separated by new lines. These attribute-value pairs are the same as those specified as options on the vxassist command line. Refer to the vxassist(1M) manual page for details.
Creating Volumes Using vxassist
nraid5log=1
# by default, limit mirroring log lengths to 32Kbytes
max_regionloglen=32k
# use 64K as the default stripe unit size for regular volumes
stripe_stwid=64k
# use 16K as the default stripe unit size for RAID-5 volumes
raid5_stwid=16k
Creating Volumes Discovering the Maximum Size of a Volume Discovering the Maximum Size of a Volume To find out how large a volume you can create within a disk group, use the following form of the vxassist command: # vxassist [-g diskgroup] maxsize layout=layout [attributes] For example, to discover the maximum size RAID-5 volume with 5 columns and 2 logs that you can create within the disk group dgrp, enter the following command: # vxassist -g dgrp maxsize layout=raid5 nlog=2 You can use storage attributes
Creating Volumes Creating a Volume on Any Disk Creating a Volume on Any Disk By default, the vxassist make command creates a concatenated volume that uses one or more sections of disk space. On a fragmented disk, this allows you to put together a volume larger than any individual section of free disk space available. NOTE To change the default layout, edit the definition of the layout attribute defined in the /etc/default/vxassist file.
Creating Volumes Creating a Volume on Specific Disks Creating a Volume on Specific Disks VxVM automatically selects the disks on which each volume resides, unless you specify otherwise. If you want a volume to be created on specific disks, you must designate those disks to VxVM. More than one disk can be specified. To create a volume on a specific disk or disks, use the following command: # vxassist [-b] [-g diskgroup] make volume length [layout=layout] \ diskname ...
Creating Volumes Creating a Volume on Specific Disks # vxassist -b make volmega 20g diskgroup=bigone disk10 disk11 NOTE Any storage attributes that you specify for use must belong to the disk group. Otherwise, vxassist will not use them to create a volume. You can also use storage attributes to control how vxassist uses available storage, for example, when calculating the maximum size of a volume, when growing a volume or when removing mirrors or logs from a volume.
Creating Volumes Creating a Volume on Specific Disks This command places columns 1, 2 and 3 of the first mirror on disk01, disk02 and disk03 respectively, and columns 1, 2 and 3 of the second mirror on disk04, disk05 and disk06 respectively.
Creating Volumes Creating a Volume on Specific Disks This command mirrors column 1 across disk01 and disk03, and column 2 across disk02 and disk04 as illustrated in Figure 7-2, “Example of using Ordered Allocation to Create a Striped-Mirror Volume,”.
Creating Volumes Creating a Volume on Specific Disks formed from disks disk05 through disk08.
Creating Volumes Creating a Volume on Specific Disks c2, and so on as illustrated in Figure 7-4, “Example of Storage Allocation Used to Create a Mirrored-Stripe Volume Across Controllers.”
Figure 7-4 Example of Storage Allocation Used to Create a Mirrored-Stripe Volume Across Controllers
[Figure: a mirrored-stripe volume in which columns 1, 2, and 3 of one striped plex are allocated on controllers c1, c2, and c3, and columns 1, 2, and 3 of the mirroring striped plex are allocated on controllers c4, c5, and c6.]
For other ways in which you can control how vxassist lays out mirrored volumes across controllers, see “Mirroring across Targets, Controllers or Enclosures.”
Creating Volumes Creating a Mirrored Volume Creating a Mirrored Volume A mirrored volume provides data redundancy by containing more than one copy of its data. Each copy (or mirror) is stored on different disks from the original copy of the volume and from other mirrors. Mirroring a volume ensures that its data is not lost if a disk in one of its component mirrors fails.
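The basic command has this form, for example (a sketch creating a volume with two mirrors):
# vxassist -b make volmir 5g layout=mirror nmirror=2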
Creating Volumes Creating a Mirrored Volume NOTE Specify the -b option if you want to make the volume immediately available for use. See “Initializing and Starting a Volume” on page 258 for details. Alternatively, first create a concatenated volume, and then mirror it as described in “Adding a Mirror to a Volume” on page 273. Creating a Concatenated-Mirror Volume NOTE You may need an additional license to use this feature.
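A concatenated-mirror volume is requested with the concat-mirror layout, for example (a sketch):
# vxassist -b make volcm 5g layout=concat-mirror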
Creating Volumes Creating a Mirrored Volume NOTE You may need an additional license to use the Persistent FastResync feature. Even if you do not have a license, you can configure a DCO object and DCO volume so that snap objects are associated with the original and snapshot volumes. For more information about snap objects, see “How Persistent FastResync Works with Snapshots” on page 54.
Creating Volumes Creating a Mirrored Volume number. It is recommended that you configure as many DCO plexes as there are data plexes in the volume. For example, specify ndcomirror=3 when creating a 3-way mirrored volume. The default size of each plex is 132 blocks unless you use the dcolen attribute to specify a different size. If specified, the size of the plex must be a multiple of 33 blocks from 33 up to a maximum of 2112 blocks. By default, FastResync is not enabled on newly created volumes.
Creating Volumes Creating a Mirrored Volume If you use ordered allocation when creating a mirrored volume on specified storage, you can use the optional logdisk attribute to specify on which disks the log plexes should be created. Use the following form of the vxassist command to specify the disks from which space for the logs is to be allocated: # vxassist [-g diskgroup] -o ordered make volume length layout=mirror logtype=log_type logdisk=disk[,disk,...
Creating Volumes Creating a Striped Volume Creating a Striped Volume NOTE You may need an additional license to use this feature. A striped volume contains at least one plex that consists of two or more subdisks located on two or more physical disks. For more information on striping, see “Striping (RAID-0)” on page 22. NOTE A striped volume requires space to be available on at least as many disks in the disk group as the number of columns in the volume.
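For example, the following creates a striped volume with the default number of columns and stripe width on specific disks (a sketch; the disk names are illustrative):
# vxassist -b make volzebra 10g layout=stripe disk03 disk04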
Creating Volumes Creating a Striped Volume To change the default number of columns from 2, or the stripe width from 64 kilobytes, use the ncolumn and stripeunit modifiers with vxassist. For example, the following command creates a striped volume with 5 columns and a 32-kilobyte stripe size: # vxassist -b make stripevol 30g layout=stripe stripeunit=32k \ ncol=5 Creating a Mirrored-Stripe Volume A mirrored-stripe volume mirrors several striped data plexes.
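The basic command for a mirrored-stripe volume uses the mirror-stripe layout, for example (a sketch):
# vxassist -b make mirstrvol 10g layout=mirror-stripe ncol=3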
Creating Volumes Creating a Striped Volume NOTE A striped-mirror volume requires space to be available on at least as many disks in the disk group as the number of columns multiplied by the number of mirrors in the volume.
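The basic command uses the stripe-mirror layout, for example (a sketch):
# vxassist -b make strmirvol 10g layout=stripe-mirror ncol=3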
Creating Volumes Mirroring across Targets, Controllers or Enclosures Mirroring across Targets, Controllers or Enclosures To create a volume whose mirrored data plexes lie on different controllers, you can use either of the commands described in this section. # vxassist [-b] [-g diskgroup] make volume length layout=layout mirror=target [attributes] NOTE Specify the -b option if you want to make the volume immediately available for use. See “Initializing and Starting a Volume” on page 258 for details.
Creating Volumes Mirroring across Targets, Controllers or Enclosures # vxassist -b make volspec 10g layout=mirror nmirror=2 mirror=enclr enclr:enc1 enclr:enc2 The disks in one data plex are all taken from enclosure enc1, and the disks in the other data plex are all taken from enclosure enc2. This arrangement ensures continued availability of the volume should either enclosure become unavailable.
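The same form can mirror across controllers rather than enclosures (a sketch, assuming controllers c2 and c3):
# vxassist -b make volspec 10g layout=mirror nmirror=2 mirror=ctlr ctlr:c2 ctlr:c3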
Creating Volumes Creating a RAID-5 Volume Creating a RAID-5 Volume NOTE VxVM supports this feature for private disk groups, but not for shareable disk groups in a cluster environment. NOTE You may need an additional license to use this feature. You can create RAID-5 volumes by using either the vxassist command (recommended) or the vxmake command. Both approaches are described below.
Creating Volumes Creating a RAID-5 Volume NOTE Specify the -b option if you want to make the volume immediately available for use. See “Initializing and Starting a Volume” on page 258 for details. For example, to create the RAID-5 volume volraid together with 2 RAID-5 logs, use the following command: # vxassist -b make volraid 10g layout=raid5 nlog=2 This creates a RAID-5 volume with the default stripe unit size on the default number of disks.
Creating Volumes Creating a RAID-5 Volume # vxassist -b make volraid 10g layout=raid5 ncol=3 nlog=2 \ logdisk=disk07,disk08 disk04 disk05 disk06 NOTE The number of logs must equal the number of disks specified to logdisk. For more information about ordered allocation, see “Specifying Ordered Allocation of Storage to Volumes” on page 237 and the vxassist(1M) manual page. If you need to add more logs to a RAID-5 volume at a later date, follow the procedure described in “Adding a RAID-5 Log” on page 285.
Creating Volumes Creating a Volume Using vxmake Creating a Volume Using vxmake As an alternative to using vxassist, you can create a volume using the vxmake command to arrange existing subdisks into plexes, and then to form these plexes into a volume. Subdisks can be created using the method described in “Creating Subdisks” on page 199. The example given in this section is to create a RAID-5 volume using vxmake.
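The plex for this example might be built from six subdisks with a command of this general form (a sketch; the column/offset values are illustrative, and the effect is described below):
# vxmake plex raidplex layout=raid5 stwidth=32 \
sd=disk00-00:0/0,disk03-00:0/10240,disk01-00:1/0,disk04-00:1/10240,disk02-00:2/0,disk05-00:2/10240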
Creating Volumes Creating a Volume Using vxmake This command stacks subdisks disk00-00 and disk03-00 consecutively in column 0, subdisks disk01-00 and disk04-00 consecutively in column 1, and subdisks disk02-00 and disk05-00 in column 2. Offsets can also be specified to create sparse RAID-5 plexes, as for striped plexes.
Creating Volumes Creating a Volume Using vxmake
# vxmake -d description_file
The following sample description file defines a volume, db, with two plexes:
#rectyp  #name     #options
sd       disk3-01  disk=disk3 offset=0 len=10000
sd       disk3-02  disk=disk3 offset=25000 len=10480
sd       disk4-01  disk=disk4 offset=0 len=8000
sd       disk4-02  disk=disk4 offset=15000 len=8000
sd       disk4-03  disk=disk4 offset=30000 len=4480
plex     db-01     layout=STRIPE ncolumn=2 stwidth=16k sd=disk3-01:0/0,disk3-02:0/10000,
sd       ramd1-01
plex     db-02
vol      db
Creating Volumes Initializing and Starting a Volume Initializing and Starting a Volume A volume must be initialized if it was created by the vxmake command and has not yet been initialized, or if the volume has been set to an uninitialized state. NOTE If you create a volume using the vxassist command, vxassist initializes and starts the volume automatically unless you specify the attribute init=none.
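A volume created with vxmake is typically initialized and started in a single step (a sketch; the volume name is illustrative):
# vxvol start vol01
Alternatively, the volume can be enabled without initializing its contents: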
Creating Volumes Initializing and Starting a Volume # vxvol init enable volume This allows you to restore data on the volume from a backup before using the following command to make the volume fully active: # vxvol init active volume If you want to zero out the contents of an entire volume, use this command to initialize it: # vxvol init zero volume This command writes zeroes to the entire length of the volume and to any log plexes. It then makes the volume active.
Creating Volumes Accessing a Volume Accessing a Volume As soon as a volume has been created and initialized, it is available for use as a virtual disk partition by the operating system for the creation of a file system, or by application programs such as relational databases and other data management software.
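For example, a VxFS file system might be created on a volume and mounted as follows (a sketch, assuming disk group mktdg, volume mktvol, and the VERITAS File System installed):
# newfs -F vxfs /dev/vx/rdsk/mktdg/mktvol
# mount -F vxfs /dev/vx/dsk/mktdg/mktvol /mnt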
8 Administering Volumes
Administering Volumes Introduction Introduction This chapter describes how to perform common maintenance tasks on volumes in Volume Manager (VxVM). This includes displaying volume information, monitoring tasks, adding and removing logs, resizing volumes, removing mirrors, removing volumes, backing up volumes using mirrors and snapshots, and changing the layout of volumes without taking them offline. NOTE Most VxVM commands require superuser or equivalent privileges.
Administering Volumes Displaying Volume Information Displaying Volume Information You can use the vxprint command to display information about how a volume is configured.
Administering Volumes Displaying Volume Information For example, to display information about the voldef volume, use the following command:
# vxprint -t voldef
This is example output from this command:
Disk group: rootdg
V  NAME    USETYPE  KSTATE   STATE   LENGTH  READPOL  PREFPLEX
v  voldef  fsgen    ENABLED  ACTIVE  20480   SELECT   -
NOTE If you enable enclosure-based naming, and use the vxprint command to display the structure of a volume, it shows enclosure-based disk device names (disk access names) rather than c#t#d#
Administering Volumes Displaying Volume Information EMPTY Volume State The volume contents are not initialized. The kernel state is always DISABLED when the volume is EMPTY. NEEDSYNC Volume State The volume requires a resynchronization operation the next time it is started. For a RAID-5 volume, a parity resynchronization operation is required. REPLAY Volume State The volume is in a transient state as part of a log replay. A log replay occurs when it becomes necessary to use logged parity and data.
Administering Volumes Displaying Volume Information Volume Kernel States The volume kernel state indicates the accessibility of the volume. The volume kernel state allows a volume to have an offline (DISABLED), maintenance (DETACHED), or online (ENABLED) mode of operation. NOTE No user intervention is required to set these states; they are maintained internally. On a system that is operating properly, all volumes are enabled.
Administering Volumes Monitoring and Controlling Tasks Monitoring and Controlling Tasks NOTE VxVM supports this feature for private disk groups, but not for shareable disk groups in a cluster environment. The VxVM task monitor tracks the progress of system recovery by monitoring task creation, maintenance, and completion. The task monitor allows you to monitor task progress and to modify characteristics of tasks, such as pausing and recovery rate (for example, to reduce the impact on system performance).
Administering Volumes Monitoring and Controlling Tasks Managing Tasks with vxtask NOTE New tasks take time to be set up, and so may not be immediately available for use after a command is invoked. Any script that operates on tasks may need to poll for the existence of a new task. You can use the vxtask command to administer operations on VxVM tasks that are running on the system.
• pause—Puts a running task in the paused state, causing it to suspend operation.
• resume—Causes a paused task to continue operation.
• set—Changes modifiable parameters of a task. Currently, there is only one modifiable parameter, slow[=iodelay], which can be used to reduce the impact that copy operations have on system performance. If slow is specified, this introduces a delay between such operations with a default value for iodelay of 250 milliseconds.
Administering Volumes Monitoring and Controlling Tasks This command causes VxVM to attempt to reverse the progress of the operation so far. For an example of how to use vxtask to monitor and modify the progress of the Online Relayout feature, see “Controlling the Progress of a Relayout” on page 320.
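As an illustration, a typical monitoring session might look like the following sketch, where the task tag mytag is a hypothetical tag assigned with the -t option to a command such as vxassist:
# vxtask list
# vxtask monitor mytag
# vxtask pause mytag
# vxtask resume mytag
The list keyword shows the tasks currently running on the system, monitor prints progress information for a task as it runs, and pause and resume suspend and continue the task as described above.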
Administering Volumes Stopping a Volume Stopping a Volume Stopping a volume renders it unavailable to the user, and changes the volume state from ENABLED or DETACHED to DISABLED. If the volume cannot be disabled, it remains in its current state. To stop a volume, use the following command: # vxvol stop volume ...
Administering Volumes Starting a Volume Starting a Volume Starting a volume makes it available for use, and changes the volume state from DISABLED or DETACHED to ENABLED. To start a DISABLED or DETACHED volume, use the following command: # vxvol -g diskgroup start volume ... If a volume cannot be enabled, it remains in its current state.
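For example, assuming a volume vol01 in the disk group mydg (names hypothetical), the volume can be taken offline and brought back online as follows:
# vxvol -g mydg stop vol01
# vxvol -g mydg start vol01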
Administering Volumes Adding a Mirror to a Volume Adding a Mirror to a Volume A mirror can be added to an existing volume with the vxassist command, as follows: # vxassist [-b] [-g diskgroup] mirror volume NOTE If specified, the -b option makes synchronizing the new mirror a background task.
Administering Volumes Adding a Mirror to a Volume Mirroring Volumes on a VM Disk Mirroring volumes on a VM disk gives you one or more copies of your volumes in another disk location. By creating mirror copies of your volumes, you protect your system against loss of data in case of a disk failure. NOTE This task only mirrors concatenated volumes. Volumes that are already mirrored or that contain subdisks that reside on multiple disks are ignored.
The requested operation is to mirror all volumes on disk disk02 in disk group rootdg onto available disk space on disk disk01.
NOTE: This operation can take a long time to complete.
Continue with operation? [y,n,q,?] (default: y)
The vxdiskadm program displays the status of the mirroring operation, as follows:
Mirror volume voltest-bk00 ...
Mirroring of disk disk01 is complete.
Step 5.
Administering Volumes Removing a Mirror Removing a Mirror When a mirror is no longer needed, you can remove it to free up disk space. NOTE The last valid plex associated with a volume cannot be removed. To remove a mirror from a volume, use the following command: # vxassist remove mirror volume Additionally, you can use storage attributes to specify the storage to be removed.
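For example, a sketch that uses the ! exclusion prefix seen elsewhere in this guide to remove the mirror residing on disk disk02 from the hypothetical volume vol01:
# vxassist remove mirror vol01 !disk02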
Administering Volumes Adding a DCO and DCO Volume Adding a DCO and DCO Volume CAUTION If the existing volume was created before release 3.2 of VxVM, and it has any attached snapshot plexes or it is associated with any snapshot volumes, follow the procedure given in “Enabling Persistent FastResync on Existing Volumes with Associated Snapshots” on page 302. The procedure given in this section is for existing volumes without existing snapshot plexes or associated snapshot volumes.
Step 2. Use the following command to turn off Non-Persistent FastResync on the original volume if it is currently enabled:
# vxvol [-g diskgroup] set fastresync=off volume
If you are uncertain about which volumes have Non-Persistent FastResync enabled, use the following command to obtain a listing of such volumes:
# vxprint [-g diskgroup] -F "%name" \
-e "v_fastresync=on && !v_hasdcolog"
Step 3. Use the following command to add a DCO and DCO volume to the existing volume:
# vxassist [-g diskgroup] addlog volume logtype=dco [dcolen=size] [ndcomirror=number]
pl zoo_dcl-02  zoo_dcl     ENABLED  132  -  ACTIVE  -  -
sd c1t67d0-01  zoo_dcl-02  ENABLED  132  0  -       -  -
In this output, the DCO object is shown as zoo_dco, and the DCO volume as zoo_dcl with 2 plexes, zoo_dcl-01 and zoo_dcl-02.
For more information, see the vxassist(1M) manual page.
Attaching a DCO and DCO Volume to a RAID-5 Volume
The procedure in the previous section can be used to add a DCO and DCO volume to a RAID-5 volume.
Administering Volumes Adding a DCO and DCO Volume placed on disks which are used to hold the plexes of other volumes, this may cause problems when you subsequently attempt to move volumes into other disk groups. You can use storage attributes to specify explicitly which disks to use for the DCO plexes. If possible, specify the same disks as those on which the volume is configured.
Administering Volumes Removing a DCO and DCO Volume Removing a DCO and DCO Volume To dissociate a DCO object, DCO volume and any snap objects from a volume, use the following command: # vxassist [-g diskgroup] remove log volume logtype=dco This completely removes the DCO object, DCO volume and any snap objects. It also has the effect of disabling FastResync for the volume.
Administering Volumes Reattaching a DCO and DCO Volume Reattaching a DCO and DCO Volume If the DCO object and DCO volume are not removed by specifying the -o rm option to vxdco, they can be reattached to the parent volume using the following command: # vxdco [-g diskgroup] att volume dco_obj For example, to reattach the DCO object, myvol_dco, to the volume, myvol, use the following command: # vxdco -g mydg att myvol myvol_dco For more information, see the vxdco(1M) manual page.
Administering Volumes Adding DRL Logging to a Mirrored Volume Adding DRL Logging to a Mirrored Volume To put dirty region logging (DRL) into effect for a mirrored volume, a log subdisk must be added to that volume. Only one log subdisk can exist per plex. To add DRL logs to an existing volume, use the following command: # vxassist [-b] addlog volume logtype=drl [nlog=n] NOTE If specified, the -b option makes adding the new logs a background task.
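For example, a sketch that adds two DRL logs to a hypothetical mirrored volume vol03, running the operation in the background:
# vxassist -b addlog vol03 logtype=drl nlog=2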
Administering Volumes Removing a DRL Log Removing a DRL Log To remove a DRL log, use the vxassist command as follows: # vxassist remove log volume [nlog=n] Use the optional attribute nlog=n to specify the number, n, of logs to be removed. By default, the vxassist command removes one log.
Administering Volumes Adding a RAID-5 Log Adding a RAID-5 Log NOTE You may need an additional license to use this feature. Only one RAID-5 plex can exist per RAID-5 volume. Any additional plexes become RAID-5 log plexes, which are used to log information about data and parity being written to the volume. When a RAID-5 volume is created using the vxassist command, a log plex is created for that volume by default.
The attach operation can only proceed if the size of the new log is large enough to hold all of the data on the stripe. If the RAID-5 volume already contains logs, the new log length is the minimum of the lengths of the existing logs. This is because the new log is a mirror of the old logs. If the RAID-5 volume is not enabled, the new log is marked as BADLOG and is enabled when the volume is started. However, the contents of the log are ignored.
Administering Volumes Removing a RAID-5 Log Removing a RAID-5 Log To identify the plex of the RAID-5 log, use the following command: # vxprint -ht volume where volume is the name of the RAID-5 volume. For a RAID-5 log, the output lists a plex with a STATE field entry of LOG.
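Once the log plex has been identified, it can be dissociated and removed in one step with the vxplex command. A minimal sketch, assuming the log plex is named r5vol-02 (a hypothetical name):
# vxplex -o rm dis r5vol-02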
Resizing a Volume
Resizing a volume changes the volume size. For example, you might need to increase the length of a volume if it is no longer large enough for the amount of data to be stored on it. To resize a volume, use one of the following commands: vxresize (preferred), vxassist, or vxvol. Alternatively, you can use the graphical VERITAS Enterprise Administrator (VEA) to resize volumes.
Administering Volumes Resizing a Volume Resizing Volumes using vxresize Use the vxresize command to resize a volume containing a file system. Although other commands can be used to resize volumes containing file systems, the vxresize command offers the advantage of automatically resizing certain types of file system as well as the volume.
Administering Volumes Resizing a Volume vxvm:vxresize: ERROR: Volume volume has different organization in each mirror For more information about the vxresize command, see the vxresize(1M) manual page.
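As an illustration, the following sketch grows a hypothetical volume homevol in disk group mydg to 10 gigabytes as a background task, resizing its VxFS file system at the same time:
# vxresize -b -F vxfs -g mydg homevol 10g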
Administering Volumes Resizing a Volume NOTE If specified, the -b option makes growing the volume a background task. For example, to extend volcat by 100 sectors, use the following command: # vxassist growby volcat 100 NOTE If you previously performed a relayout on the volume, additionally specify the attribute layout=nodiskalign to the growby command if you want the subdisks to be grown using contiguous disk space.
Administering Volumes Resizing a Volume CAUTION Do not shrink the volume below the current size of the file system or database using the volume. The vxassist shrinkby command can be safely used on empty volumes.
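For example, a sketch that shrinks a hypothetical empty volume vol01 by one gigabyte:
# vxassist -g mydg shrinkby vol01 1g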
Administering Volumes Changing the Read Policy for Mirrored Volumes Changing the Read Policy for Mirrored Volumes VxVM offers the choice of the following read policies on the data plexes in a mirrored volume: • round—reads each plex in turn in “round-robin” fashion for each nonsequential I/O detected. Sequential access causes only one plex to be accessed. This takes advantage of the drive or controller read-ahead caching policies.
Administering Volumes Changing the Read Policy for Mirrored Volumes # vxvol rdpol select volume For more information about how read policies affect performance, see “Volume Read Policies” on page 398.
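For example, the following sketch (volume and plex names hypothetical) sets the round-robin policy on one volume, and designates a preferred plex on another:
# vxvol -g mydg rdpol round vol01
# vxvol -g mydg rdpol prefer vol02 vol02-02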
Removing a Volume
Once a volume is no longer necessary (it is inactive and its contents have been archived, for example), it is possible to remove the volume and free up the disk space for other uses. Before removing a volume, use the following procedure to stop all activity on the volume:
Step 1. Remove all references to the volume by application programs, including shells, that are running on the system.
Step 2. If the volume is mounted as a file system, unmount it.
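Once all activity has stopped, the volume, together with its plexes and subdisks, can be removed with the vxedit command. A minimal sketch, assuming a volume myvol in disk group mydg (names hypothetical):
# vxvol -g mydg stop myvol
# vxedit -g mydg -rf rm myvol
Here the -r option makes the removal recursive, and -f forces the operation.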
Administering Volumes Moving Volumes from a VM Disk Moving Volumes from a VM Disk Before you disable or remove a disk, you can move the data from that disk to other disks on the system. To do this, ensure that the target disks have sufficient space, and then use the following procedure: Step 1. Select menu item 6 (Move volumes from a disk) from the vxdiskadm main menu. Step 2.
Administering Volumes Moving Volumes from a VM Disk Move volume voltest ... Move volume voltest-bk00 ... When the volumes have all been moved, the vxdiskadm program displays the following success message: Evacuation of disk disk01 is complete. Step 3.
Administering Volumes Enabling FastResync on a Volume Enabling FastResync on a Volume NOTE You may need an additional license to use this feature. FastResync performs quick and efficient resynchronization of stale mirrors. It also increases the efficiency of the VxVM snapshot mechanism when used with operations such as backup and decision support. See “Backing Up Volumes Online Using Snapshots” on page 308 and “FastResync” on page 52 for more information. From Release 3.
Administering Volumes Enabling FastResync on a Volume NOTE It is not possible to configure both Persistent and Non-Persistent FastResync on a volume. Persistent FastResync is used if a DCO object and a DCO volume are associated with the volume. Otherwise, Non-Persistent FastResync is used.
# vxprint [-g diskgroup] -F "%name" -e "v_fastresync=on \
&& v_hasdcolog"
Administering Volumes Disabling FastResync Disabling FastResync Use the vxvol command to turn off Persistent or Non-Persistent FastResync for an existing volume, as shown here: # vxvol [-g diskgroup] set fastresync=off volume Turning FastResync off releases all tracking maps for the specified volume. All subsequent reattaches will not use the FastResync facility, but perform a full resynchronization of the volume. This occurs even if FastResync is later turned on.
Administering Volumes Enabling Persistent FastResync on Existing Volumes with Associated Snapshots Enabling Persistent FastResync on Existing Volumes with Associated Snapshots The procedure described in this section describes how to enable Persistent FastResync on a volume created before release 3.2 of VxVM, and which has attached snapshot plexes or is associated with one or more snapshot volumes.
Administering Volumes Enabling Persistent FastResync on Existing Volumes with Associated Snapshots use the disk group move feature to bring in spare disks from a different disk group. For more information, see “Reorganizing the Contents of Disk Groups” on page 172. Perform the following steps to enable Persistent FastResync on an existing volume that has attached snapshot plexes or associated snapshot volumes: Step 1.
# vxvol [-g diskgroup] set fastresync=off volume
If you are uncertain about which volumes have Non-Persistent FastResync enabled, use the following command to obtain a listing of such volumes:
# vxprint [-g diskgroup] -F "%name" \
-e "v_fastresync=on && !v_hasdcolog"
Step 4. Use the following command on the original volume and on each of its snapshot volumes (if any) to add a DCO and DCO volume.
Administering Volumes Enabling Persistent FastResync on Existing Volumes with Associated Snapshots # vxassist -g egdg addlog SNAP-vol logtype=dco \ dcolen=264 ndcomirror=1 !disk01 !disk02 disk03 NOTE If the DCO plexes of the snapshot volume are configured on disks that also contain the plexes of other volumes, this prevents the snapshot volume from being moved to a different disk group. See “Considerations for Placing DCO Plexes” on page 177 for more information. Step 5.
Administering Volumes Enabling Persistent FastResync on Existing Volumes with Associated Snapshots Step 6. Perform this step on any snapshot volumes as well as on the original volume.
Administering Volumes Backing up Volumes Online Backing up Volumes Online It is important to make backup copies of your volumes. These provide replicas of the data as it existed at the time of the backup. Backup copies are used to restore volumes lost due to disk failure, or data destroyed due to human error. VxVM allows you to back up volumes online with minimal interruption to users.
Step 4. Use fsck (or some utility appropriate for the application running on the volume) to clean the temporary volume’s contents. For example, you can use this command:
# fsck -F vxfs /dev/vx/rdsk/diskgroup/tempvol
Step 5. Perform appropriate backup procedures, using the temporary volume.
Step 6. Stop the temporary volume, using the following command:
# vxvol [-g diskgroup] stop tempvol
Step 7.
NOTE You may need an additional license to use this feature.
VxVM provides snapshot images of volume devices using vxassist and other commands. If the fsgen volume usage type is set on a volume that contains a VERITAS File System (VxFS), the snapshot mechanism ensures the internal consistency of the file system that is backed up. For other file system types, there may be inconsistencies between in-memory data and the data in the snapshot image.
Administering Volumes Backing up Volumes Online The online backup procedure is completed by running the vxassist snapshot command on a volume with a SNAPDONE mirror. This task detaches the finished snapshot (which becomes a normal mirror), creates a new normal volume and attaches the snapshot mirror to the snapshot volume. The snapshot then becomes a normal, functioning mirror and the state of the snapshot is set to ACTIVE.
Administering Volumes Backing up Volumes Online If vxassist snapstart is not run in the background, it does not exit until the mirror has been synchronized with the volume. The mirror is then ready to be used as a plex of a snapshot volume. While attached to the original volume, its contents continue to be updated until you take the snapshot. Use the nmirror attribute to create as many snapshot mirrors as you need for the snapshot volume. For a backup, you should usually only require the default of one.
Administering Volumes Backing up Volumes Online Step 5. Use a backup utility or operating system command to copy the temporary volume to tape, or to some other appropriate backup media. When the backup is complete, you have three choices for what to do with the snapshot volume: • Reattach some or all of the plexes of the snapshot volume with the original volume as described in “Merging a Snapshot Volume (snapback)” on page 314.
Administering Volumes Backing up Volumes Online # vxplex [-g diskgroup] dcoplex=dcologplex convert \ state=SNAPDONE plex dcologplex is the name of an existing DCO plex that is to be associated with the new snapshot plex. You can use the vxprint command to find out the name of the DCO volume as described in “Adding a DCO and DCO Volume” on page 277.
Administering Volumes Backing up Volumes Online To snapshot all the volumes in a single disk group, specify the option -o allvols to vxassist: # vxassist -g diskgroup -o allvols snapshot This operation requires that all snapstart operations are complete on the volumes. It fails if any of the volumes in the disk group do not have a complete snapshot plex in the SNAPDONE state.
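Putting the steps together, a complete online backup cycle for a single volume might look like the following sketch, in which the volume, snapshot, and disk group names are hypothetical:
# vxassist -g mydg -b snapstart vol01
# vxassist -g mydg snapwait vol01
# vxassist -g mydg snapshot vol01 SNAP-vol01
# fsck -F vxfs /dev/vx/rdsk/mydg/SNAP-vol01
After backing up SNAP-vol01 with your usual utility, merge or remove the snapshot volume as described in the sections that follow.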
To merge one or more snapshot plexes back with the original volume, use a command of the form:
# vxassist snapback [nmirror=number] snapvol
Here the nmirror attribute specifies the number of mirrors in the snapshot volume that are to be re-attached. Once the snapshot plexes have been reattached and their data resynchronized, they are ready to be used in another snapshot operation. By default, the data in the original volume is used to update the snapshot plexes that have been re-attached.
If you have split or moved the snapshot volume and the original volume into different disk groups, you must run snapclear on each volume separately, specifying the snap object in the volume that points to the other volume:
# vxassist snapclear volume snap_object
For example, if myvol1 and SNAP-myvol1 are in separate disk groups mydg1 and mydg2 respectively, the following commands stop the tracking on SNAP-myvol1 with respect to myvol1 and on myvol1 with respect to SNAP-myvol1.
v  SNAP-v2  fsgen  20480
ss --       v2     20480  0
In this example, Persistent FastResync is enabled on volume v1, and Non-Persistent FastResync on volume v2. Lines beginning with v, dp and ss indicate a volume, detached plex and snapshot plex respectively. The %DIRTY field indicates the percentage of a snapshot plex or detached plex that is dirty with respect to the original volume.
Administering Volumes Performing Online Relayout Performing Online Relayout NOTE You may need an additional license to use this feature. You can use the vxassist relayout command to reconfigure the layout of a volume without taking it offline. The general form of this command is: # vxassist [-b] [-g diskgroup] relayout volume [layout=layout] \ [relayout_options] NOTE If specified, the -b option makes relayout of the volume a background task.
Specifying a Non-Default Layout
You can specify one or more relayout options to change the default layout configuration. Examples of these options are:
• ncol=number—specifies the number of columns
• ncol=+number—specifies the number of columns to add
• ncol=-number—specifies the number of columns to remove
• stripeunit=size—specifies the stripe width
See the vxassist(1M) manual page for more information about relayout options.
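For example, a sketch that adds one column to a hypothetical striped volume vol03 as a background task:
# vxassist -b -g mydg relayout vol03 ncol=+1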
Administering Volumes Performing Online Relayout Tagging a Relayout Operation If you want to control the progress of a relayout operation, for example to pause or reverse it, use the -t option to vxassist to specify a task tag for the operation.
# vxtask pause myconv
To resume the operation, use the vxtask command:
# vxtask resume myconv
For relayout operations that have not been stopped using the vxtask pause command (for example, the vxtask abort command was used to stop the task, the transformation process died, or there was an I/O failure), resume the relayout by specifying the start keyword to vxrelayout, as shown here:
# vxrelayout -o bg start vol04
NOTE If you use the vxrelayout start command to restart a relayout that you previously suspended using the vxtask pause command, a new untagged task is created to complete the operation. You can no longer use the original task tag to control the relayout.
Administering Volumes Converting Between Layered and Non-Layered Volumes Converting Between Layered and Non-Layered Volumes The vxassist convert command transforms volume layouts between layered and non-layered forms: # vxassist [-b] convert volume [layout=layout] [convert_options] NOTE If specified, the -b option makes conversion of the volume a background task.
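For example, a sketch that converts a hypothetical mirrored-stripe volume vol02 to the layered stripe-mirror layout in the background:
# vxassist -b -g mydg convert vol02 layout=stripe-mirror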
NOTE If the system crashes during relayout or conversion, the process continues when the system is rebooted. However, if the crash occurred during the first stage of a two-stage relayout and convert operation, only the first stage will be completed. You must run vxassist convert manually to complete the operation.
9 Administering Hot-Relocation
Administering Hot-Relocation Introduction Introduction If a volume has a disk I/O failure (for example, because the disk has an uncorrectable error), Volume Manager (VxVM) can detach the plex involved in the failure. I/O stops on that plex but continues on the remaining plexes of the volume. If a disk fails completely, VxVM can detach the disk from its disk group. All plexes on the disk are disabled. If there are any unmirrored volumes on a disk when it is detached, those volumes are also disabled.
Administering Hot-Relocation How Hot-Relocation works How Hot-Relocation works Hot-relocation allows a system to react automatically to I/O failures on redundant (mirrored or RAID-5) VxVM objects, and to restore redundancy and access to those objects. VxVM detects I/O failures on objects and relocates the affected subdisks to disks designated as spare disks or to free space within the disk group.
Administering Hot-Relocation How Hot-Relocation works Step 1. vxrelocd informs the system administrator (and other nominated users, see “Modifying the Behavior of Hot-Relocation” on page 349) by electronic mail of the failure and which VxVM objects are affected. See “Partial Disk Failure Mail Messages” on page 331 and “Complete Disk Failure Mail Messages” on page 332 for more information. Step 2. vxrelocd next determines if any subdisks can be relocated.
Administering Hot-Relocation How Hot-Relocation works • The only available space is on a disk that already contains the RAID-5 log plex or one of its healthy subdisks, failing subdisks in the RAID-5 plex cannot be relocated. • If a mirrored volume has a dirty region logging (DRL) log subdisk as part of its data plex, failing subdisks belonging to that plex cannot be relocated. • If a RAID-5 volume log plex or a mirrored volume DRL log plex fails, a new log plex is created elsewhere.
• If the only available space is on a disk that already contains the RAID-5 log plex or one of its healthy subdisks, failing subdisks in the RAID-5 plex cannot be relocated.
• If a mirrored volume has a dirty region logging (DRL) log subdisk as part of its data plex, failing subdisks belonging to that plex cannot be relocated.
• If a RAID-5 volume log plex or a mirrored volume DRL log plex fails, a new log plex is created elsewhere.
Administering Hot-Relocation How Hot-Relocation works Partial Disk Failure Mail Messages If hot-relocation is enabled when a plex or disk is detached by a failure, mail indicating the failed objects is sent to root. If a partial disk failure occurs, the mail identifies the failed plexes.
Administering Hot-Relocation How Hot-Relocation works # vxrecover -b home src This starts recovery of the failed plexes in the background (the command prompt reappears before the operation completes). If an error message appears later, or if the plexes become detached again and there are no obvious cabling failures, replace the disk (see “Removing and Replacing Disks” on page 97).
Administering Hot-Relocation How Hot-Relocation works any available free space in the disk group in which the failure occurs. If there is not enough spare disk space, a combination of spare space and free space is used. The free space used in hot-relocation must not have been excluded from hot-relocation use. Disks can be excluded from hot-relocation use by using vxdiskadm, vxedit or the VERITAS Enterprise Administrator (VEA).
Administering Hot-Relocation Configuring a System for Hot-Relocation Configuring a System for Hot-Relocation By designating spare disks and making free space on disks available for use by hot relocation, you can control how disk space is used for relocating subdisks in the event of a disk failure. If the combined free space and space on spare disks is not sufficient or does not meet the redundancy constraints, the subdisks are not relocated.
Displaying Spare Disk Information
Use the following command to display information about spare disks that are available for relocation:
# vxdg spare
The following is example output:
GROUP   DISK    DEVICE  TAG     OFFSET  LENGTH  FLAGS
rootdg  disk02  c0t2d0  c0t2d0  0       658007  s
Here disk02 is the only disk designated as a spare. The LENGTH field indicates how much spare space is currently available on disk02 for relocation.
Administering Hot-Relocation Marking a Disk as a Hot-Relocation Spare Marking a Disk as a Hot-Relocation Spare Hot-relocation allows the system to react automatically to I/O failure by relocating redundant subdisks to other disks. Hot-relocation then restores the affected VxVM objects and data. If a disk has already been designated as a spare in the disk group, the subdisks from the failed disk are relocated to the spare disk. Otherwise, any suitable free space in the disk group is used.
Administering Hot-Relocation Marking a Disk as a Hot-Relocation Spare Step 3. At the following prompt, indicate whether you want to add more disks as spares (y) or return to the vxdiskadm main menu (n): Mark another disk as a spare? [y,n,q,?] (default: n) Any VM disk in this disk group can now use this disk as a spare in the event of a failure. If a disk fails, hot-relocation should automatically occur (if possible). You should be notified of the failure and relocation through electronic mail.
Administering Hot-Relocation Removing a Disk from Use as a Hot-Relocation Spare Removing a Disk from Use as a Hot-Relocation Spare While a disk is designated as a spare, the space on that disk is not used for the creation of VxVM objects within its disk group. If necessary, you can free a spare disk for general use by removing it from the pool of hot-relocation disks.
Administering Hot-Relocation Excluding a Disk from Hot-Relocation Use Excluding a Disk from Hot-Relocation Use To exclude a disk from hot-relocation use, use the following command: # vxedit -g disk_group set nohotuse=on diskname Alternatively, using vxdiskadm: Step 1. Select menu item 15 (Exclude a disk from hot-relocation use) from the vxdiskadm main menu. Step 2.
Administering Hot-Relocation Making a Disk Available for Hot-Relocation Use Making a Disk Available for Hot-Relocation Use Free space is used automatically by hot-relocation in case spare space is not sufficient to relocate failed subdisks. You can limit this free space usage by hot-relocation by specifying which free disks should not be touched by hot-relocation. If a disk was previously excluded from hot-relocation use, you can undo the exclusion and add the disk back to the hot-relocation pool.
Make another disk available for hot-relocation use? [y,n,q,?] (default: n)
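The vxedit equivalent of this operation is to clear the nohotuse flag on the disk. A sketch, assuming that setting nohotuse=off reverses the exclusion (disk and disk group names hypothetical):
# vxedit -g mydg set nohotuse=off disk01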
Administering Hot-Relocation Configuring Hot-Relocation to Use Only Spare Disks Configuring Hot-Relocation to Use Only Spare Disks If you want VxVM to use only spare disks for hot-relocation, add the following line to the file /etc/default/vxassist: spare=only If not enough storage can be located on disks marked as spare, the relocation fails. Any free space on non-spare disks is not used.
Administering Hot-Relocation Moving and Unrelocating Subdisks Moving and Unrelocating Subdisks When hot-relocation occurs, subdisks are relocated to spare disks and/or available free space within the disk group. The new subdisk locations may not provide the same performance or data layout that existed before hot-relocation took place. You can move the relocated subdisks (after hot-relocation is complete) to improve performance.
Administering Hot-Relocation Moving and Unrelocating Subdisks CAUTION During subdisk move operations, RAID-5 volumes are not redundant. Moving and Unrelocating Subdisks using vxdiskadm To move the hot-relocated subdisks back to the disk where they originally resided after the disk has been replaced following a failure, use the following procedure: Step 1. Select menu item 14 (Unrelocate subdisks back to a disk) from the vxdiskadm main menu. Step 2.
Administering Hot-Relocation Moving and Unrelocating Subdisks Requested operation is to move all the subdisks which were hot-relocated from disk10 back to disk10 of disk group rootdg. Continue with operation? [y,n,q,?] (default: y) A status message is displayed at the end of the operation. Unrelocate to disk disk10 is complete.
vxunreloc allows you to restore the system to the configuration that existed before the disk failure by moving the hot-relocated subdisks back onto a disk that was replaced due to a failure. When vxunreloc is invoked, you must specify the disk media name where the hot-relocated subdisks originally resided. When vxunreloc moves the subdisks, it moves them to the original offsets.
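For example, the following sketch (disk names hypothetical) first moves the subdisks that were hot-relocated from disk01 back to disk01, and then shows the -n option directing them to a different disk instead:
# vxunreloc -g mydg disk01
# vxunreloc -g mydg -n disk05 disk01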
Administering Hot-Relocation Moving and Unrelocating Subdisks The destination disk should have at least as much storage capacity as was in use on the original disk. If there is not enough space, the unrelocate operation will fail and none of the subdisks will be moved. Forcing hot-relocated subdisks to accept different offsets By default, vxunreloc attempts to move hot-relocated subdisks to their original offsets.
Administering Hot-Relocation Moving and Unrelocating Subdisks subdisk is moved back to the original disk or to a new disk using vxunreloc, the information is erased. The original disk-media name and the original offset are saved in the subdisk records. To print all of the subdisks that were hot-relocated from disk01 in the rootdg disk group, use the following command: # vxprint -g rootdg -se 'sd_orig_dmname="disk01"' Restarting vxunreloc After Errors vxunreloc moves subdisks in three phases: Step 1.
Modifying the Behavior of Hot-Relocation
Hot-relocation is turned on as long as vxrelocd is running. You should leave hot-relocation turned on so that you can take advantage of this feature if a failure occurs. However, if you choose to disable this feature (perhaps because you do not want the free space on some of your disks to be used for relocation), prevent vxrelocd from starting at system startup time.
When executing vxrelocd manually, either include /etc/vx/bin in your PATH or specify vxrelocd’s absolute pathname, for example:
# PATH=/etc/vx/bin:$PATH
# export PATH
# nohup vxrelocd root &
or
# nohup /etc/vx/bin/vxrelocd root user1 user2 &
See the vxrelocd(1M) manual page for more information.
10 Administering Cluster Functionality Introduction A cluster consists of a number of hosts or nodes that share a set of disks. The main benefits of cluster configurations are: • Availability—If one node fails, the other nodes can still access the shared disks. When configured with suitable software, mission-critical applications can continue running by transferring their execution to a standby node in the cluster.
Administering Cluster Functionality Overview of Cluster Volume Management Overview of Cluster Volume Management In recent years, tightly coupled cluster systems have become increasingly popular in the realm of enterprise-scale mission-critical data processing. The primary advantage of clusters is protection against hardware failure. Should the primary node fail or otherwise become unavailable, applications can continue to run by transferring their execution to standby nodes in the cluster.
Administering Cluster Functionality Overview of Cluster Volume Management joining or leaving the cluster, and which have failed. The private network requires at least two communication channels to provide redundancy against one of the channels failing. If only one channel were used, its failure would be indistinguishable from node failure—a condition known as network partitioning.
Administering Cluster Functionality Overview of Cluster Volume Management NOTE You must run commands that configure or reconfigure VxVM objects on the master node. Tasks that must be initiated from the master node include setting up shared disk groups, creating and reconfiguring volumes, and performing snapshot operations. VxVM determines that the first node to join a cluster performs the function of master node. If the master node leaves a cluster, one of the slave nodes is chosen to be the new master.
Administering Cluster Functionality Overview of Cluster Volume Management Each physical disk is marked with a unique disk ID. When cluster functionality for VxVM starts on the master, it imports all shared disk groups (except for any that have the noautoimport attribute set). When a slave tries to join a cluster, the master sends it a list of the disk IDs that it has imported, and the slave checks to see if it can access them all.
Administering Cluster Functionality Overview of Cluster Volume Management NOTE The default activation mode for shared disk groups is off (inactive). Special uses of clusters, such as high availability (HA) applications and off-host backup, can use disk group activation to explicitly control volume access from different nodes in the cluster. The activation mode of a disk group controls volume I/O from different nodes in the cluster.
Table 10-2 (Allowed and Conflicting Activation Modes) summarizes the allowed and conflicting activation modes for shared disk groups: for each mode in which a disk group is activated in the cluster (the table rows), it shows whether an attempt to activate the disk group on another node in each mode (the table columns) succeeds or fails.
NOTE If the default activation mode is anything other than off, an activation following a cluster join, or a disk group creation or import can fail if another node in the cluster has activated the disk group in a conflicting mode.
To display the activation mode for a shared disk group, use the vxdg list diskgroup command as described in “Listing Shared Disk Groups” on page 372.
Administering Cluster Functionality Cluster Initialization and Configuration See “Setting the Connectivity Policy on a Shared Disk Group” on page 377 for information on how to use the vxedit command to set the connectivity policy on a shared disk group. Limitations of Shared Disk Groups The cluster functionality of VxVM does not support RAID-5 volumes, or task monitoring for cluster-shareable disk groups.
Administering Cluster Functionality Cluster Initialization and Configuration When a node joins the cluster, this information is automatically loaded into VxVM on that node at node startup time. NOTE The cluster functionality of VxVM requires that a cluster monitor (such as provided by MC/ServiceGuard) has been configured. If MC/ServiceGuard is chosen as your cluster monitor, no additional configuration of VxVM is required, apart from the cluster configuration requirements of MC/ServiceGuard.
Administering Cluster Functionality Cluster Initialization and Configuration held up and restarted later. In most cases, cluster reconfiguration takes precedence. However, if the volume reconfiguration is in the commit stage, it completes first. For more information on cluster reconfiguration, see “vxclustd Daemon” on page 361 vxclustd Daemon The vxclustd daemon is the VxVM cluster reconfiguration daemon.
Administering Cluster Functionality Cluster Initialization and Configuration fails I/O in progress to shared disks, and stops access to shared disks and the vxclustd daemon. The vxclustd daemon invokes the cluster monitor command to halt the cluster on this node. When a clean node shutdown is performed, vxclustd waits until kernel cluster reconfiguration completes and then exits.
Administering Cluster Functionality Cluster Initialization and Configuration If a vxconfigd daemon on any node goes away during reconfiguration, all nodes are notified and the operation fails. If any node leaves the cluster, the operation fails unless the master has already committed it.
Administering Cluster Functionality Cluster Initialization and Configuration • network address of the vxconfigd daemon on each node On the master node, the vxconfigd daemon sets up the shared configuration by importing shared disk groups, and informs the vxclustd daemon when it is ready for the slave nodes to join the cluster. On slave nodes, the vxconfigd daemon is notified when the slave node can join the cluster.
Administering Cluster Functionality Cluster Initialization and Configuration on the slave node has successfully reconnected to the vxconfigd daemon on the master node, it has very little information about the shared configuration and any attempts to display or modify the shared configuration can fail. For example, shared disk groups listed using the vxdg list command are marked as disabled; when the rejoin completes successfully, they are marked as enabled.
Administering Cluster Functionality Cluster Initialization and Configuration successful shutdown can require a lot of time (minutes to hours). For instance, many applications have the concept of draining, where they accept no new work, but complete any work in progress before exiting. This process can take a long time if, for example, a long-running transaction is active. When the VxVM shutdown procedure is invoked, it checks all volumes in all shared disk groups on the node that is being shut down.
Administering Cluster Functionality Upgrading Cluster Functionality Node Abort If a node does not leave a cluster cleanly, this is because it crashed or because some cluster component made the node leave on an emergency basis. The ensuing cluster reconfiguration calls the VxVM abort function. This procedure immediately attempts to halt all access to shared volumes, although it does wait until pending I/O from or to the disk completes.
Administering Cluster Functionality Dirty Region Logging (DRL) in Cluster Environments Each new Volume Manager release supports two cluster protocol versions. The lower version number corresponds to a previous Volume Manager release. This has a fixed set of features and communication protocols. The higher version number corresponds to the new release of VxVM which has a new set of these features.
Administering Cluster Functionality Dirty Region Logging (DRL) in Cluster Environments In a cluster environment, the VxVM implementation of DRL differs slightly from the normal implementation. The following sections outline some of the differences and discuss some aspects of the cluster environment implementation. Header Compatibility Except for the addition of a cluster-specific magic number, DRL headers in a cluster environment are the same as their non-clustered counterparts.
Administering Cluster Functionality Dirty Region Logging (DRL) in Cluster Environments If a shared disk group is imported by a system without cluster support, VxVM considers the logs of the shared volumes to be invalid and conducts a full volume recovery. After the recovery completes, VxVM uses DRL. The cluster functionality of VxVM can perform a DRL recovery on a non-shared volume.
Administering Cluster Functionality Administering VxVM in Cluster Environments Administering VxVM in Cluster Environments The following sections describe procedures for administering the cluster functionality of VxVM. NOTE Most VxVM commands require superuser or equivalent privileges. Requesting the Status of a Cluster Node The vxdctl utility controls the operation of the vxconfigd volume configuration daemon. The -c option can be used to request cluster information.
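For example, the following sketch shows the command and one possible response on a master node (the exact wording varies with the cluster state):
# vxdctl -c mode
mode: enabled: cluster active - MASTER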
A portion of the output from the vxdisk list command (for the device c4t1d0) is shown here:
Device:     c4t1d0
devicetag:  c4t1d0
type:       sliced
clusterid:  cvm2
disk:       name=disk01 id=963616090.1034.cvm2
timeout:    30
group:      name=rootdg id=963616065.1032.cvm2
flags:      online ready autoconfig shared imported
...
Note that the clusterid field is set to cvm2 (the name of the cluster), and the flags field includes an entry for shared.
Administering Cluster Functionality Administering VxVM in Cluster Environments To display information about one specific disk group, use the following command: # vxdg list diskgroup where diskgroup is the disk group name. For example, the output for the command vxdg list group1 on the master is as follows: Group: group1 dgid: 774222028.1090.teal import-id: 32768.
Administering Cluster Functionality Administering VxVM in Cluster Environments CAUTION The operating system cannot tell if a disk is shared. To protect data integrity when dealing with disks that can be accessed by multiple systems, use the correct designation when adding a disk to a disk group. VxVM allows you to add a disk that is not physically shared to a shared disk group if the node where the disk is accessible is the only node in the cluster.
Administering Cluster Functionality Administering VxVM in Cluster Environments # vxdg -s import diskgroup where diskgroup is the disk group name or ID. On subsequent cluster restarts, the disk group is automatically imported as shared. Note that it can be necessary to deport the disk group (using the vxdg deport diskgroup command) before invoking the vxdg utility. Forcibly Importing a Disk Group You can use the -f option to the vxdg command to import a disk group forcibly.
Administering Cluster Functionality Administering VxVM in Cluster Environments Moving Objects Between Disk Groups As described in “Moving Objects Between Disk Groups” on page 181, you can use the vxdg move command to move a self-contained set of VxVM objects such as disks and top-level volumes between disk groups. In a cluster, you can move such objects between private disk groups on any cluster node where those disk groups are imported.
Administering Cluster Functionality Administering VxVM in Cluster Environments Changing the Activation Mode on a Shared Disk Group NOTE The activation mode for access by a cluster node to a shared disk group is set on that node. The activation mode of a shared disk group can be changed using the following command: # vxdg -g diskgroup set activation=mode The activation mode is one of exclusive-write or ew, read-only or ro, shared-read or sr, shared-write or sw, or off.
Administering Cluster Functionality Administering VxVM in Cluster Environments When using the vxassist command to create a volume, you can use the exclusive=on attribute to specify that the volume may only be opened by one node in the cluster at a time. For example, to create the mirrored volume volmir in the disk group dskgrp, and configure it for exclusive open, use the following command: # vxassist -g dskgrp make volmir 5g layout=mirror exclusive=on Multiple opens by the same node are also supported.
Administering Cluster Functionality Administering VxVM in Cluster Environments # vxdctl list This command produces output similar to the following: version: 3/1 seqno: 0.
Administering Cluster Functionality Administering VxVM in Cluster Environments Upgrading the Cluster Protocol Version NOTE The cluster protocol version can only be updated on the master node. After all the nodes in the cluster have been updated with a new cluster protocol, you can upgrade the entire cluster using the following command on the master node: # vxdctl upgrade Recovering Volumes in Shared Disk Groups NOTE Volumes can only be recovered on the master node.
where node is an integer. If a comma-separated list of nodes is supplied, the vxstat utility displays the sum of the statistics for the nodes in the list. For example, to obtain statistics for node 2, volume vol1, use the following command:
# vxstat -g group1 -n 2 vol1
This command produces output similar to the following:
                OPERATIONS      BLOCKS          AVG TIME(ms)
TYP NAME        READ    WRITE   READ    WRITE   READ    WRITE
vol vol1        2421    0       600000  0       99.0    0.0
11 Configuring Off-Host Processing Introduction Off-host processing allows you to implement the following activities: • Data Backup—As the requirement for 24 x 7 availability becomes essential for many businesses, organizations cannot afford the downtime involved in backing up critical data offline. By taking a snapshot of the data, and backing up from this snapshot, business-critical applications can continue to run without extended down time or impacted performance.
Configuring Off-Host Processing Introduction FastResync of Volume Snapshots NOTE You may need an additional license to use this feature. VxVM allows you to take multiple snapshots of your data at the level of a volume. A snapshot volume contains a stable copy of a volume’s data at a given moment in time that you can use for online backup or decision support. If FastResync is enabled on a volume, VxVM uses a FastResync map to keep track of which blocks are updated in the volume and in the snapshot.
Configuring Off-Host Processing Implementing Off-Host Processing Solutions option to resynchronize the snapshot plexes. You cannot use vxassist snapback for this purpose. This restriction does not apply if you split a snapshot volume into a separate disk group from its original volume, and subsequently return the snapshot volume to the original disk group. For more information, see “Volume Snapshots” on page 50 and “FastResync” on page 52.
Configuring Off-Host Processing Implementing Off-Host Processing Solutions that are attached to different host controllers than the disks in the primary volumes, it is possible to avoid contending with the primary host for I/O resources.
Configuring Off-Host Processing Implementing Off-Host Processing Solutions Persistent FastResync and disk group split and join features of VxVM. It is beyond the scope of this guide to describe how to configure a database to use this procedure, or how to perform the backup itself. To back up a volume in a private disk group, use the following procedure. Step 1.
NOTE By default, VxVM attempts to avoid placing snapshot mirrors on a disk that already holds any plexes of a data volume. However, this may be impossible if insufficient space is available in the disk group. In this case, VxVM uses any available space on other disks in the disk group.
Configuring Off-Host Processing Implementing Off-Host Processing Solutions If a database spans more than one volume, specify all the volumes and their snapshot volumes on the same line, for example: # vxassist -g dbasedg snapshot vol1 snapvol1 vol2 snapvol2 \ vol3 snapvol3 Step 6. If you temporarily suspended updates to the volume by a database in step 4, release all the tables from hot backup mode. Step 7.
Configuring Off-Host Processing Implementing Off-Host Processing Solutions Step 12. On the OHP host, use the following command to deport the snapshot volume’s disk group: # vxdg deport snapvoldg Step 13. On the primary host, re-import the snapshot volume’s disk group using the following command: # vxdg import snapvoldg Step 14. On the primary host, use the following command to rejoin the snapshot volume’s disk group with the original volume’s disk group: # vxdg join snapvoldg volumedg Step 15.
Configuring Off-Host Processing Implementing Off-Host Processing Solutions # vxprint -g volumedg -F%hasdcolog volume This command returns on if there is a DCO and DCO volume; otherwise, it returns off. If the volume is not associated with a DCO object and DCO volume, follow the procedure described in “Adding a DCO and DCO Volume” on page 277. Step 2.
Configuring Off-Host Processing Implementing Off-Host Processing Solutions If you start vxassist snapstart in the background using the -b option, you can use the vxassist snapwait command to wait for the creation of the mirror to complete as shown here: # vxassist -g volumedg snapwait volume If vxassist snapstart is not run in the background, it does not exit until the mirror has been synchronized with the volume. The mirror is then ready to be used as a plex of a snapshot volume.
Configuring Off-Host Processing Implementing Off-Host Processing Solutions Step 9. On the primary host, deport the snapshot volume’s disk group using the following command: # vxdg deport snapvoldg Step 10. On the OHP host where the replica database is to be set up, use the following command to import the snapshot volume’s disk group: # vxdg import snapvoldg Step 11. The snapshot volume is initially disabled following the split.
Configuring Off-Host Processing Implementing Off-Host Processing Solutions Step 4. On the primary host, use the following command to rejoin the snapshot volume’s disk group with the original volume’s disk group: # vxdg join snapvoldg volumedg Step 5. The snapshot volume is initially disabled following the join. Use the following commands on the primary host to recover and restart the snapshot volume: # vxrecover -g volumedg -m snapvol # vxvol -g volumedg start snapvol Step 6.
12 Performance Monitoring and Tuning
Introduction
Volume Manager (VxVM) can improve overall system performance by optimizing the layout of data storage on the available hardware. This chapter contains guidelines for establishing performance priorities, for monitoring performance, and for configuring your system appropriately.
Performance Guidelines
VxVM allows you to optimize data storage performance using the following two strategies:
• Balance the I/O load among the available disk drives.
Performance Monitoring and Tuning Performance Guidelines Striping Striping improves access performance by cutting data into slices and storing it on multiple devices that can be accessed in parallel. Striped plexes improve access performance for both read and write operations. Having identified the most heavily accessed volumes (containing file systems or databases), you can increase access bandwidth to this data by striping it across portions of multiple disks.
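As an illustration, a heavily accessed volume could be created striped across four disks with a 64-kilobyte stripe unit, as in this sketch (names and sizes hypothetical):
# vxassist -g mydg make stripevol 10g layout=stripe ncol=4 stripeunit=64k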
Performance Monitoring and Tuning Performance Guidelines In some cases, you can also use mirroring to improve I/O performance. Unlike striping, the performance gain depends on the ratio of reads to writes in the disk accesses. If the system workload is primarily write-intensive (for example, greater than 30 percent writes), mirroring can result in reduced performance. Combining Mirroring and Striping NOTE You may need an additional license to use this feature.
Performance Monitoring and Tuning Performance Guidelines RAID-5 NOTE You may need an additional license to use this feature. RAID-5 offers many of the advantages of combined mirroring and striping, but requires less disk space. RAID-5 read performance is similar to that of striping and RAID-5 parity offers redundancy similar to mirroring. Disadvantages of RAID-5 include relatively slow write performance.
Performance Monitoring and Tuning Performance Monitoring In the configuration example shown in the figure “Use of Mirroring and Striping for Improved Performance” the read policy of the mirrored-stripe volume labeled Hot Vol is set to prefer for the striped plex PL1. This policy distributes the load when reading across the otherwise lightly-used disks in PL1, as opposed to the single disk in plex PL2.
Performance Monitoring and Tuning Performance Monitoring Setting Performance Priorities The important physical performance characteristics of disk hardware are the relative amounts of I/O on each drive, and the concentration of the I/O within a drive to minimize seek time. Based on monitored results, you can then move the location of subdisks to balance I/O activity across the disks. The logical priorities involve software operations and how they are managed.
Performance Monitoring and Tuning Performance Monitoring trace data, or both of these (this is the default action). Selection can be limited to a specific disk group, to specific VxVM kernel I/O object types, or to particular named objects or devices. For detailed information about how to use vxtrace, refer to the vxtrace(1M) manual page.
Performance Monitoring and Tuning Performance Monitoring OPERATIONS BLOCKS TYP NAME READ vol blop 0 vol foobarvol 0 vol rootvol 73017 vol swapvol 13197 vol testvol 0 WRITE 0 0 181735 20252 0 AVG TIME(ms) READ WRITE 0 0 0 0 718528 1114227 105569 162009 0 0 READ 0.0 0.0 26.8 25.8 0.0 WRITE 0.0 0.0 27.9 397.0 0.0 Additional volume statistics are available for RAID-5 configurations. For detailed information about how to use vxstat, refer to the vxstat (1M) manual page.
vol src         79174    23603      425472    139302    22.4    30.9
vol swapvol     22751    32364      182001    258905    25.3    323.2
Such output helps to identify volumes with an unusually large number of operations or excessive read or write times. To display disk statistics, use the vxstat -d command.
Performance Monitoring and Tuning Performance Monitoring where dest_disk is the disk to which you want to move the volume. It is not necessary to specify a dest_disk. If you do not specify a dest_disk, the volume is moved to an available disk with enough space to contain the volume. For example, to move the volume from disk03 to disk04, use the following command: # vxassist move archive !disk03 disk04 This command indicates that the volume is to be reorganized so that no part remains on disk03.
Performance Monitoring and Tuning Performance Monitoring Use I/O tracing (or subdisk statistics) to determine whether volumes have excessive activity in particular regions of the volume. If the active regions can be identified, split the subdisks in the volume and move those regions to a less busy disk. CAUTION Striping a volume, or splitting a volume across multiple disks, increases the chance that a disk failure results in failure of that volume.
Performance Monitoring and Tuning Tuning VxVM Tuning VxVM This section describes how to adjust the tunable parameters that control the system resources used by VxVM. Depending on the system resources that are available, adjustments may be required to the values of some tunable parameters to optimize performance. General Tuning Guidelines VxVM is optimally tuned for most configurations ranging from small systems to larger servers.
Performance Monitoring and Tuning Tuning VxVM A general recommendation for users of disk array subsystems is to create a single disk group for each array so the disk group can be physically moved as a unit between systems. Number of Configuration Copies for a Disk Group Selection of the number of configuration copies for a disk group is based on a trade-off between redundancy and performance.
Performance Monitoring and Tuning Tuning VxVM The values of system tunables can be examined by selecting Kernel Configuration > Configuration Parameters in the System Administration Manager (SAM). Tunable Parameters The following sections describe specific tunable parameters. dmp_pathswitch_blks_shift The number of contiguous I/O blocks (expressed as an integer power of 2) that are sent along a DMP path to an Active/Active array before switching to the next available path.
Performance Monitoring and Tuning Tuning VxVM The default for this tunable is 50 ticks. Increasing this value results in slower recovery operations and consequently lower system impact while recoveries are being performed. vol_fmr_logsz The maximum size in kilobytes of the bitmap that Non-Persistent FastResync uses to track changed blocks in a volume.
vol_max_vol

The maximum number of volumes that can be created on the system. This value can be set to between 1 and the maximum number of minor numbers representable in the system. The default value for this tunable is 16777215.

vol_maxio

The maximum size of logical I/O operations that can be performed without breaking up the request. I/O requests to VxVM that are larger than this value are broken up and performed synchronously.
vol_maxparallelio

The number of I/O operations that the vxconfigd(1M) daemon is permitted to request from the kernel in a single VOL_VOLDIO_READ or VOL_VOLDIO_WRITE ioctl call. The default value for this tunable is 256. Changing this value is not recommended.

vol_maxspecialio

The maximum size of an I/O request that can be issued by an ioctl call. Although the ioctl request itself can be small, it can request that a large I/O operation be performed.
volcvm_smartsync

If set to 0, volcvm_smartsync disables SmartSync on shared disk groups. If set to 1, this parameter enables the use of SmartSync with shared disk groups. See “SmartSync Recovery Accelerator” on page 48 for more information.

voldrl_max_drtregs

The maximum number of dirty regions that can exist for non-sequential DRL on a volume. A larger value may result in improved system performance at the expense of recovery time.
voliomem_maxpool_sz

The maximum memory requested from the system by VxVM for internal purposes. This tunable has a direct impact on the performance of VxVM as it prevents one I/O operation from using all the memory in the system. VxVM allocates two pools that can grow up to voliomem_maxpool_sz, one for RAID-5 and one for mirrored volumes.
If trace data is often lost because this buffer is too small, increase this value.

voliot_iobuf_limit

The upper limit to the size of memory that can be used for storing tracing buffers in the kernel. Tracing buffers are used by the VxVM kernel to store the tracing event records. As trace buffers are requested to be stored in the kernel, the memory for them is drawn from this pool.
volraid_rsrtransmax

The maximum number of transient reconstruct operations that can be performed in parallel for RAID-5. A transient reconstruct operation is one that occurs on a non-degraded RAID-5 volume and was not predicted. Limiting the number of these operations that can occur simultaneously removes the possibility of flooding the system with many reconstruct operations, and so reduces the risk of causing memory starvation.
A  Commands Summary

This appendix summarizes the usage and purpose of important commands in VERITAS Volume Manager (VxVM). References are included to longer descriptions in the remainder of this book.
For detailed information about the usage of a command, refer to the appropriate manual page in the 1M section.

Table A-1    Obtaining Information About Objects in VxVM

vxdisk list [diskname]
    Lists disks under control of VxVM. See “Displaying Disk Information” on page 106.

vxdg list [diskgroup]
    Lists information about disk groups. See “Displaying Disk Group Information” on page 154.

vxdg -s list
    Lists information about shared disk groups in a cluster. See “Listing Shared Disk Groups” on page 372.
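For example, a quick inventory of the disks and disk groups known to VxVM can be obtained with the first two commands in Table A-1:

# vxdisk list
# vxdg list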
Table A-2    Administering Disks

vxedit rename olddisk newdisk
    Renames a disk under control of VxVM. See “Renaming a Disk” on page 104.

vxedit set reserve=on|off diskname
    Reserves a disk (excluding it from general allocation in a disk group), or cancels the reservation. See “Reserving Disks” on page 105.

vxedit set nohotuse=on|off diskname
    Disallows (on) or allows (off) free space on a disk to be used for hot-relocation.
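For example, to reserve a disk so that it is used only when explicitly named in an allocation request, a command of the following form can be used (a sketch; the disk name disk01 is hypothetical):

# vxedit set reserve=on disk01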
Table A-3    Creating and Administering Disk Groups

vxdg -s init diskgroup [diskname=]devicename
    Creates a shared disk group in a cluster using a pre-initialized disk. See “Creating a Shared Disk Group” on page 373.

vxdg [-n newname] deport diskgroup
    Deports a disk group and optionally renames it. See “Deporting a Disk Group” on page 161.

vxdg [-n newname] import diskgroup
    Imports a disk group and optionally renames it.
vxrecover -g diskgroup -sb
    Starts all volumes in an imported disk group. See “Moving Disk Groups Between Systems” on page 168.

vxdg destroy diskgroup
    Destroys a disk group and releases its disks. See “Destroying a Disk Group” on page 189.
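For example, the deport, import, and recovery commands in Table A-3 are typically combined to move a disk group from one host to another. A sketch (the disk group names mktdg and salesdg are hypothetical); the first command runs on the original host, the remaining two on the new host:

# vxdg -n salesdg deport mktdg
# vxdg import salesdg
# vxrecover -g salesdg -sb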
Table A-4    Creating and Administering Subdisks

vxmake sd subdisk diskname,offset,length
    Creates a subdisk. See “Creating Subdisks” on page 199.

vxunreloc [-g diskgroup] original_disk
    Relocates subdisks to their original disks. See “Moving and Unrelocating Subdisks using vxunreloc” on page 345.

vxsd dis subdisk
    Dissociates a subdisk from a plex. See “Dissociating Subdisks from Plexes” on page 207.

vxedit rm subdisk
    Removes a subdisk. See “Removing Subdisks” on page 208.

vxsd -o rm dis subdisk
    Dissociates and removes a subdisk from a plex.
Table A-5    Creating and Administering Plexes

vxplex mv oldplex newplex
    Replaces a plex. See “Moving Plexes” on page 221.

vxplex cp volume newplex
    Copies a volume onto a plex. See “Copying Plexes” on page 222.

vxplex fix clean plex
    Sets the state of a plex in an unstartable volume to CLEAN. See “Reattaching Plexes” on page 220.

vxplex -o rm dis plex
    Dissociates and removes a plex from a volume.
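For example, to take a mirror out of use and remove it from its volume in a single step, the combined dissociate-and-remove form can be used. A sketch (the plex name vol01-02 is hypothetical):

# vxplex -o rm dis vol01-02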
Table A-6    Creating Volumes

vxassist make volume length layout=stripe|raid5 [stripeunit=W] [ncol=N] [attributes]
    Creates a striped or RAID-5 volume. See “Creating a Striped Volume” on page 247 and “Creating a RAID-5 Volume” on page 252.

vxassist make volume length layout=layout mirror=ctlr [attributes]
    Creates a volume with mirrored data plexes on separate controllers. See “Mirroring across Targets, Controllers or Enclosures” on page 250.
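For example, to create a 10-gigabyte striped volume with three columns and a 64-kilobyte stripe unit, a command of the following form can be used. A sketch (the volume name stripevol, the disk group mydg, and the sizes are hypothetical):

# vxassist -g mydg make stripevol 10g layout=stripe ncol=3 stripeunit=64k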
Table A-7    Administering Volumes

vxassist remove log volume [attributes]
    Removes a log from a volume. See “Removing a DCO and DCO Volume” on page 281, “Removing a DRL Log” on page 284, and “Removing a RAID-5 Log” on page 287.

vxvol set fastresync=on|off volume
    Turns FastResync on or off for a volume. See “Adding a RAID-5 Log” on page 285.

vxassist growto volume length
    Grows a volume to a specified size. See “Resizing Volumes using vxassist” on page 290.
vxassist snapclear snapshot
    Makes the snapshot volume independent. See “Dissociating a Snapshot Volume (snapclear)” on page 315.

vxassist [-g diskgroup] relayout volume [layout=layout] [relayout_options]
    Performs online relayout of a volume. See “Performing Online Relayout” on page 318.
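For example, to convert an existing volume to a four-column striped layout while it remains online, a command of the following form can be used. A sketch (the volume name vol03, the disk group mydg, and the column count are hypothetical):

# vxassist -g mydg relayout vol03 layout=stripe ncol=4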
Table A-8    Monitoring and Controlling Tasks

vxcommand -t tasktag [options] [arguments]
    Specifies a task tag to a command. See “Specifying Task Tags” on page 267.

vxtask [-h] list
    Lists tasks running on a system. See “vxtask Usage” on page 269.

vxtask monitor task
    Monitors the progress of a task. See “vxtask Usage” on page 269.

vxtask pause task
    Suspends operation of a task. See “vxtask Usage” on page 269.

vxtask -p list
    Lists all paused tasks.
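For example, a long-running operation can be tagged when it is started and then controlled through vxtask. A sketch (the tag myconv, the disk group mydg, and the relayout operation are hypothetical):

# vxassist -g mydg -t myconv relayout vol03 layout=stripe ncol=4
# vxtask monitor myconv
# vxtask pause myconv
# vxtask resume myconv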
Glossary

active/active disk arrays
    This type of multipathed disk array allows you to access a disk in the disk array through all the paths to the disk simultaneously, without any performance degradation.

active/passive disk arrays
    This type of multipathed disk array allows one path to a disk to be designated as primary and used to access the disk at any time.
associate
    The process of establishing a relationship between VxVM objects; for example, a subdisk that has been created and defined as having a starting point within a plex is referred to as being associated with that plex.

associated plex
    A plex associated with a volume.

associated subdisk
    A subdisk associated with a plex.

atomic operation
    An operation that either succeeds completely or fails and leaves everything as it was before the operation was started.
detached
    A state in which a VxVM object is associated with another object, but not enabled for use.

device name
    The device name or address used to access a physical disk, such as c0t0d0. The c#t#d# syntax identifies the controller, target address, and disk.
disk media record
    A configuration record that identifies a particular disk, by disk ID, and gives that disk a logical (or administrative) name.

disk name
    A logical or administrative name chosen for a disk that is under the control of VxVM, such as disk03. The term disk media name is also used to refer to a disk name.

dissociate
    The process by which any link that exists between two VxVM objects is removed.
initiating node
    The node on which the system administrator is running a utility that requests a change to VxVM objects. This node initiates a volume reconfiguration.

JBOD
    The common name for an unintelligent disk array which may, or may not, support the hot-swapping of disks. The name is derived from “just a bunch of disks.”

log plex
    A plex used to store a RAID-5 log. The term log plex may also be used to refer to a Dirty Region Logging plex.
path
    When a disk is connected to a host, the path to the disk consists of the HBA (Host Bus Adapter) on the host, the SCSI or fibre cable connector, and the controller on the disk or disk array. These components constitute a path to a disk. A failure on any of these results in DMP trying to shift all I/O for that disk onto the remaining (alternate) paths. Also see active/passive disk arrays, primary path and secondary path.
root configuration
    The configuration database for the root disk group. This is special in that it always contains records for other disk groups, which are used for backup purposes only. It also contains disk records that define all disk devices on the system.

root disk
    The disk containing the root file system. This disk may be under VxVM control.

root disk group
    A special private disk group that always exists on the system. The root disk group is named rootdg. Even though rootdg is the default disk group, it does not necessarily contain the root disk unless this is under VxVM control.
stripe size
    The sum of the stripe unit sizes comprising a single stripe across all columns being striped.

stripe unit
    Equally-sized areas that are allocated alternately on the subdisks (within columns) of each striped plex. In an array, this is a set of logically contiguous blocks that exist on each disk before allocations are made from the next disk in the array. A stripe unit may also be referred to as a stripe element.

stripe unit size
    The size of each stripe unit.
Index Symbols /dev/vx/dsk block device files, 260 /dev/vx/rdsk character device files, 260 /etc/default/vxassist defaults file, 231 /etc/default/vxassist file, 342 /etc/default/vxdg defaults file, 357 /etc/fstab file, 295 /etc/vx/cntrls.exclude file, 76 /etc/vx/disks.exclude file, 76 /etc/vx/enclr.exclude file, 77 /etc/vx/volboot file, 195 /sbin/rc2.
Index upgrading cluster protocol version, 380 upgrading online, 367 use of MC/ServiceGuard with VxVM, 360 vol_fmr_logsz tunable, 409 volume reconfiguration, 362 vxclustd, 361 vxdctl, 371 vxrecover, 380 vxstat, 380 cluster-shareable disk groups in clusters, 354 cmhaltnode interaction with VXVM, 366 columns changing number of, 319 in striping, 22 mirroring in striped-mirror volumes, 249 comment plex attribute, 224 subdisk attribute, 209 concatenated volumes, 19, 226 concatenated-mirror volumes converting to m
Index re-including support for, 72 removing vendor-supplied support package, 70 disk groups activating shared, 377 activation in clusters, 357 adding disks to, 158 avoiding conflicting minor numbers on import, 170 clearing locks on disks, 169 cluster-shareable, 354 converting to private, 375 creating, 156 creating shared, 373 creating with old version number, 194 default, 151 defaults file for shared, 357 defined, 12 deporting, 161 designating as shareable, 354 destroying, 189 disabling, 188 displaying fre
Index layout of DCO plexes, 177 making available for hot-relocation, 336 making free space available for hot-relocation use, 340 marking as spare, 336 mirroring volumes on, 274 moving between disk groups, 167, 181 moving disk groups between systems, 168 moving volumes from, 296 naming schemes, 64 number, 4 obtaining performance statistics, 403 partial failure messages, 331 postponing replacement, 97 primary path, 131 putting under control of VxVM, 76 reinitializing, 86 releasing from disk groups, 189 removi
Index plex state, 214 volume state, 265 ENABLED plex kernel state, 218 volume kernel state, 266 enabled paths, displaying, 131 enclosure-based naming, 7, 78 displayed by vxprint, 79 DMP, 112 enclosure-based naming scheme, 65 enclosures, 7 discovering disk access names in, 79 issues with nopriv disks, 80 issues with simple disks, 80 mirroring across, 250 error messages Disk for disk group not found, 169 Disk group has no valid configuration copies, 169 Disk group version doesn’t support feature, 190 Disk is
Index use of spare disks and free space, 333 vxrelocd, 327 I I/O use of statistics in performance tuning, 402 using traces for performance tuning, 405 I/O operations maximum number in parallel, 410 maximum size of, 410 identifiers for tasks, 267 initialization of disks, 76, 83 ioctl calls, 410, 411 IOFAIL plex condition, 217 IOFAIL plex state, 214 K kernel states for plexes, 217 volumes, 266 L layered volumes converting to non-layered, 322 defined, 36, 227 striped-mirror, 28 layouts changing default used by
Index converting to striped-mirror, 322 creating, 248 defined, 226 performance, 397 mirroring defined, 25 mirroring plus striping, 27 mirrors adding to volumes, 273 boot disk, 89 creating of VxVM root disk, 90 creating snapshot, 310 defined, 17 removing from volumes, 276 specifying number of, 242 multipathing displaying information about, 130 N names changing for disk groups, 165 defining for snapshot volumes, 313 device, 4, 64 disk media, 13 plex, 16 plex attribute, 224 renaming disks, 104 subdisk, 14 subd
Index moving volumes to improve, 403 obtaining statistics for disks, 403 obtaining statistics for volumes, 401 RAID-5 volumes, 398 setting priorities, 400 striped volumes, 396 striping to improve, 404 tracing volume operations, 400 tuning large systems, 406 tuning VxVM, 406 using I/O statistics, 402 Persistent FastResync, 53, 54 physical disks adding to disk groups, 158 clearing locks on, 169 complete failure messages, 332 determining failed, 331 displaying information, 106 displaying information about, 154
Index maximum number per volume, 16 mirrors, 17 moving, 221, 280 name attribute, 224 names, 16 partial failure messages, 331 putil attribute, 224 putting online, 220, 271 reattaching, 220 recovering after correctable hardware failure, 332 removing, 222 removing from volumes, 276 sparse, 204 specifying for online relayout, 319 states, 213 striped, 22 taking offline, 219, 271 tutil attribute, 224 types, 15 polling interval for DMP restore, 147 preferred plex performance of read policy, 398 read policy, 293 pr
Index performing online, 318 resuming, 321 reversing direction of, 321 specifying non-default, 319 specifying plexes, 319 specifying task tags for, 320 storage, 39 transformation characteristics, 43 viewing status of, 320 relocation automatic, 326 complete failure messages, 332 limitations, 328 partial failure messages, 331 REMOVED plex condition, 217 removing disks, 97 removing physical disks, 93 replacing disks, 97 replay logs and sequential DRL, 47 REPLAY volume state, 265 resilvering databases, 48 resto
Index displaying information about, 316 merging with original volumes, 314 of RAID-5 volumes, 308 on multiple volumes, 58 removing, 312 resynchronization on snapback, 58 resynchronizing volumes from, 315 used to back up volumes online, 309 SNAPTMP plex state, 215 spanned volumes, 19 spanning, 19 spare disks displaying, 335 marking disks as, 336 used for hot-relocation, 332 sparse plexes, 204 STALE plex state, 216 states for plexes, 213 volume, 264 storage ordered allocation of, 237, 246, 253 storage attribu
Index increasing for VxVM rootable system, 92 SYNC volume state, 265 T t#, 4, 65 tags for tasks, 267 specifying for online relayout tasks, 320 specifying for tasks, 267 target IDs number, 4 specifying to vxassist, 236 target mirroring, 238, 250 task monitor in VxVM, 267 tasks aborting, 268 changing state of, 268, 269 identifiers, 267 listing, 268 managing, 268 modifying parameters of, 269 monitoring, 268 monitoring online relayout, 320 pausing, 269 resuming, 269 specifying tags, 267 specifying tags on onlin
Index vol_maxparallelio tunable, 411 vol_maxspecialio tunable, 411 vol_subdisk_num tunable, 411 volboot file, 195 adding entry to, 195 volcvm_smartsync tunable, 412 voldrl_max_drtregs tunable, 412 voldrl_max_seq_dirty tunable, 47, 412 voldrl_min_regionsz tunable, 412 voliomem_chunk_size tunable, 412 voliomem_maxpool_sz tunable, 413 voliot_errbuf_default tunable, 413 voliot_iobuf_dflt tunable, 413 voliot_iobuf_limit tunable, 414 voliot_iobuf_max tunable, 414 voliot_max_open tunable, 414 volraid_minpool_size
Index finding out by how much can grow, 288 flagged as dirty, 44 initializing, 258 initializing contents to zero, 259 kernel states, 266 layered, 27, 36, 227 limit on number of plexes, 16 limitations, 16 making immediately available for use, 258 maximum number of, 410 maximum number of data plexes, 399 merging snapshots, 314 mirrored, 25, 226 mirrored-concatenated, 27 mirrored-stripe, 26, 226 mirroring across controllers, 240, 250 mirroring across targets, 238, 250 mirroring all, 273 mirroring on disks, 274
Index used to create concatenated-mirror volumes, 243 used to create mirrored volumes, 242 used to create mirrored-concatenated volumes, 242 used to create mirrored-stripe volumes, 248 used to create RAID-5 volumes, 252 used to create snapshots, 309 used to create striped volumes, 247 used to create striped-mirror volumes, 249 used to create volumes, 230 used to define layout on specified storage, 236 used to discover maximum volume size, 234 used to display information about snapshots, 316 used to dissoci
Index used to destroy disk groups, 189 used to disable a disk group, 188 used to display disk group version, 193 used to display free space in disk groups, 155 used to display information about disk groups, 154 used to force import of disk groups, 169 used to import disk groups, 164 used to import shared disk groups, 374 used to join disk groups, 185 used to list objects affected by move, 177 used to list shared disk groups, 372 used to list spare disks, 335 used to move disk groups between systems, 168 u
Index used to display status of DMP error daemons, 148 used to display status of DMP restore daemon, 148 used to list controllers, 134 used to rename enclosures, 146 used to set restore polling interval, 147 used to specify DMP restore policy, 147 used to start DMP restore daemon, 147 used to stop DMP restore daemon, 148 vxedit used to change plex attributes, 224 used to change subdisk attributes, 209 used to configure number of configuration copies for a disk group, 407 used to exclude free space on disks
Index used to add subdisks to striped plexes, 205 used to associate subdisks with existing plexes, 204 used to dissociate subdisks, 207 used to fill in sparse plexes, 205 used to join subdisks, 203 used to move subdisk contents, 201 used to remove subdisks from VxVM, 207 used to split subdisks, 202 vxstat used to determine failed disk, 331 used to obtain disk performance statistics, 403 used to obtain volume performance statistics, 401 used with clusters, 380 zeroing counters, 402 vxtask used to abort task