Veritas™ File System 5.0.1 Administrator's Guide HP-UX 11i v3 HP Part Number: 5900-0082 Published: November 2009 Edition: 1.
© Copyright 2009 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation and Technical Data for Commercial Items are licensed to the U.S. Government under vendor’s standard commercial license. The information contained herein is subject to change without notice.
Contents

Technical Support

Chapter 1  Introducing Veritas File System
    About Veritas File System
    Logging
    Extents
    File system disk layouts

Chapter 2  VxFS performance: creating, mounting, and tuning file systems
    Creating a VxFS file system
    Block size
    Intent log size
    Mounting a VxFS file system

Chapter 4  VxFS I/O Overview
    About VxFS I/O
    Buffered and Direct I/O
    Direct I/O
    Unbuffered I/O
    Data synchronous I/O

Chapter 7  Quotas
    About quota limits
    About quota files on Veritas File System
    About quota commands
    Using quotas
    Turning on quotas

    Volume encapsulation
    Encapsulating a volume
    Deencapsulating a volume
    Reporting file extents
    Examples of reporting file extents
    Load balancing
    File placement policy rule and statement ordering
    File placement policies and extending files

Appendix A  Quick Reference
    Command summary
    Online manual pages
    Creating a VxFS file system

    About unique message identifiers
    Unique message identifiers

Appendix C  Disk layout
    About disk layouts
    Supported disk layouts and operating systems
    VxFS Version 4 disk layout
Chapter 1

Introducing Veritas File System

This chapter includes the following topics:

■ About Veritas File System
■ Veritas File System features
■ Veritas File System performance enhancements
■ Using Veritas File System

About Veritas File System

A file system is simply a method for storing and organizing computer files and the data they contain to make it easy to find and access them.
■ File system disk layouts

Logging

A key aspect of any file system is how it recovers if a system crash occurs. Earlier methods required a time-consuming scan of the entire file system. A better solution is logging (or journaling) the metadata of files. VxFS logs new attribute information into a reserved area of the file system whenever file system changes occur.
■ Extent-based allocation
  Extents allow disk I/O to take place in units of multiple blocks if storage is allocated in consecutive blocks.
■ Extent attributes
  Extent attributes are the extent allocation policies associated with a file.
■ Fast file system recovery
  VxFS provides fast recovery of a file system from system failure.
■ Multi-volume support
  The multi-volume support feature allows several volumes to be represented by a single logical object.
■ Dynamic Storage Tiering
  The Dynamic Storage Tiering (DST) option allows you to configure policies that automatically relocate files from one volume to another, or relocate files by running file relocation commands. This can improve performance for applications that access specific types of files.
first     Used for single indirection. Each entry in the extent indicates the starting block number of an indirect data extent.

second    Used for double indirection. Each entry in the extent indicates the starting block number of a single indirect address extent. Each indirect address extent is 8K long and contains 2048 entries.
■ While there are no limits on the levels of indirection, lower levels are expected in this format since data extents have variable lengths.
■ This format uses a type indicator that determines its record format and content, and accommodates new requirements and functionality for future types.

The current typed format is used on regular files and directories only when indirection is needed.
a full structural check of the entire file system. Replaying the intent log may not completely recover the damaged file system structure if there was a disk hardware failure; hardware problems may require a complete file system check using the fsck utility provided with VxFS.

See “The log option and data integrity” on page 22.

VxFS intent log resizing

The VxFS intent log is allocated when the file system is first created.
Enhanced data integrity modes

For most UNIX file systems, including VxFS, the default mode for writing to a file is delayed, or buffered, meaning that the data to be written is copied to the file system cache and later flushed to disk. A delayed write provides much better performance than synchronously writing the data to disk.
disk to guarantee the persistence of the file data before renaming it. The rename() call is also guaranteed to be persistent when the system call returns. The changes to file system data and metadata caused by the fsync(2) and fdatasync(2) system calls are guaranteed to be persistent once the calls return.

Enhanced performance mode

VxFS has a mount option that improves performance: delaylog.
Warning: Some applications and utilities may not work on large files.

Access Control Lists

An Access Control List (ACL) stores a series of entries that identify specific users or groups and their access privileges for a directory or file. A file may have its own ACL or may share an ACL with other files. ACLs have the advantage of specifying detailed access permissions for multiple users and groups.
Quotas

VxFS supports per-user quotas that limit the use of two principal resources: files and data blocks. You can assign quotas for each of these resources. Each quota consists of two limits for each resource: a hard limit and a soft limit. The hard limit represents an absolute limit on data blocks or files. A user can never exceed the hard limit under any circumstances.
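As a sketch, enabling quotas is typically a short command sequence; the mount point and user name below are illustrative, and the commands must be run as root:

```shell
# Create the per-user quota limits file in the file system root
touch /mnt1/quotas

# Turn on quotas for the mounted VxFS file system
vxquotaon /mnt1

# Set soft/hard limits for a user interactively, then report usage
vxedquota user1
vxquota -v user1
```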
log. File system operations, such as allocating or deleting files, can originate from any node in the cluster. Installing VxFS and enabling the cluster feature does not create a cluster file system configuration. HP Serviceguard Storage Management environments require HP Serviceguard for file system clustering. To be a cluster mount, a file system must be mounted using the mount -o cluster option.
You can then configure policies that automatically relocate files from one volume to another, or relocate files by running file relocation commands. Having multiple volumes lets you determine where files are located, which can improve performance for applications that access specific types of files. DST functionality is a separately licensed feature and is available with the VRTSfppm package.
■ Tunable indirect data extent size
■ Integration with VxVM™
■ Support for large directories

Note: VxFS reduces the file lookup time in directories with an extremely large number of files.

About enhanced I/O performance

VxFS provides enhanced I/O performance by applying an aggressive I/O clustering policy, integrating with VxVM, and allowing application-specific parameters to be set on a per-file system basis.
This value defines the maximum size of a single direct I/O.

See the vxtunefs(1M) and tunefstab(4) manual pages.
Online system administration

VxFS provides command line interface (CLI) operations that are described throughout this guide and in manual pages. VxFS allows you to run a number of administration tasks while the file system is online. Two of the more important tasks are:

■ Defragmentation
■ File system resizing

About defragmentation

Free resources are initially aligned and allocated to files in an order that provides optimal performance.
and the file system. The vxresize command guarantees that the file system shrinks or grows along with the volume. Do not use the vxassist and fsadm_vxfs commands for this purpose.

See the vxresize(1M) manual page.

See the Veritas Volume Manager Administrator's Guide.
Chapter 2

VxFS performance: creating, mounting, and tuning file systems

This chapter includes the following topics:

■ Creating a VxFS file system
■ Mounting a VxFS file system
■ Tuning the VxFS file system
■ Monitoring free space
■ Tuning I/O

Creating a VxFS file system

When you create a file system with the mkfs command, you can select the following characteristics:

■ Block size
■ Intent log size

Block size

The unit of allocation in VxFS is a block.
You specify the block size when creating a file system by using the mkfs -o bsize option. The block size cannot be altered after the file system is created. The smallest available block size for VxFS is 1K. The default block size is 1024 bytes for file systems smaller than 1 TB, and 8192 bytes for file systems 1 TB or larger. Choose a block size based on the type of application being run.
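As a sketch, the block size is chosen once at creation time; the device path below is illustrative:

```shell
# Create a VxFS file system with an 8 KB block size; the block
# size cannot be changed after mkfs completes
mkfs -F vxfs -o bsize=8192 /dev/vx/rdsk/diskgroup/vol1
```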
■ tmplog
■ logsize
■ nodatainlog
■ blkclear
■ mincache
■ convosync
■ ioerror
■ largefiles|nolargefiles
■ cio
■ mntlock|mntunlock
■ tranflush

Caching behavior can be altered with the mincache option, and the behavior of O_SYNC and D_SYNC writes can be altered with the convosync option.

See the fcntl(2) manual page.

The delaylog and tmplog modes can significantly improve performance.
writing the new file contents to a temporary file and then renaming it on top of the target file.

The delaylog mode

The default logging mode is delaylog. In delaylog mode, the effects of most system calls other than write(2), writev(2), and pwrite(2) are guaranteed to be persistent approximately 15 to 20 seconds after the system call returns to the application.
The behavior of NFS servers on a VxFS file system is unaffected by the log and tmplog mount options, but not delaylog. In all cases except for tmplog, VxFS complies with the persistency requirements of the NFS v2 and NFS v3 standard.
The blkclear mode

The blkclear mode is used in increased data security environments. The blkclear mode guarantees that uninitialized storage never appears in files. The increased integrity is provided by clearing extents on disk when they are allocated within a file. This mode does not affect extending writes.
■ The mincache=direct, mincache=unbuffered, and mincache=dsync modes also flush file data on close, as mincache=closesync does.

Because the mincache=direct, mincache=unbuffered, and mincache=dsync modes change non-synchronous I/O to synchronous I/O, throughput can substantially degrade for small to medium size files with most applications.
See the open(2), fcntl(2), and vxfsio(7) manual pages.

Warning: Be very careful when using the convosync=closesync or convosync=delay mode because they actually change synchronous I/O into non-synchronous I/O. Applications that use synchronous I/O for data reliability may fail if the system crashes and synchronously written data is lost.
The disable policy

If disable is selected, VxFS disables the file system after detecting any I/O error. You must then unmount the file system and correct the condition causing the I/O error. After the problem is repaired, run fsck and mount the file system again. In most cases, a replay fsck is sufficient to repair the file system. A full fsck is required only in cases of structural damage to the file system's metadata.
confined to data extents. mdisable is the default ioerror mount option for cluster mounts.

The largefiles|nolargefiles option

This section includes the following topics:

■ Creating a file system with large files
■ Mounting a file system with large files
■ Managing a file system with large files

VxFS supports files larger than 2 gigabytes. The maximum file size that can be created is 2 terabytes.
is not to specify either option. After a file system is mounted, you can use the fsadm utility to change the large files option.
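The large files option can be set at each of these stages; as a sketch (device path and mount point are illustrative):

```shell
# Create a file system that supports files larger than 2 GB
mkfs -F vxfs -o largefiles /dev/vx/rdsk/diskgroup/vol1

# Mount it with large files enabled
mount -F vxfs -o largefiles /dev/vx/dsk/diskgroup/vol1 /mnt1

# Change the option on a mounted file system with fsadm
fsadm -F vxfs -o largefiles /mnt1
```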
The mntunlock option of the vxumount command reverses the mntlock option if you previously locked the file system.

Combining mount command options

Although mount options can be combined arbitrarily, some combinations do not make sense. The following examples provide some common and reasonable mount option combinations.
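One plausible combination, shown as a sketch (device path and mount point are illustrative), trades some performance for tighter data integrity:

```shell
# Full logging plus synchronous caching and write-conversion
# semantics, for applications that need strong data integrity
mount -F vxfs -o log,mincache=dsync,convosync=dsync \
    /dev/vx/dsk/diskgroup/vol1 /mnt1
```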
■ VxFS buffer cache high water mark
■ Number of links to a file
■ VxFS inode free time lag

Tuning inode table size

VxFS caches inodes in an inode table. There is a dynamic tunable in VxFS called vx_ninode that determines the number of entries in the inode table. You can dynamically change the value of vx_ninode by using the sam or kctune commands.

See the sam(1M) and kctune(1M) manual pages.
either the system memory size, which is the default, or the value of the tunable if explicitly set, whichever is larger. Thus, dynamically increasing the tunable to a value that is more than two times either the default value or the user-defined value, if larger, may cause performance degradation unless the system is rebooted.
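The tunable can be inspected and changed with kctune; the value below is illustrative:

```shell
# Display the current vx_ninode setting
kctune vx_ninode

# Set an explicit inode table size (takes effect dynamically)
kctune vx_ninode=128000
```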
VxFS buffer cache high water mark

VxFS maintains its own buffer cache in the kernel for frequently accessed file system metadata. This cache is different from the HP-UX kernel buffer cache that caches file data. The vx_bc_bufhwm dynamic, global, tunable parameter lets you change the VxFS buffer cache high water mark, that is, the maximum amount of memory that can be used to cache VxFS metadata.
See the sam(1M) and kctune(1M) manual pages.

You can also add an entry to the system configuration file, as shown in the following example:

vxfs_maxlink vx_maxlink 40000

This sets the value of vx_maxlink to 40,000 links.

VxFS inode free time lag

In VxFS, an inode is put on a freelist if it is not being used. The memory space for this unused inode can be freed if it stays on the freelist for a specified amount of time.
Full file systems may have an adverse effect on file system performance. Full file systems should therefore have some files removed, or should be expanded.

See the fsadm_vxfs(1M) manual page.

Monitoring fragmentation

Fragmentation reduces performance and availability. Regular use of fsadm's fragmentation reporting and reorganization facilities is therefore advisable.
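A typical reporting and reorganization cycle might look like the following sketch (mount point illustrative; run as root):

```shell
# Report directory (-D) and extent (-E) fragmentation
fsadm -F vxfs -D -E /mnt1

# Reorganize directories (-d) and extents (-e), then report again
fsadm -F vxfs -d -e -D -E /mnt1
```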
The “after” result is an indication of how well the reorganizer has performed. The degree of fragmentation should be close to the characteristics of an unfragmented file system. If not, it may be a good idea to resize the file system; full file systems tend to fragment and are difficult to defragment.
Note: Thin Reclamation is a slow process and may take several hours to complete, depending on the file system size. Thin Reclamation is not guaranteed to reclaim 100% of the free space.

You can track the progress of the Thin Reclamation process by using the vxtask list command when using the Veritas Volume Manager (VxVM) command vxdisk reclaim.

See the vxtask(1M) and vxdisk(1M) manual pages.
■ The mount command queries VxVM when the file system is mounted and downloads the I/O parameters.

If the default parameters are not acceptable or the file system is being used without VxVM, then the /etc/vx/tunefstab file can be used to set values for I/O parameters. The mount command reads the /etc/vx/tunefstab file and downloads any parameters specified for a file system.
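As a sketch (device path and values are illustrative), parameters can be made persistent in /etc/vx/tunefstab or changed on a mounted file system with vxtunefs:

```shell
# Example /etc/vx/tunefstab entry, read at mount time:
#   /dev/vx/dsk/diskgroup/vol1 read_pref_io=131072,read_nstream=4

# Change the same parameters on a mounted file system
vxtunefs -o read_pref_io=131072,read_nstream=4 /mnt1

# Display the current I/O parameters
vxtunefs /mnt1
```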
Table 2-1  Tunable VxFS I/O parameters

write_pref_io — The preferred write request size. The file system uses this in conjunction with the write_nstream value to determine how to do flush behind on writes. The default value is 64K.

read_nstream — The number of parallel read requests of size read_pref_io to have outstanding at one time.
fcl_keeptime — Specifies the minimum amount of time, in seconds, that the VxFS File Change Log (FCL) keeps records in the log. When the oldest 8K block of FCL records has been kept longer than the value of fcl_keeptime, it is purged from the FCL and the extents nearest to the beginning of the FCL file are freed.
fcl_winterval — Specifies the time, in seconds, that must elapse before the VxFS File Change Log (FCL) records a data overwrite, data extending write, or data truncate for a file. The ability to limit the number of repetitive FCL records for continuous writes to the same file is important for file system performance and for applications processing the FCL.
initial_extent_size — Changes the default initial extent size. VxFS determines, based on the first write to a new file, the size of the first extent to be allocated to the file. Normally the first extent is the smallest power of 2 that is larger than the size of the first write. If that power of 2 is less than 8K, the first extent allocated is 8K.
inode_aging_size — Specifies the minimum size to qualify a deleted inode for inode aging. Inode aging is used in conjunction with file system Storage Checkpoints to allow quick restoration of large, recently deleted files. For best performance, it is advisable to age only a limited number of larger files before completion of the removal process.
default_indir_size — On VxFS, files can have up to ten direct extents of variable size stored in the inode. After these extents are used up, the file must use indirect extents, which are a fixed size that is set when the file first uses indirect extents. These indirect extents are 8K by default.
read_ahead — The default for all VxFS read operations is to perform sequential read ahead. You can specify the read_ahead cache advisory to implement the VxFS enhanced read ahead functionality.
write_throttle — The write_throttle parameter is useful in special situations where a computer system has a combination of a large amount of memory and slow storage devices. In this configuration, sync operations, such as fsync(), may take long enough to complete that a system appears to hang.
Note: VxFS does not query VxVM with multiple volume sets. To improve I/O performance when using multiple volume sets, use the vxtunefs command.

If the file system is being used with a hardware disk array or volume manager other than VxVM, try to align the parameters to match the geometry of the logical disk.
Chapter 3

Extent attributes

This chapter includes the following topics:

■ About extent attributes
■ Commands related to extent attributes

About extent attributes

Veritas File System (VxFS) allocates disk space to files in groups of one or more adjacent blocks called extents. VxFS defines an application interface that allows programs to control various aspects of the extent allocation for a given file. The extent allocation policies associated with a file are referred to as extent attributes.
Some of the extent attributes are persistent and become part of the on-disk information about the file, while other attributes are temporary and are lost after the file is closed or the system is rebooted. The persistent attributes are similar to the file's permissions and are written in the inode for the file. When a file is copied, moved, or archived, only the persistent attributes of the source file are preserved in the new file.
smaller pieces. By erring on the side of minimizing fragmentation for the file system, files may become so non-contiguous that their I/O characteristics would degrade.

Fixed extent sizes are particularly appropriate in the following situations:

■ If a file is large and contiguous, a large fixed extent size can minimize the number of extents in the file.
Write operations beyond reservation

A reservation request can specify that no allocations can take place after a write operation fills the last available block in the reservation. This request can be used in a way similar to the function of the ulimit command to prevent a file's uncontrolled growth.

Reservation trimming

A reservation request can specify that any unused reservation be released when the file is closed.
file system does not support extent attributes, has a different block size than the source file system, or lacks free extents appropriate to satisfy the extent attribute requirements.
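The setext and getext commands apply and display these attributes; as a sketch, the path and sizes below are illustrative, and sizes are given in file system blocks:

```shell
# Reserve space for a file, releasing any unused reservation
# when the file is closed (trim)
touch /mnt1/dbfile
setext -F vxfs -r 2048 -f trim /mnt1/dbfile

# Give the file a fixed extent size
setext -F vxfs -e 1024 /mnt1/dbfile

# Display the file's extent attributes
getext -F vxfs /mnt1/dbfile
```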
Chapter 4

VxFS I/O Overview

This chapter includes the following topics:

■ About VxFS I/O
■ Buffered and Direct I/O
■ Concurrent I/O
■ Cache advisories
■ Freezing and thawing a file system
■ Getting the I/O size

About VxFS I/O

VxFS processes two basic types of file system I/O:

■ Sequential
■ Random or I/O that is not sequential

For sequential I/O, VxFS employs a read-ahead policy by default when the application is reading data. For writing, it allocates contiguous blocks if possible.
Buffered and Direct I/O

VxFS responds with read-ahead for sequential read I/O. This results in buffered I/O. The data is prefetched and retained in buffers for the application. This is the default VxFS behavior. On the other hand, direct I/O does not buffer the data when the I/O to the underlying device is completed. This saves system resources like memory and CPU usage. Direct I/O is possible only when alignment and sizing criteria are satisfied.
Direct I/O versus synchronous I/O

Because direct I/O maintains the same data integrity as synchronous I/O, it can be used in many applications that currently use synchronous I/O. If a direct I/O request does not allocate storage or extend the file, the inode is not immediately written.

Direct I/O CPU overhead

The CPU cost of direct I/O is about the same as a raw disk transfer.
transferred to disk synchronously before the write returns to the user. If the file is not extended by the write, the times are updated in memory, and the call returns to the user. If the file is extended by the operation, the inode is written before the write returns. The direct I/O and VX_DSYNC advisories are maintained on a per-file-descriptor basis.

Data synchronous I/O vs.
■ By using the cio mount option. The read(2) and write(2) operations occurring on all of the files in this particular file system will use concurrent I/O.

See “The cio option” on page 43.

See the mount_vxfs(1M) manual page.

Cache advisories

VxFS allows an application to set cache advisories for use when accessing files.
When the file system is frozen, any attempt to use the frozen file system, except for a VX_THAW ioctl command, is blocked until a process executes the VX_THAW ioctl command or the time-out on the freeze expires.

Getting the I/O size

VxFS provides the VX_GET_IOPARAMETERS ioctl to get the recommended I/O sizes to use on a file system. This ioctl can be used by the application to make decisions about the I/O sizes issued to VxFS for a file or file device.
Chapter 5

Online backup using file system snapshots

This chapter includes the following topics:

■ About snapshot file systems
■ Snapshot file system backups
■ Creating a snapshot file system
■ Backup examples
■ Snapshot file system performance
■ Differences between snapshots and Storage Checkpoints
■ About snapshot file system disk structure
■ How a snapshot file system works

About snapshot file systems

A snapshot file system is an exact image of a VxFS file system, referred to as the snap
its snapshots are unmounted. Although it is possible to have multiple snapshots of a file system made at different times, it is not possible to make a snapshot of a snapshot.

Note: A snapshot file system ceases to exist when unmounted. If mounted again, it is actually a fresh snapshot of the snapped file system. A snapshot file system must be unmounted before its dependent snapped file system can be unmounted.
Creating a snapshot file system

You create a snapshot file system by using the -o snapof= option of the mount command. The -o snapsize= option may also be required if the device you are mounting does not identify the device size in its disk label, or if you want a size smaller than the entire device.
To create a backup using a snapshot file system:

1  To back up files changed within the last week using cpio:

   # mount -F vxfs -o snapof=/home,snapsize=100000 \
     /dev/vx/dsk/fsvol/vol1 /backup/home
   # cd /backup
   # find home -ctime -7 -depth -print | cpio -oc > /dev/rmt/0m
   # umount /backup/home

2  To do a level 3 backup of /dev/vx/dsk/fsvol/vol1 and collect those files that have changed in the current directory:

   # vxdump 3f - /dev/
application running an online transaction processing (OLTP) workload on a snapped file system was measured at about 15 to 20 percent slower than a file system that was not snapped.
Figure 5-1  The snapshot disk structure: super-block, bitmap, blockmap, data block

The super-block is similar to the super-block of a standard VxFS file system, but the magic number is different and many of the fields are not applicable. The bitmap contains one bit for every block on the snapped file system. Initially, all bitmap entries are zero.
data for block n can be found on the snapshot file system. The blockmap entry for block n is changed from 0 to the block number on the snapshot file system containing the old data. A subsequent read request for block n on the snapshot file system will be satisfied by checking the bitmap entry for block n and reading the data from the indicated block on the snapshot file system, instead of from block n on the snapped file system.
Chapter 6 Storage Checkpoints
This chapter includes the following topics:
■ About Storage Checkpoints
■ How a Storage Checkpoint works
■ Types of Storage Checkpoints
■ Storage Checkpoint administration
■ Space management considerations
■ Restoring a file system from a Storage Checkpoint
■ Storage Checkpoint quotas
About Storage Checkpoints
Veritas File System provides a Storage Checkpoint feature that quickly creates a persistent image of a file system at an exact point in time.
See “How a Storage Checkpoint works” on page 85. Unlike a disk-based mirroring technology that requires separate storage space, Storage Checkpoints minimize the use of disk space by keeping the Storage Checkpoint within the free space already available to the file system.
availability and data integrity by increasing the frequency of backup and replication solutions. Storage Checkpoints can be taken in environments with a large number of files, such as file servers with millions of files, with little adverse impact on performance. Because the file system does not remain frozen during Storage Checkpoint creation, applications can access the file system even while the Storage Checkpoint is taken.
Figure 6-1 Primary fileset and its Storage Checkpoint (the /database directory holding emp.dbf and jun.dbf, mirrored by the Storage Checkpoint's view of the same hierarchy)
In Figure 6-2, a square represents each block of the file system. This figure shows a Storage Checkpoint containing pointers to the primary fileset at the time the Storage Checkpoint is taken, as in Figure 6-1.
data is copied to the Storage Checkpoint before the new data is written. When a write operation changes a specific data block in the primary fileset, the old data is first read and copied to the Storage Checkpoint before the primary fileset is updated. Subsequent writes to the specified data block on the primary fileset do not result in additional updates to the Storage Checkpoint because the old data needs to be saved only once.
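This copy-on-write behavior can be sketched with plain files, where a directory stands in for the checkpoint's block store: before the first update to a block, the old data is saved into the checkpoint; later updates to the same block skip the copy. All names below are illustrative:

```shell
work=$(mktemp -d)
mkdir -p "$work/primary" "$work/ckpt"
echo "v1" > "$work/primary/block"

# Copy-on-write: before the first update to a block, save the old
# data into the checkpoint; subsequent updates skip the copy.
cow_write() {   # cow_write <block> <new-data>
  if [ ! -f "$work/ckpt/$1" ]; then
    cp "$work/primary/$1" "$work/ckpt/$1"
  fi
  echo "$2" > "$work/primary/$1"
}

cow_write block "v2"
cow_write block "v3"

cat "$work/primary/block"   # v3
cat "$work/ckpt/block"      # v1  (saved once, at the first write)
```

The checkpoint ends up holding exactly one copy of the block, frozen at its pre-first-write contents, which is why repeated writes to a hot block cost only one copy.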
Types of Storage Checkpoints
You can create the following types of Storage Checkpoints:
■ Data Storage Checkpoints
■ nodata Storage Checkpoints
■ Removable Storage Checkpoints
■ Non-mountable Storage Checkpoints
Data Storage Checkpoints
A data Storage Checkpoint is a complete image of the file system at the time the Storage Checkpoint is created. This type of Storage Checkpoint contains the file system metadata and file data blocks.
Figure 6-4 Updates to a nodata clone (primary fileset blocks A’ B C D E, with the Storage Checkpoint tracking which block changed)
See “Showing the difference between a data and a nodata Storage Checkpoint” on page 95.
Removable Storage Checkpoints
A removable Storage Checkpoint can “self-destruct” under certain conditions when the file system runs out of space. See “Space management considerations” on page 101.
Use this type of Storage Checkpoint as a security feature that prevents other applications from accessing and modifying the Storage Checkpoint.
Storage Checkpoint administration
Storage Checkpoint administrative operations require the fsckptadm utility. See the fsckptadm(1M) manual page. You can use the fsckptadm utility to create and remove Storage Checkpoints, change attributes, and ascertain statistical data.
For disk layout Version 6 or 7, multiply the number of inodes by 1 byte, and add 1 or 2 megabytes to get the approximate amount of space required. You can determine the number of inodes with the fsckptadm utility as above. Using the output from the example for disk layout Version 5, the approximate amount of space required by the metadata is just over one or two megabytes (23,872 x 1 byte, plus 1 or 2 megabytes).
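The estimate above is simple arithmetic and can be scripted; the inode count is the 23,872 figure from the example, and the 2 MB figure takes the upper end of the stated 1-2 megabyte overhead:

```shell
# Approximate Storage Checkpoint metadata space for disk layout
# Version 6 or 7: 1 byte per inode plus a 1-2 MB fixed overhead.
inodes=23872
overhead=$((2 * 1024 * 1024))        # using the 2 MB upper bound
estimate=$((inodes * 1 + overhead))
echo "$estimate bytes"
```

At roughly 2.1 MB for about 24,000 inodes, the per-inode term is negligible next to the fixed overhead until the fileset reaches millions of inodes.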
Removing a Storage Checkpoint
You can delete a Storage Checkpoint by specifying the remove keyword of the fsckptadm command. Specifically, you can use either the synchronous or asynchronous method of removing a Storage Checkpoint; the asynchronous method is the default. The synchronous method entirely removes the Storage Checkpoint and returns all of its blocks to the file system before completing the fsckptadm operation.
■ To mount a Storage Checkpoint of a file system, first mount the file system itself.
■ To unmount a file system, first unmount all of its Storage Checkpoints.
Warning: If you create a Storage Checkpoint for backup purposes, do not mount it as a writable Storage Checkpoint. You will lose the point-in-time image if you accidentally write to the Storage Checkpoint.
A Storage Checkpoint is mounted on a special pseudo device.
/dev/vx/dsk/fsvol/vol1:may_23 /fsvol_may_23 vxfs ckpt=may_23 0
■ To mount a Storage Checkpoint of a cluster file system, you must also use the -o cluster option:
# mount -F vxfs -o cluster,ckpt=may_23 \
/dev/vx/dsk/fsvol/vol1:may_23 /fsvol_may_23
You can only mount a Storage Checkpoint cluster-wide if the file system that the Storage Checkpoint belongs to is also mounted cluster-wide.
file system, use the asynchronous method to mark the Storage Checkpoint you want to convert for a delayed conversion. In this case, the actual conversion is delayed until the Storage Checkpoint becomes the oldest Storage Checkpoint in the file system, or until all of the older Storage Checkpoints have been converted to nodata Storage Checkpoints.
4 Examine the content of the original file and the Storage Checkpoint file:
# cat /mnt0/file
hello, world
# cat /mnt0@5_30pm/file
hello, world
5 Change the content of the original file:
# echo "goodbye" > /mnt0/file
6 Examine the content of the original file and the Storage Checkpoint file.
Converting multiple Storage Checkpoints
You can convert older Storage Checkpoints on the same file system to nodata Storage Checkpoints.
3 Attempt to synchronously convert the latest Storage Checkpoint to a nodata Storage Checkpoint. The attempt fails because the Storage Checkpoints older than the latest Storage Checkpoint are data Storage Checkpoints, namely the Storage Checkpoints old, older, and oldest:
# fsckptadm -s set nodata latest /mnt0
UX:vxfs fsckptadm: ERROR: V-3-24632: Storage Checkpoint set failed on latest.
To create a delayed nodata Storage Checkpoint
1 Remove the latest Storage Checkpoint.
3 Convert the oldest Storage Checkpoint to a nodata Storage Checkpoint; this succeeds because no older Storage Checkpoints containing data remain in the file system.
Note: This step can be done synchronously.
4 Remove the older and old Storage Checkpoints.
■ Remove the oldest Storage Checkpoint first.
Restoring a file system from a Storage Checkpoint
Mountable data Storage Checkpoints on a consistent and undamaged file system can be used by backup and restore applications to restore either individual files or an entire file system.
To restore a file from a Storage Checkpoint
1 Create the Storage Checkpoint CKPT1 of /home.
$ fsckptadm create CKPT1 /home
2 Mount Storage Checkpoint CKPT1 on the directory /home/checkpoints/mar_4.
$ mount -F vxfs -o ckpt=CKPT1 /dev/vx/dsk/dg1/vol01:CKPT1 \
/home/checkpoints/mar_4
3 Delete the file MyFile.txt from your home directory.
$ cd /home/users/me
$ rm MyFile.txt
To restore a file system from a Storage Checkpoint
1 Run the fsckpt_restore command:
# fsckpt_restore -l /dev/vx/dsk/dg1/vol2
/dev/vx/dsk/dg1/vol2:
UNNAMED:
ctime = Thu 08 May 2004 06:28:26 PM PST
mtime = Thu 08 May 2004 06:28:26 PM PST
flags = largefiles, file system root
CKPT6:
ctime = Thu 08 May 2004 06:28:35 PM PST
mtime = Thu 08 May 2004 06:28:35 PM PST
flags = largefiles
CKPT5:
ctime = Thu 08 May 2004 06:28:34 PM PST
mtime =
2 In this example, select the Storage Checkpoint CKPT3 as the new root fileset:
Select Storage Checkpoint for restore operation
or (EOF) to exit or to list Storage Checkpoints: CKPT3
CKPT3:
ctime = Thu 08 May 2004 06:28:31 PM PST
mtime = Thu 08 May 2004 06:28:36 PM PST
flags = largefiles
UX:vxfs fsckpt_restore: WARNING: V-3-24640: Any file system changes or Storage Checkpoints made after Thu 08 May 2004 06:28:31 PM
3 Type y to restore the file system from CKPT3:
Restore the file system from Storage Checkpoint CKPT3? (ynq) y
(Yes)
UX:vxfs fsckpt_restore: INFO: V-3-23760: File system restored from CKPT3
If the filesets are listed at this point, the listing shows that the former UNNAMED root fileset and CKPT6, CKPT5, and CKPT4 were removed, and that CKPT3 is now the primary fileset. CKPT3 is now the fileset that will be mounted by default.
Storage Checkpoint quotas
VxFS provides options to the fsckptadm command interface to administer Storage Checkpoint quotas. Storage Checkpoint quotas set the following limits on the number of blocks used by all Storage Checkpoints of a primary fileset:
hard limit
An absolute limit that cannot be exceeded. If a hard limit is exceeded, all further allocations on any of the Storage Checkpoints fail, but existing Storage Checkpoints are preserved.
Chapter 7 Quotas
This chapter includes the following topics:
■ About quota limits
■ About quota files on Veritas File System
■ About quota commands
■ Using quotas
About quota limits
Veritas File System (VxFS) supports user quotas. The quota system limits the use of two principal resources of a file system: files and data blocks. For each of these resources, you can assign quotas to individual users to limit their usage.
See “About quota files on Veritas File System” on page 110. The quota soft limit can be exceeded when VxFS preallocates space to a file. Quota limits cannot exceed two terabytes on a Version 5 disk layout. See “About extent attributes” on page 63.
About quota files on Veritas File System
A quotas file (named quotas) must exist in the root directory of a file system for any of the quota commands to work.
Turning on quotas
To use the quota functionality on a file system, quotas must be turned on. You can turn quotas on at mount time or after a file system is mounted.
To turn on quotas
◆ To turn on user quotas for a VxFS file system, enter:
# quotaon /mount_point
Turning on quotas at mount time
Quotas can be turned on with the mount command when you mount a file system.
To modify time limits
◆ Specify the -t option to modify time limits for any user:
# edquota -t
Viewing disk quotas and usage
Use the quota command to view a user's disk quotas and usage on VxFS file systems.
Chapter 8 File Change Log
This chapter includes the following topics:
■ About File Change Log
■ About the File Change Log file
■ File Change Log administrative interface
■ File Change Log programmatic interface
■ Summary of API functions
■ Reverse path name lookup
About File Change Log
The VxFS File Change Log (FCL) tracks changes to files and directories in a file system.
About the File Change Log file
The File Change Log records file system changes such as creates, links, unlinks, renames, data appends, data overwrites, data truncations, extended attribute modifications, holes punched, and miscellaneous file property updates.
Note: FCL is supported only on disk layout Version 6 and later.
FCL stores changes in a sparse file in the file system namespace. The FCL file is located in mount_point/lost+found/changelog.
File Change Log administrative interface
The FCL can be set up and tuned through the fcladm and vxtunefs VxFS administrative commands. See the fcladm(1M) and vxtunefs(1M) manual pages.
The FCL keywords for fcladm are as follows:
clear
Disables the recording of the audit, open, close, and statistical events after it has been set.
dump
Creates a regular file image of the FCL file that can be downloaded to an off-host processing system.
fcl_keeptime
Specifies the duration in seconds that FCL records stay in the FCL file before they can be purged. The first records to be purged are the oldest ones, which are located at the beginning of the file. Additionally, records at the beginning of the file can be purged if allocation to the FCL file exceeds fcl_maxalloc bytes. The default value of fcl_keeptime is 0.
# fcladm off mount_point
To remove the FCL file for a mounted file system, on which FCL must be turned off, type the following:
# fcladm rm mount_point
To obtain the current FCL state for a mounted file system, type the following:
# fcladm state mount_point
To enable tracking of file opens along with access information with each event in the FCL, type the following:
# fcladm set fileopen,accessinfo mount_point
To stop tracking file I/O statistics
Backward compatibility
Providing API access to the FCL feature allows backward compatibility for applications. The API allows applications to parse the FCL file independently of FCL layout changes. Even if the hidden disk layout of the FCL changes, the API automatically translates the returned data to match the expected output record.
return EIO;
}
if (fclsb.
vxfs_fcl_seek()
Extracts data from the specified cookie and then seeks to the specified offset.
vxfs_fcl_seektime()
Seeks to the first record in the FCL after the specified time.
Reverse path name lookup
The reverse path name lookup feature obtains the full path name of a file or directory from the inode number of that file or directory.
Chapter 9 Multi-volume file systems
This chapter includes the following topics:
■ About multi-volume support
■ About volume types
■ Features implemented using multi-volume support
■ About volume sets
■ Creating multi-volume file systems
■ Converting a single volume file system to a multi-volume file system
■ Removing a volume from a multi-volume file system
■ About allocation policies
■ Assigning allocation policies
■ Querying allocation policies
■ Assigning pattern tables to directories
About multi-volume support
VxFS provides support for multi-volume file systems when used in conjunction with the Veritas Volume Manager. Using multi-volume support (MVS), a single file system can be created over multiple volumes, each volume having its own properties. For example, it is possible to place metadata on mirrored storage while placing file data on better-performing volume types such as RAID-1+0 (striped and mirrored).
See “About Dynamic Storage Tiering” on page 143.
■ Placing the VxFS intent log on its own volume to minimize disk head movement and thereby increase performance. This functionality can be used to migrate from the Veritas QuickLog™ feature.
■ Separating Storage Checkpoints so that data allocated to a Storage Checkpoint is isolated from the rest of the file system.
■ Separating metadata from file data.
Volume availability is supported only on a file system with disk layout Version 7 or later.
Note: Do not mount a multi-volume file system with the ioerror=disable or ioerror=wdisable mount options if the volumes have different availability properties. Symantec recommends the ioerror=mdisable mount option both for cluster mounts and for local mounts.
About volume sets
Veritas Volume Manager exports a data object called a volume set.
3 List the component volumes of the previously created volume set:
# vxvset -g dg1 list myvset
VOLUME INDEX LENGTH STATE CONTEXT
vol1 0 20480 ACTIVE -
vol2 1 102400 ACTIVE -
vol3 2 102400 ACTIVE -
4 Use the ls command to see that when a volume set is created, the volumes contained by the volume set are removed from the namespace and are instead accessed through the volume set name:
# ls -l /dev/vx/rdsk/rootdg/myvset
crw------- 1 r
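The LENGTH column of the vxvset listing is in sectors, so the total space contributed by the component volumes is a simple column sum. A small awk pass over the listing above (the three rows are embedded here verbatim so the sketch is self-contained):

```shell
# Total the LENGTH column (third field) of the vxvset listing;
# values are sector counts taken from the listing above.
total=$(printf '%s\n' \
  "vol1 0 20480 ACTIVE -" \
  "vol2 1 102400 ACTIVE -" \
  "vol3 2 102400 ACTIVE -" |
  awk '{ sum += $3 } END { print sum }')
echo "$total sectors"    # 225280 sectors
```

In practice the rows would come from `vxvset -g dg1 list myvset` piped through `tail -n +2` to skip the header.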
Example of creating a multi-volume file system
The following procedure is an example of creating a multi-volume file system.
4 List the volume availability flags using the fsvoladm command:
# fsvoladm queryflags /mnt1
volname flags
vol1 metadataok
vol2 dataonly
vol3 dataonly
vol4 dataonly
vol5 dataonly
5 Increase the metadata space in the file system using the fsvoladm command:
# fsvoladm clearflags dataonly /mnt1 vol2
# fsvoladm queryflags /mnt1
volname flags
vol1 metadataok
vol2 metadataok
vol3 dataonly
vol4 dataonly
vol5 dataonly
4 If the disk layout version is less than 6, upgrade to Version 7.
Forcibly removing a volume
If you must forcibly remove a volume from a file system, such as when a volume is permanently destroyed and you want to clean up the dangling pointers to the lost volume, use the fsck -o zapvol=volname command. The zapvol option performs a full file system check and zaps all inodes that refer to the specified volume. The fsck command prints the inode numbers of all files that the command destroys; the file names are not printed.
To assign allocation policies
1 List the volumes in the volume set:
# vxvset -g rootdg list myvset
VOLUME INDEX LENGTH STATE CONTEXT
vol1 0 102400 ACTIVE -
vol2 1 102400 ACTIVE -
vol3 2 102400 ACTIVE -
vol4 3 102400 ACTIVE -
2 Create a file system on the myvset volume set and mount the file system:
# mkfs -F vxfs /dev/vx/rdsk/rootdg/myvset
version 7 layout
204800 sectors, 102400 blocks of size 1024, log size 1024 blocks
largefiles supported
# m
3 Define three allocation policies, v1, bal_34, and rr_all, that allocate from the volumes using different methods:
# fsapadm define /mnt1 v1 vol1
# fsapadm define -o balance -c 64k /mnt1 bal_34 vol3 vol4
# fsapadm define -o round-robin /mnt1 rr_all vol1 vol2 vol3 vol4
# fsapadm list /mnt1
name order flags chunk
rr_all round-robin 0 0
bal_34 balance 0 64K
v1 as-given 0 0
Assigning pattern tables to directories
A pattern table contains patterns against which a file's name and creating process's UID and GID are matched as a file is created in a specified directory. The first successful match is used to set the allocation policies of the file, taking precedence over inherited per-file allocation policies. See the fsapadm(1M) manual page.
To assign pattern tables to directories
1 Define two allocation policies called mydata and mymeta to refer to the vol1 and vol2 volumes:
# fsapadm define /mnt1 mydata vol1
# fsapadm define /mnt1 mymeta vol2
2 Assign the pattern table:
# fsapadm assignfspat -F mypatternfile /mnt1
Allocating data
The following script creates a large number of files to demonstrate the benefit of allocating data:
i=1
while [ $i -lt 1000 ]
do
dd if=/dev/zero of=/mnt1/$i bs=65536 count=1
i=$((i + 1))
done
Allocating data from vol1 to vol2
1 Define an allocation policy, lf_12, that allocates user data to the least full volume between vol1 and vol2:
# fsapadm define -o least-full /mnt1 lf_12 vol1 vol2
2 Assign the allocation policy lf_12 as the data allocation policy to the file system mounted at /mnt1:
# fsapadm assignfs /mnt1 lf_12 ''
Metadata allocations use the default policy, as indicated by the empty string ('').
To encapsulate a volume
1 List the volumes:
# vxvset -g dg1 list myvset
VOLUME INDEX LENGTH STATE CONTEXT
vol1 0 102400 ACTIVE -
vol2 1 102400 ACTIVE -
The volume set has two volumes.
6 Encapsulate dbvol:
# fsvoladm encapsulate /mnt1/dbfile dbvol 100m
# ls -l /mnt1/dbfile
-rw------- 1 root other 104857600 May 22 11:30 /mnt1/dbfile
7 Examine the contents of dbfile to see that it can be accessed as a file:
# head -2 /mnt1/dbfile
root:x:0:1:Super-User:/:/sbin/sh
daemon:x:1:1::/:
The passwd file that was written to the raw volume is now visible in the new file.
volume name, logical offset, and size of data extents, or the volume name and size of indirect extents associated with a file on a multi-volume file system. The fsvmap command maps volumes to the files that have extents on those volumes. See the fsmap(1M) and fsvmap(1M) manual pages. The fsmap command requires open() permission for each file or directory specified. Root permission is required to report the list of files with extents on a particular volume.
Load balancing
An allocation policy with the balance allocation order can be defined and assigned to files that must have their allocations distributed at random between a set of specified volumes. Each extent associated with these files is limited to a maximum size that is defined as the required chunk size in the allocation policy. The distribution of the extents is mostly equal if none of the volumes are full or disabled.
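Because extents are capped at the policy's chunk size, the number of extents a file breaks into follows directly from its size. A rough sketch, using example figures (a 10 MB file and a 1 MB chunk, both assumed for illustration):

```shell
# Number of chunk-sized extents for a file under a balance policy.
filesize=$((10 * 1024 * 1024))    # 10 MB file (example figure)
chunk=$((1024 * 1024))            # 1 MB chunk size from the policy
chunks=$(( (filesize + chunk - 1) / chunk ))    # round up
echo "$chunks extents of at most $chunk bytes"
```

With ten 1 MB extents scattered across, say, four volumes, each volume ends up with two or three extents, which is the "mostly equal" distribution described above.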
extents on the volumes being removed are automatically relocated to other volumes within the policy. The following example redefines a policy that has four volumes by adding two new volumes, removing an existing volume, and enforcing the policy for rebalancing.
Note: Steps 5, 6, 7, and 8 are optional, and can be performed if you prefer to remove the wrapper of the volume set object.
7 Edit the /etc/fstab file to replace the volume set name, vset1, with the volume device name, vol1.
Chapter 10 Dynamic Storage Tiering
This chapter includes the following topics:
■ About Dynamic Storage Tiering
■ Placement classes
■ Administering placement policies
■ File placement policy grammar
■ File placement policy rules
■ Calculating I/O temperature and access temperature
■ Multiple criteria in file placement policy rule statements
■ File placement policy rule and statement ordering
■ File placement policies and extending files
About Dynamic Storage Tiering
VxFS uses multi-tier o
Note: Some of the commands have changed or been removed between the 4.1 release and the 5.0 release to make placement policy management more user-friendly. The following commands have been removed: fsrpadm, fsmove, and fssweep. The output of the queryfile, queryfs, and list options of the fsapadm command now prints the allocation order by name instead of number.
Placement classes
A placement class is a Dynamic Storage Tiering attribute of a given volume in a volume set of a multi-volume file system. This attribute is a character string, and is known as a volume tag. A volume can have different tags, one of which can be the placement class. The placement class tag makes a volume distinguishable by DST. Volume tags are organized as hierarchical name spaces in which periods separate the levels of the hierarchy.
Tagging volumes as placement classes
The following example tags the vsavola volume as placement class tier1, vsavolb as placement class tier2, vsavolc as placement class tier3, and vsavold as placement class tier4 using the vxvoladm command.
To tag volumes
◆ Tag the volumes as placement classes:
# vxvoladm -g cfsdg settag vsavola vxfs.placement_class.tier1
# vxvoladm -g cfsdg settag vsavolb vxfs.placement_class.tier2
for which each document is the current active policy. When a policy document is updated, SFMS can assign the updated document to all file systems whose current active policies are based on that document. By default, SFMS does not update file system active policies that have been created or modified locally, that is, by the hosts that control the placement policies' file systems.
Querying which files will be affected by enforcing a placement policy
The following example uses the fsppadm query command to generate a list of files that will be affected by enforcing a placement policy. The command provides details about where the files currently reside, to where the files will be relocated, and which rule in the placement policy applies to the files.
You can specify the -T option to specify the placement classes that contain files for the fsppadm command to sweep and relocate selectively. You can specify the -T option only if the policy uses the Prefer criteria for IOTEMP. See the fsppadm(1M) manual page.
File placement policy grammar
VxFS allocates and relocates files within a multi-volume file system based on properties in the file system metadata that pertain to the files. Placement decisions may be based on file name, directory of residence, time of last access, access frequency, file size, and ownership. An individual file system's criteria for allocating and relocating files are expressed in the file system's file placement policy.
SELECT statement
The VxFS placement policy rule SELECT statement designates the collection of files to which a rule applies.
Either an exact file name or a pattern using a single wildcard character (*). For example, the pattern “abc*” denotes all files whose names begin with “abc”. The pattern “abc.*” denotes all files whose names are exactly “abc” followed by a period and any extension. The pattern “*abc” denotes all files whose names end in “abc”, even if the name is all or part of an extension. The pattern “*.
In the following example, only files that reside in either the ora/db or the crash/dump directory, and whose owner is either user1 or user2, are selected for possible action.
A rule may include multiple SELECT statements.
the last rule in the policy document on which the file system's active placement policy is based should specify * as the only selection criterion in its SELECT statement, and a CREATE statement naming the desired placement class for files not selected by other rules.
space for new files to which the rule applies on the specified placement classes. Failing that, VxFS resorts to its internal space allocation algorithms, so file allocation does not fail unless there is no available space anywhere in the file system's volume set.
The <BALANCE_SIZE> element with a value of one megabyte is specified for allocations on tier2 volumes. For files allocated on tier2 volumes, the first megabyte would be allocated on the first volume, the second on the second volume, and so forth.
A RELOCATE statement contains the following clauses:
<FROM>
An optional clause that contains a list of placement classes from whose volumes designated files should be relocated if the files meet the conditions specified in the clause.
<TO>
Indicates the placement classes to which qualifying files should be relocated. Unlike the source placement class list in a <FROM> clause, placement classes in a <TO> clause are specified in priority order. Files are relocated to volumes in the first specified placement class if possible, to the second if not, and so forth.
<MODAGE>
This criterion is met when files are unmodified for a designated period or during a designated period relative to the time at which the fsppadm enforce command was issued.
<SIZE>
This criterion is met when files exceed or drop below a designated size or fall within a designated size range.
<IOTEMP>
This criterion is met when files exceed or drop below a designated I/O temperature, or fall within a designated I/O temperature range.
max_access_age
min_modification_age
max_modification_age
min_size
max_size
min_I/O_temperature
Both the <ACCAGE> and <MODAGE> elements require Flags attributes to direct their operation. For <ACCAGE>, the following Flags attribute values may be specified:
gt
The time of last access must be greater than the specified interval.
eq
The time of last access must be equal to the specified interval.
gteq
The time of last access must be greater than or equal to the specified interval.
For <MODAGE>, the following Flags attribute values may be specified.
GB
Gigabytes
Specifying the I/O temperature relocation criterion
The I/O temperature relocation criterion, <IOTEMP>, causes files to be relocated if their I/O temperatures rise above or drop below specified values over a specified period immediately prior to the time at which the fsppadm enforce command was issued. A file's I/O temperature is a measure of the read, write, or total I/O activity against it normalized to the file's size.
I/O temperature is a softer measure of I/O activity than access age. With access age, a single access to a file resets the file's atime to the current time. In contrast, a file's I/O temperature decreases gradually as time passes without the file being accessed, and increases gradually as the file is accessed periodically.
The files designated by the rule's SELECT statement that reside on volumes in placement class tier1 at the time the fsppadm enforce command executes would be unconditionally relocated to volumes in placement class tier2 as long as space permitted. This type of rule might be used, for example, with applications that create and access new files but seldom access existing files once they have been processed.
The following example illustrates a possible companion rule that relocates files from tier2 volumes to tier1 ones based on their I/O temperatures. This rule might be used to return files that had been relocated to tier2 volumes due to inactivity to tier1 volumes when application activity against them increases.
This rule relocates files whose 3-day I/O temperatures are less than 4 and which reside on tier1 volumes.
This rule relocates files smaller than 10 megabytes to tier1 volumes, files between 10 and 100 megabytes to tier2 volumes, and files larger than 100 megabytes to tier3 volumes.
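A placement policy rule implementing the three-way size split described above might be written as follows. This XML is a sketch only: the element names, nesting, and the Units and Flags attribute values are assumptions based on the general DST policy grammar, and should be checked against the fsppadm(1M) manual page and the placement policy DTD shipped with your release.

```xml
<RELOCATE>
  <TO>
    <DESTINATION><CLASS>tier1</CLASS></DESTINATION>
  </TO>
  <WHEN>
    <SIZE Units="MB"><MAX Flags="lt">10</MAX></SIZE>
  </WHEN>
</RELOCATE>
<RELOCATE>
  <TO>
    <DESTINATION><CLASS>tier2</CLASS></DESTINATION>
  </TO>
  <WHEN>
    <SIZE Units="MB">
      <MIN Flags="gteq">10</MIN>
      <MAX Flags="lt">100</MAX>
    </SIZE>
  </WHEN>
</RELOCATE>
<RELOCATE>
  <TO>
    <DESTINATION><CLASS>tier3</CLASS></DESTINATION>
  </TO>
  <WHEN>
    <SIZE Units="MB"><MIN Flags="gteq">100</MIN></SIZE>
  </WHEN>
</RELOCATE>
```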
The first DELETE statement unconditionally deletes files designated by the rule's SELECT statement that reside on tier3 volumes when the fsppadm enforce command is issued. The absence of a <WHEN> clause in the DELETE statement indicates that deletion of designated files is unconditional.
Calculating I/O temperature and access temperature
If the intent is for files that have experienced I/O activity in the recent past to be relocated to higher performing, perhaps more failure tolerant storage, ACCAGE is too coarse a filter.
The period of interest is the interval between the time at which the fsppadm enforce command was issued and that time minus the largest interval value specified in any <PERIOD> element in the active policy.
transfer activity, which is the sum of bytes read and bytes written, should be used in the computation. For example, a 50 megabyte file that experienced less than 150 megabytes of data transfer over the 4-day period immediately preceding the fsppadm enforce scan would be a candidate for relocation. VxFS considers files that experience no activity over the period of interest to have an I/O temperature of zero.
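The arithmetic in the example above can be sketched in shell. The formula shown, total bytes transferred during the period divided by the file size, is an assumption inferred from the example figures; the exact normalization VxFS applies should be confirmed against the fsppadm(1M) documentation.

```shell
# Sketch of the I/O temperature calculation implied by the example above.
# Assumption: temperature = bytes transferred during the period / file size,
# both expressed in megabytes.
file_size_mb=50          # size of the file
bytes_transferred_mb=150 # read + write activity over the 4-day period
io_temp=$((bytes_transferred_mb / file_size_mb))
echo "I/O temperature: $io_temp"
# A file whose temperature falls below the policy's minimum value
# (4 in the rule above) would qualify for relocation.
```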
period are to be relocated to tier1 volumes. Bytes written to the file during the period of interest are not part of this calculation. Using I/O temperature rather than a binary indicator of activity as a criterion for file relocation gives administrators a granular level of control over automated file relocation that can be used to attune policies to application requirements.
Multiple criteria in file placement policy rule statements
In the following example, a file must reside in one of db/datafiles, db/indexes, or db/logs and be owned by one of DBA_Manager, MFG_DBA, or HR_DBA to be designated for possible action:
The following example illustrates three placement classes specified in the <ON> clause of a CREATE statement. In this statement, VxFS would allocate space for newly created files designated by the rule's SELECT statement on tier1 volumes if space is available.
Multiple conditions in <WHEN> clauses of RELOCATE and DELETE statements
The <WHEN> clause in RELOCATE and DELETE statements may include multiple relocation criteria. Any or all of <ACCESS_AGE>, <MODIFICATION_AGE>, <SIZE>, and <IOTEMP> can be specified. When multiple conditions are specified, all must be satisfied in order for a selected file to qualify for relocation or deletion.
File placement policy rule and statement ordering
A similar consideration applies to statements within a placement policy rule. VxFS processes these statements in order, and stops processing on behalf of a file when it encounters a statement that pertains to the file. This can result in unintended behavior.
File placement policies and extending files
In a VxFS file system with an active file placement policy, the placement class on whose volume a file resides is part of its metadata, and is attached when it is created and updated when it is relocated. When an application extends a file, VxFS allocates the incremental space on the volume occupied by the file if possible.
Appendix A
Quick Reference
This appendix includes the following topics:
■ Command summary
■ Online manual pages
■ Creating a VxFS file system
■ Converting a file system to VxFS
■ Mounting a file system
■ Unmounting a file system
■ Displaying information on mounted file systems
■ Identifying file system types
■ Resizing a file system
■ Backing up and restoring a file system
■ Using quotas
Command summary
Symbolic links to all VxFS command executables are installed in the /opt/VRTS/bin directory.
Table A-1 VxFS commands Command Description df Reports the number of free disk blocks and inodes for a VxFS file system. diskusg Generates VxFS disk accounting data by user ID. extendfs Extends the size of a VxFS file system. fcladm Administers VxFS File Change Logs. ff Lists file names and inode information for a VxFS file system. fiostat Administers file I/O statistics. fsadm Resizes or defragments a VxFS file system.
Quick Reference Command summary Table A-1 VxFS commands (continued) Command Description glmconfig Configures Group Lock Managers (GLM). glmstat Group Lock Managers (GLM) statistics gathering utility. mkfs Constructs a VxFS file system. mount Mounts a VxFS file system. ncheck Generates path names from inode numbers for a VxFS file system. newfs Creates a new VxFS file system. qioadmin Administers VxFS Quick I/O for Databases cache. qiomkfile Creates a VxFS Quick I/O device file.
Online manual pages
This release includes the following online manual pages as part of the VRTSvxfs package. They are installed in the appropriate directories under /opt/VRTS/man (add this directory to your MANPATH environment variable), but the windex database is not updated. To ensure that new VxFS manual pages display correctly, update the windex database after installing VRTSvxfs. See the catman(1M) manual page.
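The two setup steps above can be sketched as shell commands. The catman invocation shown in the comment is an assumption; check the catman(1M) manual page for the exact options supported on your HP-UX release.

```shell
# Add the VxFS manual page directory to MANPATH for the current session.
MANPATH="${MANPATH:+$MANPATH:}/opt/VRTS/man"
export MANPATH
echo "$MANPATH"

# Rebuild the windex database so man -k finds the new pages
# (assumed invocation; see catman(1M) on your system):
# catman -w
```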
Quick Reference Online manual pages Table A-3 Section 1M manual pages (continued) Section 1M Description df_vxfs Reports the number of free disk blocks and inodes for a VxFS file system. extendfs_vxfs Extends the size of a VxFS file system. fcladm Administers VxFS File Change Logs. ff_vxfs Lists file names and inode information for a VxFS file system. fsadm_vxfs Resizes or reorganizes a VxFS file system. fsapadm Administers VxFS allocation policies. fscat_vxfs Cats a VxFS file system.
186 Quick Reference Online manual pages Table A-3 Section 1M manual pages (continued) Section 1M Description quot Summarizes ownership on a VxFS file system. quotacheck_vxfs Checks VxFS file system quota consistency. vxdiskusg Generates VxFS disk accounting data by user ID. vxdump Incrementally dumps file systems. vxenablef Enables specific VxFS features. vxfsconvert Converts an unmounted file system to VxFS or upgrades a VxFS disk layout version. vxfsstat Displays file system statistics.
Quick Reference Online manual pages Table A-4 Section 3 manual pages (continued) Section 3 Description fsckpt_fsopen Opens a mount point for Storage Checkpoint management. fsckpt_info Returns status information on a Storage Checkpoint. fsckpt_intro Introduces the VxFS file system Storage Checkpoint API. fsckpt_mkprimary Makes a Storage Checkpoint in a VxFS file system the primary fileset for that file system. fsckpt_opts_create Creates a Storage Checkpoint associated with a file system handle.
188 Quick Reference Online manual pages Table A-4 Section 3 manual pages (continued) Section 3 Description vxfs_ap_define2 Defines a new allocation policy. vxfs_ap_enforce_ckpt Reorganizes blocks in a Storage Checkpoint to match a specified allocation policy. vxfs_ap_enforce_ckptchain Enforces the allocation policy for all of the Storage Checkpoints of a VxFS file system. vxfs_ap_enforce_file Ensures that all blocks in a specified file match the file allocation policy.
Quick Reference Online manual pages Table A-4 Section 3 manual pages (continued) Section 3 Description vxfs_fiostats_set Turns on and off file range I/O statistics and resets statistics counters. vxfs_get_ioffsets Obtains VxFS inode field offsets. vxfs_inotopath Returns path names for a given inode number. vxfs_inostat Gets the file statistics based on the inode number. vxfs_inotofd Gets the file descriptor based on the inode number.
190 Quick Reference Creating a VxFS file system Table A-5 describes the VxFS-specific section 4 manual pages. Table A-5 Section 4 manual pages Section 4 Description fs_vxfs Provides the format of a VxFS file system volume. inode_vxfs Provides the format of a VxFS file system inode. tunefstab Describes the VxFS file system tuning parameters table. Table A-6 describes the VxFS-specific section 7 manual pages.
Creating a VxFS file system
To create a file system
◆ Use the mkfs command to create a file system:
mkfs [-F vxfs] [-m] [generic_options] [-o specific_options] \
special [size]
-F vxfs Specifies the VxFS file system type.
-m Displays the command line that was used to create the file system. The file system must already exist. This option enables you to determine the parameters used to construct the file system.
generic_options Options common to most other file system types.
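A concrete invocation of the options above might look like the following sketch. The device path and size are hypothetical, and the command is only echoed rather than executed so the sketch can run on a system without VxFS installed.

```shell
# Hypothetical device and size; substitute values for your system.
special=/dev/vx/rdsk/fsvol/vol1
size=1048576   # file system size in sectors

# Build the mkfs command line (echoed, not executed, in this sketch):
cmd="mkfs -F vxfs $special $size"
echo "$cmd"

# To display the command line used to build an existing file system:
# mkfs -F vxfs -m $special
```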
Converting a file system to VxFS
The vxfsconvert command can be used to convert an HFS file system to a VxFS file system. See the vxfsconvert(1M) manual page.
To convert an HFS file system to a VxFS file system
◆ Use the vxfsconvert command to convert an HFS file system to VxFS:
vxfsconvert [-l logsize] [-s size] [-efnNvyY] special
-e Estimates the amount of space required to complete the conversion.
and Veritas-installed products, the generic mount command executes the VxFS mount command from the directory /sbin/fs/vxfs. If the -F option is not supplied, the command searches the file /etc/fstab for a file system and an fstype matching the special file or mount point provided. If no file system type is specified, mount uses the default file system type (VxFS).
Support for cluster file systems: If you specify the cluster option, the file system is mounted in shared mode. HP Serviceguard Storage Management Suite environments require HP Serviceguard to be configured correctly before a complete clustering environment is enabled.
Using Storage Checkpoints: The ckpt=checkpoint_name option mounts a Storage Checkpoint of a mounted file system that was previously created by the fsckptadm command.
To mount the file system
◆ Mount the file system:
# mount -F vxfs -o delaylog /dev/vx/dsk/fsvol/vol1 /ext
Editing the fstab file
You can edit the /etc/fstab file to mount a file system automatically at boot time. You must specify the following:
■ The special block device name to mount
■ The mount point
■ The file system type (vxfs)
■ The mount options
■ The backup frequency
■ Which fsck pass looks at the file system
Each entry must be on a single line.
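An entry containing the fields listed above might look like the following sketch. The device name and mount point are hypothetical; substitute your own values.

```
# device                 mountpoint  fstype  options   backup  fsck-pass
/dev/vx/dsk/fsvol/vol1   /ext        vxfs    delaylog  0       2
```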
Unmounting a file system
To unmount a file system
◆ Use the vxumount command to unmount a file system:
vxumount [-o [force]] mount_point
vxumount [-f] mount_point
vxumount [-o [force]] {special|mount_point}
Specify the file system to be unmounted as a mount_point or special. special is the VxFS block special device on which the file system resides.
Example of unmounting a file system
The following are examples of unmounting file systems.
Displaying information on mounted file systems
To display information on mounted file systems
◆ Invoke the mount command without options:
# mount
/dev/vg00/lvol3 on / type vxfs ioerror=mwdisable,delaylog \
Wed Jun 5 3:23:40 2004
/dev/vg00/lvol8 on /var type vxfs ioerror=mwdisable,delaylog Wed Jun 5 3:23:56 2004
/dev/vg00/lvol7 on /usr type vxfs ioerror=mwdisable,delaylog Wed Jun 5 3:23:56 2004
/dev/vg00/lvol6 on /tmp type vxfs ioerror=mwdisable,delaylog Wed Jun 5 3:23:56 2004
/dev/vg00/lvol5 on /opt type v
198 Quick Reference Resizing a file system Example of determining a file system's type The following example uses the fstyp command to determine the file system type of the /dev/vx/dsk/fsvol/vol1 device.
Quick Reference Resizing a file system To extend a VxFS file system ◆ Use the fsadm command to extend a VxFS file system: /usr/lib/fs/vxfs/fsadm [-F vxfs] [-b newsize] [-r rawdev] \ mount_point vxfs The file system type. newsize The size (in sectors) to which the file system will increase. mount_point The file system's mount point. -r rawdev Specifies the path name of the raw device if there is no entry in /etc/fstab and fsadm cannot determine the raw device.
200 Quick Reference Resizing a file system -r rawdev Specifies the path name of the raw device if there is no entry in /etc/fstab and fsadm cannot determine the raw device. Example of shrinking a file system The following example shrinks a VxFS file system mounted at /ext to 20480 sectors. To shrink a VxFS file system ◆ Shrink a VxFS file system mounted at /ext to 20480 sectors: # fsadm -F vxfs -b 20480 /ext Warning: After this operation, there is unused space at the end of the device.
Quick Reference Resizing a file system -E Reports on extent fragmentation. mount_point The file system's mount point. -r rawdev Specifies the path name of the raw device if there is no entry in /etc/fstab and fsadm cannot determine the raw device. Example of reorganizing a file system The following example reorganizes the file system mounted at /ext.
202 Quick Reference Backing up and restoring a file system Example of extending a VxFS file system The following example extends a VxFS file system on a VxVM volume.
source The special device name or mount point of the file system to copy. destination The name of the special device on which to create the snapshot. size The size of the snapshot file system in sectors. snap_mount_point Location where to mount the snapshot; snap_mount_point must exist before you enter this command.
204 Quick Reference Using quotas To back up a VxFS snapshot file system ◆ Back up the VxFS snapshot file system mounted at /snapmount to the tape drive with device name /dev/rmt/: # vxdump -cf /dev/rmt /snapmount Restoring a file system After backing up the file system, you can restore it using the vxrestore command. First, create and mount an empty file system.
Quick Reference Using quotas Turning on quotas You can enable quotas at mount time or after a file system is mounted. The root directory of the file system must contain a file named quotas that is owned by root.
206 Quick Reference Using quotas limits or assign them specific values. Users are allowed to exceed the soft limit, but only for a specified time. Disk usage can never exceed the hard limit. The default time limit for exceeding the soft limit is seven days on VxFS file systems. edquota creates a temporary file for a specified user. This file contains on-disk quotas for each mounted VxFS file system that has a quotas file.
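The seven-day default grace period can be illustrated with a small shell sketch that computes how much of the period remains once a user has crossed the soft limit. The timestamps are hypothetical and chosen only for the arithmetic.

```shell
# Default grace period on VxFS file systems: seven days, in seconds.
grace_seconds=$((7 * 24 * 60 * 60))

soft_limit_crossed=1000000  # hypothetical epoch time the soft limit was exceeded
now=$((soft_limit_crossed + 3 * 24 * 60 * 60))  # three days later

# After three days of the seven-day grace period, four days remain.
remaining=$((grace_seconds - (now - soft_limit_crossed)))
echo "grace remaining (days): $((remaining / 86400))"
```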
To turn off quotas for a file system
◆ Turn off quotas for a file system:
quotaoff mount_point
Appendix B
Diagnostic messages
This appendix includes the following topics:
■ File system response to problems
■ About kernel messages
■ Kernel messages
■ About unique message identifiers
■ Unique message identifiers
File system response to problems
When the file system encounters problems, it responds in one of the following ways:
Marking an inode bad: Inodes can be marked bad if an inode update or a directory-block update fails.
210 Diagnostic messages About kernel messages Disabling a file system If an error occurs that compromises the integrity of the file system, VxFS disables itself. If the intent log fails or an inode-list error occurs, the super-block is ordinarily updated (setting the VX_FULLFSCK flag) so that the next fsck does a full structural check. If this super-block update fails, any further changes to the file system can cause inconsistencies that are undetectable by the intent log replay.
Diagnostic messages Kernel messages instance of the message to guarantee that the sequence of events is known when analyzing file system problems. Each message is also written to an internal kernel buffer that you can view in the file /var/adm/syslog/syslog.log. In some cases, additional data is written to the kernel buffer. For example, if an inode is marked bad, the contents of the bad inode are written.
212 Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 002 WARNING: msgcnt x: mesg 002: V-2-2: vx_snap_strategy - mount_point file system write attempt to read-only file system WARNING: msgcnt x: mesg 002: V-2-2: vx_snap_copyblk - mount_point file system write attempt to read-only file system Description The kernel tried to write to a read-only file system. This is an unlikely problem, but if it occurs, the file system is disabled.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 006, 007 WARNING: msgcnt x: mesg 006: V-2-6: vx_sumupd - mount_point file system summary update in au aun failed WARNING: msgcnt x: mesg 007: V-2-7: vx_sumupd - mount_point file system summary update in inode au iaun failed Description An I/O error occurred while writing the allocation unit or inode allocation unit bitmap summary to disk. This sets the VX_FULLFSCK flag on the file system.
214 Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 010 WARNING: msgcnt x: mesg 010: V-2-10: vx_ialloc - mount_point file system inode inumber not free Description When the kernel allocates an inode from the free inode bitmap, it checks the mode and link count of the inode. If either is non-zero, the free inode bitmap or the inode list is corrupted.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 013 WARNING: msgcnt x: mesg 013: V-2-13: vx_iposition - mount_point file system inode inumber invalid inode list extent Description For a Version 2 and above disk layout, the inode list is dynamically allocated. When the kernel tries to read an inode, it must look up the location of the inode in the inode list file. If the kernel finds a bad extent, the inode cannot be accessed.
216 Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 015 WARNING: msgcnt x: mesg 015: V-2-15: vx_ibadinactive - mount_point file system cannot mark inode inumber bad WARNING: msgcnt x: mesg 015: V-2-15: vx_ilisterr - mount_point file system cannot mark inode inumber bad Description An attempt to mark an inode bad on disk, and the super-block update to set the VX_FULLFSCK flag, failed.
017
218 Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition WARNING: msgcnt x: mesg 017: V-2-17: vx_attr_getblk - mount_point file system inode inumber marked bad in core WARNING: msgcnt x: mesg 017: V-2-17: vx_attr_iget - mount_point file system inode inumber marked bad in core WARNING: msgcnt x: mesg 017: V-2-17: vx_attr_indadd - mount_point file system inode inumber marked bad in core WARNING: msgcnt x: mesg 017: V-2-17: vx_attr_indtrunc mount_p
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition file system inode inumber marked bad in core 017 (continued) WARNING: msgcnt x: mesg 017: V-2-17: vx_ilisterr - mount_point file system inode inumber marked bad in core WARNING: msgcnt x: mesg 017: V-2-17: vx_indtrunc - mount_point file system inode inumber marked bad in core WARNING: msgcnt x: mesg 017: V-2-17: vx_iread - mount_point file system inode inumber marked bad in core WARNING: msgcn
220 Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 017 (continued) Description When inode information is no longer dependable, the kernel marks it bad in memory. This is followed by a message to mark it bad on disk as well unless the mount command ioerror option is set to disable, or there is subsequent I/O failure when updating the inode on disk. No further operations can be performed on the inode.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 019 WARNING: msgcnt x: mesg 019: V-2-19: vx_log_add - mount_point file system log overflow Description Log ID overflow. When the log ID reaches VX_MAXLOGID (approximately one billion by default), a flag is set so the file system resets the log ID at the next opportunity.
222 Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 021 WARNING: msgcnt x: mesg 021: V-2-21: vx_fs_init - mount_point file system validation failure ■ Description When a VxFS file system is mounted, the structure is read from disk. If the file system is marked clean, the structure is correct and the first block of the intent log is cleared.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 022 WARNING: msgcnt x: mesg 022: V-2-22: vx_mountroot - root file system remount failed Description The remount of the root file system failed. The system will not be usable if the root file system cannot be remounted for read/write access. When a root Veritas File System is first mounted, it is mounted for read-only access.
224 Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 024 WARNING: msgcnt x: mesg 024: V-2-24: vx_cutwait - mount_point file system current usage table update error Description Update to the current usage table (CUT) failed. For a Version 2 disk layout, the CUT contains a fileset version number and total number of blocks used by each fileset. The VX_FULLFSCK flag is set in the super-block.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 027 WARNING: msgcnt x: mesg 027: V-2-27: vx_snap_bpcopy - mount_point snapshot file system write error Description A write to the snapshot file system failed. As the primary file system is updated, copies of the original data are read from the primary file system and written to the snapshot file system. If one of these writes fails, the snapshot file system is disabled.
226 Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 029, 030 WARNING: msgcnt x: mesg 029: V-2-29: vx_snap_getbp - mount_point snapshot file system block map write error WARNING: msgcnt x: mesg 030: V-2-30: vx_snap_getbp - mount_point snapshot file system block map read error Description During a snapshot backup, each snapshot file system maintains a block map on disk.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 033 WARNING: msgcnt x: mesg 033: V-2-33: vx_check_badblock mount_point file system had an I/O error, setting VX_FULLFSCK Description When the disk driver encounters an I/O error, it sets a flag in the super-block structure. If the flag is set, the kernel will set the VX_FULLFSCK flag as a precautionary measure.
228 Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 036 WARNING: msgcnt x: mesg 036: V-2-36: vx_lctbad - mount_point file system link count table lctnumber bad Description Update to the link count table (LCT) failed. For a Version 2 and above disk layout, the LCT contains the link count for all the structural inodes. The VX_FULLFSCK flag is set in the super-block. If the super-block cannot be written, the file system is disabled.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 038 WARNING: msgcnt x: mesg 038: V-2-38: vx_dataioerr - volume_name file system file data [read|write] error in dev/block device_ID/block Description A read or a write error occurred while accessing file data. The message specifies whether the disk I/O that failed was a read or a write. File data includes data currently in files and free blocks.
230 Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 039 WARNING: msgcnt x: mesg 039: V-2-39: vx_writesuper - file system super-block write error Description An attempt to write the file system super block failed due to a disk I/O error. If the file system was being mounted at the time, the mount will fail.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 056 WARNING: msgcnt x: mesg 056: V-2-56: vx_mapbad - mount_point file system extent allocation unit state bitmap number number marked bad Description If there is an I/O failure while writing a bitmap, the map is marked bad. The kernel considers the maps to be invalid, so does not do any more resource allocation from maps.
232 Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 058 WARNING: msgcnt x: mesg 058: V-2-58: vx_isum_bad - mount_point file system inode allocation unit summary number number marked bad Description An I/O error occurred reading or writing an inode allocation unit summary. The VX_FULLFSCK flag is set. If the VX_FULLFSCK flag cannot be set, the file system is disabled. ■ Action Check the console log for I/O errors.
060 WARNING: msgcnt x: mesg 060: V-2-60: vx_snap_getbitbp - mount_point snapshot file system bitmap read error Description An I/O error occurred while reading the snapshot file system bitmap. There is no problem with the snapped file system, but the snapshot file system is disabled. ■ Action Check the console log for I/O errors. If the problem is a disk failure, replace the disk.
234 Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 063 WARNING: msgcnt x: mesg 063: V-2-63: vx_fset_markbad mount_point file system mount_point fileset (index number) marked bad Description An error occurred while reading or writing a fileset structure. VX_FULLFSCK flag is set. If the VX_FULLFSCK flag cannot be set, the file system is disabled. ■ Action Unmount the file system and use fsck to run a full structural check.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 067 WARNING: msgcnt x: mesg 067: V-2-67: mount of device_path requires HSM agent Description The file system mount failed because the file system was marked as being under the management of an HSM agent, and no HSM agent was found during the mount. ■ Action Restart the HSM agent and try to mount the file system again.
236 Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 071 NOTICE: msgcnt x: mesg 071: V-2-71: cleared data I/O error flag in mount_point file system Description The user data I/O error flag was reset when the file system was mounted. This message indicates that a read or write error occurred while the file system was previously mounted. See Message Number 038. ■ Action Informational only, no action required.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 076 NOTICE: msgcnt x: mesg 076: V-2-76: checkpoint asynchronous operation on mount_point file system still in progress ■ Description An EBUSY message was received while trying to unmount a file system. The unmount failure was caused by a pending asynchronous fileset operation, such as a fileset removal or fileset conversion to a nodata Storage Checkpoint.
079
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition WARNING: msgcnt x: mesg 017: V-2-79: vx_attr_getblk - mount_point file system inode inumber marked bad on disk WARNING: msgcnt x: mesg 017: V-2-79: vx_attr_iget - mount_point file system inode inumber marked bad on disk WARNING: msgcnt x: mesg 017: V-2-79: vx_attr_indadd - mount_point file system inode inumber marked bad on disk WARNING: msgcnt x: mesg 017: V-2-79: vx_attr_indtrunc mount_point
240 Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition file system inode inumber marked bad on disk 079 (continued) WARNING: msgcnt x: mesg 017: V-2-79: vx_ilisterr - mount_point file system inode inumber marked bad on disk WARNING: msgcnt x: mesg 017: V-2-79: vx_indtrunc - mount_point file system inode inumber marked bad on disk WARNING: msgcnt x: mesg 017: V-2-79: vx_iread - mount_point file system inode inumber marked bad on disk WARNING:
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 079 (continued) ■ Description When inode information is no longer dependable, the kernel marks it bad on disk. The most common reason for marking an inode bad is a disk I/O failure. If there is an I/O failure in the inode list, on a directory block, or an indirect address extent, the integrity of the data in the inode, or the data the kernel tried to write to the inode list, is questionable.
242 Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 081 WARNING: msgcnt x: mesg 081: V-2-81: possible network partition detected Description This message displays when CFS detects a possible network partition and disables the file system locally, that is, on the node where the message appears. ■ Action There are one or more private network links for communication between the nodes in a cluster.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 084 WARNING: msgcnt x: mesg 084: V-2-84: in volume_name quota on failed during assumption. (stage stage_number) Description In a cluster file system, when the primary of the file system fails, a secondary file system is chosen to assume the role of the primary. The assuming node will be able to enforce quotas after becoming the primary.
244 Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 088 WARNING: msgcnt x: mesg 088: V-2-88: quotaon on file_system failed; limits exceed limit Description The external quota file, quotas, contains the quota values, which range from 0 up to 2147483647. When quotas are turned on by the quotaon command, this message displays when a user exceeds the quota limit. ■ Action Correct the quota values in the quotas file.
092 WARNING: msgcnt x: mesg 092: V-2-92: vx_mkfcltran - failure to map offset offset in File Change Log file Description The vxfs kernel was unable to map actual storage to the next offset in the File Change Log file. This is most likely caused by a problem with allocating to the FCL file. Because no new FCL records can be written to the FCL file, the FCL has been deactivated.
098 WARNING: msgcnt x: mesg 098: V-2-98: VxFS failed to initialize File Change Log for fileset fileset (index number) of mount_point file system
■ Description VxFS mount failed to initialize FCL structures for the current fileset mount. As a result, FCL could not be turned on. The FCL file will have no logging records.
■ Action Reactivate the FCL.
101 WARNING: msgcnt x: mesg 101: V-2-101: File Change Log on mount_point for file set index approaching max file size supported. File Change Log will be reactivated when its size hits max file size supported.
■ Description The size of the FCL file is approaching the maximum file size supported. This size is platform specific.
104 WARNING: msgcnt x: mesg 104: V-2-104: File System mount_point device volume_name disabled
■ Description The volume manager detected that the specified volume has failed, and the volume manager has disabled the volume. No further I/O requests are sent to the disabled volume.
■ Action The volume must be repaired.
108 WARNING: msgcnt x: mesg 108: V-2-108: vx_dexh_error - error: fileset fileset, directory inode number dir_inumber, bad hash inode hash_inode, seg segment bno block_number
■ Description The supplemental hash for a directory is corrupt.
■ Action If the file system is mounted read/write, the hash for the directory will be automatically removed and recreated.
111 WARNING: msgcnt x: mesg 111: V-2-111: You have exceeded the authorized usage (maximum maxfs unique mounted user-data file systems) for this product and are out of compliance with your License Agreement. Please email sales_mail@symantec.com or contact your Symantec sales representative for information on how to obtain additional licenses for this product.
Unique message identifiers
Some commonly encountered UMIs and the associated messages are described in the following table:
Table B-2 Unique message identifiers and messages
Message Number Message and Definition
20002 UX:vxfs command: ERROR: V-3-20002: message
■ Description The command attempted to call stat() on a device path to ensure that the path refers to a character device before opening the device, but the stat() call failed.
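The pre-open check described above can be mimicked from the shell before invoking a VxFS command by hand. This is only an illustrative stand-in, and /dev/null below is just an example path, not a real VxFS device file:

```shell
# Illustrative stand-in for the command's pre-open check: require that the
# given path is a character device. /dev/null is used only as an example.
dev=/dev/null
if [ -c "$dev" ]; then
    echo "ok: $dev is a character device"
else
    echo "error: $dev is not a character device" >&2
fi
```

A block or regular file at that path would take the error branch, mirroring the V-3-20002 failure mode.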
20076 UX:vxfs command: ERROR: V-3-20076: message
■ Description The command called stat() on a file, which is usually a file system mount point, but the call failed.
■ Action Check that the path specified is what was intended and that the user has permission to access that path.
21264 UX:vxfs command: ERROR: V-3-21264: message
■ Description The attempt to mount a VxFS file system has failed because either the volume being mounted or the directory which is to be the mount point is busy.
21272 UX:vxfs command: ERROR: V-3-21272: message
■ Description The mount options specified contain mutually exclusive options, or, in the case of a remount, the new mount options differ from the existing mount options in a way that is not allowed to change on a remount.
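As an illustration of the mutual-exclusion idea: the VxFS intent-log policies log, delaylog, and tmplog are alternatives, so at most one may appear in a single -o option string. The following sketch of such a validation is illustrative only, not the actual mount command's parsing logic:

```shell
# Sketch of a mutual-exclusion check over a mount option string.
# The rule shown (log, delaylog, tmplog are alternatives) matches the VxFS
# intent-log policies; the parsing itself is an assumption for illustration.
opts="delaylog,nodatainlog,log"
count=$(printf '%s\n' "$opts" | tr ',' '\n' | grep -cx -e 'log' -e 'delaylog' -e 'tmplog')
if [ "$count" -gt 1 ]; then
    echo "mutually exclusive mount options specified"
fi
```

With the option string above, both delaylog and log are present, so the check fires.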
Appendix C
Disk layout
This appendix includes the following topics:
■ About disk layouts
■ Supported disk layouts and operating systems
■ VxFS Version 4 disk layout
■ VxFS Version 5 disk layout
■ VxFS Version 6 disk layout
■ VxFS Version 7 disk layout
About disk layouts
The disk layout is the way file system information is stored on disk. On VxFS, seven different disk layout versions were created to take advantage of evolving technological developments.
Version 3 (Not Supported)
The Version 3 disk layout encompasses all file system structural information in files, rather than at fixed locations on disk, allowing for greater scalability. Version 3 supports files and file systems up to one terabyte in size.
Version 4
The Version 4 disk layout encompasses all file system structural information in files, rather than at fixed locations on disk, allowing for greater scalability.
Supported disk layouts and operating systems
Table C-1 shows which disk layouts are supported on which operating systems.
Table C-1 File system type and operating system versions
JFS 3.3, HP-UX 11.11 VxFS 3.5, HP-UX 11.
Table C-1 File system type and operating system versions (continued)
(Columns, left to right: JFS 3.3, HP-UX 11.11; VxFS 3.5, HP-UX 11.11; VxFS 3.5.2, HP-UX 11.23 PI; VxFS 4.1; VxFS 5.0; VxFS 5.0, HP-UX 11i v3)
Disk Layout Version 6
mkfs No No No Yes Yes Yes
Local Mount No No No Yes Yes Yes
Shared Mount No No No Yes Yes Yes
Disk Layout Version 7
mkfs No No No Yes Yes Yes
Local Mount No No No Yes Yes Yes
Shared Mount No No No Yes Yes Yes
the file system structures simply requires extending the appropriate structural files. This removes the extent size restriction imposed by the previous layouts.
All Version 4 structural files reside in the structural fileset. The structural files in the Version 4 disk layout are:
File Description
object location table file Contains the object location table (OLT). The OLT, which is referenced from the super-block, is used to locate the other structural files.
quotas files Contains quota information in records. Each record contains resources allocated either per user or per group.
The Version 4 disk layout supports Access Control Lists and Block-Level Incremental (BLI) Backup. BLI Backup is a backup method that stores and retrieves only the data blocks changed since the previous backup, not entire files. This saves time, storage space, and the computing resources required to back up large databases.
Figure C-1 VxFS Version 4 disk layout
[Figure: the super-block references the object location table (OLT) and an OLT replica. OLT extent addresses locate the initial inode extents, which contain the fileset header file inode and the initial inode list extent addresses. The fileset header file holds the structural and primary fileset headers (fileset index and name, inode list inum, max_inodes, features), which in turn locate each fileset's inode list inodes and inode allocation units.]
Block Size Maximum File System Size
2048 bytes 8,589,934,078 sectors (≈8 TB)
4096 bytes 17,179,868,156 sectors (≈16 TB)
8192 bytes 34,359,736,312 sectors (≈32 TB)
If you specify the file system size when creating a file system, the block size defaults to the appropriate value as shown above. See the mkfs(1M) manual page.
See “About quota files on Veritas File System” on page 110.
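The ceilings in the table track the block size: each limit is just under block_size × 2^32 bytes. That relationship is an observation from the numbers above, not a formula stated by the layout specification, but it makes a quick sanity check possible:

```shell
# Sanity check (assumption: the per-block-size ceiling is approximately
# block_size * 2^32 bytes, which matches the table to within a few KB).
for bs in 2048 4096 8192; do
    awk -v bs="$bs" 'BEGIN { printf "%d bytes -> ~%d TB\n", bs, bs * 2^32 / 2^40 }'
done
```

This reproduces the ~8, ~16, and ~32 TB figures shown in the table.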
See “About quota files on Veritas File System” on page 110.
VxFS Version 7 disk layout
VxFS disk layout Version 7 is similar to Version 6, except that Version 7 enables support for variable and large size history log records, more than 2048 volumes, large directory hash, and Dynamic Storage Tiering. The Version 7 disk layout can theoretically support files and file systems up to 8 exabytes (2^63 bytes).
Glossary
access control list (ACL) The information that identifies specific users or groups and their access privileges for a particular file or directory.
agent A process that manages predefined Veritas Cluster Server (VCS) resource types. Agents bring resources online, take resources offline, and monitor resources to report any state changes to VCS. When an agent is started, it obtains configuration information from VCS and periodically monitors the resources and updates VCS with the resource status.
on the disk before the write returns, but the inode modification times may be lost if the system crashes.
defragmentation The process of reorganizing data on disk by making file data blocks physically adjacent to reduce access times.
direct extent An extent that is referenced directly by an inode.
direct I/O An unbuffered form of I/O that bypasses the kernel’s buffering of data. With direct I/O, the file system transfers data directly between the disk and the user-supplied buffer.
inode A unique identifier for each file within a file system that contains the data and metadata associated with that file.
inode allocation unit A group of consecutive blocks containing inode allocation information for a given fileset. This information is in the form of a resource summary and a free inode map.
intent logging A method of recording pending changes to the file system structure. These changes are recorded in a circular intent log file.
quotas file The quotas commands read and write the external quotas file to get or change usage limits. When quotas are turned on, the quota limits are copied from the external quotas file to the internal quotas file. See quotas, internal quotas file, and external quotas file.
reservation An extent attribute used to preallocate space for a file.
root disk group A special private disk group that always exists on the system. The root disk group is named rootdg.
volume A virtual disk which represents an addressable range of disk blocks used by applications such as file systems or databases.
volume set A container for multiple different volumes. Each volume can have its own geometry.
vxfs The Veritas File System type. Used as a parameter in some commands.
VxFS Veritas File System.
VxVM Veritas Volume Manager.
Index A D access control lists 24 allocation policies 64 default 64 extent 18 extent based 18 multi-volume support 129 data copy 70 data integrity 22 data Storage Checkpoints definition 88 data synchronous I/O 38, 71 data transfer 70 default allocation policy 64 block sizes 18 default_indir_size tunable parameter 58 defragmentation 30 extent 49 scheduling with cron 49 delaylog mount option 35–36 device file 259 direct data transfer 70 direct I/O 70 directory reorganization 50 disabled file system snapsh
ENOENT 213 ENOSPC 101 ENOTDIR 213 expansion 30 extent 18, 63 attributes 63 indirect 19 reorganization 50 extent allocation 18 aligned 64 control 63 fixed size 63 unit state file 259 unit summary file 259 extent size indirect 19 external quotas file 110 F fc_foff 116 fcl_inode_aging_count tunable parameter 56 fcl_inode_aging_size tunable parameter 57 fcl_keeptime tunable parameter 54 fcl_maxalloc tunable parameter 54 fcl_winterval tunable parameter 55 file device 259 extent allocation unit state
how to restore a file system 204 how to set up user quotas 206 how to turn off quotas 206 how to turn on quotas 205 how to unmount a Storage Checkpoint 94 how to view quotas 206 HSM agent error message 234–235 hsm_write_prealloc 55 I I/O direct 70 sequential 71 synchronous 71 I/O requests asynchronous 38 synchronous 37 increasing file system size 198 indirect extent address size 19 double 19 single 19 initial_extent_size tunable parameter 56 inode allocation unit file 259 inode list error 210 inode
N name space preserved by Storage Checkpoints 84 ncheck 120 nodata Storage Checkpoints 94 nodata Storage Checkpoints definition 88 nodatainlog mount option 35, 37 reorganization directory 50 extent 50 report extent fragmentation 49 reservation space 63 Reverse Path Name Lookup 120 S O O_SYNC 35 object location table file 259 P parameters default 52 tunable 52 tuning 51 performance overall 34 snapshot file systems 78 primary fileset relation to Storage Checkpoints 85 pseudo device 93 Q qio_ca
Storage Checkpoints (continued) difference between a data Storage Checkpoint and a nodata Storage Checkpoint 95 freezing and thawing a file system 85 mounting 92 multi-volume support 123 nodata Storage Checkpoints 88, 94 operation failures 101 pseudo device 93 read-only Storage Checkpoints 92 removable Storage Checkpoints 89 removing 92 space management 101 synchronous vs.
vxfsstat 47 vxfsu_fcl_sync 55 vxlsino 120 vxrestore 66, 204 vxtunefs changing extent size 19 vxvset 124 W writable Storage Checkpoints 92 write size 65 write_nstream tunable parameter 53 write_pref_io tunable parameter 53 write_throttle tunable parameter 60