Installation and Configuration Guide
HP StorageWorks HSG80 ACS Solution Software V8.8 for OpenVMS
Product Version: 8.8-1
First Edition (March 2005)
Part Number: AA-RV1PA-TE

This guide provides installation and configuration instructions and reference material for operation of the HSG80 ACS Solution Software V8.8-1 for OpenVMS.
© Copyright 2000-2005 Hewlett-Packard Development Company, L.P. Hewlett-Packard Company makes no warranty of any kind with regard to this material, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. Hewlett-Packard shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this material.
Contents

About this Guide
1 Planning a Subsystem
2 Planning Storage Configurations
3 Preparing the Host System
4 Installing and Configuring the HSG Agent
5 FC Configuration Procedures
7 Backing Up, Cloning, and Moving Data
A Subsystem Profile Templates
About this Guide

This Installation Guide describes how to install and configure the HSG80 ACS Solution Software Version 8.8-1 for OpenVMS.
Overview

This section covers the following topics:
■ Intended Audience, page 12
■ Related Documentation, page 12

Intended Audience

This book is intended for use by system administrators and system technicians who have basic experience with storage and networking.

Related Documentation

In addition to this guide, corresponding information can be found in:
■ ACS V8.
■ HP StorageWorks HSG80 ACS Solution Software Release Notes (platform-specific)
■ HP StorageWorks Enterprise/Modular Storage RAID Array Fibre Channel Arbitrated Loop Configurations for Windows, Tru64, and Sun Solaris Application Note (AA-RS1ZB-TE)

Solution software host support includes the following platforms:
— IBM AIX
— HP-UX
— Linux (Red Hat x86/Alpha, SuSE x86/Alpha, Caldera x86)
— Novell NetWare
— OpenVMS
— Sun Solaris
— Tru64 UNIX
— Windows NT/2000/Windows Server 2003 (32-bit)
Chapter Content Summary

Table 1 summarizes the content of the chapters.

Table 1: Summary of Chapter Contents

1. Planning a Subsystem: This chapter focuses on the technical terms and knowledge needed to plan and implement storage array subsystems.
2. Planning Storage Configurations: Plan the storage configuration of your subsystem, using individual disk drives (JBOD), storageset types (mirrorsets, stripesets, and so on), and/or partitioned drives.
Table 1: Summary of Chapter Contents (Continued)

7. Backing Up, Cloning, and Moving Data: Describes common procedures that are not covered elsewhere in this guide: backing up the subsystem configuration, cloning data for backup, and moving storagesets.

Appendix A. Subsystem Profile Templates: This appendix contains storageset profiles to copy and use to create your system profiles.
Conventions

Conventions consist of the following:
■ Document conventions
■ Symbols in text
■ Symbols on equipment

Document conventions

This document follows the conventions in Table 2.
Note: Text set off in this manner presents commentary, sidelights, or interesting points of information.

Symbols on Equipment

Any enclosed surface or area of the equipment marked with these symbols indicates the presence of electrical shock hazards. The enclosed area contains no operator-serviceable parts.

WARNING: To reduce the risk of injury from electrical shock hazards, do not open this enclosure.

Any RJ-45 receptacle marked with these symbols indicates a network interface connection.
Any product or assembly marked with these symbols indicates that the component exceeds the recommended weight for one individual to handle safely.

WARNING: To reduce the risk of personal injury or damage to the equipment, observe local occupational health and safety requirements and guidelines for manually handling material.

Rack Stability

WARNING: To reduce the risk of personal injury or damage to the equipment, be sure that:
■ The leveling jacks are extended to the floor.
Getting Help

If you still have a question after reading this guide, contact an HP authorized service provider or access our web site.

Technical Support

Telephone numbers for worldwide technical support are listed on the following HP web site: http://www.hp.com/support/. From this web site, select the country of origin.

Note: For continuous quality improvement, calls may be recorded or monitored.

Outside North America, call technical support at the nearest location.
HP Authorized Reseller

For the name of your nearest authorized reseller:
■ In the United States, call 1-800-345-1518
■ In Canada, call 1-800-263-5868
■ Elsewhere, see the Storage web site for locations and telephone numbers

Configuration Flowchart

A three-part flowchart (Figure 1, Figure 2, and Figure 3) is shown on the following pages. Refer to these charts while installing and configuring a new storage subsystem.
Figure 1: General configuration flowchart (panel 1). The flowchart steps are: unpack the subsystem (see the unpacking instructions on the shipping box); plan a subsystem (Chapter 1); plan storage configurations (Chapter 2); prepare the host system (Chapter 3); make a local connection (page 126); then, for a single controller, cable the controller (page 128) and configure the controller (page 128), or, for a controller pair, cable the controllers (page 134) and configure the controllers (page 135). If installing SWCC, continue with Figure 3 on page 23; otherwise continue with Figure 2 on page 22.
Figure 2: General configuration flowchart (panel 2). The flowchart steps are: configure devices (page 141); create storagesets and partitions — stripeset (page 141), mirrorset (page 143), RAIDset (page 144), striped mirrorset (page 144), single (JBOD) disk (page 145), partition (page 146); assign unit numbers (page 147); select configuration options (page 150); and verify storage setup (page 154). Continue creating units until you have completed your planned configuration.
Figure 3: SWCC storage configuration flowchart (panel 3). The flowchart steps are: install the Agent (Chapter 4); install the Client (Appendix B); create storage (see the SWCC online help); and verify storage setup.
1 Planning a Subsystem

This chapter provides information that helps you plan how to configure the storage array subsystem. This chapter focuses on the technical terms and knowledge needed to plan and implement storage subsystems.

Note: This chapter frequently references the command line interface (CLI). For the complete syntax and descriptions of the CLI commands, see the HP StorageWorks HSG60 and HSG80 Array Controller and Array Controller Software Command Line Interface (CLI) Reference Guide.
Defining Subsystems

This section describes the terms "this controller" and "other controller." It also presents graphics of the Model 2200 and BA370 enclosures.

Note: The HSG80 controller uses the BA370 or Model 2200 enclosure.

Controller Designations A and B

The terms A, B, "this controller," and "other controller" are used to distinguish one controller from another in a two-controller (also called dual-redundant) subsystem.
BA370 Enclosure

Figure 5: Location of controllers and cache modules in a BA370 enclosure (1 = EMU, 2 = PVA, 3 = Controller A, 4 = Controller B, 5 = Cache module A, 6 = Cache module B)

Controller Designations "This Controller" and "Other Controller"

Some CLI commands use the terms "this" and "other" to identify one controller or the other in a dual-redundant pair. These designations are a shortened form of "this controller" and "other controller."
Model 2200 Enclosure

Figure 6: "This controller" and "other controller" for the Model 2200 enclosure (1 = this controller, 2 = other controller)

BA370 Enclosure

Figure 7: "This controller" and "other controller" for the BA370 enclosure (1 = other controller, 2 = this controller)
What is Failover Mode?

Failover is a way to keep the storage array available to the host if one of the controllers becomes unresponsive. A controller can become unresponsive because of a controller hardware failure. Failover keeps the storage array available to the hosts by allowing the surviving controller to take over total control of the subsystem.
— Host Fibre Channel adapter
■ A host can redistribute the I/O load between the controllers.
■ All hosts must have operating system software that supports multiple-bus failover mode.
Selecting a Cache Mode

The cache module supports read, read-ahead, write-through, and write-back caching techniques. The cache technique is selected separately for each unit. For example, you can enable only read and write-through caching for some units while enabling only write-back caching for other units.

Read Caching

When the controller receives a read request from the host, it reads the data from the disk drives, delivers it to the host, and stores the data in its cache module.

Write-Through Caching

When the controller receives a write request from the host, it stores the data in its cache module and writes the data to the disk drives, then notifies the host when the write operation is complete.
This process is called write-through caching because the data actually passes through—and is stored in—the cache memory on its way to the disk drives.

Enabling Mirrored Caching

In mirrored caching, half of each controller's cache mirrors the companion controller's cache, as shown in Figure 9. The total memory available for cached data is reduced by half, but the level of protection is greater.
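Cache techniques are selected with unit and controller switches. The following is a minimal sketch; the unit number D101 is an example only, and you should confirm the switch names for your ACS version in the CLI reference guide:

SET D101 READ_CACHE
SET D101 WRITEBACK_CACHE
SET THIS MIRRORED_CACHE

The first two commands enable read and write-back caching on unit D101; the last enables mirrored caching across the controller pair.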
What is the Command Console LUN?

StorageWorks Command Console (SWCC) software communicates with the HSG80 controllers through an existing storage unit, or logical unit number (LUN). The dedicated LUN that SWCC uses is called the Command Console LUN (CCL). The CCL serves as the communication device for the HS-Series Agent and identifies itself to the host by a unique identification string. By default, a CCL device is enabled within the HSG80 controller on host port 1.
Determining Connections

The term "connection" applies to every path between a Fibre Channel adapter in a host computer and an active host port on a controller.

Note: In ACS Version 8.8-1, the maximum number of supported connections is 96.

Naming Connections

It is highly recommended that you assign names to connections that have meaning in the context of your particular configuration.

Note: Some non-alphanumeric characters may not work with some hosts.
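As a sketch of this naming convention: the controller initially names each new connection !NEWCONnn. Assuming your ACS version supports renaming connections with the RENAME command (check the CLI reference guide), a connection could be given a meaningful name such as RED1A1 (host RED, adapter 1, controller A, port 1):

RENAME !NEWCON01 RED1A1

Connection names of this form are used in the examples later in this chapter.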
Numbers of Connections

The number of connections resulting from cabling one adapter into a switch or hub depends on the failover mode and on how many links the configuration has. If a controller pair is in multiple-bus failover mode, each adapter has two connections, as shown in Figure 10.
Assigning Unit Numbers

The controller keeps track of the unit with the unit number. The unit number can be from 0 through 199, prefixed by a D, which stands for disk drive. A unit can be presented as different LUNs to different connections.
The unit would not be visible at all to a host connection with a unit offset of 18 or greater, because that offset is not within the unit's range (unit number of 17 minus offset of 18 is a negative number). In addition, the access path to the host connection must be enabled for the connection to access the unit. This is done through the ENABLE_ACCESS_PATH switch of the ADD UNIT (or SET unit) command.
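The following is a minimal sketch combining the two mechanisms; the connection name RED1A1 and container MIRR1 are example names, not required ones:

SET RED1A1 UNIT_OFFSET=10
ADD UNIT D17 MIRR1 DISABLE_ACCESS_PATH=ALL
SET D17 ENABLE_ACCESS_PATH=(RED1A1)

Here unit D17 is created with all access paths disabled, then exposed only to connection RED1A1, which sees it as LUN 7 (unit number 17 minus offset 10).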
In SCSI-3 mode, for a host connection with a unit offset of 20, LUN 0 is the CCL; LUN 1 is unit 21; LUN 2 is unit 22, and so on. In this example, if a unit 20 is defined, it will be superseded by the CCL and invisible to the connection.

Assigning Host Connection Offsets and Unit Numbers in SCSI-2 Mode

Some operating systems expect or require a disk unit to be at LUN 0. In this case, it is necessary to specify SCSI-2 mode. If SCSI_VERSION is set to SCSI-2 mode, the CCL floats, moving to the first available LUN location, depending on the configuration.
Using CLI to Specify an Identifier for a Unit

The command syntax for setting the identifier for a previously created unit (virtual disk) follows:

SET unit-number IDENTIFIER=nn

Note: For simplicity, StorageWorks recommends that the identifier match the unit number.
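For example, following the recommendation that the identifier match the unit number, an existing unit D97 (an example unit) would be assigned identifier 97 as follows:

SET D97 IDENTIFIER=97

On an OpenVMS host, this unit would then typically appear as device $1$DGA97.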
With ACS V8.8, you may change the default subsystem behavior so that units are always created without any connections enabled. This provides more control in granting appropriate access to specific connections.
Note: The procedure of restricting access by enabling all access paths and then disabling selected paths is not recommended, because of the potential data/security breach that occurs when a new host connection is added.

Restricting Host Access in Multiple-Bus Failover Mode

In multiple-bus mode, the units assigned to any port are visible to all ports.
Planning a Subsystem Host 1 "RED" Host 2 "GREY" Host 3 "BLUE" FCA1 FCA2 FCA1 FCA2 FCA1 FCA2 Switch or hub Connections RED1B1 GREY1B1 BLUE1B1 Switch or hub Connections RED1A1 GREY1A1 BLUE1A1 Connections RED2A2 GREY2A2 BLUE2A2 Host port 1 active Host port 2 active Controller A D0 D1 D2 D100 Connections RED2B2 GREY2B2 BLUE2B2 D101 D120 All units visible to all ports Host port 1 active Controller B Host port 2 active NOTE: FCA = Fibre Channel Adapter CXO7078B Figure 11: Limiting host acc
Connections between the host and both controllers must be enabled for multiple-bus failover to work. For most operating systems, it is desirable to have all connections to the host enabled.
For example, in Figure 11, assume all host connections initially have the default offset of 0. If host BLUE is given access to all units but assigned an offset of 120, unit D120 is presented to host BLUE as LUN 0. Enter the following commands:

SET BLUE1A1 UNIT_OFFSET=120
SET BLUE1B1 UNIT_OFFSET=120
SET BLUE2A2 UNIT_OFFSET=120
SET BLUE2B2 UNIT_OFFSET=120

Host BLUE cannot see units lower than its offset, so it cannot access any other units.
■ Controller A, port 1—worldwide name + 3, for example 5000-1FE1-FF0C-EE03
■ Controller A, port 2—worldwide name + 4, for example 5000-1FE1-FF0C-EE04

Use the CLI command SHOW THIS_CONTROLLER/OTHER_CONTROLLER to display the subsystem's worldwide name.
Planning a Subsystem 1 2 Node ID (Worldwide name) Checksum 1 WWN INFORMATION P/N: WWN: S/N: NNNN – NNNN – NNNN – NNNN Checksum: NN 2 CXO6873B Figure 13: Placement of the worldwide name label on the BA370 enclosure Caution: Each subsystem has its own unique worldwide name (node ID). If you attempt to set the subsystem worldwide name to a name other than the one that came with the subsystem, the data on the subsystem will not be accessible.
2 Planning Storage Configurations

This chapter provides information to help you plan the storage configuration of your subsystem. Storage containers are individual disk drives (JBOD), storageset types (mirrorsets, stripesets, and so on), and/or partitioned drives. Use the guidelines found in this section to plan the various types of storage containers needed.
Where to Start

The following procedure outlines the steps to follow when planning your storage configuration. See Appendix A for blank templates to keep track of the containers being configured.

1. Determine your storage requirements. Use the questions in "Determining Storage Requirements" on page 49 to help you.
2. Review configuration rules. See "Configuration Rules for the Controller" on page 49.
3.
— Use SWCC. See the SWCC documentation for details.
— Use the Command Line Interface (CLI) commands. This method allows you flexibility in defining and naming your storage containers. See the HP StorageWorks HSG60 and HSG80 Array Controller and Array Controller Software Command Line Interface (CLI) Reference Guide.

Determining Storage Requirements

It is important to determine your storage requirements.
Note: For the previous two storageset configurations, this is a combined maximum, limited to no more than 20 RAID 3/5 storagesets in the individual combination.
Planning Storage Configurations D100 RAID1 Disk 10000 Disk 20000 Host addressable unit number Storageset name Disk 30000 Controller PTL addresses CXO6186B Figure 14: Mapping a unit to physical disk drives The HSG80 controller identifies devices based on a Port-Target-LUN (PTL) numbering scheme, shown in Figure 15. The physical location of a device in its enclosure determines its PTL. ■ P—Designates the controller's SCSI device port number (1 through 6).
The controller can either operate with a BA370 enclosure or with a Model 2200 controller enclosure combined with Model 4214R, Model 4254, Model 4310R, Model 4350R, Model 4314R, or Model 4354R disk enclosures. The controller operates with BA370 enclosures that are assigned ID numbers 0, 2, and 3. These ID numbers are set through the PVA module. Enclosure ID number 1, which assigns devices to targets 4 through 7, is not supported.
Examples - Model 2200 Storage Maps, PTL Addressing

The Model 2200 controller enclosure can be combined with the following:

■ Model 4214R disk enclosure — Ultra2 SCSI with 14 drive bays, single-bus I/O module.
■ Model 4254 disk enclosure — Ultra2 SCSI with 14 drive bays, dual-bus I/O module.

Note: The Model 4214R uses the same storage maps as the Model 4314R, and the Model 4254 uses the same storage maps as the Model 4354R disk enclosures.
■ Model 4354R disk enclosure — Ultra3 SCSI with 14 drive bays, dual-bus I/O module. Table 7 shows the addresses for each device in a three-shelf, dual-bus configuration. A maximum of three Model 4354R disk enclosures can be used with each Model 2200 controller enclosure.

Note: Appendix A contains storageset profiles you can copy and use to create your own system profiles.
Planning Storage Configurations Table 4: PTL addressing, single-bus configuration, six Model 4320R enclosures Model 4310R Disk Enclosure Shelf 6 (Single-Bus) 10 SCSI ID 00 01 02 03 04 05 08 10 11 12 DISK ID Disk61200 9 Disk61100 8 Disk61000 7 Disk60800 6 Disk60500 5 Disk60400 4 Disk60300 3 Disk60200 2 Disk60100 1 Disk60000 Bay Model 4310R Disk Enclosure Shelf 5 (Single-Bus) 10 SCSI ID 00 01 02 03 04 05 08 10 11 12 DISK ID Disk51200 9 Disk51100 8 Disk51000
Planning Storage Configurations Table 4: PTL addressing, single-bus configuration, six Model 4320R enclosures (Continued) Model 4310R Disk Enclosure Shelf 1 (Single-Bus) 10 SCSI ID 00 01 02 03 04 05 08 10 11 12 DISK ID Disk11200 9 Disk11100 8 Disk11000 7 Disk10800 6 Disk10500 5 Disk10400 4 Disk10300 3 Disk10200 2 Disk10100 1 Disk10000 Bay Model 4310R Disk Enclosure Shelf 2 (Single-Bus) 10 SCSI ID 00 01 02 03 04 05 08 10 11 12 DISK ID Disk21200 9 Disk21100 8
Planning Storage Configurations Table 5: PTL addressing, dual-bus configuration, three Model 4350R enclosures Model 4350R Disk Enclosure Shelf 1 (Single-Bus) SCSCSI Bus ASI Bus A SCSI Bus B 10 SCSI ID 00 01 02 03 04 00 01 02 03 04 DISK ID Disk20400 9 Disk20300 8 Disk20200 7 Disk20100 6 Disk20000 5 Disk10400 4 Disk10300 3 Disk10200 2 Disk10100 1 Disk10000 Bay Model 4350R Disk Enclosure Shelf 2 (Single-Bus) SCSCSI Bus ASI Bus A SCSI Bus B 10 SCSI ID 00 01 02 03 04 00
Planning Storage Configurations Table 6: PTL addressing, single-bus configuration, six Model 4314R enclosures Model 4314R Disk Enclosure Shelf 6 (Single-Bus) 14 SCSI ID 00 01 02 03 04 05 08 09 10 11 12 13 14 15 DISK ID Disk61500 13 Disk61400 12 Disk61300 11 Disk61200 10 Disk61100 9 Disk61000 8 Disk60900 7 Disk60800 6 Disk60500 5 Disk60400 4 Disk60300 3 Disk60200 2 Disk60100 1 Disk60000 Bay Model 4314R Disk Enclosure Shelf 5 (Single-Bus) 14 SCSI ID 00 01 02
Planning Storage Configurations Table 6: PTL addressing, single-bus configuration, six Model 4314R enclosures (Continued) Model 4314R Disk Enclosure Shelf 2 (Single-Bus) 14 SCSI ID 00 01 02 03 04 05 08 09 10 11 12 13 14 15 DISK ID Disk21500 13 Disk21400 12 Disk21300 11 Disk21200 10 Disk21100 9 Disk21000 8 Disk20900 7 Disk20800 6 Disk20500 5 Disk20400 4 Disk20300 3 Disk20200 2 Disk20100 1 Disk20000 Bay Model 4314R Disk Enclosure Shelf 3 (Single-Bus) 14 SCSI ID
Planning Storage Configurations Table 6: PTL addressing, single-bus configuration, six Model 4314R enclosures (Continued) Model 4314R Disk Enclosure Shelf 2 (Single-Bus) 14 SCSI ID 00 01 02 03 04 05 08 09 10 11 12 13 14 15 DISK ID Disk21500 13 Disk21400 12 Disk21300 11 Disk21200 10 Disk21100 9 Disk21000 8 Disk20900 7 Disk20800 6 Disk20500 5 Disk20400 4 Disk20300 3 Disk20200 2 Disk20100 1 Disk20000 Bay Model 4314R Disk Enclosure Shelf 3 (Single-Bus) 14 SCSI ID
Planning Storage Configurations Table 6: PTL addressing, single-bus configuration, six Model 4314R enclosures (Continued) Model 4314R Disk Enclosure Shelf 2 (Single-Bus) 14 SCSI ID 00 01 02 03 04 05 08 09 10 11 12 13 14 15 DISK ID Disk21500 13 Disk21400 12 Disk21300 11 Disk21200 10 Disk21100 9 Disk21000 8 Disk20900 7 Disk20800 6 Disk20500 5 Disk20400 4 Disk20300 3 Disk20200 2 Disk20100 1 Disk20000 Bay Model 4314R Disk Enclosure Shelf 3 (Single-Bus) 14 SCSI ID
Planning Storage Configurations Table 6: PTL addressing, single-bus configuration, six Model 4314R enclosures (Continued) Model 4314R Disk Enclosure Shelf 2 (Single-Bus) 14 SCSI ID 00 01 02 03 04 05 08 09 10 11 12 13 14 15 DISK ID Disk21500 13 Disk21400 12 Disk21300 11 Disk21200 10 Disk21100 9 Disk21000 8 Disk20900 7 Disk20800 6 Disk20500 5 Disk20400 4 Disk20300 3 Disk20200 2 Disk20100 1 Disk20000 Bay Model 4314R Disk Enclosure Shelf 3 (Single-Bus) 14 SCSI ID
Planning Storage Configurations Table 6: PTL addressing, single-bus configuration, six Model 4314R enclosures (Continued) Model 4314R Disk Enclosure Shelf 2 (Single-Bus) 14 SCSI ID 00 01 02 03 04 05 08 09 10 11 12 13 14 15 DISK ID Disk21500 13 Disk21400 12 Disk21300 11 Disk21200 10 Disk21100 9 Disk21000 8 Disk20900 7 Disk20800 6 Disk20500 5 Disk20400 4 Disk20300 3 Disk20200 2 Disk20100 1 Disk20000 Bay Model 4314R Disk Enclosure Shelf 3 (Single-Bus) 14 SCSI ID
Planning Storage Configurations Table 6: PTL addressing, single-bus configuration, six Model 4314R enclosures (Continued) Model 4314R Disk Enclosure Shelf 2 (Single-Bus) 14 SCSI ID 00 01 02 03 04 05 08 09 10 11 12 13 14 15 DISK ID Disk21500 13 Disk21400 12 Disk21300 11 Disk21200 10 Disk21100 9 Disk21000 8 Disk20900 7 Disk20800 6 Disk20500 5 Disk20400 4 Disk20300 3 Disk20200 2 Disk20100 1 Disk20000 Bay Model 4314R Disk Enclosure Shelf 3 (Single-Bus) 14 SCSI ID
Planning Storage Configurations Table 6: PTL addressing, single-bus configuration, six Model 4314R enclosures (Continued) Model 4314R Disk Enclosure Shelf 2 (Single-Bus) 14 SCSI ID 00 01 02 03 04 05 08 09 10 11 12 13 14 15 DISK ID Disk21500 13 Disk21400 12 Disk21300 11 Disk21200 10 Disk21100 9 Disk21000 8 Disk20900 7 Disk20800 6 Disk20500 5 Disk20400 4 Disk20300 3 Disk20200 2 Disk20100 1 Disk20000 Bay Model 4314R Disk Enclosure Shelf 3 (Single-Bus) 14 SCSI ID
Planning Storage Configurations Table 6: PTL addressing, single-bus configuration, six Model 4314R enclosures (Continued) Model 4314R Disk Enclosure Shelf 2 (Single-Bus) 14 SCSI ID 00 01 02 03 04 05 08 09 10 11 12 13 14 15 DISK ID Disk21500 13 Disk21400 12 Disk21300 11 Disk21200 10 Disk21100 9 Disk21000 8 Disk20900 7 Disk20800 6 Disk20500 5 Disk20400 4 Disk20300 3 Disk20200 2 Disk20100 1 Disk20000 Bay Model 4314R Disk Enclosure Shelf 3 (Single-Bus) 14 SCSI ID
For this reason, you should avoid using a stripeset to store critical data. Stripesets are more suitable for storing data that can be reproduced easily or whose loss does not prevent the system from supporting its critical mission.

■ Evenly distribute the members across the device ports to balance the load and provide multiple paths.
■ Stripesets may contain between two and 24 members.
Mirrorset Planning Considerations

Mirrorsets (RAID 1) use redundancy to ensure availability, as illustrated in Figure 20. For each primary disk drive, there is at least one mirror disk drive. Thus, if a primary disk drive fails, its mirror drive immediately provides an exact copy of the data. Figure 21 shows a second example of a mirrorset.
Keep these points in mind when planning mirrorsets:

■ Data availability with a mirrorset is excellent but comes with a higher cost—you need twice as many disk drives to satisfy a given capacity requirement. If availability is your top priority, consider using dual-redundant controllers and redundant power supplies.
■ You can configure up to a maximum of 20 mirrorsets per controller or pair of dual-redundant controllers. Each mirrorset may contain up to 6 members.

(Figure: operating system view of a storageset. The host sees a single virtual disk with consecutive blocks 0, 1, 2, 3, 4, 5, and so on, while each member disk holds every nth block; Disk 1, for example, holds blocks 0, 5, 10, and 15.)

■ A RAIDset must include at least 3 disk drives, but no more than 14.
■ A storageset should only contain disk drives of the same capacity. The controller limits the capacity of each member to the capacity of the smallest member in the storageset. Thus, if you combine 9 GB disk drives with 4 GB disk drives in the same storageset, you waste 5 GB of capacity on each 9 GB member.
Planning Storage Configurations p t Mirrorset1 Mirrorset2 Disk 20000 Disk 10100 Disk 20200 A B C Disk 10000 Disk 20100 Disk 10200 B' C' A' Mirrorset3 CXO7289A Figure 23: Striped mirrorset (example 1) The failure of a single disk drive has no effect on the ability of the storageset to deliver data to the host. Under normal circumstances, a single disk drive failure has very little effect on performance.
Plan the mirrorset members, and plan the stripeset that will contain them. Review the recommendations in "Planning Considerations for Storageset" on page 64 and "Mirrorset Planning Considerations" on page 68.

Storageset Expansion Considerations

Storageset expansion joins two storage containers of the same kind by concatenating RAIDsets, stripesets, or individual disks, thereby forming a larger virtual disk that is presented as a single unit.
unpartitioned storageset or device. Partitions are separately addressable storage units; therefore, you can partition a single storageset to service more than one user group or application.

Defining a Partition

Partitions are expressed as a percentage of the storageset or single disk unit that contains them:

■ Mirrorsets and single disk units—the controller allocates the largest whole number of blocks that are equal to or less than the percentage you specify.
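As a worked example (the unit size here is illustrative only): on an 18 GB single-disk unit, requesting a 25-percent partition causes the controller to allocate the largest whole number of blocks at or below 4.5 GB; a later CREATE_PARTITION request with SIZE=LARGEST then claims the remaining free blocks.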
Changing Characteristics Through Switches

CLI command switches allow the user another level of command options. There are three types of switches that modify the storageset and unit characteristics:

■ Storageset switches
■ Initialization switches
■ Unit switches

The following sections describe how to enable/modify switches. They also contain a description of the major CLI command switches.
Specifying Storageset and Partition Switches

The characteristics of a particular storageset can be set by specifying switches when the storageset is added to the controllers' configuration. Once a storageset has been added, the switches can be changed by using a SET command. Switches can be set for partitions and the following types of storagesets:

■ RAIDset
■ Mirrorset

Stripesets have no specific switches associated with their ADD and SET commands.
■ Size
■ Geometry

For details on the use of these switches, refer to the CREATE_PARTITION command in the HP StorageWorks HSG60 and HSG80 Array Controller and Array Controller Software Command Line Interface (CLI) Reference Guide.

Specifying Initialization Switches

Initialization switches set characteristics for established storagesets before they are made into units.
■ CHUNKSIZE=n lets you specify a chunk size in blocks. The relationship between chunk size and request size determines whether striping increases the request rate or the data-transfer rate.

Increasing the Request Rate

A large chunk size (relative to the average request size) increases the request rate by enabling multiple disk drives to respond to multiple requests.
■ Random I/Os that are scattered over all the areas of the disks should use a chunk size of 20 times the average transfer request size. If you do not know the average request size, use a chunk size of 15 times the average transfer request size.
■ If you have mostly sequential reads or writes (like those needed to work with large graphic files), make the chunk size for RAID 0 and RAID 0+1 a small number (for example: 67 sectors).
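As a worked illustration of the first guideline (the request size here is assumed, not prescribed): for random I/O averaging 16 sectors per request, 20 x 16 = 320 sectors, so the stripeset would be initialized with:

INITIALIZE STRIPE1 CHUNKSIZE=320

The chunk size must be chosen at initialization; changing it later requires reinitializing the container, which destroys the data on it.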
Note: DO NOT use SAVE_CONFIGURATION in dual-redundant controller installations. It is not supported and may result in unexpected controller behavior.

Note: HP recommends that you do not use SAVE_CONFIGURATION on every unit and device on the controller.

Destroy/Nodestroy

Specify whether to destroy or retain the user data and metadata when a disk is initialized after it has been used in a mirrorset or as a single-disk unit.
■ SECTORS_PER_TRACK—the number of sectors per track used. The range is from 1 to 255.

Specifying Unit Switches

Several switches control the characteristics of units. The unit switches are described under the SET unit-number command in the HP StorageWorks HSG60 and HSG80 Array Controller and Array Controller Software Command Line Interface (CLI) Reference Guide.
2. Note the position of all the drives contained within D104.
3. Enter the following command to turn off the flashing LEDs:

LOCATE CANCEL

The following is an example of the commands to locate all the drives that make up RAIDset R1:

1. Enter the following command:

LOCATE R1

2. Note the position of all the drives contained within R1.
3. Enter the following command to turn off the flashing LEDs:

LOCATE CANCEL

Example Storage Map—Model 4310R Disk Enclosure

Table 11 shows an example of four Model 4310R disk enclosures (single-bus I/O).

Table 11: Model 4310R disk enclosure, example of storage map (Continued)

Model 4310R Disk Enclosure Shelf 2 (Single-Bus)
Bay:      1         2         3         4         5         6         7         8         9         10
SCSI ID:  00        01        02        03        04        05        08        10        11        12
Unit:     D100      D101      D102      D104      D106      D108      D1        D2        D3        D4
Set:      R1        S1/M1     M3        S2        R2        S3        S4/M5     R3        S5        M7
DISK ID:  Disk20000 Disk20100 Disk20200 Disk20300 Disk20400 Disk20500 Disk20800 Disk21000 Disk21100 Disk21200

■ Unit D103 is a 2-member mirrorset named M4. M4 consists of Disk30200 and Disk40200.
■ Unit D104 is a 3-member stripeset named S2. S2 consists of Disk10300, Disk20300, and Disk30300.
■ Unit D105 is a single (JBOD) disk named Disk40300.
■ Unit D106 is a 3-member RAID 3/5 storageset named R2. R2 consists of Disk10400, Disk20400, and Disk30400.
■ Unit D107 is a single (JBOD) disk named Disk40400.
■ Unit D108 is a 4-member stripeset named S3.
3 Preparing the Host System

This chapter describes how to prepare your OpenVMS host computer to accommodate the HSG80 controller storage subsystem. The following information is included in this chapter:

■ Making a Physical Connection, page 92
■ New Features, ACS 8.8 for OpenVMS, page 95

Refer to Chapter 4 for instructions on how to install and configure the HSG Agent. The Agent for HSG is operating system-specific and polls the storage.
Installing RAID Array Storage System

WARNING: A shock hazard exists at the backplane when the controller enclosure bays or cache module bays are empty. Be sure the enclosures are empty, then mount the enclosures into the rack. DO NOT use the disk enclosure handles to lift the enclosure. The handles cannot support the weight of the enclosure. Only use these handles to position the enclosure in the mounting brackets.
3. Install the elements. Install the disk drives. Make sure you install blank panels in any unused bays.

Fibre Channel cabling information is shown to illustrate supported configurations. In a dual-bus disk enclosure configuration, disk enclosures 1, 2, and 3 are stacked below the controller enclosure—two SCSI buses per enclosure (see Figure 27).
Preparing the Host System 1 8 2 3 4 5 7 6 CXO7383A 1 3 5 7 SCSI Bus 1 Cable SCSI Bus 3 Cable SCSI Bus 5 Cable AC Power Inputs 2 4 6 8 SCSI Bus 2 Cable SCSI Bus 4 Cable SCSI Bus 6 Cable Fibre Channel Ports Figure 27: Dual-Bus Enterprise Storage RAID Array Storage System 90 HSG80 ACS Solution Software V8.
Preparing the Host System 6 5 4 8 1 7 2 3 CXO7382A 1 3 5 7 SCSI Bus 1 Cable SCSI Bus 3 Cable SCSI Bus 5 Cable AC Power Inputs 2 4 6 8 SCSI Bus 2 Cable SCSI Bus 4 Cable SCSI Bus 6 Cable Fibre Channel Ports Figure 28: Single-Bus Enterprise Storage RAID Array Storage System HSG80 ACS Solution Software V8.
Making a Physical Connection

To attach a host computer to the storage subsystem, install one or more host bus adapters into the computer. A Fibre Channel (FC) cable goes from the host bus adapter to an FC switch.

Preparing to Install the Host Bus Adapter

Before installing the host bus adapter:

1. Perform a complete backup of the entire system.
2. Shut down the computer system or perform a hot addition of the adapter based upon directions for that server.
Verifying/Installing Required Versions

Refer to the Release Notes for OpenVMS to determine compatibility with the HSG80 controller.

Solution Software Upgrade Procedures

Use the following procedures for upgrades to your Solution Software. It is considered best practice to follow this order of procedures:

1. Perform backups of data prior to upgrade.
2. Verify operating system versions; upgrade operating systems to supported versions and patch levels.
3.
5. If you have OpenVMS version 7.2-1 on an Alpha computer with MultiNet and/or TCPware TCP/IP stacks, you must install the security patch from the Process Software web site at http://www.process.com.
6. If you have an HSJ40 controller, check the controller firmware revision level. If your controller is at version 3.2J, you must upgrade to version 3.4J before installing the Agent. This is due to an issue with the 3.2J firmware.
2. Upgrade ACS to Version 8.8 using the instructions provided in the HP StorageWorks HSG60 and HSG80 Array Controller and Array Controller Software Maintenance and Service Guide. Refer to the rolling upgrade procedure section and read it carefully before attempting the upgrade.

New Features, ACS 8.8 for OpenVMS

The following are new features implemented in ACS 8.8 for OpenVMS:
The lock is maintained in the failover information (fi) section of each controller's nonvolatile memory (NV). When the state of the lock is changed on one controller, the other controller is updated as well. The existing ADD CONN CLI command is not affected by the state of the lock.
Example of Host Connection Table Unlock (new output shown in bold):

AP_Bot> show this
Controller:
    HSG80 (C) DEC CX00000001 Software V87, Hardware 0000
    NODE_ID          = 5000-1FE1-FF00-0090
    ALLOCATION_CLASS = 1
    SCSI_VERSION     = SCSI-3
    Configured for dual-redundancy with ZG02804912
        In dual-redundant configuration
    Device Port SCSI address 6
    Time: 10-SEP-2001 15:45:54
    Command Console LUN is lun 0 (IDENTIFIER = 99)
    Host Connection Table is NOT locked
Host PORT_1:
    Reported PORT_ID = 5000-1FE
Example of Host Connection Table Locked (new output shown in bold):

AP_Bot> show this
Controller:
    HSG80 (C) DEC CX00000001 Software XC21P-0, Hardware 0000
    NODE_ID          = 5000-1FE1-FF00-0090
    ALLOCATION_CLASS = 1
    SCSI_VERSION     = SCSI-3
    Configured for dual-redundancy with ZG02804912
        In dual-redundant configuration
    Device Port SCSI address 6
    Time: 10-SEP-2001 15:48:24
    Command Console LUN is lun 0 (IDENTIFIER = 99)
    Host Connection Table is LOCKED
Host PORT_1:
    Reported PORT_ID = 5000-1F
The state of the connection table can be displayed using:

CLI> SHOW CONN

<<< LOCKED >>> appears in the title area when the connection table is locked. If the table is unlocked, or if locking is not supported (HOST_FC only), the title area looks the same as it did for ACS version 8.7. The FULL switch displays the rejected hosts, with an index.

Adding Rejected Host Connections to a Locked Host Connection Table

With ACS version 8.8, rejected host connections can be added manually while the connection table remains locked.
■ To create a new SAN: The system administrator unlocks the connection table, connects the desired hosts, and then locks the connection table. As the hosts are connected, they log in to the controller pair. After the connection table is locked, host logins are rejected until the system administrator manually adds the host to the connection table.
■ To add a new host to a SAN: A new host is added to the fabric that needs connectivity to the HSG80.
Adding Management Agent Host Systems

The following command enables access to the management functions. The user can specify all systems, or a list of systems:

HSG80> SET ENABLE_MANAGERS=ALL

- or -

HSG80> SET ENABLE_MANAGERS=(host list…)

Display Enabled Management Agents

The following command displays a list of the systems currently enabled to perform management functions.
In the event that all connections are enabled, the display appears as follows.
Note: The Selective Management Presentation feature only applies to commands received by way of a SCSI SEND_DIAG command. If the HSG80 receives a SEND_DIAG command over a disabled management connection, an ILLEGAL_REQUEST CHECK_COND will be returned with ASC=0x91 and ASCQ=0x08. Any command delivered to the HSG80 serial port bypasses this constraint and will be processed.
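A usage sketch of the enable-managers command described above (the host names MGMT1 and MGMT2 are placeholders, not required names):

HSG80> SET ENABLE_MANAGERS=(MGMT1,MGMT2)

After this command, only MGMT1 and MGMT2 may issue management commands by way of SEND_DIAG; SET ENABLE_MANAGERS=ALL re-enables management from all connections.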
Example of error message text:

CLI> add snap d2 disk10100 d1 use_parent_wwid
A new WWID has been allocated for this unit because the linked WWID for d2 is already in use.
CLI> run clonew

Implementation Notes

Add Snap with Linked WWID: The user has a script that runs every night to create a snapshot, run a backup to tape from the snapshot, and then delete the snapshot. Each time this is done, a new WWID is allocated.
Manual Clone Creation: The user has a set of scripts that create clones and wants to update them to use linked WWIDs. At some point in the script there will be an "add unit" command, and the switch "parent_wwid=" must be provided. For example:

CLI> add unit d2 disk10100 parent_wwid=d1

This would create a unit d2 from device disk10100 whose WWID would be the linked WWID associated with unit d1.
CLI output - feature disabled:

AP_TOP> show this
Controller:
    HSG80 ZG02804912 Software V87S-0, Hardware E12
    NODE_ID          = 5000-1FE1-FF00-0090
    ALLOCATION_CLASS = 1
    SCSI_VERSION     = SCSI-3
    Configured for MULTIBUS_FAILOVER with ZG02804288
        In dual-redundant configuration
    Device Port SCSI address 7
    Time: 22-NOV-2001 01:14:32
    Command Console LUN is lun 0 (IDENTIFIER = 99)
    Host Connection Table is NOT locked
    Smart Error Eject Disabled
Host PORT_1:
    Reported PORT_ID = 5000-1FE1-FF00-0093
POR
Battery:
    NOUPS FULLY CHARGED
    Expires:
    WARNING: UNKNOWN EXPIRATION DATE!
    WARNING: AN UNKNOWN NUMBER OF DEEP DISCHARGES HAVE OCCURRED!
CLI output - feature enabled:

AP_TOP> show this
Controller:
    HSG80 ZG02804912 Software V87S-0, Hardware E12
    NODE_ID          = 5000-1FE1-FF00-0090
    ALLOCATION_CLASS = 1
    SCSI_VERSION     = SCSI-3
    Configured for MULTIBUS_FAILOVER with ZG02804288
        In dual-redundant configuration
    Device Port SCSI address 7
    Time: 22-NOV-2001 01:17:47
    Command Console LUN is lun 0 (IDENTIFIER = 99)
    Host Connection Table is NOT locked
    Smart Error Eject Enabled
Host PORT_1:
    Reported PORT_ID = 5000-1FE1-FF00-0093
PORT_
NOUPS FULLY CHARGED
Expires:
WARNING: UNKNOWN EXPIRATION DATE!
WARNING: AN UNKNOWN NUMBER OF DEEP DISCHARGES HAVE OCCURRED!

Error Threshold for Drives

A new limit for drive errors can be set. Once the limit is reached, the drive is removed from any redundant sets to which it belongs and put into the failed set. Errors counted are medium and recovered errors; there is no need to add hardware errors to this count, as the drive fails immediately if a hardware error is encountered.
4 Installing and Configuring the HSG Agent

StorageWorks Command Console (SWCC) enables real-time configuration of the storage environment and permits you to monitor and configure the storage connected to the HSG80 controller.
Why Use StorageWorks Command Console (SWCC)?

StorageWorks Command Console (SWCC) enables you to monitor and configure the storage connected to the HSG80 controller. SWCC consists of Client and Agent.

■ The Client provides pager notification and lets you manage your virtual disks. The Client runs on Windows Server 2003 (32-bit), Windows 2000 with Service Pack 3 or 4, and Windows NT 4.0 with Service Pack 6A or above.
Note: For serial and SCSI connections, the Agent is not required for creating virtual disks.

Installation and Configuration Overview

Table 13 provides an overview of the installation.

Table 13: Installation and Configuration Overview

1. Verify that your hardware has been set up correctly. See the previous chapters in this guide.
2. Verify that you have a network connection for the Client and Agent systems.
About the Network Connection for the Agent

The network connection, shown in Figure 29, displays the subsystem connected to a hub or a switch. SWCC can consist of any number of Clients and Agents in a network. However, it is suggested that you install only one Agent on a computer. By using a network connection, you can configure and monitor the subsystem from anywhere on the LAN. If you have a WAN or a connection to the Internet, monitor the subsystem with TCP/IP.
Installing and Configuring the HSG Agent 7 1 A T V A T -S H V T N E C O O A T V O 4 4 7 A T V A T -S H 2 V T N E C O O 5 4 3 6 CXO7240A 1 Agent system (has the Agent 5 Hub or switch software) 2 TCP/IP network 6 HSG80 controller and its device subsystem 3 Client system (has the Client 7 Servers software) 4 Fibre Channel cable Figure 29: An example of a network connection HSG80 ACS Solution Software V8.
Before Installing the Agent

The Agent requires the minimum system requirements, as defined in the release notes for your operating system. The program is designed to operate with the Client version 2.5 on Windows 2000, Windows NT, or Windows Server 2003 (32-bit). Verify that your system meets the minimum requirements by completing the following steps.

1. Verify that you have one of the following:
— TCP/IP Services for OpenVMS (version 5.
Downloading the Host Kit Software From the Web

The host kit software is available for download. You can save the software to your computer or create a CD-ROM. Platform kit software is stored on the download web site based on operating system. Follow the steps below to obtain the software from the web site.

1. Go to http://h18006.www1.hp.com/products/storageworks/ma8kema12k/kits.html.
2. Select the kit for download.
3. To create a local directory on your system, enter the following at the command prompt. Later in this procedure, you will copy the installation file from the CD-ROM to this new directory. Replace DKB100 with the device name on the system that is connected to the controller.

$ CREATE/DIRECTORY DKB100:[SWCC]

A directory named DKB100:[SWCC] has been created.

4.
11. Run the configuration program. Enter the following at the command prompt:

$ @SYS$MANAGER:SWCC_CONFIG

If the installation does not detect any configuration files from a previous installation, a configuration script appears when you run the configuration program. During the configuration, you will need to do at least the following:

— Enter the name of the client system on which you installed the Client software. You can enter more than one client system.

SWCC Agent for HS* Controllers Configuration Menu

Agent is enabled as a TCP/IP Services for OpenVMS service.

Table 14: Information Needed to Configure Agent

Adding a client system entry: For a client system to receive updates from the Agent, you must add it to the Agent's list of client system entries. The Agent will only send information to client system entries that are on this list. In addition, adding a client system entry allows you to access the Agent system from the Navigation Tree on that client system.

Deleting a client system entry: When you remove a client system from the Agent's list, you are instructing the Agent to stop sending updates to that client system. In addition, you will be unable to access this Agent system from the Navigation Tree.

Email notification: Modify the file pagemail.com in directory sys$sysdevice:[swcc$agent].

SWCC Agent for HS* Controllers Configuration Menu

5) Add a Client
6) Remove a Client
7) View Clients

Storage Subsystem Options:
8) Add a subsystem
9) Remove a subsystem
10) View subsystems

E) Exit configuration procedure

Removing the Agent

The following instructions describe how to remove the HSG Agent from OpenVMS.

WARNING: This OpenVMS uninstallation will remove all configuration files! To fully remove agent software, including client and storage data:

1.
Caution: Do not uninstall the Agent if you want to preserve configuration information. If you only want to install an upgrade, stop the Agent, and then install the new version. Older versions will be automatically removed before the update, but all configuration information will be preserved.

The following is the detailed procedure for fully removing the Agent software:

1.
5 FC Configuration Procedures

This chapter describes procedures to configure a subsystem that uses Fibre Channel (FC) fabric topology. In fabric topology, the controller connects to its hosts through switches.
Establishing a Local Connection

A local connection is required to configure the controller until a command console LUN (CCL) is established using the CLI. Communication with the controller can be through the CLI or SWCC.

The maintenance port, shown in Figure 30, provides a way to connect a maintenance terminal. The maintenance terminal can be an EIA-423 compatible terminal or a computer running a terminal emulator program. The maintenance port accepts a standard RS-232 jack.
Setting Up a Single Controller

Powering On and Establishing Communication

1. Connect the computer or terminal to the controller, as shown in Figure 30. The connection to the computer is through the COM1 or COM2 port.
2. Turn on the computer or terminal.
3. Apply power to the storage subsystem.
4. Verify that the computer or terminal is configured as follows:
— 9600 baud
— 8 data bits
— 1 stop bit
— no parity
— no flow control
5. Press Enter.
FC Configuration Procedures 4 1 2 5 3 5 4 CXO6881B 1 Controller 4 Cable from the switch to the host Fibre Channel 2 Host port 1 adapter 3 Host port 2 5 FC switch Figure 31: Single controller cabling Configuring a Single Controller Using CLI To configure a single controller using CLI involves the following processes: ■ Verifying the Node ID and Checking for Any Previous Connections. ■ Configuring Controller Settings. ■ Restart the Controller. ■ Setting Time and Verifying all Commands.
The node ID is located in the third line of the SHOW THIS result:

HSG80> SHOW THIS
Controller:
    HSG80 ZG80900583 Software V8.8, Hardware E11
    NODE_ID          = 5000-1FE1-0001-3F00
    ALLOCATION_CLASS = 0

If the node ID is present, go to step 5. If the node ID is all zeroes, enter the node ID and checksum, which are located on a sticker on the controller enclosure.
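A minimal sketch of entering the node ID (the worldwide name and the two-character checksum shown here are examples; use the values from your enclosure's sticker):

SET THIS NODE_ID=5000-1FE1-0001-3F00 A9

Here A9 stands in for the checksum printed on the worldwide name label.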
6. Assign an identifier for the communication LUN (also called the command console LUN, or CCL). The CCL must have a unique identifier that is a decimal number in the range 1 to 32767, and which is different from the identifiers of all units. Use the following syntax:

SET THIS IDENTIFIER=N

The identifier must be unique among all the controllers attached to the fabric within the specified allocation class.

7. Set the topology for the controller.
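For the switched fabric configurations described in this chapter, the topology commands would take the following form (a sketch; confirm the switch values for your configuration in the CLI reference guide):

SET THIS PORT_1_TOPOLOGY=FABRIC
SET THIS PORT_2_TOPOLOGY=FABRIC

If set correctly, the SHOW THIS output later in this procedure reports the topology for each port.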
When FRUTIL asks if you intend to replace the battery, answer "Y":

Do you intend to replace this controller's cache battery? Y/N [N] Y

FRUTIL will print out a procedure, but will not give you a prompt. Ignore the procedure and press the Enter key.

3. Set up any additional optional controller settings, such as changing the CLI prompt.
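For example, to change the CLI prompt (the prompt string here is arbitrary; any short string works):

SET THIS PROMPT="HSG80_A> "

Prompts such as AP_Bot> and AP_TOP> seen in output samples elsewhere in this guide are custom prompts of this kind.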
The following sample is a result of a SHOW THIS command, with the areas of interest in bold.

Controller:
    HSG80 ZG94214134 Software V8.
5. Turn on the switches, if not done previously. If you want to communicate with the Fibre Channel switches through Telnet, set an IP address for each switch. See the manuals that came with the switches for details.

Plugging in the FC Cable and Verifying Connections

6. Plug the Fibre Channel cable from the first host bus adapter into the switch. Enter the SHOW CONNECTIONS command to view the connection table:

SHOW CONNECTIONS

7.
Setting Up a Controller Pair

Powering Up and Establishing Communication

1. Connect the computer or terminal to the controller as shown in Figure 30. The connection to the computer is through the COM1 or COM2 ports.
2. Turn on the computer or terminal.
3. Apply power to the storage subsystem.
4. Configure the computer or terminal as follows:
— 9600 baud
— 8 data bits
— 1 stop bit
— no parity
— no flow control
5. Press Enter.

Figure 32 shows a controller pair with failover cabling, with one HBA per server and the HSG80 controllers in transparent failover mode.

Figure 32: Controller pair failover cabling (1 = Controller A, 2 = Controller B, 3 = host port 1, 4 = host port 2, 5 = cable from the switch to the host FC adapter, 6 = FC switch)

Configuring a Controller Pair Using CLI

Configuring a controller pair using CLI involves the following processes:

■ Configuring Controller Settings
■ Restarting the Controller
■ Setting Time and Verifying All Commands
The node ID is located in the third line of the SHOW THIS result:

HSG80> SHOW THIS
Controller:
    HSG80 ZG80900583 Software V8.8, Hardware E11
    NODE_ID          = 5000-1FE1-0001-3F00
    ALLOCATION_CLASS = 0

If the node ID is present, go to step 5. If the node ID is all zeroes, enter the node ID and checksum, which are located on a sticker on the controller enclosure.
6. Assign an identifier for the communication LUN (also called the command console LUN, or CCL). The CCL must have a unique identifier that is a decimal number in the range 1 to 32767, and which is different from the identifiers of all units. Use the following syntax:

SET THIS IDENTIFIER=N

The identifier must be unique among all the controllers attached to the fabric within the specified allocation class.

7. Set the topology for the controller.
When FRUTIL asks if you intend to replace the battery, answer "Y":

Do you intend to replace this controller's cache battery? Y/N [N] Y

FRUTIL will print out a procedure, but will not give you a prompt. Ignore the procedure and press Enter.

12. Set up any additional optional controller settings, such as changing the CLI prompt.
14. Verify the node ID, allocation class, SCSI version, failover mode, identifier, and port topology. The following display is a sample result of a SHOW THIS command, with the areas of interest in bold.

Controller:
    HSG80 ZG94214134 Software V8.
15. Turn on the switches if not done previously. If you want to communicate with the FC switches through Telnet, set an IP address for each switch. See the manuals that came with the switches for details.

Plugging in the FC Cable and Verifying Connections

16. Plug the FC cable from the first host adapter into the switch. Enter a SHOW CONNECTIONS command to view the connection table:

SHOW CONNECTIONS

The first connection will have one or more entries in the connection table.
FC Configuration Procedures Verifying Installation To verify installation for your OpenVMS host, enter the following command: SHOW DEVICES Your host computer should report that it sees a device whose designation matches the identifier (CCL) that you assigned the controllers. For example, if you assigned an identifier of 88, your host computer will see device $1$GGA88. This verifies that your host computer is communicating with the controller pair.
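If you prefer a narrower check, a sketch from the DCL prompt, assuming the identifier 88 used in the example above (adjust the device name to match your own identifier):
$ SHOW DEVICES $1$GGA88:
If the device does not appear, recheck the cabling, the identifier, and the connection table before proceeding.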
FC Configuration Procedures ■ Configuring a Single-Disk Unit (JBOD), page 145 ■ Configuring a Partition, page 146 Figure 33: Storage container types (CXO6677A). Containers include single devices (JBOD), partitions, and storagesets: stripesets (R0), mirrorsets (R1), RAIDsets (R3/5), and striped mirrorsets (R0+1). Configuring a Stripeset 1. Create the stripeset by adding its name to the controller's list of storagesets and by specifying the disk drives it contains.
FC Configuration Procedures For example: The commands to create Stripe1, a stripeset consisting of three disks (DISK10000, DISK20000, and DISK30000) and having a chunksize of 128: ADD STRIPESET STRIPE1 DISK10000 DISK20000 DISK30000 INITIALIZE STRIPE1 CHUNKSIZE=128 SHOW STRIPE1 Configuring a Mirrorset 1. Create the mirrorset by adding its name to the controller's list of storagesets and by specifying the disk drives it contains.
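The initialize, verify, and unit-assignment steps for a mirrorset parallel those shown above for a stripeset. A minimal sketch, with the name Mirr1 and its two member disks chosen only for illustration:
ADD MIRRORSET MIRR1 DISK10000 DISK20000
INITIALIZE MIRR1
SHOW MIRR1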
FC Configuration Procedures Configuring a RAIDset 1. Create the RAIDset by adding its name to the controller's list of storagesets and by specifying the disk drives it contains. Optionally, you can specify RAIDset switch values: ADD RAIDSET RAIDSET-NAME DISKNNNNN DISKNNNNN DISKNNNNN SWITCHES Note: See the ADD RAIDSET command in the HP StorageWorks HSG60 and HSG80 Array Controller and Array Controller Software Command Line Interface (CLI) Reference Guide for a description of the RAIDset switches.
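For example, a minimal sketch, with the name Raid9 and three member disks chosen only for illustration; the remaining initialize and verify steps follow the same pattern as the other storageset types:
ADD RAIDSET RAID9 DISK10000 DISK20000 DISK30000
INITIALIZE RAID9
SHOW RAID9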
FC Configuration Procedures 2. Create a stripeset and specify the mirrorsets it contains: ADD STRIPESET STRIPESET-NAME MIRRORSET-1 MIRRORSET-2....MIRRORSET-N 3. Initialize the striped mirrorset, specifying any desired switches: INITIALIZE STRIPESET-NAME SWITCH See “Specifying Initialization Switches” on page 77 for a description of the initialization switches. 4. Verify the striped mirrorset configuration: SHOW STRIPESET-NAME
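For example, a minimal sketch of the whole procedure, with the mirrorset and stripeset names chosen only for illustration:
ADD MIRRORSET MIRR1 DISK10000 DISK20000
ADD MIRRORSET MIRR2 DISK30000 DISK40000
ADD STRIPESET STRIPEM1 MIRR1 MIRR2
INITIALIZE STRIPEM1
SHOW STRIPEM1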
FC Configuration Procedures Configuring a Partition 1. Initialize the storageset or disk drive, specifying any desired switches: INITIALIZE STORAGESET-NAME SWITCHES or INITIALIZE DISK-NAME SWITCHES See “Specifying Initialization Switches” on page 77 for a description of the initialization switches. 2. Create each partition in the storageset or disk drive by indicating the partition's size.
FC Configuration Procedures For example: The commands to create RAID1, a three-member RAIDset, then partition it into two storage units are shown below. ADD RAIDSET RAID1 DISK10000 DISK20000 DISK30000 INITIALIZE RAID1 CREATE_PARTITION RAID1 SIZE=25 CREATE_PARTITION RAID1 SIZE=LARGEST SHOW RAID1 Assigning Unit Numbers and Unit Qualifiers Each storageset, partition, or single (JBOD) disk must be assigned a unit number for the host to access.
FC Configuration Procedures For example: To assign unit D4 to DISK20300, use the following command: ADD UNIT D4 DISK20300 Assigning a Unit Number to a Partition To assign a unit number to a partition, use the following syntax: ADD UNIT UNIT-NUMBER STORAGESET-NAME PARTITION=PARTITION-NUMBER For example: To assign unit D100 to partition 3 of mirrorset mirr1, use the following command: ADD UNIT D100 MIRR1 PARTITION=3 Assigning Unit Identifiers One unique step is required when configuring storage units for OpenVMS: each unit must be assigned a unit identifier (for example, SET D102 IDENTIFIER=102, as shown in the Chapter 6 configuration example).
FC Configuration Procedures Using SWCC to Specify LUN ID Alias for a Virtual Disk Setting a LUN ID alias for a virtual disk is the same as setting a unit identifier. To set the LUN ID alias for a previously created virtual disk, perform the following procedure: 1. Open the storage window, where you see the properties for that virtual disk. 2. Click on the Settings Tab to see changeable properties. 3. Click on the Enable LUN ID Alias button. 4. Enter the LUN ID alias (identifier) in the appropriate field.
FC Configuration Procedures Configuration Options Changing the CLI Prompt To change the CLI prompt, enter a 1- to 16-character string as the new prompt, according to the following syntax: SET THIS_CONTROLLER PROMPT = “NEW PROMPT” If you are configuring dual-redundant controllers, also change the CLI prompt on the “other controller.” Use the following syntax: SET OTHER_CONTROLLER PROMPT = “NEW PROMPT” Note: It is suggested that the prompt name reflect some information about the controllers.
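For example, a minimal sketch, assuming you want each prompt to identify the cabinet and the controller position (the strings are illustrative only):
SET THIS_CONTROLLER PROMPT="CAB1_TOP> "
SET OTHER_CONTROLLER PROMPT="CAB1_BOT> "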
FC Configuration Procedures Note: This procedure assumes that the disks that you are adding to the spareset have already been added to the controller's list of known devices. To add the disk drive to the controller's spareset list, use the following syntax: ADD SPARESET DISKNNNNN Repeat this step for each disk drive you want to add to the spareset: For example: The following example shows the syntax for adding DISK11300 and DISK21300 to the spareset.
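The two commands the example refers to, following the ADD SPARESET syntax above:
ADD SPARESET DISK11300
ADD SPARESET DISK21300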
FC Configuration Procedures To disable autospare, use the following command: SET FAILEDSET NOAUTOSPARE During initialization, AUTOSPARE checks to see if the new disk drive contains metadata. Metadata is information the controller writes on the disk drive when the disk drive is configured into a storageset. Therefore, the presence of metadata indicates that the disk drive belongs to, or has been used by, a storageset. If the disk drive contains metadata, initialization stops.
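To turn the feature back on, the enable form drops the NO prefix; a minimal sketch (verify the switch spelling against the CLI reference guide for your ACS version):
SET FAILEDSET AUTOSPARE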
FC Configuration Procedures Displaying the Current Switches To display the current switches for a storageset or single-disk unit, enter a SHOW command for that item: SHOW STORAGESET-NAME or SHOW DEVICE-NAME Note: The FULL switch is not required when showing a particular device. It is used when showing all devices, for example, SHOW DEVICES FULL.
FC Configuration Procedures Verifying Storage Configuration from Host This section briefly describes how to verify that multiple paths exist to virtual disk units under OpenVMS. After configuring units (virtual disks) through either the CLI or SWCC, reboot the host to enable access to the new storage and enter the following command to rescan the bus: $ MC SYSMAN IO AUTOCONFIGURE After the host restarts, verify that the disk is correctly presented to the host.
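One way to inspect the result from DCL is shown below; this is a sketch that assumes a unit with identifier 101, which OpenVMS presents as $1$DGA101 (substitute your own identifier):
$ SHOW DEVICE/FULL $1$DGA101:
On a multiple-bus configuration, the full display should list an I/O path for each host bus adapter connection to the unit.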
Using CLI for Configuration 6 This chapter presents an example of how to configure a storage subsystem using the Command Line Interpreter (CLI). The CLI configuration example shown assumes: ■ A normal, new controller pair, which includes: — NODE ID set — No previous failover mode — No previous topology set ■ Full array with no expansion cabinet ■ PCMCIA cards installed in both controllers A storage subsystem example is shown in Figure 34.
Using CLI for Configuration Figure 34 shows an example storage system map for the BA370 enclosure. Details on building your own map are described in Chapter 2. Templates to help you build your storage map are supplied in Appendix A.
Using CLI for Configuration multiple-bus failover to work. For most operating systems, it is desirable to have all connections to the host enabled. The example system, shown in Figure 36, contains three non-clustered VMS hosts.
Using CLI for Configuration "RED" "GREY" "BLUE" D1 D0 D2 D101 D102 D120 CXO7110B Figure 36: Example, logical or virtual disks comprised of storagesets CLI Configuration Example Text conventions used in this example are listed below: ■ Text in italics indicates an action you take. ■ Text in THIS FORMAT, indicates a command you type. Be certain to press Enter after each command. ■ Text enclosed within a box, indicates information that is displayed by the CLI interpreter.
Using CLI for Configuration SET THIS SCSI_VERSION=SCSI-3 SET THIS IDENTIFIER=88 SET THIS PORT_1_TOPOLOGY=FABRIC SET THIS PORT_2_TOPOLOGY=FABRIC SET OTHER PORT_1_TOPOLOGY=FABRIC SET OTHER PORT_2_TOPOLOGY=FABRIC SET THIS ALLOCATION_CLASS=1 RESTART OTHER RESTART THIS SET THIS TIME=10-Mar-2001:12:30:34 RUN FRUTIL Do you intend to replace this controller's cache battery? Y/N [Y] Y Plug serial cable from maintenance terminal into bottom controller. Note: Bottom controller (B) becomes “this” controller.
Using CLI for Configuration RENAME !NEWCON00 RED1B1 SET RED1B1 OPERATING_SYSTEM=VMS RENAME !NEWCON01 RED1A1 SET RED1A1 OPERATING_SYSTEM=VMS SHOW CONNECTIONS Note: Connection table sorts alphabetically.
Using CLI for Configuration RENAME !NEWCON02 RED2B2 SET RED2B2 OPERATING_SYSTEM=VMS RENAME !NEWCON02 RED2A2 SET RED2A2 OPERATING_SYSTEM=VMS SHOW CONNECTIONS Connection Name RED1A1 Operating System VMS Controller OTHER Port 1 Address XXXXXX Status Unit Offset OL other 0 HOST_ID=XXXX-XXXX-XXXX-XXXX ADAPTER_ID=XXXX-XXXX-XXXX-XXXX RED1B1 1 VMS THIS XXXXXX OL this 0 HOST_ID=XXXX-XXXX-XXXX-XXXX ADAPTER_ID=XXXX-XXXX-XXXX-XXXX RED2A2 2 VMS OTHER XXXXXX OL other 0 HOST_ID=XXXX-XXXX-XXXX-XX
Using CLI for Configuration (Continued)
Connection Name  Operating System  Controller  Port  Address  Status    Unit Offset
BLUE1A1          VMS               OTHER       1     XXXXXX   OL other  0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
BLUE1B1          VMS               THIS        1     XXXXXX   OL this   0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
BLUE2A2          VMS               OTHER       2     XXXXXX   OL other  0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
BLUE2B2          VMS               THIS        2     XXXXXX   OL this   0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
Using CLI for Configuration SET CONNECTION BLUE1A1 UNIT_OFFSET=100 SET CONNECTION BLUE1B1 UNIT_OFFSET=100 SET CONNECTION BLUE2A2 UNIT_OFFSET=100 SET CONNECTION BLUE2B2 UNIT_OFFSET=100 RUN CONFIG ADD RAIDSET R1 DISK10000 DISK20000 DISK30000 DISK40000 DISK50000 DISK60000 INITIALIZE R1 ADD UNIT D102 R1 DISABLE_ACCESS_PATH=ALL SET D102 ENABLE_ACCESS_PATH=(RED1A1, RED1B1, RED2A2, RED2B2) SET D102 IDENTIFIER=102 ADD RAIDSET R2 DISK10100 DISK20100 DISK30100 DISK40100 DISK50100 DISK60100 INITIALIZE R2 ADD UNIT D12
Using CLI for Configuration INITIALIZE DISK50300 ADD UNIT D101 DISK50300 DISABLE_ACCESS_PATH=ALL SET D101 ENABLE_ACCESS_PATH=(BLUE1A1, BLUE1B1, BLUE2A2, BLUE2B2) SET D101 IDENTIFIER=101 ADD SPARESET DISK60300 SHOW UNITS FULL 164 HSG80 ACS Solution Software V8.
Backing Up, Cloning, and Moving Data 7 This chapter includes the following topics: ■ Backing Up Subsystem Configurations, page 166 ■ Creating Clones for Backup, page 167 ■ Moving Storagesets, page 171 HSG80 ACS Solution Software V8.
Backing Up, Cloning, and Moving Data Backing Up Subsystem Configurations The controller stores information about the subsystem configuration in its nonvolatile memory. This information could be lost if the controller fails or when you replace a module in the subsystem. Use the following command to produce a display that shows if the save configuration feature is active and which devices are being used to store the configuration.
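The display in question comes from a full listing of the controller settings; a sketch, using the CLI conventions followed throughout this guide:
SHOW THIS_CONTROLLER FULL
The save configuration feature itself is enabled per container with the SAVE_CONFIGURATION initialization switch, for example:
INITIALIZE DISK10000 SAVE_CONFIGURATION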
Backing Up, Cloning, and Moving Data Creating Clones for Backup Use the CLONE utility to duplicate the data on any unpartitioned single-disk unit, stripeset, mirrorset, or striped mirrorset in preparation for backup. When the cloning operation is complete, you can back up the clones rather than the storageset or single-disk unit, which can continue to service its I/O load. When you are cloning a mirrorset, CLONE does not need to create a temporary mirrorset.
Backing Up, Cloning, and Moving Data Figure 37: CLONE utility steps for duplicating unit members (CXO5510A). The figure shows CLONE adding a new member to a temporary mirrorset alongside the unit's disk (Disk10300 in the figure), copying the data, and then breaking out the new member as a clone of Disk10300. Use the following steps to clone a single-disk unit, stripeset, or mirrorset: 1. Establish a connection to the controller that accesses the unit you want to clone. 2. Start CLONE using the following command: RUN CLONE
Backing Up, Cloning, and Moving Data The following example shows the commands you would use to clone storage unit D98. The clone command terminates after it creates storage unit D99, a clone or copy of D98. RUN CLONE CLONE LOCAL PROGRAM INVOKED UNITS AVAILABLE FOR CLONING: 98 ENTER UNIT TO CLONE? 98 CLONE WILL CREATE A NEW UNIT WHICH IS A COPY OF UNIT 98. ENTER THE UNIT NUMBER WHICH YOU WANT ASSIGNED TO THE NEW UNIT? 99 THE NEW UNIT MAY BE ADDED USING ONE OF THE FOLLOWING METHODS: 1.
Backing Up, Cloning, and Moving Data USE AVAILABLE DEVICE DISK20300(SIZE=832317) FOR MEMBER DISK10000(SIZE=832317) (Y,N) [Y]? Y MIRROR DISK10000 C_MB SET C_MB NOPOLICY SET C_MB MEMBERS=2 SET C_MB REPLACE=DISK20300 COPY IN PROGRESS FOR EACH NEW MEMBER. PLEASE BE PATIENT... . .
Backing Up, Cloning, and Moving Data Moving Storagesets You can move a storageset from one subsystem to another without destroying its data. You also can follow the steps in this section to move a storageset to a new location within the same subsystem. Caution: Move only normal storagesets. Do not move storagesets that are reconstructing or reduced, or data corruption will result. See the release notes for the version of your controller software for information on which drives can be supported.
Backing Up, Cloning, and Moving Data 5. Delete each disk drive, one at a time, that the storageset contained. Use the following syntax: DELETE DISK-NAME DELETE DISK-NAME DELETE DISK-NAME 6. Remove the disk drives and move them to their new PTL locations. 7. Again add each disk drive to the controller's list of valid devices. Use the following syntax: ADD DISK DISK-NAME PTL-LOCATION ADD DISK DISK-NAME PTL-LOCATION ADD DISK DISK-NAME PTL-LOCATION 8. Recreate the storageset by adding its name to the controller's list of valid storagesets and by specifying the disk drives it contains, and then recreate the unit, as shown in the example that follows.
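A sketch of the old-cabinet half of the example that follows, using the same storageset (RAID99), unit (D100), and disks as the new-cabinet commands; check it against steps 1 through 7 above before use:
Old cabinet
SHOW RAID99
DELETE D100
DELETE RAID99
DELETE DISK10000
DELETE DISK10100
DELETE DISK20000
DELETE DISK20100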
Backing Up, Cloning, and Moving Data New cabinet ADD DISK DISK10000 ADD DISK DISK10100 ADD DISK DISK20000 ADD DISK DISK20100 ADD RAIDSET RAID99 DISK10000 DISK10100 DISK20000 DISK20100 ADD UNIT D100 RAID99 HSG80 ACS Solution Software V8.
Subsystem Profile Templates A This appendix contains storageset profiles to copy and use to create your profiles. It also contains an enclosure template to use to help keep track of the location of devices and storagesets in your shelves. Four (4) templates will be needed for the subsystem. Note: The storage map templates for the Model 4310R and Model 4214R or 4314R reflect the physical location of the disk enclosures in the rack.
Subsystem Profile Templates Storageset Profile
Type of Storageset: _____ Mirrorset  __X__ RAIDset  _____ Stripeset  _____ Striped Mirrorset  _____ JBOD
Storageset Name: ____________
Disk Drives: ____________
Unit Number: ____________
Partitions: Unit # ___  Unit # ___  Unit # ___  Unit # ___  Unit # ___  Unit # ___  Unit # ___  Unit # ___
RAIDset Switches:
Reconstruction Policy: ___ Normal (default)  ___ Fast
Reduced Membership: ___ No (default)  ___ Yes, missing: ___
Replacement Policy: ___ Best performance (default)  ___ Best fit  ___ None
Mirrorset Switches: Replacement Policy  Copy
Subsystem Profile Templates Unit Switches:
Caching: Read caching ___  Read-ahead caching ___  Write-back caching ___  Write-through caching ___
Access by following hosts enabled:
_________________________________________________
_________________________________________________
_________________________________________________
_________________________________________________
Subsystem Profile Templates Storage Map Template 1 for the BA370 Enclosure Use this template for: ■ BA370 single-enclosure subsystems ■ first enclosure of multiple BA370 enclosure subsystems
Port:     1      2      3      4      5      6
Target 3: D10300 D20300 D30300 D40300 D50300 D60300
Target 2: D10200 D20200 D30200 D40200 D50200 D60200
Target 1: D10100 D20100 D30100 D40100 D50100 D60100
Target 0: D10000 D20000 D30000 D40000 D50000 D60000
(In the original template, each target row is flanked by entries for the enclosure's redundant power supplies.)
Subsystem Profile Templates Storage Map Template 2 for the Second BA370 Enclosure Use this template for the second enclosure of multiple BA370 enclosure subsystems.
Subsystem Profile Templates Storage Map Template 3 for the Third BA370 Enclosure Use this template for the third enclosure of multiple BA370 enclosure subsystems.
Subsystem Profile Templates Storage Map Template 4 for the Model 4214R Disk Enclosure Use this template for a subsystem with a three-shelf Model 4214R disk enclosure (single-bus). You can have up to six Model 4214R disk enclosures per controller shelf.
Subsystem Profile Templates (Continued) 182 Bay 1 2 3 4 5 6 7 8 9 1 0 1 1 1 2 1 3 1 4 SCSI ID 0 0 0 1 0 2 0 3 0 4 0 5 0 8 0 9 1 0 1 1 1 2 1 3 1 4 1 5 DISK ID Disk30000 Disk30100 Disk30200 Disk30300 Disk30400 Disk30500 Disk30800 Disk30900 Disk31000 Disk31100 Disk31200 Disk31300 Disk31400 Disk31500 Model 4214R Disk Enclosure Shelf 3 (Single-Bus) HSG80 ACS Solution Software V8.
Subsystem Profile Templates Storage Map Template 5 for the Model 4254 Disk Enclosure Use this template for a subsystem with a three-shelf Model 4254 disk enclosure (dual-bus). You can have up to three Model 4254 disk enclosures per controller shelf.
Subsystem Profile Templates (Continued)
Model 4254 Disk Enclosure Shelf 3 (Dual-Bus)
Bus A (bays 1-7):
SCSI ID: 00 01 02 03 04 05 08
DISK ID: Disk50000 Disk50100 Disk50200 Disk50300 Disk50400 Disk50500 Disk50800
Bus B (bays 8-14):
SCSI ID: 00 01 02 03 04 05 08
DISK ID: Disk60000 Disk60100 Disk60200 Disk60300 Disk60400 Disk60500 Disk60800
Subsystem Profile Templates Storage Map Template 6 for the Model 4310R Disk Enclosure Use this template for a subsystem with a six-shelf Model 4310R disk enclosure (single-bus). You can have up to six Model 4310R disk enclosures per controller shelf.
Subsystem Profile Templates
Model 4310R Disk Enclosure Shelf 1 (Single-Bus)
Bay:     1  2  3  4  5  6  7  8  9  10
SCSI ID: 00 01 02 03 04 05 08 10 11 12
DISK ID: Disk10000 Disk10100 Disk10200 Disk10300 Disk10400 Disk10500 Disk10800 Disk11000 Disk11100 Disk11200
Model 4310R Disk Enclosure Shelf 2 (Single-Bus)
Bay:     1  2  3  4  5  6  7  8  9  10
SCSI ID: 00 01 02 03 04 05 08 10 11 12
DISK ID: Disk20000 Disk20100 Disk20200 Disk20300 Disk20400 Disk20500 Disk20800 Disk21000 Disk21100 Disk21200
Subsystem Profile Templates Storage Map Template 7 for the Model 4350R Disk Enclosure Use this template for a subsystem with a three-shelf Model 4350R disk enclosure (single-bus). You can have up to three Model 4350R disk enclosures per controller shelf.
Subsystem Profile Templates Storage Map Template 8 for the Model 4314R Disk Enclosure Use this template for a subsystem with a six-shelf Model 4314R disk enclosure. You can have a maximum of six Model 4314R disk enclosures with each Model 2200 controller enclosure.
Subsystem Profile Templates continued from previous page
Model 4314R Disk Enclosure Shelf 1 (Single-Bus)
Bay:     1  2  3  4  5  6  7  8  9  10 11 12 13 14
SCSI ID: 00 01 02 03 04 05 08 09 10 11 12 13 14 15
DISK ID: Disk10000 Disk10100 Disk10200 Disk10300 Disk10400 Disk10500 Disk10800 Disk10900 Disk11000 Disk11100 Disk11200 Disk11300 Disk11400 Disk11500
Model 4314R Disk Enclosure Shelf 2 (Single-Bus)
Bay:     1  2  3  4  5  6  7  8  9  10 11 12 13 14
SCSI ID: 00 01 02 03 04 05 08 09 10 11 12 13 14 15
DISK ID: Disk20000 Disk20100 Disk20200 Disk20300 Disk20400 Disk20500 Disk20800 Disk20900 Disk21000 Disk21100 Disk21200 Disk21300 Disk21400 Disk21500
Subsystem Profile Templates Storage Map Template 9 for the Model 4354R Disk Enclosure Use this template for a subsystem with a three-shelf Model 4354R disk enclosure (dual-bus). You can have up to three Model 4354R disk enclosures per controller shelf.
Installing, Configuring, and Removing the Client B The following information is included in this appendix: ■ Why Install the Client?, page 192 ■ Before You Install the Client, page 193 ■ Installing the Client, page 194 ■ Installing the Integration Patch, page 195 ■ Troubleshooting Client Installation, page 197 ■ Adding Storage Subsystem and its Host to Navigation Tree, page 199 ■ Removing Command Console Client, page 201 ■ Where to Find Additional Information, page 202 HSG80 ACS Solution S
Installing, Configuring, and Removing the Client Why Install the Client? The Client monitors and manages a storage subsystem by performing the following tasks: 192 ■ Create mirrored device group (RAID 1) ■ Create striped device group (RAID 0) ■ Create striped mirrored device group (RAID 0+1) ■ Create striped parity device group (3/5) ■ Create an individual device (JBOD) ■ Monitor many subsystems at once ■ Set up pager notification HSG80 ACS Solution Software V8.
Installing, Configuring, and Removing the Client Before You Install the Client 1. Verify you are logged into an account that is a member of the administrator group. 2. Check the software product description that came with the software for a list of supported hardware. 3. Verify that you have the SNMP service installed on the computer. SNMP must be installed on the computer for this software to work properly. The Client software uses SNMP to receive traps from the Agent.
Installing, Configuring, and Removing the Client Installing the Client The following restriction should be observed when installing SWCC on Windows NT 4.0 Workstations. If you select all of the applets during installation, the installation will fail on the HSG60 applet and again on one of the HSG80 applets. The workaround is to install all of the applets you want except for the HSG60 applet and the HSG80 ACS 8.5 applet. You can then return to the setup program and install the one that you need. 1.
Installing, Configuring, and Removing the Client Installing the Integration Patch The integration patch determines which version of firmware the controller is using and launches the appropriate StorageWorks Command Console (SWCC) Storage Window within Insight Manager (CIM) version 4.23. Should I Install the Integration Patch? Install this patch if your HSG80 controller uses ACS 8.7 or later. This patch enables you to use the controller’s SWCC Storage Window within CIM to monitor and manage the controller.
Installing, Configuring, and Removing the Client Caution: If you remove the integration patch, HSG80 Storage Window V2.1 will no longer work and you will need to reinstall HSG80 Storage Window V2.1. The integration patch uses some of the same files as the HSG80 Storage Window V2.1. Integrating Controller’s SWCC Storage Window with CIM You can open the controller’s Storage Window from within the Windows-based CIM version 4.23 by doing the following: 1.
Installing, Configuring, and Removing the Client Finding the Controller’s Storage Window If you installed Insight Manager before SWCC, Insight Manager will be unable to find the controller’s Storage Window. To find the controller’s Storage Window, perform the following procedure: 1. Double-click the Insight Agents icon (Start > Settings > Control Panel). A window appears showing you the active and inactive Agents under the Services tab. 2. Highlight the entry for Fibre Array Information and click Add.
Installing, Configuring, and Removing the Client If the Network Information Services (NIS) are being used to provide named port lookup services, contact the network administrator to add the correct ports.
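For reference, services-file entries take the form shown below. The service names spgui and spagent follow SWCC convention, but treat both the names and the numbers as placeholders and use the values called out in the Agent installation chapter for your release:
spgui     <port-number>/tcp    # SWCC GUI
spagent   <port-number>/tcp    # SWCC Agent notification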
Installing, Configuring, and Removing the Client Adding Storage Subsystem and its Host to Navigation Tree The Navigation Tree enables you to manage storage over the network by using the Storage Window. If you plan to use pager notification, you must add the storage subsystem to the Navigation Tree. 1. Verify that you have properly installed and configured the HS-Series Agent on the storage subsystem host. 2. Click Start > Programs > Command Console > StorageWorks Command Console.
Installing, Configuring, and Removing the Client Figure 39: Navigation window showing storage host system “Atlanta” 6. Click the plus sign to expand the host icon. When expanded, the Navigation Window displays an icon for the storage subsystem. To access the Storage Window for the subsystem, double-click the Storage Window icon. Figure 40: Navigation window showing expanded “Atlanta” host icon 200 HSG80 ACS Solution Software V8.
Installing, Configuring, and Removing the Client Note: You can create virtual disks by using the Storage Window. For more information on the Storage Window, refer to HP StorageWorks Command Console Version 2.5, User Guide. Removing Command Console Client Before you remove the Command Console Client from the computer, remove AES. This prevents the system from reporting that a service failed to start every time the system is restarted. Steps 2 through 5 describe how to remove the Client.
Installing, Configuring, and Removing the Client Note: This procedure removes only the Command Console Client (SWCC Navigation Window). You can remove the HSG80 Client by using the Add/Remove program. Where to Find Additional Information You can find additional information about SWCC by referring to the online Help and to HP StorageWorks Command Console Version 2.5, User Guide. About the User Guide HP StorageWorks Command Console Version 2.5, User Guide contains additional information on how to use SWCC.
Glossary
This glossary defines terms pertaining to the ACS solution software. It is not a comprehensive glossary of computer terms.
8B/10B A type of byte definition encoding and decoding to reduce errors in data transmission patented by the IBM Corporation. This process of encoding and decoding data for transmission has been adopted by ANSI. adapter A device that converts the protocol and hardware interface of one bus type into another without changing the function of the bus.
Glossary association set A group of remote copy sets that share selectable attributes for logging and failover. Members of an association set transition to the same state simultaneously. For example, if one association set member assumes the failsafe locked condition, then other members of the association set also assume the failsafe locked condition. An association set can also be used to share a log between a group of remote copy set members that require efficient use of the log space.
Glossary built-in self-test A diagnostic test performed by the array controller software on the controller policy processor. byte A binary character string made up of 8 bits operated on as a unit. cache memory A portion of memory used to accelerate read and write operations. cache module A fast storage buffer. CCL Command Console LUN, a SCSI logical unit number virtual device used for communicating with Command Console Graphical User Interface (GUI) software.
Glossary controller A hardware device that, with proprietary software, facilitates communications between a host and one or more devices organized in an array. The HSG80 family controllers are examples of array controllers. copying A state in which data to be copied to the mirrorset is inconsistent with other members of the mirrorset. See also normalizing. copying member Any member that joins the mirrorset after the mirrorset is created is regarded as a copying member.
Glossary DOC DWZZA-On-a-Chip. A SCSI bus extender chip used to connect a SCSI bus in an expansion cabinet to the corresponding SCSI bus in another cabinet (see DWZZA). driver A hardware device or a program that controls or regulates another device. For example, a device driver is a driver developed for a specific device that allows a computer to operate with the device, such as a printer or a disk drive.
Glossary ESD Electrostatic discharge. The discharge of potentially harmful static electrical voltage as a result of improper grounding. extended subsystem A subsystem in which two cabinets are connected to the primary cabinet. external cache battery See ECB. F_Port A port in a fabric where an N_Port or NL_Port may attach. fabric A group of interconnections between ports that includes a fabric element.
Glossary FCC Federal Communications Commission. The federal agency responsible for establishing standards and approving electronic devices within the United States. FCC Class A This certification label appears on electronic devices that can only be used in a commercial environment within the United States. FCC Class B This certification label appears on electronic devices that can be used in either a home or a commercial environment within the United States.
Glossary FRU Field replaceable unit. A hardware component that can be replaced at the customer location by service personnel or qualified customer service personnel. FRUTIL Field Replacement utility. full duplex (n) A communications system in which there is a capability for 2-way transmission and acceptance between two sites at the same time. full duplex (adj) Pertaining to a communications method in which data can be transmitted and received at the same time.
Glossary host compatibility mode A setting used by the controller to provide optimal controller performance with specific operating systems. This improves the controller performance and compatibility with the specified operating system. hot disks A disk containing multiple hot spots. Hot disks occur when the workload is poorly distributed across storage devices which prevents optimum subsystem performance. See also hot spots. hot spots A portion of a disk drive frequently accessed by the host.
Glossary I/O Refers to input and output functions. I/O driver The set of code in the kernel that handles the physical I/O to a device. This is implemented as a fork process. Same as driver. I/O interface See interface. I/O module A 16-bit SBB shelf device that integrates the SBB shelf with either an 8-bit single ended, 16-bit single-ended, or 16-bit differential SCSI bus (see SBB).
Glossary logical unit number LUN. A value that identifies a specific logical unit belonging to a SCSI target ID number. A number associated with a physical device unit during a task's I/O operations. Each task in the system must establish its own correspondence between logical unit numbers and physical devices. logon Also called login. A procedure whereby a participant, either a person or network connection, is identified as being an authorized network participant. loop See arbitrated loop.
Glossary mirrored write-back caching A method of caching data that maintains two copies of the cached data. The copy is available if either cache module fails. mirrorset See RAID level 1. MIST Module Integrity Self-Test. multibus failover Allows the host to control the failover process by moving the units from one controller to another. N_port A port attached to a node for use with point-to-point topology or fabric topology. NL_port A port attached to a node for use in all topologies.
Glossary normalizing Normalizing is a state in which, block-for-block, data written by the host to a mirrorset member is consistent with the data on other normal and normalizing members. The normalizing state exists only after a mirrorset is initialized. Therefore, no customer data is on the mirrorset. normalizing member A mirrorset member whose contents are the same as all other normal and normalizing members for data that has been written since the mirrorset was created or lost cache data was cleared.
Glossary PCMCIA Personal Computer Memory Card Industry Association. An international association formed to promote a common standard for PC card-based peripherals to be plugged into notebook computers. The card commonly known as a PCMCIA card is about the size of a credit card. PDU Power distribution unit. The power entry device for StorageWorks cabinets. The PDU provides the connections necessary to distribute power to the cabinet shelves and fans.
Glossary protocol The conventions or rules for the format and timing of messages sent and received. PTL Port-Target-LUN. The controller method of locating a device on the controller device bus. PVA module Power Verification and Addressing module. quiesce The act of rendering bus activity inactive or dormant. For example, “quiesce the SCSI bus operations during a device warm-swap.” RAID Redundant Array of Independent Disks.
Glossary RAID level 3/5 A RAID storageset that stripes data and parity across three or more members in a disk array. A RAIDset combines the best characteristics of RAID level 3 and RAID level 5. A RAIDset is the best choice for most applications with small to medium I/O requests, unless the application is write intensive. A RAIDset is sometimes called parity RAID. RAIDset See RAID level 3/5. RAM Random access memory.
Glossary remote copy set A bound set of two units, one located locally and one located remotely, for long-distance mirroring. The units can be a single disk, or a storageset, mirrorset, or RAIDset. A unit on the local controller is designated as the “initiator” and a corresponding unit on the remote controller is designated as the “target”. request rate The rate at which requests are arriving at a servicing entity. RFI Radio frequency interference.
Glossary SCSI ID number The representation of the SCSI address that refers to one of the signal lines numbered 0 through 15. SCSI-P cable A 68-conductor (34 twisted-pair) cable generally used for differential bus connections. SCSI port (1) Software: The channel controlling communications to and from a specific SCSI bus in the system. (2) Hardware: The name of the logical socket at the back of the system unit to which a SCSI device is connected.
Glossary StorageWorks A family of modular data storage products that allow customers to design and configure their own storage subsystems. Components include power, packaging, cabling, devices, controllers, and software. Customers can integrate devices and array controllers in StorageWorks enclosures to form storage subsystems. StorageWorks systems include integrated SBBs and array controllers to form storage subsystems.
Glossary tape inline exerciser (TILX) The controller diagnostic software to test the data transfer capabilities of tape drives in a way that simulates a high level of user activity. topology An interconnection scheme that allows multiple Fibre Channel ports to communicate with each other. For example, point-to-point, Arbitrated Loop, and switched fabric are all Fibre Channel topologies.
Glossary warm swap A device replacement method that allows the complete system to remain online during device removal or insertion. The system bus may be halted, or quiesced, for a brief period of time during the warm-swap procedure. Wide Ultra SCSI Fast/20 on a Wide SCSI bus. Worldwide name A unique 64-bit number assigned to a subsystem by the Institute of Electrical and Electronics Engineers (IEEE) and set by manufacturing prior to shipping. This name is referred to as the node ID within the CLI.