Installation and Configuration Guide
HP StorageWorks HSG80 ACS Solution Software V8.8 for Sun Solaris
Product Version: 8.8-1
First Edition (March 2005)
Part Number: AA-RV1RA-TE
This guide provides installation and configuration instructions and reference material for operation of the HSG80 ACS Solution Software V8.8-1 for Sun Solaris.
© Copyright 2000-2005 Hewlett-Packard Development Company, L.P. Hewlett-Packard Company makes no warranty of any kind with regard to this material, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. Hewlett-Packard shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this material.
contents About this Guide . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .13 Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 Intended Audience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 Related Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Contents Determining the Address of the CCL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Enabling/Disabling the CCL in SCSI-2 Mode. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Enabling the CCL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Disabling the CCL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Enabling/Disabling CCL in SCSI-3 Mode. . .
Contents Keep these points in mind when planning RAIDsets . . . . . . . . . . . . . . . . . . . . . . . . Striped Mirrorset Planning Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Storageset Expansion Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Partition Planning Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Defining a Partition . . . . . . . . . . . . . . . . . . . .
Contents 4 Installing and Configuring the HSG Agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .117 Why Use StorageWorks Command Console (SWCC)?. . . . . . . . . . . . . . . . . . . . . . . . . . . . 118 Installation and Configuration Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 About the Network Connection for the Agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120 Before Installing the Agent. . . . . . . . . . . .
Contents Verifying Installation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Configuring Devices. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Configuring Storage Containers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Configuring a Stripeset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Contents Loop Bindings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Fabric Bindings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Loop Connections at the Solaris Level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Loop Mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Analysis. . . . . . . . . . . . . . . . . . . . . .
Contents Storage Map Template 8 for the Model 4314R Disk Enclosure . . . . . . . . . . . . . . . . . . . . . 212 Storage Map Template 9 for the Model 4354R Disk Enclosure . . . . . . . . . . . . . . . . . . . . . 214 B Installing, Configuring, and Removing the Client. . . . . . . . . . . . . . . . . . . . . . . . . . .215 Why Install the Client? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216 Before You Install the Client . . . . . . . . . . .
Figures
14 Connections in multiple-bus failover mode . . . . . 44
15 LUN presentation to hosts, as determined by offset . . . . . 46
16 Limiting host access in transparent failover mode . . . . . 51
17 Limiting host access in multiple-bus failover mode . . . .
Tables
2 Document Conventions . . . . . 18
3 Unit Assignments and SCSI_VERSION . . . . . 49
4 PTL addressing, single-bus configuration, six Model 4320R enclosures . . . .
About this Guide
This Installation Guide describes how to install and configure the HSG80 ACS Solution Software Version 8.8-1 for Sun Solaris.
Overview
This section covers the following topics:
■ Intended Audience, page 14
■ Related Documentation, page 14

Intended Audience
This book is intended for use by system administrators and system technicians who have basic experience with storage and networking.

Related Documentation
In addition to this guide, corresponding information can be found in:
■ ACS V8.
■ HP StorageWorks HSG80 ACS Solution Software Release Notes (platform-specific)
■ HP StorageWorks Enterprise/Modular Storage RAID Array Fibre Channel Arbitrated Loop Configurations for Windows, Tru64, and Sun Solaris Application Note (AA-RS1ZB-TE)

Solution software host support includes the following platforms:
— IBM AIX
— HP-UX
— Linux (Red Hat x86/Alpha, SuSE x86/Alpha, Caldera x86)
— Novell NetWare
— OpenVMS
— Sun Solaris
— Tru64 UNIX
— Windows NT/2000/Windows Server 2003 (32-bit)
Chapter Content Summary
Table 1 below summarizes the content of the chapters.

Table 1: Summary of Chapter Contents
1. Planning a Subsystem: This chapter focuses on technical terms and knowledge needed to plan and implement storage array subsystems.
2. Planning Storage Configurations: Plan the storage configuration of your subsystem, using individual disk drives (JBOD), storageset types (mirrorsets, stripesets, and so on), and/or partitioned drives.
Table 1: Summary of Chapter Contents (Continued)
7. Backing Up, Cloning, and Moving Data: Description of common procedures that are not mentioned elsewhere in this guide: Backing Up Subsystem Configuration, Cloning Data for Backup, and Moving Storagesets.
Appendix A. Subsystem Profile Templates: This appendix contains storageset profiles to copy and use to create your system profiles.
About this Guide Conventions Conventions consist of the following: ■ Document conventions ■ Symbols in Text ■ Symbols on Equipment Document conventions This document follows the conventions in Table 2.
About this Guide Note: Text set off in this manner presents commentary, sidelights, or interesting points of information. Symbols on Equipment Any enclosed surface or area of the equipment marked with these symbols indicates the presence of electrical shock hazards. Enclosed area contains no operator serviceable parts. WARNING: To reduce the risk of injury from electrical shock hazards, do not open this enclosure. Any RJ-45 receptacle marked with these symbols indicates a network interface connection.
Any product or assembly marked with these symbols indicates that the component exceeds the recommended weight for one individual to handle safely.

WARNING: To reduce the risk of personal injury or damage to the equipment, observe local occupational health and safety requirements and guidelines for manually handling material.

Rack Stability

WARNING: To reduce the risk of personal injury or damage to the equipment, be sure that:
• The leveling jacks are extended to the floor.
About this Guide Getting Help If you still have a question after reading this guide, contact an HP authorized service provider or access our web site. Technical Support Telephone numbers for worldwide technical support are listed on the following HP web site: http://www.hp.com/support/. From this web site, select the country of origin. Note: For continuous quality improvement, calls may be recorded or monitored. Outside North America, call technical support at the nearest location.
About this Guide HP Authorized Reseller For the name of your nearest authorized reseller: ■ In the United States, call 1-800-345-1518 ■ In Canada, call 1-800-263-5868 ■ Elsewhere, see the Storage web site for locations and telephone numbers Configuration Flowchart A three-part flowchart (Figure 1, Figure 2, and Figure 3) is shown on the following pages. Refer to these charts while installing and configuring a new storage subsystem.
Figure 1: General configuration flowchart (panel 1). The flowchart steps are: unpack the subsystem (see the unpacking instructions on the shipping box); plan a subsystem (Chapter 1); plan storage configurations (Chapter 2); prepare the host system (Chapter 3); and make a local connection (page 130). For a single controller, cable the controller (page 133) and configure the controller (page 133); for a controller pair, cable the controllers (page 139) and configure the controllers (page 140). If installing SWCC, continue with Figure 3 on page 25; otherwise continue with Figure 2 on page 24.
Figure 2: General configuration flowchart (panel 2). The flowchart steps are: configure devices (page 146); create storagesets and partitions (stripeset, page 146; mirrorset, page 148; RAIDset, page 148; striped mirrorset, page 149; single JBOD disk, page 150; partition, page 150), continuing to create units until the planned configuration is complete; assign unit numbers (page 152); select configuration options (page 153); and verify the storage setup (page 157).
Figure 3: SWCC storage configuration flowchart (panel 3). The flowchart steps are: install the Agent (Chapter 4); install the Client (Appendix B); create storage (see the SWCC online help); and verify the storage setup.
Planning a Subsystem 1 This chapter provides information that helps you plan how to configure the storage array subsystem. This chapter focuses on the technical terms and knowledge needed to plan and implement storage subsystems. Note: This chapter frequently references the command line interface (CLI). For the complete syntax and descriptions of the CLI commands, see the HP StorageWorks HSG60 and HSG80 Array Controller and Array Controller Software Command Line Interface (CLI) Reference Guide.
Defining Subsystems
This section describes the terms this controller and other controller. It also presents graphics of the Model 2200 and BA370 enclosures.
Note: The HSG80 controller uses the BA370 or Model 2200 enclosure.
Controller Designations A and B
The terms A, B, "this controller," and "other controller" are used to distinguish one controller from another in a two-controller (also called dual-redundant) subsystem.
Figure 5: Location of controllers and cache modules in a BA370 enclosure (callouts: 1 EMU, 2 PVA, 3 Controller A, 4 Controller B, 5 Cache module A, 6 Cache module B).

Controller Designations "This Controller" and "Other Controller"
Some CLI commands use the terms "this" and "other" to identify one controller or the other in a dual-redundant pair. These designations are a shortened form of "this controller" and "other controller.
Figure 6: "This controller" and "other controller" for the Model 2200 enclosure (callouts: 1 this controller, 2 other controller).

Figure 7: "This controller" and "other controller" for the BA370 enclosure (callouts: 1 other controller, 2 this controller).
Planning a Subsystem What is Failover Mode? Failover is a way to keep the storage array available to the host if one of the controllers becomes unresponsive. A controller can become unresponsive because of a controller hardware failure or, in multiple-bus mode only, due to a failure of the link between host and controller or host-bus adapter. Failover keeps the storage array available to the hosts by allowing the surviving controller to take over total control of the subsystem.
Planning a Subsystem At any time, host port 1 is active on only one controller, and host port 2 is active on only one controller. The other ports are in standby mode. In normal operation, both host port 1 on controller A and host port 2 on controller B are active. A representative configuration is shown in Figure 8. The active and standby ports share port identity, enabling the standby port to take over for the active one.
Figure 8: Transparent failover—normal operation. Three hosts connect through switches or hubs. Host port 1 is active on controller A (serving units D0 and D1) and standby on controller B; host port 2 is active on controller B (serving units D100, D101, and D120) and standby on controller A.
Figure 9: Transparent failover—after failover from controller B to controller A. Controller B is not available; host port 1 and host port 2 are both active on controller A, which now serves all units (D0, D1, D100, D101, and D120).

Multiple-Bus Failover Mode
Multiple-bus failover mode has the following characteristics:
■ Host controls the failover process by moving the units from one controller to another
■ A
Planning a Subsystem In multiple-bus failover mode, you can specify which units are normally serviced by a specific controller of a controller pair. Units can be preferred to one controller or the other by the PREFERRED_PATH switch of the ADD UNIT (or SET unit) command. For example, use the following command to prefer unit D101 to “this controller”: SET D101 PREFERRED_PATH=THIS_CONTROLLER Note: This is an initial preference, which can be overridden by the hosts.
Figure 10: Typical multiple-bus configuration. Three hosts (RED, GREY, and BLUE), each with two Fibre Channel adapters (FCA1 and FCA2), connect through two switches or hubs to both controllers. Host port 1 and host port 2 are active on both controller A and controller B, and all units (D0, D1, D2, D100, D101, D120) are visible to all ports. (FCA = Fibre Channel Adapter)
Planning a Subsystem Selecting a Cache Mode The cache module supports read, read-ahead, write-through, and write-back caching techniques. The cache technique is selected separately for each unit. For example, you can enable only read and write-through caching for some units while enabling only write-back caching for other units. Read Caching When the controller receives a read request from the host, it reads the data from the disk drives, delivers it to the host, and stores the data in its cache module.
Planning a Subsystem operation is complete. This process is called write-through caching because the data actually passes through—and is stored in—the cache memory on its way to the disk drives. Enabling Mirrored Caching In mirrored caching, half of each controller’s cache mirrors the companion controller’s cache, as shown in Figure 11. The total memory available for cached data is reduced by half, but the level of protection is greater.
Planning a Subsystem What is the Command Console LUN? StorageWorks Command Console (SWCC) software communicates with the HSG80 controllers through an existing storage unit, or logical unit number (LUN). The dedicated LUN that SWCC uses is called the Command Console LUN (CCL). The CCL serves as the communication device for the HS-Series Agent and identifies itself to the host by a unique identification string. By default, a CCL device is enabled within the HSG80 controller on host port 1.
Planning a Subsystem Disabling the CCL To disable the CCL in SCSI-2 mode, enter the following CLI command: HSG80 > SET THIS_CONTROLLER NOCOMMAND_CONSOLE_LUN To see the state of the CCL, use the SHOW THIS CONTROLLER/ OTHER CONTROLLER command. Because the CCL is not an actual LUN, the SHOW UNITS command will not display the CCL location. Enabling/Disabling CCL in SCSI-3 Mode The CCL is enabled all the time in SCSI-3 mode. There is no option to enable/disable.
Planning a Subsystem Examples: A connection from the first adapter in the host named RED that goes to port 1 of controller A would be called RED1A1. A connection from the third adapter in host GREEN that goes to port 2 of controller B would be called GREEN3B2. Note: Connection names can have a maximum of 9 characters.
Planning a Subsystem If a controller pair is in multiple-bus failover mode, each adapter has two connections, as shown in Figure 14.
Figure 13: Connections in single-link, transparent failover mode configurations. Three hosts (GREEN, ORANGE, and PURPLE), each with one Fibre Channel adapter (FCA1), connect through a switch or hub. Host port 1 is active on controller A (connections GREEN1A1, ORANGE1A1, and PURPLE1A1, serving units D0 and D1) and standby on controller B; host port 2 is active on controller B (connections GREEN1B2, ORANGE1B2, and PURPLE1B2, serving units D100, D101, and D120) and standby on controller A. (FCA = Fibre Channel Adapter)
Figure 14: Connections in multiple-bus failover mode. One host (VIOLET) with two Fibre Channel adapters (FCA1 and FCA2) connects through two switches or hubs, giving four connections: VIOLET1A1 and VIOLET1B1 from the first adapter, and VIOLET2A2 and VIOLET2B2 from the second. Host port 1 and host port 2 are active on both controllers, and all units (D0, D1, D2, D100, D101, D120) are visible to all ports. (FCA = Fibre Channel Adapter)
Planning a Subsystem Assigning Unit Numbers The controller keeps track of the unit with the unit number. The unit number can be from 0−199 prefixed by a D, which stands for disk drive. A unit can be presented as different LUNs to different connections.
Planning a Subsystem If no value is specified for offset, then connections on port 1 have a default offset of 0 and connections on port 2 have a default offset of 100. For example, if all host connections use the default offset values, unit D2 will be presented to a port 1 host connection as LUN 2 (unit number of 2 minus offset of 0). Unit D102 will be presented to a port 2 host connection as LUN 2 (unit number of D102 minus offset of 100).
Planning a Subsystem An additional factor to consider when assigning unit numbers and offsets is SCSI version. If the SCSI_VERSION switch of the SET THIS_CONTROLLER/OTHER_CONTROLLER command is set to SCSI-3, the CCL is presented as LUN 0 to every connection, superseding any unit assignments. The interaction between SCSI version and unit numbers is explained further in the next section. In addition, the access path to the host connection must be enabled for the connection to access the unit.
Planning a Subsystem The PREFERRED_PATH switch of the ADD UNIT (or SET unit) command determines which controller of a dual-redundant pair initially accesses the unit. Initially, PREFERRED_PATH determines which controller presents the unit as Ready. The other controller presents the unit as Not Ready. Hosts can issue a SCSI Start Unit command to move the unit from one controller to the other.
If SCSI_VERSION is set to SCSI-2 mode, the CCL floats, moving to the first available LUN location, depending on the configuration. StorageWorks recommends using the following conventions when assigning host connection offsets and unit numbers in SCSI-2 mode:
■ Offsets should be divisible by 10 (for consistency and simplicity).
■ Unit numbers should be assigned at connection offsets (so that every host connection has a unit presented at LUN 0).
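As a hedged sketch of these conventions (the connection names RED1A1 and RED1B2 and the container names S1 and S2 are hypothetical, not taken from this guide), a port 1 connection left at offset 0 and a port 2 connection set to offset 100 each get a unit presented at LUN 0:

SET RED1A1 UNIT_OFFSET=0
SET RED1B2 UNIT_OFFSET=100
ADD UNIT D0 S1
ADD UNIT D100 S2

With these settings, D0 appears to connection RED1A1 as LUN 0 and D100 appears to connection RED1B2 as LUN 0, so every host connection has a unit presented at LUN 0.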
Planning a Subsystem The command syntax to disable is: HSG> SET this/other Default_Access=Disable The command syntax to enable is: HSG> SET this/other Default_Access=Enable {default after upgrade} When the command is invoked from one controller, the Default_Access from the other controller will be similarly modified. The setting is symmetrical and persistent across restarts, FRUTIL, etc.
Figure 16: Limiting host access in transparent failover mode. Hosts AQUA, BLACK, and BROWN each have one Fibre Channel adapter (FCA1). Connection AQUA1A1 reaches host port 1 (active on controller A, serving D0 and D1), while connections BLACK1B2 and BROWN1B2 reach host port 2 (active on controller B, serving D100, D101, and D120). (FCA = Fibre Channel Adapter)

Restricting Host Access by Disabling Access Paths
If more than one host is on a link (that is, atta
Planning a Subsystem For example: In Figure 17, restricting the access of unit D101 to host 3, the host named BROWN can be done by enabling only the connection to host 3. Enter the following commands: SET D101 DISABLE_ACCESS_PATH=ALL SET D101 ENABLE_ACCESS_PATH=BROWN1B2 If the storage subsystem has more than one host connection, carefully specify the access path to avoid providing undesired host connections access to the unit.
Planning a Subsystem Note: StorageWorks recommends that you provide access to only specific connections, even if there is just one connection on the link. This way, if new connections are added, they will not have automatic access to all units. Restricting Host Access in Multiple-Bus Failover Mode In multiple-bus mode, the units assigned to any port are visible to all ports.
Figure 17: Limiting host access in multiple-bus failover mode. Hosts RED, GREY, and BLUE each have two Fibre Channel adapters. The first adapters provide connections RED1A1, GREY1A1, BLUE1A1, RED1B1, GREY1B1, and BLUE1B1; the second adapters provide connections RED2A2, GREY2A2, BLUE2A2, RED2B2, GREY2B2, and BLUE2B2. Both host ports are active on both controllers, and all units (D0, D1, D2, D100, D101, D120) are visible to all ports. (FCA = Fibre Channel Adapter)
Planning a Subsystem multiple-bus failover to work. For most operating systems, it is desirable to have all connections to the host enabled.
For example: In Figure 17, assume all host connections initially have the default offset of 0. Giving all of host BLUE's connections an offset of 120 will present unit D120 to host BLUE as LUN 0. Enter the following commands:
SET BLUE1A1 UNIT_OFFSET=120
SET BLUE1B1 UNIT_OFFSET=120
SET BLUE2A2 UNIT_OFFSET=120
SET BLUE2B2 UNIT_OFFSET=120
Host BLUE cannot see units lower than its offset, so it cannot access any other units.
Planning a Subsystem In multiple-bus failover mode, each of the host ports has its own port ID: ■ Controller B, port 1—worldwide name + 1, for example 5000-1FE1-FF0C-EE01 ■ Controller B, port 2—worldwide name + 2, for example 5000-1FE1-FF0C-EE02 ■ Controller A, port 1—worldwide name + 3, for example 5000-1FE1-FF0C-EE03 ■ Controller A, port 2—worldwide name + 4, for example 5000-1FE1-FF0C-EE04 Use the CLI command, SHOW THIS_CONTROLLER/OTHER_CONTROLLER to display the subsystem’s worldwide name.
Figure 19: Placement of the worldwide name label on the BA370 enclosure. The label lists the part number (P/N), the worldwide name (WWN, in the form NNNN-NNNN-NNNN-NNNN), the serial number (S/N), and the checksum (NN).

Caution: Each subsystem has its own unique worldwide name (node ID). If you attempt to set the subsystem worldwide name to a name other than the one that came with the subsystem, the data on the subsystem will not be accessible.
Planning Storage Configurations 2 This chapter provides information to help you plan the storage configuration of your subsystem. Storage containers are individual disk drives (JBOD), storageset types (mirrorsets, stripesets, and so on), and/or partitioned drives. Use the guidelines found in this section to plan the various types of storage containers needed.
Planning Storage Configurations Where to Start The following procedure outlines the steps to follow when planning your storage configuration. See Appendix A to locate the blank templates for keeping track of the containers being configured. 1. Determine your storage requirements. Use the questions in “Determining Storage Requirements” on page 61, to help you. 2. Review configuration rules. See “Configuration Rules for the Controller” on page 61. 3.
Planning Storage Configurations — Use SWCC. See the SWCC documentation for details. — Use the Command Line Interpreter (CLI) commands. This method allows you flexibility in defining and naming your storage containers. See the HP StorageWorks HSG60 and HSG80 Array Controller and Array Controller Software Command Line Interface (CLI) Reference Guide. Determining Storage Requirements It is important to determine your storage requirements.
Planning Storage Configurations Note: For the previous two storageset configurations, this is a combined maximum, limited to no more than 20 RAID 3/5 storagesets in the individual combination.
Figure 20: Mapping a unit to physical disk drives. Host-addressable unit number D100 maps to a storageset named RAID1, which the controller tracks through the PTL addresses of its members: Disk 10000, Disk 20000, and Disk 30000.

The HSG80 controller identifies devices based on a Port-Target-LUN (PTL) numbering scheme, shown in Figure 21. The physical location of a device in its enclosure determines its PTL.
■ P—Designates the controller's SCSI device port number (1 through 6).
Planning Storage Configurations The controller can either operate with a BA370 enclosure or with a Model 2200 controller enclosure combined with Model 4214R, Model 4254, Model 4310R, Model 4350R, Model 4314R, or Model 4354R disk enclosures. The controller operates with BA370 enclosures that are assigned ID numbers 0, 2, and 3. These ID numbers are set through the PVA module. Enclosure ID number 1, which assigns devices to targets 4 through 7, is not supported.
Planning Storage Configurations Examples - Model 2200 Storage Maps, PTL Addressing The Model 2200 controller enclosure can be combined with the following: ■ Model 4214R disk enclosure — Ultra2 SCSI with 14 drive bays, single-bus I/O module. ■ Model 4254 disk enclosure — Ultra2 SCSI with 14 drive bays, dual-bus I/O module. Note: The Model 4214R uses the same storage maps as the Model 4314R, and the Model 4254 uses the same storage maps as the Model 4354R disk enclosures.
Planning Storage Configurations ■ Model 4354R disk enclosure — Ultra3 SCSI with 14 drive bays, dual-bus I/O module. Table 7 shows the addresses for each device in a three-shelf, dual-bus configuration. A maximum of three Model 4354R disk enclosures can be used with each Model 2200 controller enclosure. Note: Appendix A contains storageset profiles you can copy and use to create your own system profiles.
Planning Storage Configurations Table 4: PTL addressing, single-bus configuration, six Model 4320R enclosures Model 4310R Disk Enclosure Shelf 6 (Single-Bus) 10 SCSI ID 00 01 02 03 04 05 08 10 11 12 DISK ID Disk61200 9 Disk61100 8 Disk61000 7 Disk60800 6 Disk60500 5 Disk60400 4 Disk60300 3 Disk60200 2 Disk60100 1 Disk60000 Bay Model 4310R Disk Enclosure Shelf 5 (Single-Bus) 10 SCSI ID 00 01 02 03 04 05 08 10 11 12 DISK ID Disk51200 9 Disk51100 8 Disk51000
Planning Storage Configurations Table 4: PTL addressing, single-bus configuration, six Model 4320R enclosures (Continued) Model 4310R Disk Enclosure Shelf 1 (Single-Bus) 10 SCSI ID 00 01 02 03 04 05 08 10 11 12 DISK ID Disk11200 9 Disk11100 8 Disk11000 7 Disk10800 6 Disk10500 5 Disk10400 4 Disk10300 3 Disk10200 2 Disk10100 1 Disk10000 Bay Model 4310R Disk Enclosure Shelf 2 (Single-Bus) 10 SCSI ID 00 01 02 03 04 05 08 10 11 12 DISK ID Disk21200 9 Disk21100 8
Planning Storage Configurations Table 5: PTL addressing, dual-bus configuration, three Model 4350R enclosures Model 4350R Disk Enclosure Shelf 1 (Single-Bus) SCSCSI Bus ASI Bus A SCSI Bus B 10 SCSI ID 00 01 02 03 04 00 01 02 03 04 DISK ID Disk20400 9 Disk20300 8 Disk20200 7 Disk20100 6 Disk20000 5 Disk10400 4 Disk10300 3 Disk10200 2 Disk10100 1 Disk10000 Bay Model 4350R Disk Enclosure Shelf 2 (Single-Bus) SCSCSI Bus ASI Bus A SCSI Bus B 10 SCSI ID 00 01 02 03 04 00
Planning Storage Configurations Table 6: PTL addressing, single-bus configuration, six Model 4314R enclosures Model 4314R Disk Enclosure Shelf 6 (Single-Bus) 14 SCSI ID 00 01 02 03 04 05 08 09 10 11 12 13 14 15 DISK ID Disk61500 13 Disk61400 12 Disk61300 11 Disk61200 10 Disk61100 9 Disk61000 8 Disk60900 7 Disk60800 6 Disk60500 5 Disk60400 4 Disk60300 3 Disk60200 2 Disk60100 1 Disk60000 Bay Model 4314R Disk Enclosure Shelf 5 (Single-Bus) 14 SCSI ID 00 01 02
Planning Storage Configurations Table 6: PTL addressing, single-bus configuration, six Model 4314R enclosures (Continued) Model 4314R Disk Enclosure Shelf 2 (Single-Bus) 14 SCSI ID 00 01 02 03 04 05 08 09 10 11 12 13 14 15 DISK ID Disk21500 13 Disk21400 12 Disk21300 11 Disk21200 10 Disk21100 9 Disk21000 8 Disk20900 7 Disk20800 6 Disk20500 5 Disk20400 4 Disk20300 3 Disk20200 2 Disk20100 1 Disk20000 Bay Model 4314R Disk Enclosure Shelf 3 (Single-Bus) 14 SCSI ID
Planning Storage Configurations Disk40800 Disk40500 Disk40400 Disk40300 Disk40200 Disk40100 Disk40000 Disk30800 Disk30500 Disk30400 Disk30300 Disk30200 Disk30100 DISK ID Disk30000 Table 7: PTL addressing, dual-bus configuration, three Model 4354A enclosures (Continued) Model 4354R Disk Enclosure Shelf 3 (Dual-Bus) 14 SCSI ID 00 01 02 03 04 05 08 00 01 02 03 04 05 08 DISK ID Disk60800 13 Disk60500 12 Disk60400 11 Disk60300 10 Disk60200 9 Disk60100 8 Disk60000 7
Figure 23: Storage container types. Containers include single devices (JBOD), partitions, and storagesets: stripesets (R0), mirrorsets (R1), striped mirrorsets (R0+1), and RAIDsets (R3/5).
Planning Storage Configurations Table 8 compares the different kinds of containers to help you determine which ones satisfy your requirements.
Planning Storage Configurations Creating a Storageset Profile Creating a profile for your storagesets, partitions, and devices can simplify the configuration process. Filling out a storageset profile helps you choose the storagesets that best suit your needs and to make informed decisions about the switches you can enable for each storageset or storage device that you configure in your subsystem. For an example of a storageset profile, see Table 9.
Initialize Switches:
Chunk size: _X_ Automatic (default)  ___ 64 blocks  ___ 128 blocks  ___ 256 blocks
Save Configuration: ___ No (default)  _X_ Yes
Metadata: _X_ Destroy (default)  ___ Retain

Unit Switches:
Caching: Read caching _X_  Read-ahead caching ___  Write-back caching _X_  Write-through caching ___
Access by following hosts enabled: ALL
Planning Storage Configurations For example, in a three-member stripeset that contains disk drives Disk 10000, Disk 20000, and Disk 10100, the first chunk of an I/O request is written to Disk 10000, the second to Disk 20000, the third to Disk 10100, the fourth to Disk 10000, until all of the data has been written to the drives (Figure 24).
Figure 25: Three-member RAID 0 stripeset (example 2). The operating system sees one virtual disk containing blocks 0, 1, 2, 3, 4, 5, and so on; the actual device mapping places blocks 0 and 3 on Disk 1, blocks 1 and 4 on Disk 2, and blocks 2 and 5 on Disk 3.

Keep the following points in mind as you plan your stripesets:
■ Reporting methods and size limitations prevent certain operating systems from working with large stripesets.
Planning Storage Configurations For this reason, you should avoid using a stripeset to store critical data. Stripesets are more suitable for storing data that can be reproduced easily or whose loss does not prevent the system from supporting its critical mission. ■ Evenly distribute the members across the device ports to balance the load and provide multiple paths. ■ Stripesets may contain between two and 24 members.
Planning Storage Configurations Mirrorset Planning Considerations Mirrorsets (RAID 1) use redundancy to ensure availability, as illustrated in Figure 26. For each primary disk drive, there is at least one mirror disk drive. Thus, if a primary disk drive fails, its mirror drive immediately provides an exact copy of the data. Figure 27 shows a second example of a Mirrorset.
Planning Storage Configurations Keep these points in mind when planning mirrorsets ■ Data availability with a mirrorset is excellent but comes with a higher cost—you need twice as many disk drives to satisfy a given capacity requirement. If availability is your top priority, consider using dual-redundant controllers and redundant power supplies. ■ You can configure up to a maximum of 20 RAID 3/5 mirrorsets per controller or pair of dual-redundant controllers. Each mirrorset may contain up to 6 members.
Planning Storage Configurations ■ A RAIDset must include at least 3 disk drives, but no more than 14. ■ A storageset should only contain disk drives of the same capacity. The controller limits the capacity of each member to the capacity of the smallest member in the storageset. Thus, if you combine 9 GB disk drives with 4 GB disk drives in the same storageset, you waste 5 GB of capacity on each 9 GB member.
Figure 29: Striped mirrorset (example 1). Three two-member mirrorsets (Mirrorset1, Mirrorset2, and Mirrorset3), built from Disk 10000, Disk 20000, Disk 10100, Disk 20100, Disk 10200, and Disk 20200, are combined into a stripeset; data chunks A, B, and C and their copies A', B', and C' are distributed across the members.

The failure of a single disk drive has no effect on the ability of the storageset to deliver data to the host. Under normal circumstances, a single disk drive failure has very little effect on performance.
Planning Storage Configurations Plan the mirrorset members, and plan the stripeset that will contain them. Review the recommendations in “Planning Considerations for Storageset” on page 76, and “Mirrorset Planning Considerations” on page 80. Storageset Expansion Considerations Storageset Expansion allows for the joining of two of the same kind of storage containers by concatenating RAIDsets, stripesets, or individual disks, thereby forming a larger virtual disk, which is presented as a single unit.
Planning Storage Configurations unpartitioned storageset or device. Partitions are separately addressable storage units; therefore, you can partition a single storageset to service more than one user group or application. Defining a Partition Partitions are expressed as a percentage of the storageset or single disk unit that contains them: ■ Mirrorsets and single disk units—the controller allocates the largest whole number of blocks that are equal to or less than the percentage you specify.
Planning Storage Configurations Changing Characteristics Through Switches CLI command switches allow the user another level of command options. There are three types of switches that modify the storageset and unit characteristics: ■ Storageset switches ■ Initialization switches ■ Unit switches The following sections describe how to enable/modify switches. They also contain a description of the major CLI command switches.
Planning Storage Configurations Specifying Storageset and Partition Switches The characteristics of a particular storageset can be set by specifying switches when the storageset is added to the controllers’ configuration. Once a storageset has been added, the switches can be changed by using a SET command. Switches can be set for partitions and the following types of storagesets: ■ RAIDset ■ Mirrorset Stripesets have no specific switches associated with their ADD and SET commands.
Planning Storage Configurations Partition Switches The following switches are available when creating a partition: ■ Size ■ Geometry For details on the use of these switches, refer to CREATE_PARTITION command in the HP StorageWorks HSG60 and HSG80 Array Controller and Array Controller Software Command Line Interface (CLI) Reference Guide. Specifying Initialization Switches Initialization switches set characteristics for established storagesets before they are made into units.
Planning Storage Configurations ■ CHUNKSIZE=DEFAULT lets the controller set the chunk size based on the number of disk drives (d) in a stripeset or RAIDset. If number of drives is less or equal to 9, then chunk size = 256. If the number of drives is greater than 9, then chunk size = 128. ■ CHUNKSIZE=n lets you specify a chunk size in blocks. The relationship between chunk size and request size determines whether striping increases the request rate or the data-transfer rate.
Planning Storage Configurations ■ Many parallel I/Os that use a small area of disk should use a chunk size of 10 times the average transfer request rate. ■ Random I/Os that are scattered over all the areas of the disks should use a chunk size of 20 times the average transfer request rate. If you do not know, then you should use a chunk size of 15 times the average transfer request rate.
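As a hedged example of applying these guidelines (the storageset name STRIPE1 is hypothetical), the chunk size is specified with the CHUNKSIZE initialization switch when the storageset is initialized:

INITIALIZE STRIPE1 CHUNKSIZE=128

or, to let the controller choose a chunk size based on the number of members:

INITIALIZE STRIPE1 CHUNKSIZE=DEFAULT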
Planning Storage Configurations Note: DO NOT use SAVE_CONFIGURATION in dual redundant controller installations. It is not supported and may result in unexpected controller behavior. Note: HP recommends that you do not use SAVE_CONFIGURATION on every unit and device on the controller. Destroy/Nodestroy Specify whether to destroy or retain the user data and metadata when a disk is initialized after it has been used in a mirrorset or as a single-disk unit.
Planning Storage Configurations ■ SECTORS_PER_TRACK—the number of sectors per track used. The range is from 1 to 255. Specifying Unit Switches Several switches control the characteristics of units. The unit switches are described under the SET unit-number command in the HP StorageWorks HSG60 and HSG80 Array Controller and Array Controller Software Command Line Interface (CLI) Reference Guide.
2. Note the position of all the drives contained within D104.
3. Enter the following command to turn off the flashing LEDs:
LOCATE CANCEL
The following procedure is an example command to locate all the drives that make up RAIDset R1:
1. Enter the following command:
LOCATE R1
2. Note the position of all the drives contained within R1.
3. Enter the following command to turn off the flashing LEDs:
LOCATE CANCEL
Planning Storage Configurations Example Storage Map—Model 4310R Disk Enclosure Table 11 shows an example of four Model 4310R disk enclosures (single-bus I/O).
Planning Storage Configurations Table 11: Model 4310 disk enclosure, example of storage map (Continued) Model 4310R Disk Enclosure Shelf 2 (Single-Bus) 10 SCSI ID 00 01 02 03 04 05 08 10 11 12 D100 R1 D101 S1 M1 D102 M3 D104 S2 D106 R2 D108 S3 D1 S4 M5 D2 R3 D3 S5 D4 M7 DISK ID Disk21200 9 Disk21100 8 Disk21000 7 Disk20800 6 Disk20500 5 Disk20400 4 Disk20300 3 Disk20200 2 Disk20100 1 Disk20000 Bay Model 4310R Disk Enclosure Shelf 3 (Single-Bus) 10 SCSI ID 00 01
Planning Storage Configurations ■ Unit D103 is a 2-member mirrorset named M4. M4 consists of Disk30200 and Disk40200. ■ Unit D104 is 3-member stripeset named S2. S2 consists of Disk10300, Disk20300, and Disk30300. ■ Unit D105 is a single (JBOD) disk named Disk40300. ■ Unit D106 is a 3-member RAID 3/5 storageset named R2. R2 consists of Disk10400, Disk20400, and Disk30400. ■ Unit D107 is a single (JBOD) disk named Disk40400. ■ Unit D108 is a 4-member stripeset named S3.
Preparing the Host System 3
This chapter describes how to prepare your Sun Solaris host computer to accommodate the HSG80 controller storage subsystem. The following information is included in this chapter:
■ Making a Physical Connection, page 104
■ Installing Solution Software Packages, page 108
■ Creating and Tuning File Systems, page 113
Refer to Chapter 4 for instructions on how to install and configure the HSG Agent. The Agent for HSG is operating system-specific and polls the storage.
Preparing the Host System Installing RAID Array Storage System WARNING: A shock hazard exists at the backplane when the controller enclosure bays or cache module bays are empty. Be sure the enclosures are empty, then mount the enclosures into the rack. DO NOT use the disk enclosure handles to lift the enclosure. The handles cannot support the weight of the enclosure. Only use these handles to position the enclosure in the mounting brackets.
Preparing the Host System 3. Install the elements. Install the disk drives. Make sure you install blank panels in any unused bays. Fibre Channel cabling information is shown to illustrate supported configurations. In a dual-bus disk enclosure configuration, disk enclosures 1, 2, and 3 are stacked below the controller enclosure—two SCSI Buses per enclosure (see Figure 33).
Figure 33: Dual-Bus Enterprise Storage RAID Array Storage System (callouts: 1 SCSI Bus 1 Cable, 2 SCSI Bus 2 Cable, 3 SCSI Bus 3 Cable, 4 SCSI Bus 4 Cable, 5 SCSI Bus 5 Cable, 6 SCSI Bus 6 Cable, 7 AC Power Inputs, 8 Fibre Channel Ports).
Figure 34: Single-Bus Enterprise Storage RAID Array Storage System (callouts: 1 SCSI Bus 1 Cable, 2 SCSI Bus 2 Cable, 3 SCSI Bus 3 Cable, 4 SCSI Bus 4 Cable, 5 SCSI Bus 5 Cable, 6 SCSI Bus 6 Cable, 7 AC Power Inputs, 8 Fibre Channel Ports).
Preparing the Host System Making a Physical Connection To attach a host computer to the storage subsystem, install one or more host bus adapters into the computer. A Fibre Channel (FC) cable goes from the host bus adapter to an FC switch. Preparing to Install Host Bus Adapter Before installing the host bus adapter: 1. Perform a complete backup of the entire system. 2. Shut down the computer system or perform a hot addition of the adapter based upon directions for that server.
Table 12: StorageWorks Solution Software Packages
CPQfcraid: Agent software and system updates required for RAID system operation. This package should always be installed.
CPQfcaw: HBA driver for the CPQ/JNI FC64-1063 Fibre Channel 64-bit Sbus adapter. This package should be loaded when a CPQ/JNI SWSA4-SC adapter is installed in the server.
CPQfcaPCI: HBA driver for the CPQ/JNI FCI-1063 32-bit PCI bus adapter.
Preparing the Host System To install the System Manager packages onto the hard disk: 1. Download the fcraid_v3.0d.tar.Z from the web site http://h18006.www1.hp.com/products/storageworks/ma8kema12k/kits.html. 2. Copy or move the file to a temporary directory, uncompress and un-tar the bundle, and run ./install_stgwks. Note: The uncompressed tar file plus the un-tarred files require 107 MB of file system space. 3. Load the Fibre Channel adapter driver packages. 4.
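A minimal shell sketch of steps 1 and 2 follows; the temporary directory /tmp/stgwks is illustrative, the file name is the one given in step 1, and the install_stgwks script is assumed to unpack into the current directory:

# mkdir /tmp/stgwks
# cp fcraid_v3.0d.tar.Z /tmp/stgwks
# cd /tmp/stgwks
# uncompress fcraid_v3.0d.tar.Z
# tar xvf fcraid_v3.0d.tar
# ./install_stgwks

Remember that the uncompressed tar file plus the un-tarred files require 107 MB of file system space, so choose a directory with enough room.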
Preparing the Host System Verifying/Installing Required Versions To prepare your Enterprise Storage RAID Array for the RAID Manager software installation: 1. Back up your entire system according to your normal procedure. 2. Select a system user with superuser privileges (for example: root) as the RAID administrator. 3. Login as the RAID administrator. 4. To find a filesystem with at least 500 KB free space, type: # df -k 5. Choose a directory in which to install the SWCC software.
Preparing the Host System Installing Solution Software Packages Several significant installation changes have been made in Solution Software V8.8 for Sun Solaris. The most significant changes are a friendlier and more logical installation process and moving /opt/steam/bin/config.sh Option 20 to /opt/HPfcraid/bin/config.sh.
Preparing the Host System 3. Start the Installation Manager by running the install_stgwks script. 4. The script will scan the server for compatible HBAs and ask you if you wish to install drivers for those HBAs. The default answer is Yes. Answer no (n) if you do not wish to install a driver.
Preparing the Host System 6. If any adapter drivers were loaded in previous steps, answer “Y” to allow the Installation Manager to rescan your system for supported adapters. Since the rescan will probe every possible I/O slot for new hardware, it can take up to 5 minutes to complete on a large system. During this time, a heavily loaded system may appear to hang. This is normal. 7. System configuration files for adapters can be selected automatically or manually. — Choose “M” to manually edit your file.
Preparing the Host System Preparing LUNs for Use by the FileSystem Each logical unit number (LUN) created on the Enterprise Storage RAID Array appears as a SCSI hard disk to the host. Therefore, it must be labeled before it can be used and, in most instances, a new file system must be created. Labeling LUNs A LUN is labeled using the /etc/format utility. The label contains information about the LUN such as controller-type, geometry, and partitions.
1. Quantum ProDrive 80S
2. Quantum ProDrive 105S
3. CDC Wren IV 94171-344
4. SUN0104
5. SUN0207
6. SUN0327
7. SUN0340
8. SUN0424
9. SUN0535
10. SUN0669
11. SUN1.0G
12. SUN1.05
13. SUN1.3G
14. SUN2.1G
15. other
Specify disk type (enter its number)[19]: 0
c1t0d0: configured with capacity of 1.
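The remainder of a typical labeling session is sketched below; the prompts are illustrative and vary slightly between Solaris releases:

format> label
Ready to label disk, continue? y
format> quit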
Creating and Tuning File Systems
Before the new LUN can be used by the system, a new filesystem must be created on each partition that will be mounted. Use the newfs command to create filesystems and the tunefs command to modify existing filesystems. For more information, refer to the online Help for the newfs and tunefs commands.
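A hedged sketch of creating, tuning, and mounting a filesystem on one LUN follows; the device name c1t0d0, the slice, and the mount point are illustrative, so substitute the values that match your configuration:

# newfs /dev/rdsk/c1t0d0s2
# tunefs -m 1 /dev/rdsk/c1t0d0s2
# mkdir /mnt/hsg80
# mount /dev/dsk/c1t0d0s2 /mnt/hsg80

Here tunefs -m 1 lowers the reserved free-space percentage to 1 percent; whether to change it depends on your space and performance needs.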
Preparing the Host System Refer to the HP StorageWorks HSG60 and HSG80 Array Controller and Array Controller Software Maintenance and Service Guide and the Solution Software Release Notes for the latest information on upgrades. The Solutions Software Version 8.8 for Sun Solaris Kit should come with the ordered items on the FCO. This Version 8.8 Solution Software includes Version 2.5b of the CPQfcraid RAID Manager Software, and it contains upgraded HBA drivers (if you were running ACS version 8.
Installing and Configuring the HSG Agent 4 StorageWorks Command Console (SWCC) enables real-time configuration of the storage environment and permits you to monitor and configure the storage connected to the HSG80 controller.
Installing and Configuring the HSG Agent Why Use StorageWorks Command Console (SWCC)? StorageWorks Command Console (SWCC) enables you to monitor and configure the storage connected to the HSG80 controller. SWCC consists of Client and Agent. ■ The Client provides pager notification and lets you manage your virtual disks. The Client runs on Windows Server 2003 (32-bit), Windows 2000 with Service Pack 3 and 4, and Windows NT 4.0 with Service Pack 6A or above.
Note: For serial and SCSI connections, the Agent is not required for creating virtual disks.

Installation and Configuration Overview
Table 14 provides an overview of the installation.

Table 14: Installation and Configuration Overview
1. Verify that your hardware has been set up correctly. See the previous chapters in this guide.
2. Verify that RAID Solution Software has been loaded properly. Refer to Chapter 3.
3.
Installing and Configuring the HSG Agent About the Network Connection for the Agent The network connection, shown in Figure 35, displays the subsystem connected to a hub or a switch. SWCC can consist of any number of Clients and Agents in a network. However, it is suggested that you install only one Agent on a computer. By using a network connection, you can configure and monitor the subsystem from anywhere on the LAN. If you have a WAN or a connection to the Internet, monitor the subsystem with TCP/IP.
Figure 35: An example of a network connection (callouts: 1 Agent system (has the Agent software), 2 TCP/IP network, 3 Client system (has the Client software), 4 Fibre Channel cable, 5 Hub or switch, 6 HSG80 controller and its device subsystem, 7 Servers).
Before Installing the Agent
The Agent requires that your system meet the minimum requirements defined in the release notes for your operating system. The program is designed to operate with the Client version 2.5 on Windows 2000, Windows NT, or Windows Server 2003 (32-bit). Verify that your system meets the minimum requirements by completing the following steps.
Installing and Configuring the HSG Agent Downloading the Host Kit Software From the Web The host kit software is available for download. You can save the software to your computer or create a CD-ROM. Platform kit software is stored on the download web site based on operating system. Follow the steps below to obtain the software from the web site. 1. Go to http://h18006.www1.hp.com/products/storageworks/ma8kema12k/kits.html. 2. Select the kit for download. 3.
Installing and Configuring the HSG Agent Configuring the Agent with Install.sh (for First-time Configurations) For first-time configurations, run the install.sh script from the $basedir/steam/bin directory (usually /opt/steam/bin). This script will guide you through the following actions: ■ Adding a subsystem entry ■ Adding a Client entry ■ Creating a password ■ Setting up email notification ■ Starting the Agent 1. To change directories, enter the following command: # cd /opt/steam/bin 2.
Installing and Configuring the HSG Agent ---- RAID Array V8.
Installing and Configuring the HSG Agent Adding a Subsystem Entry Any storageset belonging to the subsystem can be used to add a subsystem entry, but be careful not to delete the LUN from the subsystem when reconfiguring, as this breaks the communication link to the Agent for the entire subsystem. Add the subsystems you want the Agent to monitor by performing the following procedure, starting at the RAID Array Configuration Menu. 1. In the Storage Subsystem Options group, select option 12, View Subsystems.
Installing and Configuring the HSG Agent Restarting the SWCC Agent After you make any changes to the SWCC Agent configuration, the Agent daemon must be stopped and restarted. This ensures that changes to the configuration files are read by the steamd program. The steamd program is the daemon for the Agent. Configuring the Agent within FirstWatch If you have VERITAS FirstWatch installed, you may configure the StorageWorks Command Console Agent to run under FirstWatch.
FC Configuration Procedures 5 This chapter describes procedures to configure a subsystem that uses Fibre Channel (FC) fabric topology. In fabric topology, the controller connects to its hosts through switches.
Establishing a Local Connection
A local connection is required to configure the controller until a command console LUN (CCL) is established using the CLI. Communication with the controller can be through the CLI or SWCC. The maintenance port, shown in Figure 37, provides a way to connect a maintenance terminal. The maintenance terminal can be an EIA-423 compatible terminal or a computer running a terminal emulator program. The maintenance port accepts a standard RS-232 jack.
FC Configuration Procedures Establishing Connection with a SPARC System To set up your SPARC system for connection with the HSG80 Controller, follow these steps: 1. Use the supplied serial cable and the 9 to 25 pin RS-232 adapter (P/N=12-45238-01) to connect the serial port on the SPARC system to the serial port on the RAID array controller. 2. If you use a SUN A/B Serial Splitter Cable, and/or you attach the controller to serial port A, you may need to modify the remote file to specify ttya as follows: a.
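The edit referred to in step a is cut off in this copy of the guide; the following is a hedged sketch of the typical change, assuming the stock Solaris /etc/remote file. The hardwire entry is pointed at serial port A (ttya) and then used with tip:

hardwire:\
        :dv=/dev/term/a:br#9600:el=^C^S^Q^U^D:ie=%$:oe=^D:

# tip hardwire

Set the speed (br#9600) to match the controller maintenance port settings, and type ~. to exit the tip session when finished.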
FC Configuration Procedures Setting Up a Single Controller Powering On and Establishing Communication 1. Connect the computer or terminal to the controller, as shown in Figure 37. The connection to the computer is through the COM1 or COM2 port. 2. Turn on the computer or terminal. 3. Apply power to the storage subsystem. 4. Verify that the computer or terminal is configured as follows: — 9600 baud — 8 data bits — 1 stop bit — no parity — no flow control 5. Press Enter.
Figure 38: Single controller cabling (callouts: 1 Controller, 2 Host port 1, 3 Host port 2, 4 Cable from the switch to the host Fibre Channel adapter, 5 FC switch).

Configuring a Single Controller Using CLI
Configuring a single controller using CLI involves the following processes:
■ Verifying the Node ID and Checking for Any Previous Connections.
■ Configuring Controller Settings.
■ Restarting the Controller.
■ Setting Time and Verifying all Commands.
FC Configuration Procedures The node ID is located in the third line of the SHOW THIS result: HSG80> SHOW THIS Controller: HSG80 ZG80900583 Software V8.8, Hardware E11 NODE_ID = 5000-1FE1-0001-3F00 ALLOCATION_CLASS = 0 If the node ID is present, go to step 5. If the node ID is all zeroes, enter node ID and checksum, which are located on a sticker on the controller enclosure.
FC Configuration Procedures Note: If SCSI-2 is selected, you must disable CCL using the command: SET THIS NOCOMMAND_CONSOLE_LUN 6. Assign an identifier for the communication LUN (also called the command console LUN, or CCL). The CCL must have a unique identifier that is a decimal number in the range 1 to 32767, and which is different from the identifiers of all units.
FC Configuration Procedures FRUTIL will print out a procedure, but will not give you a prompt. Ignore the procedure and press the Enter key. 3. Set up any additional optional controller settings, such as changing the CLI prompt. See the SET THIS CONTROLLER/OTHER CONTROLLER command in the HP StorageWorks HSG60 and HSG80 Array Controller and Array Controller Software Command Line Interface (CLI) Reference Guide for the format of optional settings. 4. Verify that all commands have taken effect.
FC Configuration Procedures The following sample is a result of a SHOW THIS command, with the areas of interest in bold. Controller: HSG80 ZG94214134 Software V8.
FC Configuration Procedures 5. Turn on the switches, if not done previously. If you want to communicate with the Fibre Channel switches through Telnet, set an IP address for each switch. See the manuals that came with the switches for details. Plugging in the FC Cable and Verify Connections 6. Plug the Fibre Channel cable from the first host bus adapter into the switch. Enter the SHOW CONNECTIONS command to view the connection table: SHOW CONNECTIONS 7.
FC Configuration Procedures Setting Up a Controller Pair Powering Up and Establishing Communication 1. Connect the computer or terminal to the controller as shown in Figure 37. The connection to the computer is through the COM1 or COM2 ports. 2. Turn on the computer or terminal. 3. Apply power to the storage subsystem. 4. Configure the computer or terminal as follows: — 9600 baud — 8 data bits — 1 stop bit — no parity — no flow control 5. Press Enter.
Figure 39 shows a controller pair with failover cabling, with one HBA per server and the HSG80 controller in transparent failover mode.

Figure 39: Controller pair failover cabling (callouts: 1 Controller A, 2 Controller B, 3 Host port 1, 4 Host port 2, 5 Cable from the switch to the host FC adapter, 6 FC switch).

Configuring a Controller Pair Using CLI
To configure a controller pair using CLI:
■ Configuring Controller Settings.
■ Restarting the Controller.
FC Configuration Procedures The node ID is located in the third line of the SHOW THIS result: HSG80> show this Controller: HSG80 ZG80900583 Software V8.8, Hardware E11 NODE_ID = 5000-1FE1-0001-3F00 ALLOCATION_CLASS = 0 If the node ID is present, go to step 5. If the node ID is all zeroes, enter the node ID and checksum, which are located on a sticker on the controller enclosure.
FC Configuration Procedures 10. Set the topology for the controller. If both ports are used, set topology for both ports: SET THIS PORT_1_TOPOLOGY=FABRIC SET THIS PORT_2_TOPOLOGY=FABRIC If the controller is not factory-new, it may have another topology set, in which case these commands will result in an error message.
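If such an error appears, one common recovery sequence, assuming your ACS version accepts OFFLINE as a port topology value (check the controller SET options in the CLI Reference Guide), is to take the port offline and then set it to FABRIC:
SET THIS PORT_1_TOPOLOGY=OFFLINE
SET THIS PORT_1_TOPOLOGY=FABRIC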
FC Configuration Procedures
15. Verify that all commands have taken effect by entering the following command:
SHOW THIS
FC Configuration Procedures 16. Verify node ID, allocation class, SCSI version, failover mode, identifier, and port topology. The following display is a sample result of a SHOW THIS command, with the areas of interest in bold. Controller: HSG80 ZG94214134 Software V8.
FC Configuration Procedures
17. Turn on the switches if not done previously. If you want to communicate with the FC switches through Telnet, set an IP address for each switch. See the manuals that came with the switches for details.
Plugging in the FC Cable and Verifying Connections
18. Plug the FC cable from the first host adapter into the switch. Enter a SHOW CONNECTIONS command to view the connection table:
SHOW CONNECTIONS
The first connection will have one or more entries in the connection table.
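As a hedged illustration of how a new entry is typically renamed and tagged with the host operating system (the connection name !NEWCON00 and the new name SUN1A1 are examples only), the commands look like this:
RENAME !NEWCON00 SUN1A1
SET SUN1A1 OPERATING_SYSTEM=SUN
SHOW CONNECTIONS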
FC Configuration Procedures Verifying Installation To verify installation for your Sun Solaris host, reboot your system with the -r option (reconfigure boot). After the system is booted, use the format command to verify that your LUNs are accessible. Configuring Devices The disks on the device bus of the HSG80 can be configured manually or with the CONFIG utility. The CONFIG utility is easier.
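A minimal invocation of the CONFIG utility, as also used in the CLI configuration example later in this guide, is:
RUN CONFIG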
FC Configuration Procedures
Figure 40: Storage container types (CXO6677A). Containers include partitions, single devices (JBOD), and storagesets: stripeset (R0), mirrorset (R1), striped mirrorset (R0+1), and RAIDset (R3/5).
Configuring a Stripeset
1. Create the stripeset by adding its name to the controller's list of storagesets and by specifying the disk drives it contains. Use the following syntax:
ADD STRIPESET STRIPESET-NAME DISKNNNNN DISKNNNNN.......
2.
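As an end-to-end sketch of this procedure (the storageset name STRIPE1, the disk names, and the unit number D100 are hypothetical examples, not values from this guide), the sequence that creates, initializes, verifies, and presents a three-member stripeset would look like:
ADD STRIPESET STRIPE1 DISK10000 DISK20000 DISK30000
INITIALIZE STRIPE1
SHOW STRIPE1
ADD UNIT D100 STRIPE1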
FC Configuration Procedures Configuring a Mirrorset 1. Create the mirrorset by adding its name to the controller's list of storagesets and by specifying the disk drives it contains. Optionally, you can append mirrorset switch values: ADD MIRRORSET MIRRORSET-NAME DISKNNNNN DISKNNNNN SWITCHES Note: See the ADD MIRRORSET command in the HP StorageWorks HSG60 and HSG80 Array Controller and Array Controller Software Command Line Interface (CLI) Reference Guide for a description of the mirrorset switches. 2.
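A comparable hypothetical sequence for a two-member mirrorset (the names MIRR1 and D101 are examples only) would be:
ADD MIRRORSET MIRR1 DISK10000 DISK20000
INITIALIZE MIRR1
SHOW MIRR1
ADD UNIT D101 MIRR1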
FC Configuration Procedures Note: See the ADD RAIDSET command in the HP StorageWorks HSG60 and HSG80 Array Controller and Array Controller Software Command Line Interface (CLI) Reference Guide for a description of the RAIDset switches. 2. Initialize the RAIDset, specifying any desired switches: INITIALIZE RAIDSET-NAME SWITCH Note: HP recommends that you allow initial reconstruct to complete before allowing I/O to the RAIDset. Not doing so may generate forced errors at the host level.
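A comparable hypothetical sequence for a three-member RAIDset (the names RAID1 and D102 are examples only) would be:
ADD RAIDSET RAID1 DISK10000 DISK20000 DISK30000
INITIALIZE RAID1
SHOW RAID1
ADD UNIT D102 RAID1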
FC Configuration Procedures
See "Specifying Initialization Switches" on page 89 for a description of the initialization switches.
4. Verify the striped mirrorset configuration:
SHOW STRIPESET-NAME
5. Assign the striped mirrorset a unit number to make it accessible by the hosts. See "Assigning Unit Numbers and Unit Qualifiers" on page 152.
FC Configuration Procedures 2. Create each partition in the storageset or disk drive by indicating the partition's size. Also specify any desired switch settings: CREATE_PARTITION STORAGESET-NAME SIZE=N SWITCHES or CREATE_PARTITION DISK-NAME SIZE=N SWITCHES where N is the percentage of the disk drive or storageset that will be assigned to the partition. Enter SIZE=LARGEST, on the last partition only, to let the controller assign the largest free space available to the partition.
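For example, the following hypothetical commands (the storageset name RAID1 and the percentages are examples only) divide a storageset into three 25% partitions plus a final partition that takes the remaining space, and then display the result:
CREATE_PARTITION RAID1 SIZE=25
CREATE_PARTITION RAID1 SIZE=25
CREATE_PARTITION RAID1 SIZE=25
CREATE_PARTITION RAID1 SIZE=LARGEST
SHOW RAID1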
FC Configuration Procedures Assigning Unit Numbers and Unit Qualifiers Each storageset, partition, or single (JBOD) disk must be assigned a unit number for the host to access. As the units are added, their properties can be specified through the use of command qualifiers, which are discussed in detail under the ADD UNIT command in the HP StorageWorks HSG60 and HSG80 Array Controller and Array Controller Software Command Line Interface (CLI) Reference Guide.
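For example, the following hypothetical commands (the unit number D0, storageset S1, and connection name SUN1A1 are examples only) assign a unit number and then restrict access to a single connection:
ADD UNIT D0 S1 DISABLE_ACCESS_PATH=ALL
SET D0 ENABLE_ACCESS_PATH=SUN1A1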
FC Configuration Procedures Configuration Options Changing the CLI Prompt To change the CLI prompt, enter a 1- to 16-character string as the new prompt, according to the following syntax: SET THIS_CONTROLLER PROMPT = “NEW PROMPT” If you are configuring dual-redundant controllers, also change the CLI prompt on the “other controller.” Use the following syntax: SET OTHER_CONTROLLER PROMPT = “NEW PROMPT” Note: It is suggested that the prompt name reflect some information about the controllers.
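For example, the following hypothetical commands (the prompt strings are examples only) encode the cabinet and controller position in each prompt:
SET THIS_CONTROLLER PROMPT="CAB1_TOP> "
SET OTHER_CONTROLLER PROMPT="CAB1_BOTTOM> "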
FC Configuration Procedures
Note: This procedure assumes that the disks that you are adding to the spareset have already been added to the controller's list of known devices.
To add the disk drive to the controller's spareset list, use the following syntax:
ADD SPARESET DISKNNNNN
Repeat this step for each disk drive you want to add to the spareset. For example, the following commands add DISK11300 and DISK21300 to the spareset:
ADD SPARESET DISK11300
ADD SPARESET DISK21300
FC Configuration Procedures To disable autospare, use the following command: SET FAILEDSET NOAUTOSPARE During initialization, AUTOSPARE checks to see if the new disk drive contains metadata. Metadata is information the controller writes on the disk drive when the disk drive is configured into a storageset. Therefore, the presence of metadata indicates that the disk drive belongs to, or has been used by, a storageset. If the disk drive contains metadata, initialization stops.
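To enable the feature again, use the AUTOSPARE switch of the SET FAILEDSET command:
SET FAILEDSET AUTOSPARE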
FC Configuration Procedures Displaying the Current Switches To display the current switches for a storageset or single-disk unit, enter a SHOW command, specifying the FULL switch: SHOW STORAGESET-NAME or SHOW DEVICE-NAME Note: FULL is not required when showing a particular device. It is used when showing all devices, for example, SHOW DEVICES FULL.
FC Configuration Procedures FC Considerations for Both Loop and Fabric Environments In a mixed environment of Fibre Channel loop and Fibre Channel switched access, there are specific differences in transport and access to storage subsystems—either on the loop or in the fabric. This section describes the differences and configuration processes for Sun servers and adapters for loop and switch support.
FC Configuration Procedures
Servers
Servers provide the computing power and the communication access for the storage that the computing applications require. Servers can connect to local storage or to RAID Arrays. Server loading is often characterized in terms of the "number of" quantities involved in a server-storage relationship.
FC Configuration Procedures
Note: This chapter does not attempt to answer questions of loading or performance.
Storage
The term Storage, as used in this chapter, refers to a RAID Array that is supported by the HSG80 controller. This controller supports all of the standard RAID set configurations in a loop or fabric environment. These configuration elements are discussed in the following sections.
Active-Passive vs.
FC Configuration Procedures
In Case 2, both I/O channels are protected, and the processing load is distributed over both controllers. The fact that one controller is active while the other controller is passive has led to the redundant pair being described, somewhat misleadingly, as Active-Passive. The description is misleading because it refers to a single controller rather than to the controller pair.
FC Configuration Procedures As indicated earlier, the complexity of a configuration can increase in direct proportion to the number of components added to the Fibre Channel storage network. While switches provide a great deal of flexibility and scalability of storage access and sharing, they also tend to make transparent the path from server to storage. Note: Take the time to plan your configuration, test the portions of it and record the various connections and paths.
FC Configuration Procedures In the fibre fabric, the basic target is the controller’s Worldwide Node Name (WWNN). The WWNN is the fabric network address for the controller of the RAID array. WWNNs have a format of AAAA-BBBB-CCCC-DDDD, where A, B, C, D are alphanumeric characters. See “Worldwide Names (Node IDs and Port IDs)” on page 56 for details. Each controller has two ports and each port has a designated Worldwide Port Name (WWPN).
FC Configuration Procedures FFFF-SSSS-TTTT-LLL1 for Port 1 FFFF-SSSS-TTTT-LLL2 for Port 2 By utilizing the WWPN, the user has complete control over the mappings (and subsequently the system binding) from a server to a port in the fabric. By enabling the Port Names, there is the flexibility of mapping a server to a specific port of a fabric array or to multiple ports of a fabric array. Additionally, the user is able to map a server with multiple adapters to multiple arrays in the fabric.
FC Configuration Procedures
Loop Mappings
Three entries from an sd.conf file are shown below.
name="sd" parent="/sbus@a,0/fcaw@2,0" Target=64 lun=0;
name="sd" parent="/sbus@a,0/fcaw@2,0" Target=64 lun=1;
...
name="sd" parent="/sbus@a,0/fcaw@2,0" Target=64 lun=15;
Analysis
Device Class: name="sd"
The device class that is enabled. The sd driver is the SCSI disk, or Target, driver for Solaris.
FC Configuration Procedures
Analysis
Device Class: name="sd"
The device class that is enabled. As with the loop settings, the SCSI disk driver is used.
Adapter Path: parent="/iommu@f,e0000000/sbus@f,e0001000/fcaw@1,0"
Identifies the adapter, the instance, and the adapter/driver name.
Target: Target=64
The default bindings for Solaris use Targets 64 and 65 for Fabric.
FC Configuration Procedures The menu is shown below. All the options in this menu apply to any subsystem and any server, regardless of the Fibre Channel access chosen. ---- RAID Array V8.
FC Configuration Procedures
Note: Changing values or the driver mode for one adapter instance will not affect the mode for the other adapter instances. In this way, unique instances of each adapter can exist for a specific type of fibre transport medium, loop or fabric.
To make a change to the adapters, select Option 20, Add/Change Adapters. When selected, you access the following sub-menu:
Note: Option 20 has been moved to /opt/HPfcraid/bin/config.sh.
--- Adapter Configuration Menu --(sd.conf & fc*.
FC Configuration Procedures Option 1 - View Adapters The following example shows one JNI SBUS 64-bit Adapter and two JNI PCI Adapters. The JNI SBUS adapter is configured for Arbitrated Loop and the JNI PCI adapters are configured for Fabric mode. Note also that both PCI adapters are communicating with the same Worldwide Port Name entities on the Fabric. The RAID configuration employed is unknown. NN Adapter # Control.
FC Configuration Procedures
Option 5 - View Available WWPNs
Select this option to run the Scan Adapters Utility. This utility detects available adapters and WWPNs for this host.
FC Configuration Procedures
As an example of Option 4, you will:
■ Change the mode from fabric to loop.
■ Change the number of LUNs per Target.
■ Set the Target IDs.
NN Adapter # Control.
FC Configuration Procedures Change the Number of LUNs per Target When changing from loop to fabric mode, you need to supply the WWPNs. Do you want to change the number of LUNs per Target? [y,N] By default, the Solaris Platform kit software supplies LUNs 0-15 for a total of 16 LUNs per Target. This number is more than adequate for many purposes.
FC Configuration Procedures — Enter WWPN (n=none, RETURN=old value, if any): 5000-1fe1-0000-03f1 These prompts are provided for each Target that has been defined. The user has three choices at this prompt: ■ Press Enter to restore the old value for the WWPN. ■ Enter the WWPN value for each Target. ■ Enter the letter, n. This value specifies that there are no WWPNs specified. The impact of this action is very important as it will preserve the adapter in the mode requested but will remove all the sd.
FC Configuration Procedures Configuration Procedures This section presents a set of macro-level procedures to perform: ■ Loop Configurations, page 173 ■ Fabric Configurations, page 174 ■ Reconfiguring from Loop to Fabric, page 174 ■ Reconfiguring from Fabric to Loop, page 175 The following assumptions have been made in these procedures: ■ The server in question has an adapter to support the Fibre Channel loop or fabric. ■ A storage system is available that will support a fibre connection.
FC Configuration Procedures 6. After reboot, enter the format command to see the intended Targets list. This assumes that at least one unit has been created on the subsystems. Fabric Configurations Note: This procedure assumes that the Solaris platform kit is being loaded onto the system and a specific mode of operation is being selected. 1. Load the Solaris Platform kit and select the Fabric mode for the driver when prompted. There are no default WWPNs so the software will prompt for the specific WWPNs.
FC Configuration Procedures 4. Create the connections from server to switch and from switch to subsystem. For configurations with multiple adapters and multiple subsystems, verify all the Fibre Channel paths from server to subsystems. 5. At the servers, reconfigure the adapters to support the fabric WWPNs and remove the unused loop Targets. This is accomplished with /opt/HPfcraid/bin/config.sh, Option 4, Modify Adapters.
FC Configuration Procedures 5. At the servers, reconfigure the adapters to support loop. This is accomplished with /opt/HPfcraid/bin/config.sh, Option 4, Modify Adapters. This option allows the mode of the driver to be changed and the Targets to be verified for the configuration. For each adapter that is used in the changed configuration, this modify step must be repeated. For each server in the changed configuration, this modify step must be repeated. 6.
Using CLI for Configuration 6 This chapter presents an example of how to configure a storage subsystem using the Command Line Interpreter (CLI). The CLI configuration example shown assumes: ■ A normal, new controller pair, which includes: — NODE ID set — No previous failover mode — No previous topology set ■ Two single-bus model 4214R disk enclosure shelves ■ PCMCIA cards installed in both controllers A storage subsystem example is shown in Figure 41.
Using CLI for Configuration Figure 41 shows an example storage system map for the BA370 enclosure. Details on building your own map are described in Chapter 2. Templates to help you build your storage map are supplied in Appendix A.
Using CLI for Configuration
Figure 42 (CXO7109B): Example cabling and connection layout. Host 1 "RED", Host 2 "GREY", and Host 3 "BLUE" each have Fibre Channel adapters FCA1 and FCA2, connected through two switches or hubs to host port 1 and host port 2 on Controller A and Controller B (both host ports active on both controllers). Connections: RED1A1, GREY1A1, BLUE1A1, RED1B1, GREY1B1, BLUE1B1, RED2A2, GREY2A2, BLUE2A2, RED2B2, GREY2B2, BLUE2B2. Units D0, D1, D2, D101, D102, and D120; all units visible to all ports. NOTE: FCA = Fibre Channel Adapter.
Using CLI for Configuration
Figure 43 (CXO7110B): Example, logical or virtual disks comprised of storagesets. Units D0, D1, D2, D101, D102, and D120 are presented to hosts "RED", "GREY", and "BLUE".
Figure 44 (CXO7297B): Example, virtual system layout from hosts' point of view. Hosts "PURPLE", "WHITE", and "TAN" see D0 at LUN 0, D1 at LUN 1, D100 at LUN 0, and D120 at LUN 0.
CLI Configuration Example
Text conventions used in this example are listed below:
■ Text in italics indicates an action you take.
■ Text in THIS FORMAT indicates a command you type. Be sure to press Enter after each command.
Using CLI for Configuration
CLEAR CLI
SET THIS SCSI_VERSION=SCSI-2
SET THIS ALLOCATION_CLASS=0
RESTART OTHER
RESTART THIS
SET THIS TIME=10-Mar-2001:12:30:34
RUN FRUTIL
Do you intend to replace this controller's cache battery? Y/N [Y]
Y
Plug serial cable from maintenance terminal into bottom controller.
Note: Bottom controller (B) becomes "this" controller.
Using CLI for Configuration Connection Name !NEWCON00 Operating System Controller WINNT THIS HOST_ID=XXXX-XXXX-XXXX-XXXX Port 1 Address Status Unit Offset XXXXXX OL this 0 ADAPTER_ID=XXXX-XXXX-XXXX-XXXX RENAME !NEWCON00 PURPLE1A1 SET PURPLE1A1 OPERATING_SYSTEM=SUN SHOW CONNECTIONS Note: Connection table sorts alphabetically.
Using CLI for Configuration
RENAME !NEWCON01 WHITE1B2
SET WHITE1B2 OPERATING_SYSTEM=SUN
SHOW CONNECTIONS
Mark or tag both ends of the Fibre Channel cables.
Connection Name  Operating System  Controller  Port  Address  Status    Unit Offset
PURPLE1A1        SUN               OTHER       1     XXXXXX   OL other  0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
WHITE1B2         SUN               THIS        2     XXXXXX   OL this   100
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
Plug in the Fibre Channel cable from the adapter in host "TAN".
Using CLI for Configuration Connection Name PURPLE1A1 Operating System SUN Controller OTHER Port 1 Address Status XXXXXX OL other Unit Offset 0 HOST_ID=XXXX-XXXX-XXXX-XXXX ADAPTER_ID=XXXX-XXXX-XXXX-XXXX WHITE1B2 2 SUN THIS XXXXXX OL this 100 HOST_ID=XXXX-XXXX-XXXX-XXXX ADAPTER_ID=XXXX-XXXX-XXXX-XXXX TAN1B2 2 SUN THIS HOST_ID=XXXX-XXXX-XXXX-XXXX XXXXXX OL this 120 ADAPTER_ID=XXXX-XXXX-XXXX-XXXX Mark or tag both end of Fibre Channel cables. 184 HSG80 ACS Solution Software V8.
Using CLI for Configuration
RUN CONFIG
ADD RAIDSET R1 DISK10000 DISK20000 DISK30000 DISK40000 DISK50000 DISK60000
INITIALIZE R1
ADD UNIT D102 R1 DISABLE_ACCESS_PATH=ALL
SET D102 ENABLE_ACCESS_PATH=PURPLE1A1
ADD RAIDSET R2 DISK10100 DISK20100 DISK30100 DISK40100 DISK50100 DISK60100
INITIALIZE R2
ADD UNIT D120 R2 DISABLE_ACCESS_PATH=ALL
SET D120 ENABLE_ACCESS_PATH=(TAN1B2)
ADD MIRRORSET M1 DISK10200 DISK20200
ADD MIRRORSET M2 DISK30200 DISK40200
ADD STRIPESET S1 M1 M2
INITIALIZE S1
ADD UNIT D0 S1 DISABLE_ACCESS_PATH=ALL
Backing Up, Cloning, and Moving Data 7 This chapter includes the following topics: ■ Backing Up Subsystem Configurations, page 188 ■ Creating Clones for Backup, page 189 ■ Moving Storagesets, page 195 HSG80 ACS Solution Software V8.
Backing Up, Cloning, and Moving Data Backing Up Subsystem Configurations The controller stores information about the subsystem configuration in its nonvolatile memory. This information could be lost if the controller fails or when you replace a module in the subsystem. Use the following command to produce a display that shows if the save configuration feature is active and which devices are being used to store the configuration.
Backing Up, Cloning, and Moving Data Creating Clones for Backup Use the CLONE utility to duplicate the data on any unpartitioned single-disk unit, stripeset, mirrorset, or striped mirrorset in preparation for backup. When the cloning operation is complete, you can back up the clones rather than the storageset or single-disk unit, which can continue to service its I/O load. When you are cloning a mirrorset, CLONE does not need to create a temporary mirrorset.
Backing Up, Cloning, and Moving Data
Figure 45 (CXO5510A): CLONE utility steps for duplicating unit members. The unit's member (Disk10300 in the figure) is joined with a new member in a temporary mirrorset, the data is copied to the new member, and the new member then becomes the clone of Disk10300.
Use the following steps to clone a single-disk unit, stripeset, or mirrorset:
1. Establish a connection to the controller that accesses the unit you want to clone.
2. Start CLONE using the following command:
RUN CLONE
3.
Backing Up, Cloning, and Moving Data
The following example shows the commands you would use to clone storage unit D98. The clone command terminates after it creates storage unit D99, a clone or copy of D98.
RUN CLONE
CLONE LOCAL PROGRAM INVOKED
UNITS AVAILABLE FOR CLONING:
  98
ENTER UNIT TO CLONE? 98
CLONE WILL CREATE A NEW UNIT WHICH IS A COPY OF UNIT 98.
ENTER THE UNIT NUMBER WHICH YOU WANT ASSIGNED TO THE NEW UNIT? 99
THE NEW UNIT MAY BE ADDED USING ONE OF THE FOLLOWING METHODS:
1.
Backing Up, Cloning, and Moving Data
USE AVAILABLE DEVICE DISK20300(SIZE=832317) FOR MEMBER DISK10000(SIZE=832317) (Y,N) [Y]? Y
MIRROR DISK10000 C_MB
SET C_MB NOPOLICY
SET C_MB MEMBERS=2
SET C_MB REPLACE=DISK20300
COPY IN PROGRESS FOR EACH NEW MEMBER. PLEASE BE PATIENT...
Backing Up, Cloning, and Moving Data
Controller_Serial is the low-order 48 bits of the serial number of the controller that "initialized" the storage set. The Controller_Serial is composed from several fields, but the high-order 12 bits are a Reserved Field. VSN_Seed is a counter that is incremented every time a storage set is initialized. If the linked WWID is already in use, a unique WWID is allocated instead, and a message to this effect is displayed.
Backing Up, Cloning, and Moving Data
Run Clone - Works the same as V8.6; in other words, a unique WWID is always allocated to the clone unit.
Clone of a Snap - The user wants to clone a snap unit without using any more WWIDs. The clone created from the snap unit will be created using the linked WWID associated with the snap unit. Exception: A new WWID will be allocated if the snapshot was created using the use_parent_wwid switch. Each WWID only has one linked WWID variation.
Backing Up, Cloning, and Moving Data Moving Storagesets You can move a storageset from one subsystem to another without destroying its data. You also can follow the steps in this section to move a storageset to a new location within the same subsystem. Caution: Move only normal storagesets. Do not move storagesets that are reconstructing or reduced, or data corruption will result. See the release notes for the version of your controller software for information on which drives can be supported.
Backing Up, Cloning, and Moving Data 5. Delete each disk drive, one at a time, that the storageset contained. Use the following syntax: DELETE DISK-NAME DELETE DISK-NAME DELETE DISK-NAME 6. Remove the disk drives and move them to their new PTL locations. 7. Again add each disk drive to the controller's list of valid devices. Use the following syntax: ADD DISK DISK-NAME PTL-LOCATION ADD DISK DISK-NAME PTL-LOCATION ADD DISK DISK-NAME PTL-LOCATION 8.
Backing Up, Cloning, and Moving Data
New cabinet
ADD DISK DISK10000
ADD DISK DISK10100
ADD DISK DISK20000
ADD DISK DISK20100
ADD RAIDSET RAID99 DISK10000 DISK10100 DISK20000 DISK20100
ADD UNIT D100 RAID99
Subsystem Profile Templates A This appendix contains storageset profiles to copy and use to create your profiles. It also contains an enclosure template to use to help keep track of the location of devices and storagesets in your shelves. Four (4) templates will be needed for the subsystem. Note: The storage map templates for the Model 4310R and Model 4214R or 4314R reflect the physical location of the disk enclosures in the rack.
Subsystem Profile Templates Storageset Profile Type of Storageset: _____ Mirrorset __X_ RAIDset _____ Stripeset _____ Striped Mirrorset ____ JBOD Storageset Name Disk Drives Unit Number Partitions: Unit # Unit # Unit # Unit # Unit # Unit # Unit # Unit # RAIDset Switches: Reconstruction Policy ___Normal (default) Reduced Membership __ _No (default) Replacement Policy ___Best performance (default) ___Fast ___Yes, missing: ___Best fit ___None Mirrorset Switches: Replacement Policy Copy
Subsystem Profile Templates Unit Switches: Caching Read caching__________ Read-ahead caching_____ Write-back caching______ Write-through caching____ Access by following hosts enabled _________________________________________________ ___________ _________________________________________________ ___________ _________________________________________________ ___________ _________________________________________________ ___________ HSG80 ACS Solution Software V8.
Subsystem Profile Templates Storage Map Template 1 for the BA370 Enclosure Use this template for: ■ BA370 single-enclosure subsystems ■ first enclosure of multiple BA370 enclosure subsystems 1 2 Port 3 4 5 6 Power Supply Power Supply 3 D10300 D20300 D30300 D40300 D50300 D60300 Power Supply Power Supply 2 D20200 D30200 D40200 D50200 Targets D10200 D60200 Power Supply Power Supply 1 D10100 D20100 D30100 D40100 D50100 D60100 Power Supply Power Supply 0 D10000 202 D20000
Subsystem Profile Templates Storage Map Template 2 for the Second BA370 Enclosure Use this template for the second enclosure of multiple BA370 enclosure subsystems.
Subsystem Profile Templates
Storage Map Template 3 for the Third BA370 Enclosure
Use this template for the third enclosure of multiple BA370 enclosure subsystems.
Subsystem Profile Templates Storage Map Template 4 for the Model 4214R Disk Enclosure Use this template for a subsystem with a three-shelf Model 4214R disk enclosure (single-bus). You can have up to six Model 4214R disk enclosures per controller shelf.
Subsystem Profile Templates (Continued) 206 Bay 1 2 3 4 5 6 7 8 9 1 0 1 1 1 2 1 3 1 4 SCSI ID 0 0 0 1 0 2 0 3 0 4 0 5 0 8 0 9 1 0 1 1 1 2 1 3 1 4 1 5 DISK ID Disk30000 Disk30100 Disk30200 Disk30300 Disk30400 Disk30500 Disk30800 Disk30900 Disk31000 Disk31100 Disk31200 Disk31300 Disk31400 Disk31500 Model 4214R Disk Enclosure Shelf 3 (Single-Bus) HSG80 ACS Solution Software V8.
Subsystem Profile Templates Storage Map Template 5 for the Model 4254 Disk Enclosure Use this template for a subsystem with a three-shelf Model 4254 disk enclosure (dual-bus). You can have up to three Model 4254 disk enclosures per controller shelf.
Subsystem Profile Templates (Continued) Model 4254 Disk Enclosure Shelf 3 (Dual-Bus) 208 Bay 1 2 3 4 5 6 7 8 9 1 0 1 1 1 2 1 3 1 4 SCSI ID 0 0 0 1 0 2 0 3 0 4 0 5 0 8 0 0 0 1 0 2 0 3 0 4 0 5 0 8 DISK ID Disk50100 Disk50200 Disk50300 Disk50400 Disk50500 Disk50800 Disk60000 Disk60100 Disk60200 Disk60300 Disk60400 Disk60500 Disk60800 Bus B Disk50000 Bus A HSG80 ACS Solution Software V8.
Subsystem Profile Templates Storage Map Template 6 for the Model 4310R Disk Enclosure Use this template for a subsystem with a six-shelf Model 4310R disk enclosure (single-bus). You can have up to six Model 4310R disk enclosures per controller shelf.
Subsystem Profile Templates Model 4310R Disk Enclosure Shelf 1 (Single-Bus) 10 SCSI ID 00 01 02 03 04 05 08 10 11 12 DISK ID Disk11200 9 Disk11100 8 Disk11000 7 Disk10800 6 Disk10500 5 Disk10400 4 Disk10300 3 Disk10200 2 Disk10100 1 Disk10000 Bay Model 4310R Disk Enclosure Shelf 2 (Single-Bus) 10 SCSI ID 00 01 02 03 04 05 08 10 11 12 DISK ID Disk21200 9 Disk21100 8 Disk21000 7 Disk20800 6 Disk20500 5 Disk20400 4 Disk20300 3 Disk20200 2 Disk20100 1 Disk20000 Ba
Subsystem Profile Templates Storage Map Template 7 for the Model 4350R Disk Enclosure Use this template for a subsystem with a three-shelf Model 4350R disk enclosure (single-bus). You can have up to three Model 4350R disk enclosures per controller shelf.
Subsystem Profile Templates Storage Map Template 8 for the Model 4314R Disk Enclosure Use this template for a subsystem with a six-shelf Model 4314R disk enclosure. You can have a maximum of six Model 4314R disk enclosures with each Model 2200 controller enclosure.
Subsystem Profile Templates continued from previous page Model 4314R Disk Enclosure Shelf 1 (Single-Bus) 14 SCSI ID 00 01 02 03 04 05 08 09 10 11 12 13 14 15 DISK ID Disk11500 13 Disk11400 12 Disk11300 11 Disk11200 10 Disk11100 9 Disk11000 8 Disk10900 7 Disk10800 6 Disk10500 5 Disk10400 4 Disk10300 3 Disk10200 2 Disk10100 1 Disk10000 Bay Model 4314R Disk Enclosure Shelf 2 (Single-Bus) 14 SCSI ID 00 01 02 03 04 05 08 09 10 11 12 13 14 15 DISK ID
Subsystem Profile Templates Storage Map Template 9 for the Model 4354R Disk Enclosure Use this template for a subsystem with a three-shelf Model 4354R disk enclosure (dual-bus). You can have up to three Model 4354R disk enclosures per controller shelf.
Installing, Configuring, and Removing the Client B The following information is included in this appendix: ■ Why Install the Client?, page 216 ■ Before You Install the Client, page 217 ■ Installing the Client, page 218 ■ Installing the Integration Patch, page 219 ■ Troubleshooting Client Installation, page 221 ■ Adding Storage Subsystem and its Host to Navigation Tree, page 223 ■ Removing Command Console Client, page 225 ■ Where to Find Additional Information, page 226 HSG80 ACS Solution S
Installing, Configuring, and Removing the Client Why Install the Client? The Client monitors and manages a storage subsystem by performing the following tasks: 216 ■ Create mirrored device group (RAID 1) ■ Create striped device group (RAID 0) ■ Create striped mirrored device group (RAID 0+1) ■ Create striped parity device group (3/5) ■ Create an individual device (JBOD) ■ Monitor many subsystems at once ■ Set up pager notification HSG80 ACS Solution Software V8.
Installing, Configuring, and Removing the Client Before You Install the Client 1. Verify you are logged into an account that is a member of the administrator group. 2. Check the software product description that came with the software for a list of supported hardware. 3. Verify that you have the SNMP service installed on the computer. SNMP must be installed on the computer for this software to work properly. The Client software uses SNMP to receive traps from the Agent.
Installing, Configuring, and Removing the Client Installing the Client The following restriction should be observed when installing SWCC on Windows NT 4.0 Workstations. If you select all of the applets during installation, the installation will fail on the HSG60 applet and again on one of the HSG80 applets. The workaround is to install all of the applets you want except for the HSG60 applet and the HSG80 ACS 8.5 applet. You can then return to the setup program and install the one that you need. 1.
Installing, Configuring, and Removing the Client Installing the Integration Patch The integration patch determines which version of firmware the controller is using and launches the appropriate StorageWorks Command Console (SWCC) Storage Window within Insight Manager (CIM) version 4.23. Should I Install the Integration Patch? Install this patch if your HSG80 controller uses ACS 8.7 or later. This patch enables you to use the controller’s SWCC Storage Window within CIM to monitor and manage the controller.
Installing, Configuring, and Removing the Client Caution: If you remove the integration patch, HSG80 Storage Window V2.1 will no longer work and you will need to reinstall HSG80 Storage Window V2.1. The integration patch uses some of the same files as the HSG80 Storage Window V2.1. Integrating Controller’s SWCC Storage Window with CIM You can open the controller’s Storage Window from within the Windows-based CIM version 4.23 by doing the following: 1.
Installing, Configuring, and Removing the Client Finding the Controller’s Storage Window If you installed Insight Manager before SWCC, Insight Manager will be unable to find the controller’s Storage Window. To find the controller’s Storage Window, perform the following procedure: 1. Double-click the Insight Agents icon (Start > Settings > Control Panel). A window appears showing you the active and inactive Agents under the Services tab. 2. Highlight the entry for Fibre Array Information and click Add.
Installing, Configuring, and Removing the Client If the Network Information Services (NIS) are being used to provide named port lookup services, contact the network administrator to add the correct ports.
Installing, Configuring, and Removing the Client Adding Storage Subsystem and its Host to Navigation Tree The Navigation Tree enables you to manage storage over the network by using the Storage Window. If you plan to use pager notification, you must add the storage subsystem to the Navigation Tree. 1. Verify that you have properly installed and configured the HS-Series Agent on the storage subsystem host. 2. Click Start > Programs > Command Console > StorageWorks Command Console.
Installing, Configuring, and Removing the Client Figure 47: Navigation window showing storage host system “Atlanta” 6. Click the plus sign to expand the host icon. When expanded, the Navigation Window displays an icon for the storage subsystem. To access the Storage Window for the subsystem, double-click the Storage Window icon. Figure 48: Navigation window showing expanded “Atlanta” host icon 224 HSG80 ACS Solution Software V8.
Installing, Configuring, and Removing the Client Note: You can create virtual disks by using the Storage Window. For more information on the Storage Window, refer to HP StorageWorks Command Console Version 2.5, User Guide. Removing Command Console Client Before you remove the Command Console Client (CCL) from the computer, remove AES. This will prevent the system from reporting that a service failed to start every time the system is restarted. Steps 2 through 5 describe how to remove the CCL.
Installing, Configuring, and Removing the Client Note: This procedure removes only the Command Console Client (SWCC Navigation Window). You can remove the HSG80 Client by using the Add/Remove program. Where to Find Additional Information You can find additional information about SWCC by referring to the online Help and to HP StorageWorks Command Console Version 2.5, User Guide. About the User Guide HP StorageWorks Command Console Version 2.5, User Guide contains additional information on how to use SWCC.
glossary Glossary This glossary defines terms pertaining to the ACS solution software. It is not a comprehensive glossary of computer terms Glossary 8B/10B A type of byte definition encoding and decoding to reduce errors in data transmission patented by the IBM Corporation. This process of encoding and decoding data for transmission has been adopted by ANSI. adapter A device that converts the protocol and hardware interface of one bus type into another without changing the function of the bus.
Glossary association set A group of remote copy sets that share selectable attributes for logging and failover. Members of an association set transition to the same state simultaneously. For example, if one association set member assumes the failsafe locked condition, then other members of the association set also assume the failsafe locked condition. An association set can also be used to share a log between a group of remote copy set members that require efficient use of the log space.
Glossary built-in self-test A diagnostic test performed by the array controller software on the controller policy processor. byte A binary character string made up of 8 bits operated on as a unit. cache memory A portion of memory used to accelerate read and write operations. cache module A fast storage buffer CCL CCL-Command Console LUN, a “SCSI Logical Unit Number” virtual-device used for communicating with Command Console Graphical User Interface (GUI) software.
Glossary controller A hardware device that, with proprietary software, facilitates communications between a host and one or more devices organized in an array. The HSG80 family controllers are examples of array controllers. copying A state in which data to be copied to the mirrorset is inconsistent with other members of the mirrorset. See also normalizing. copying member Any member that joins the mirrorset after the mirrorset is created is regarded as a copying member.
Glossary DOC DWZZA-On-a-Chip. A SCSI bus extender chip used to connect a SCSI bus in an expansion cabinet to the corresponding SCSI bus in another cabinet (See DWZZA). driver A hardware device or a program that controls or regulates another device. For example, a device driver is a driver developed for a specific device that allows a computer to operate with the device, such as a printer or a disk drive.
Glossary ESD Electrostatic discharge. The discharge of potentially harmful static electrical voltage as a result of improper grounding. extended subsystem A subsystem in which two cabinets are connected to the primary cabinet. external cache battery See ECB. F_Port A port in a fabric where an N_Port or NL_Port may attach. fabric A group of interconnections between ports that includes a fabric element.
Glossary FCC Federal Communications Commission. The federal agency responsible for establishing standards and approving electronic devices within the United States. FCC Class A This certification label appears on electronic devices that can only be used in a commercial environment within the United States. FCC Class B This certification label appears on electronic devices that can be used in either a home or a commercial environment within the United States.
Glossary FRU Field replaceable unit. A hardware component that can be replaced at the customer location by service personnel or qualified customer service personnel. FRUTIL Field Replacement utility. full duplex (n) A communications system in which there is a capability for 2-way transmission and acceptance between two sites at the same time. full duplex (adj) Pertaining to a communications method in which data can be transmitted and received at the same time.
Glossary host compatibility mode A setting used by the controller to provide optimal controller performance with specific operating systems. This improves the controller performance and compatibility with the specified operating system. hot disks A disk containing multiple hot spots. Hot disks occur when the workload is poorly distributed across storage devices which prevents optimum subsystem performance. See also hot spots. hot spots A portion of a disk drive frequently accessed by the host.
Glossary I/O Refers to input and output functions. I/O driver The set of code in the kernel that handles the physical I/O to a device. This is implemented as a fork process. Same as driver. I/O interface See interface. I/O module A 16-bit SBB shelf device that integrates the SBB shelf with either an 8-bit single ended, 16-bit single-ended, or 16-bit differential SCSI bus (see SBB).
Glossary logical unit number LUN. A value that identifies a specific logical unit belonging to a SCSI target ID number. A number associated with a physical device unit during a task I/O operations. Each task in the system must establish its own correspondence between logical unit numbers and physical devices. logon Also called login. A procedure whereby a participant, either a person or network connection, is identified as being an authorized network participant. loop See arbitrated loop.
Glossary mirrored write-back caching A method of caching data that maintains two copies of the cached data. The copy is available if either cache module fails. mirrorset See RAID level 1. MIST Module Integrity Self-Test. multibus failover Allows the host to control the failover process by moving the units from one controller to another. N_port A port attached to a node for use with point-to-point topology or fabric topology. NL_port A port attached to a node for use in all topologies.
Glossary normalizing Normalizing is a state in which, block-for-block, data written by the host to a mirrorset member is consistent with the data on other normal and normalizing members. The normalizing state exists only after a mirrorset is initialized. Therefore, no customer data is on the mirrorset. normalizing member A mirrorset member whose contents are the same as all other normal and normalizing members for data that has been written since the mirrorset was created or lost cache data was cleared.
Glossary PCMCIA Personal Computer Memory Card Industry Association. An international association formed to promote a common standard for PC card-based peripherals to be plugged into notebook computers. The card commonly known as a PCMCIA card is about the size of a credit card. PDU Power distribution unit. The power entry device for StorageWorks cabinets. The PDU provides the connections necessary to distribute power to the cabinet shelves and fans.
Glossary protocol The conventions or rules for the format and timing of messages sent and received. PTL Port-Target-LUN. The controller method of locating a device on the controller device bus. PVA module Power Verification and Addressing module. quiesce The act of rendering bus activity inactive or dormant. For example, “quiesce the SCSI bus operations during a device warm-swap.” RAID Redundant Array of Independent Disks.
Glossary RAID level 3/5 A RAID storageset that stripes data and parity across three or more members in a disk array. A RAIDset combines the best characteristics of RAID level 3 and RAID level 5. A RAIDset is the best choice for most applications with small to medium I/O requests, unless the application is write intensive. A RAIDset is sometimes called parity RAID. RAIDset See RAID level 3/5. RAM Random access memory.
Glossary remote copy set A bound set of two units, one located locally and one located remotely, for long-distance mirroring. The units can be a single disk, or a storageset, mirrorset, or RAIDset. A unit on the local controller is designated as the “initiator” and a corresponding unit on the remote controller is designated as the “target”. request rate The rate at which requests are arriving at a servicing entity. RFI Radio frequency interference.
Glossary SCSI ID number The representation of the SCSI address that refers to one of the signal lines numbered 0 through 15. SCSI-P cable A 68-conductor (34 twisted-pair) cable generally used for differential bus connections. SCSI port (1) Software: The channel controlling communications to and from a specific SCSI bus in the system. (2) Hardware: The name of the logical socket at the back of the system unit to which a SCSI device is connected.
Glossary StorageWorks A family of modular data storage products that allow customers to design and configure their own storage subsystems. Components include power, packaging, cabling, devices, controllers, and software. Customers can integrate devices and array controllers in StorageWorks enclosures to form storage subsystems. StorageWorks systems include integrated SBBs and array controllers to form storage subsystems.
Glossary tape inline exerciser (TILX) The controller diagnostic software to test the data transfer capabilities of tape drives in a way that simulates a high level of user activity. topology An interconnection scheme that allows multiple Fibre Channel ports to communicate with each other. For example, point-to-point, Arbitrated Loop, and switched fabric are all Fibre Channel topologies.
Glossary warm swap A device replacement method that allows the complete system to remain online during device removal or insertion. The system bus may be halted, or quiesced, for a brief period of time during the warm-swap procedure. Wide Ultra SCSI Fast/20 on a Wide SCSI bus. Worldwide name A unique 64-bit number assigned to a subsystem by the Institute of Electrical and Electronics Engineers (IEEE) and set by manufacturing prior to shipping. This name is referred to as the node ID within the CLI.
index A B Back up, Clone, Move Data 187 backup cloning data 189 subsystem configuration 188 C Index ADD CONNECTIONS multiple-bus failover 47 transparent failover 45 ADD UNIT multiple-bus failover 47 transparent failover 45 adding Client 126 subsystem 126 virtual disks 226 adding a disk drive to the spareset configuration options 153 adding disk drives configuration options 153 Agent choosing passwords 125 configuring using config.
Index configuration options 156 choosing passwords Agent 125 chunk size choosing for RAIDsets and stripesets 89 controlling stripesize 89 using to increase request rate 90 using to increase write performance 91 CHUNKSIZE 89 CLI commands installation verification 138, 146 CLI configuration example 180 CLI prompt changing fabric topology 153 Client adding 126 removing 225 uninstalling 225 CLONE utility backup 189 cloning backup 189 command console LUN 39 SCSI-2 mode 48 SCSI-3 mode 48 configuration backup 188
Index filesystem 113 Creating Clones for Backup 189 creation file system 113 D Destroy/Nodestroy parameters 92 device switches changing fabric topology 156 devices changing switches fabric topology 155 configuration fabric topology 146 creating a profile 75 disk drives adding fabric topology 153 adding to the spareset fabric topology 153 array 74 corresponding storagesets 93 dividing 85 removing from the spareset fabric topology 154 displaying the current switches fabric topology 156 dividing storagesets
Index preparation 104 host connections 40 naming 40 HSG Agent configuration menu 124 install and configure 117 install.
Index host connections 47 restricting host access 53 disabling access paths 53 SET CONNECTIONS command 47 SET UNITcommand 47 N network port assignments 221 node IDs 56 restoring 57 NODE_ID worldwide name 56 NOSAVE_CONFIGURATION 91 O offset LUN presentation 46 restricting host access multiple-bus fafilover 55 transparent fafilover 52 SCSI version factor 47 online help SWCC 226 options for mirrorsets 88 for RAIDsets 88 initialize 89 other controller 29 P pager notification 226 configuring 226 partitions a
Index enabled for all storage units 37 general description 37 read requests decreasing the subsystem response time with read caching 37 read-ahead caching 37 enabled for all disk units 37 removing Client 225 request rate 90 requirements host adapter installation 104 restarting Agent 127 restricting host access disabling access paths multiple-bus failover 53 transparent failover 51 multiple-bus failover 53 separate links transparent failover 50 transparent failover 50 S SAVE_CONFIGURATION 91 saving configu
Index Storage map template 9 first enclosure of multiple-enclosure subsystem 214 storageset deleting fabric topology 155 fabric topology changing switches 155 planning considerations 76 mirrorsets 80 partitions 85 RAIDsets 81 striped mirrorsets 83 stripesets 76 profile 75 profiles 199 storageset profile 75 storageset switches SET command 88 storagesets creating a profile 75 moving 195 striped mirrorsets planning 84 planning considerations 83 stripesets distributing members across buses 79 planning 78 plann
Index assigning depending on SCSI version 48 assigning in fabric topology partition 152 single disk 152 unit qualifiers assigning fabric topology 152 unit switches changing fabric topology 156 units LUN IDs 58 Upgrade procedures solution software 113 V verification controller installation 138, 146 verification of installation controller 138, 146 256 Verifying/Installing Required Versions 107 virtual disks adding 226 W worldwide names 56 NODE_ID 56 REPORTED PORT_ID 56 restoring 57 write performance 91 w