Installation and Configuration Guide
HP StorageWorks HSG80 ACS Solution Software V8.8 for Novell NetWare
Product Version: 8.8-1
First Edition (March 2005)
Part Number: AA–RV1MA–TE

This guide provides installation and configuration instructions and reference material for operation of the HSG80 ACS Solution Software V8.8-1 for Novell NetWare.
© Copyright 2000-2005 Hewlett-Packard Development Company, L.P. Hewlett-Packard Company makes no warranty of any kind with regard to this material, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. Hewlett-Packard shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this material.
About this Guide

This installation guide for HSG80 ACS Solution Software V8.8-1 for Novell NetWare provides information to help you:
■ Plan the storage array subsystem.
■ Install and configure the storage array subsystem on individual operating system platforms.
Overview
This section covers the following topics:
■ "Intended Audience", page 12
■ "Related Documentation", page 12

Intended Audience
This book is intended for use by systems administrators and systems technicians who are experienced with the following:
■ Storage
■ Networking

Related Documentation
In addition to this guide, HP provides corresponding information:
■ ACS V8.
Solution software host support includes the following platforms:
— IBM AIX
— HP-UX
— Linux (Red Hat x86, SuSE x86)
— Novell NetWare
— Open VMS
— Sun Solaris
— Tru64 UNIX
— Windows NT/2000/Windows Server 2003 (32-bit)
Additional support required by HSG80 ACS Solution Software V8.
Chapter Content Summary
Table 1 below summarizes the content of the chapters.

Table 1: Summary of chapter contents
1. Planning a Subsystem: This chapter focuses on technical terms and knowledge needed to plan and implement storage array subsystems.
2. Planning Storage Configurations: Plan the storage configuration of your subsystem, using individual disk drives (JBOD), storageset types (mirrorsets, stripesets, and so on), and/or partitioned drives.
Table 1: Summary of chapter contents (Continued)
7. Backing Up, Cloning, and Moving Data: Description of common procedures that are not mentioned elsewhere in this guide.
■ Backing Up Subsystem Configuration
■ Cloning Data for Backup
■ Moving Storagesets
Appendix A. Subsystem Profile Templates: This appendix contains storageset profiles to copy and use to create your system profiles.
Conventions
Conventions consist of the following:
■ "Document Conventions"
■ "Symbols in Text"
■ "Symbols on Equipment"

Document Conventions
This document follows the conventions in Table 2.
Tip: Text in a tip offers additional help to readers by describing nonessential or optional techniques, procedures, or shortcuts.

Note: Text set off in this manner presents commentary, sidelights, or interesting points of information.

Symbols on Equipment
The following equipment symbols may be found on hardware to which this guide applies.
Power supplies or systems marked with these symbols indicate the presence of multiple sources of power.

WARNING: To reduce the risk of personal injury from electrical shock, remove all power cords to completely disconnect power from the power supplies and systems.

Any product or assembly marked with these symbols indicates that the component exceeds the recommended weight for one individual to handle safely.
Rack Stability
Rack stability protects personnel and equipment.

WARNING: To reduce the risk of personal injury or damage to the equipment, be sure that:
■ The leveling jacks are extended to the floor.
■ The full weight of the rack rests on the leveling jacks.
■ In single rack installations, the stabilizing feet are attached to the rack.
■ In multiple rack installations, the racks are coupled.
■ Only one rack component is extended at any time.
Getting Help
If you still have a question after reading this guide, contact an HP authorized service provider or access our web site: http://www.hp.com.

HP Technical Support
Telephone numbers for worldwide technical support are listed on the following HP web site: http://www.hp.com/support/. From this web site, select the country of origin.

Note: For continuous quality improvement, calls may be recorded or monitored.
HP Authorized Reseller
For the name of your nearest HP authorized reseller:
■ In the United States, call 1-800-345-1518.
■ In Canada, call 1-800-263-5868.
■ Elsewhere, see the HP web site for locations and telephone numbers: http://www.hp.com.
Configuration Flowchart
A three-part flowchart (Figures 1-3) is shown on the following pages. Refer to these charts while installing and configuring a new storage subsystem. All references in the flowcharts pertain to pages in this guide, unless otherwise indicated.
Figure 1: General configuration flowchart (panel 1). The flowchart covers: unpack the subsystem (see the unpacking instructions on the shipping box); plan a subsystem (Chapter 1); plan storage configurations (Chapter 2); prepare the host system (Chapter 3); and make a local connection (page 114). For a single controller, cable the controller (page 115) and configure it (page 116); for a controller pair, cable the controllers (page 122) and configure them (page 123). If you are installing SWCC, continue with Figure 3 on page 25; otherwise, continue with Figure 2 on page 24.
Figure 2: General configuration flowchart (panel 2). The flowchart continues: configure devices (page 130); create storagesets and partitions (stripeset, page 132; mirrorset, page 132; RAIDset, page 133; striped mirrorset, page 134; single JBOD disk, page 135; partition, page 135), continuing to create units until you have completed your planned configuration; assign unit numbers (page 137); set configuration options (page 139); and verify the storage setup (page 142).
Figure 3: Configuring storage with SWCC flowchart (panel 3). The flowchart covers: install the Agent (Chapter 4); install the Client (Appendix B); create storage (see the SWCC online help); and verify the storage setup (page 142).
1 Planning a Subsystem

This chapter provides information that helps you plan how to configure the storage array subsystem. It focuses on the technical terms and knowledge needed to plan and implement storage subsystems.

Note: This chapter frequently references the command line interface (CLI). For the complete syntax and descriptions of the CLI commands, see the HP StorageWorks HSG60 and HSG80 Array Controller and Array Controller Software Command Line Interface Reference Guide.
Defining Subsystems
This section describes the terms this controller and other controller. It also presents graphics of the Model 2200 and BA370 enclosures.

Note: The HSG80 controller uses the BA370 or Model 2200 enclosure.

Controller Designations A and B
The terms A and B, and "this controller" and "other controller," are used to distinguish one controller from the other in a two-controller (also called dual-redundant) subsystem.
BA370 Enclosure

Figure 5: Location of controllers and cache modules in a BA370 enclosure (callouts: 1 EMU, 2 PVA, 3 Controller A, 4 Controller B, 5 Cache module A, 6 Cache module B)

Controller Designations "This Controller" and "Other Controller"
Some CLI commands use the terms "this" and "other" to identify one controller or the other in a dual-redundant pair. These designations are a shortened form of "this controller" and "other controller.
Model 2200 Enclosure

Figure 6: "This controller" and "other controller" for the Model 2200 enclosure (1 = this controller, 2 = other controller)

BA370 Enclosure

Figure 7: "This controller" and "other controller" for the BA370 enclosure (1 = other controller, 2 = this controller)
What is Failover Mode?
Failover is a way to keep the storage array available to the host if one of the controllers becomes unresponsive. A controller can become unresponsive because of a controller hardware failure or, in multiple-bus mode only, because of a failure of the link between host and controller or of the host-bus adapter. Failover keeps the storage array available to the hosts by allowing the surviving controller to take over total control of the subsystem.
At any time, host port 1 is active on only one controller, and host port 2 is active on only one controller; the other ports are in standby mode. In normal operation, both host port 1 on controller A and host port 2 on controller B are active. A representative configuration is shown in Figure 8. The active and standby ports share a port identity, enabling the standby port to take over for the active one.
Figure 8: Transparent failover—normal operation (hosts 1, 2, and 3 connect through switches or hubs; host port 1 is active on controller A, serving units D0 and D1, and host port 2 is active on controller B, serving units D100, D101, and D120; the remaining ports are in standby)
Figure 9: Transparent failover—after failover from Controller B to Controller A (host port 1 and host port 2 on controller A are both active and serve all units; controller B and its ports are not available)

Multiple-Bus Failover Mode
Multiple-bus failover mode has the following characteristics:
■ Host controls the failover process by moving the units from one controller to another
■ A
Note: Do not use LUN 0 (for example, D0 or D100) in NetWare installations.

In multiple-bus failover mode, you can specify which units are normally serviced by a specific controller of a controller pair. Units can be preferred to one controller or the other with the PREFERRED_PATH switch of the ADD UNIT (or SET unit) command.
Figure 10: Typical multiple-bus configuration. Hosts RED, GREY, and BLUE each have two Fibre Channel adapters (FCA1 and FCA2) connected through two switches or hubs. Host port 1 and host port 2 are active on both controllers, and all units (D0, D1, D2, D100, D101, D120) are visible to all ports.
Selecting a Cache Mode
The cache module supports read, read-ahead, write-through, and write-back caching techniques. The cache technique is selected separately for each unit. For example, you can enable only read and write-through caching for some units while enabling only write-back caching for other units.

Read Caching
When the controller receives a read request from the host, it reads the data from the disk drives, delivers it to the host, and stores the data in its cache module.
Write-Through Caching
Write-through caching is enabled when write-back caching is disabled. When the controller receives a write request from the host, it places the data in its cache module, writes the data to the disk drives, then notifies the host when the write operation is complete. This process is called write-through caching because the data actually passes through—and is stored in—the cache memory on its way to the disk drives.
Enabling Mirrored Caching
In mirrored caching, half of each controller's cache mirrors the companion controller's cache, as shown in Figure 11. The total memory available for cached data is reduced by half, but the level of protection is greater.
Note: In ACS V8.8-1, the maximum number of supported connections is 96.

Naming Connections
HP highly recommends that you assign names to connections that are meaningful in the context of your particular configuration.
■ If a controller pair is in multiple-bus failover mode, each adapter has two connections, as shown in Figure 14.

Note: Do not use LUN 0 (for example, D0 or D100) in NetWare installations.
Figure 13: Connections in single-link, transparent failover mode configurations. Hosts GREEN, ORANGE, and PURPLE each have one Fibre Channel adapter (FCA1). Port 1 connections GREEN1A1, ORANGE1A1, and PURPLE1A1 are active on controller A, serving units D0 and D1; port 2 connections GREEN1B2, ORANGE1B2, and PURPLE1B2 are active on controller B, serving units D100, D101, and D120.
Figure 14: Connections in multiple-bus failover mode. Host VIOLET has two Fibre Channel adapters (FCA1 and FCA2). Connections VIOLET1A1 and VIOLET1B1 reach port 1 of each controller through one switch or hub, and connections VIOLET2A2 and VIOLET2B2 reach port 2 of each controller through a second switch or hub; all units (D0, D1, D2, D100, D101, D120) are visible to all ports.
Assigning Unit Numbers
The controller keeps track of the unit with the unit number. The unit number can be from 0 to 199, prefixed by a D, which stands for disk drive. A unit can be presented as different LUNs to different connections.
For example, if all host connections use the default offset values, unit D2 is presented to a port 1 host connection as LUN 2 (unit number of 2 minus offset of 0). Unit D102 is presented to a port 2 host connection as LUN 2 (unit number of 102 minus offset of 100). Figure 15 shows how units are presented as different LUNs, depending on the offset of the host.
Similarly, unit D127 would be visible as LUN 7 to a host connection on port 2 that had an offset of 120 (unit number of 127 minus offset of 120). The unit would not be visible to a host connection with a unit offset of 128 or greater, because that offset is not within the unit's range (unit number of 127 minus offset of 128 is a negative number). An additional factor to consider when assigning unit numbers and offsets is the SCSI version.
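The offset arithmetic above can be sketched in a few lines of code. This is an illustration only (the helper name is invented; nothing like it exists in the HSG80 CLI): a unit is presented at LUN = unit number minus connection offset, and units below the offset are not visible.

```python
def presented_lun(unit_number: int, unit_offset: int):
    """Illustrative sketch: compute the LUN at which a unit appears to a
    host connection.  A negative result means the unit is below the
    connection's offset and therefore not visible."""
    lun = unit_number - unit_offset
    return lun if lun >= 0 else None  # None -> unit not visible

# D2 through a port 1 connection with the default offset of 0:
assert presented_lun(2, 0) == 2
# D102 through a port 2 connection with the default offset of 100:
assert presented_lun(102, 100) == 2
# D127 with an offset of 120 appears as LUN 7; an offset of 128 hides it:
assert presented_lun(127, 120) == 7
assert presented_lun(127, 128) is None
```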
The PREFERRED_PATH switch of the ADD UNIT (or SET unit) command determines which controller of a dual-redundant pair initially accesses the unit. Initially, PREFERRED_PATH determines which controller presents the unit as Ready; the other controller presents the unit as Not Ready. Hosts can issue a SCSI Start Unit command to move the unit from one controller to the other.
HP recommends that you use the following conventions when assigning host connection offsets and unit numbers in SCSI-2 mode:
■ Offsets should be divisible by 10 (for consistency and simplicity).
■ Unit numbers should be assigned at connection offsets (so that every host connection has a unit presented at LUN 0).
Table 3 summarizes the recommendations for unit assignments based on the SCSI_VERSION switch.
What is Selective Storage Presentation?
Selective storage presentation is a feature of the HSG80 controller that enables you to control the allocation of storage space and shared access to storage across multiple hosts. This is also known as restricting host access. In a subsystem that is attached to more than one host, or whose hosts have more than one adapter, it is possible to reserve certain units for the exclusive use of certain host connections.
Note: These techniques also work for a single controller.

Restricting Host Access by Separate Links
In transparent failover mode, host port 1 of controller A and host port 1 of controller B share a common Fibre Channel link. Host port 2 of controller A and host port 2 of controller B also share a common Fibre Channel link.
Figure 16: Limiting host access in transparent failover mode. Hosts AQUA, BLACK, and BROWN each have one Fibre Channel adapter (FCA1). Connection AQUA1A1 reaches units D0 and D1 through the port 1 link, while connections BLACK1B2 and BROWN1B2 reach units D100, D101, and D120 through the port 2 link.

Restricting Host Access by Disabling Access Paths
If more than one host is on a link (that is
For example: In Figure 16, access to unit D101 can be restricted to host 3 (the host named BROWN) by enabling only the connection to host 3. Enter the following commands:
SET D101 DISABLE_ACCESS_PATH=ALL
SET D101 ENABLE_ACCESS_PATH=BROWN1B2
If the storage subsystem has more than one host connection, carefully specify the access path to avoid giving undesired host connections access to the unit.
Restricting Host Access by Offsets
Offsets establish the start of the range of units that a host connection can access. For example: In Figure 16, assume both host connections on port 2 (connections BLACK1B2 and BROWN1B2) initially have the default port 2 offset of 100. Setting the offset of connection BROWN1B2 to 120 presents unit D120 to host BROWN as LUN 0:
SET BROWN1B2 UNIT_OFFSET=120
Host BROWN cannot see units lower than its offset, so it cannot access units D100 and D101.
Figure 17: Limiting host access in multiple-bus failover mode. Hosts RED, GREY, and BLUE each have two Fibre Channel adapters (FCA1 and FCA2). Connections RED1A1, GREY1A1, BLUE1A1, RED1B1, GREY1B1, and BLUE1B1 use port 1 of the controllers; connections RED2A2, GREY2A2, BLUE2A2, RED2B2, GREY2B2, and BLUE2B2 use port 2. All units are visible to all ports.
multiple-bus failover to work. For most operating systems, it is desirable to have all connections to the host enabled.
For example: In Figure 17, assume all host connections initially have the default offset of 0. To restrict host BLUE to the units starting at D120, set an offset of 120 on all of host BLUE's connections; this presents unit D120 to host BLUE as LUN 0. Enter the following commands:
SET BLUE1A1 UNIT_OFFSET=120
SET BLUE1B1 UNIT_OFFSET=120
SET BLUE2A2 UNIT_OFFSET=120
SET BLUE2B2 UNIT_OFFSET=120
Host BLUE cannot see units lower than its offset, so it cannot access any other units.
In multiple-bus failover mode, each of the host ports has its own port ID:
■ Controller B, port 1—worldwide name + 1, for example 5000-1FE1-FF0C-EE01
■ Controller B, port 2—worldwide name + 2, for example 5000-1FE1-FF0C-EE02
■ Controller A, port 1—worldwide name + 3, for example 5000-1FE1-FF0C-EE03
■ Controller A, port 2—worldwide name + 4, for example 5000-1FE1-FF0C-EE04
Use the CLI command SHOW THIS_CONTROLLER/OTHER_CONTROLLER to display the subsystem's worldwide name.
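The port-ID scheme above amounts to adding 1 through 4 to the node worldwide name. The helper below is a hypothetical illustration of that arithmetic, not an HP-supplied tool:

```python
def port_ids(node_wwn: str) -> dict:
    """Illustrative sketch: derive the four host-port IDs from a subsystem
    worldwide name by adding 1 through 4 to it, as described above for
    multiple-bus failover mode."""
    base = int(node_wwn.replace("-", ""), 16)

    def fmt(value: int) -> str:
        digits = f"{value:016X}"
        return "-".join(digits[i:i + 4] for i in range(0, 16, 4))

    labels = ["B port 1", "B port 2", "A port 1", "A port 2"]
    return {label: fmt(base + n) for n, label in enumerate(labels, start=1)}

ids = port_ids("5000-1FE1-FF0C-EE00")
assert ids["B port 1"] == "5000-1FE1-FF0C-EE01"
assert ids["A port 2"] == "5000-1FE1-FF0C-EE04"
```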
Figure 19: Placement of the worldwide name label on the BA370 enclosure (the label lists the part number, the node ID or worldwide name in the form NNNN-NNNN-NNNN-NNNN, the serial number, and a checksum)

Caution: Each subsystem has its own unique worldwide name (node ID). If you attempt to set the subsystem worldwide name to a name other than the one that came with the subsystem, the data on the subsystem is not accessible.
2 Planning Storage Configurations

This chapter provides information to help you plan the storage configuration of your subsystem. Storage containers are individual disk drives (JBOD), storageset types (mirrorsets, stripesets, and so on), and/or partitioned drives. Use the guidelines found in this section to plan the various types of storage containers needed.
Where to Start
The following procedure outlines the steps to follow when planning your storage configuration. See Appendix A to locate the blank templates for keeping track of the containers being configured.
1. Determine your storage requirements. Use the questions in "Determining Storage Requirements", page 62, to help you.
2. Review configuration rules. See "Configuration Rules for the Controller", page 63.
3.
— Use the Command Line Interpreter (CLI) commands. This method allows you flexibility in defining and naming your storage containers. See the HP StorageWorks HSG60 and HSG80 Array Controller and Array Controller Command Line Interface Reference Guide.
Determining Storage Requirements
It is important to determine your storage requirements.
Configuration Rules for the Controller
The following list defines maximum configuration rules for the controller:
■ 128 visible LUNs/200 assignable unit numbers
— In SCSI-2 mode, if the CCL is enabled, the result is 127 visible LUNs and one CCL.
— In SCSI-3 mode, if the CCL is enabled, the result is 126 visible LUNs and two CCLs.
Tip: If you are redeploying disks that have been operating under a prior version of ACS into a newly established container, as a best practice always initialize the devices and the new container before proceeding with subsystem activities, to avoid operational and performance issues.
Addressing Conventions for Device PTL
The HSG80 controller has six SCSI device ports, each of which connects to a SCSI bus. In dual-controller subsystems, these device buses are shared between the two controllers. The standard BA370 enclosure provides a maximum of four SCSI target identifications (IDs) for each device port. If more target IDs are needed, expansion enclosures can be added to the subsystem.
■ L—Designates the logical unit (LUN) of the device. For disk devices the LUN is always 0.

Figure 21: PTL naming convention (for example, Disk10200 decomposes as Port 1, Target 02, LUN 00)

The controller can either operate with a BA370 enclosure or with a Model 2200 controller enclosure combined with Model 4214R, Model 4254, Model 4310R, Model 4350R, Model 4314R, or Model 4354R disk enclosures. The controller operates with BA370 enclosures that are assigned ID numbers 0, 2, and 3.
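The PTL convention lends itself to a small parser. The sketch below is illustrative only (the function is invented; it assumes the one-digit port, two-digit target, two-digit LUN layout shown in Figure 21):

```python
import re

def parse_ptl(name: str):
    """Illustrative sketch: split a PTL device name such as DISK10200 into
    its Port, Target, and LUN fields (P = 1 digit, T = 2 digits,
    L = 2 digits), following the convention shown in Figure 21."""
    match = re.fullmatch(r"(?i)disk(\d)(\d\d)(\d\d)", name)
    if not match:
        raise ValueError(f"not a PTL disk name: {name}")
    port, target, lun = (int(field) for field in match.groups())
    return port, target, lun

assert parse_ptl("Disk10200") == (1, 2, 0)   # Port 1, Target 02, LUN 00
assert parse_ptl("Disk61500") == (6, 15, 0)  # Port 6, Target 15, LUN 00
```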
Examples - Model 2200 Storage Maps, PTL Addressing
The Model 2200 controller enclosure can be combined with the following:
■ Model 4214R disk enclosure—Ultra2 SCSI with 14 drive bays, single-bus I/O module.
■ Model 4254 disk enclosure—Ultra2 SCSI with 14 drive bays, dual-bus I/O module.

Note: The Model 4214R uses the same storage maps as the Model 4314R, and the Model 4254 uses the same storage maps as the Model 4354R disk enclosures.
■ Model 4354R disk enclosure—Ultra3 SCSI with 14 drive bays, dual-bus I/O module. Table 7 shows the addresses for each device in a three-shelf, dual-bus configuration. A maximum of three Model 4354R disk enclosures can be used with each Model 2200 controller enclosure.

Note: Appendix A contains storageset profiles you can copy and use to create your own system profiles.
Planning Storage Configurations 10 SCSI ID 00 01 02 03 04 05 08 10 11 12 DISK ID Disk40100 Disk40200 Disk40300 Disk40400 Disk40500 Disk40800 Disk41000 Disk41200 9 Disk41100 8 Bay 1 2 3 4 5 6 7 8 9 10 SCSI ID 00 01 02 03 04 05 08 10 11 12 DISK ID Disk11000 7 Disk10800 6 Disk10500 5 Disk10400 4 Disk10300 3 Disk10200 2 Disk10100 1 Disk10000 Bay Disk40000 Model 4310R Disk Enclosure Shelf 4 (Single-bus) Disk11100 Disk11200 Bay 1 2 3 4 5 6
Planning Storage Configurations Table 5: PTL addressing, dual-bus configuration, three Model 4350R enclosures Model 4350R Disk Enclosure Shelf 1 (Single-bus) SCSI Bus B 9 SCSI ID 00 01 02 03 04 00 01 02 03 DISK ID Disk20300 8 Disk20200 7 Disk20100 6 Disk20000 5 Disk10400 4 Disk10300 3 Disk10200 2 Disk10100 1 Disk10000 Bay 10 04 Disk20400 SCSI Bus A Model 4350R Disk Enclosure Shelf 2 (Single-bus) SCSI Bus B 9 SCSI ID 00 01 02 03 04 00 01 02 03 DISK ID Disk40300
Planning Storage Configurations Table 6: PTL addressing, single-bus configuration, six Model 4314R enclosures 14 SCSI ID 00 01 02 03 04 05 08 09 10 11 12 13 14 15 DISK ID Disk60100 Disk60200 Disk60300 Disk60400 Disk60500 Disk60800 Disk60900 Disk61000 Disk61100 Disk61200 Disk61500 13 Disk61400 12 Disk61300 11 Bay 1 2 3 4 5 6 7 8 9 10 11 12 13 14 SCSI ID 00 01 02 03 04 05 08 09 10 11 12 13 14 15 DISK ID Disk51200 10 Disk51100 9 Disk51000 8
Planning Storage Configurations 01 02 03 04 05 08 09 10 11 12 13 14 15 DISK ID Disk20100 Disk20200 Disk20300 Disk20400 Disk20500 Disk20800 Disk20900 Disk21000 Disk21100 Disk21200 Disk21500 00 Bay 1 2 3 4 5 6 7 8 9 10 11 12 13 14 SCSI ID 00 01 02 03 04 05 08 09 10 11 12 13 14 15 DISK ID Disk31500 SCSI ID Disk21400 14 Disk31400 13 Disk21300 12 Disk31300 11 Disk31200 10 Disk31100 9 Disk31000 8 Disk30900 7 Disk30800 6 Disk30500 5 Di
Planning Storage Configurations Table 7: PTL addressing, dual-bus configuration, three Model 4354A enclosures.
Choosing a Container Type
Different applications may have different storage requirements, so you probably want to configure more than one kind of container within your subsystem. In choosing a container, you choose between independent disks (JBODs) and one of several storageset types, as shown in Figure 23. The independent disks and the selected storagesets may also be partitioned. The storagesets implement RAID (Redundant Array of Independent Disks) technology.
Table 8 compares the different kinds of containers to help you determine which ones satisfy your requirements.
Creating a Storageset Profile
Creating a profile for your storagesets, partitions, and devices can simplify the configuration process. Filling out a storageset profile helps you choose the storagesets that best suit your needs and make informed decisions about the switches that you can enable for each storageset or storage device you configure in your subsystem. For an example of a storageset profile, see Table 9.
Initialize Switches:
Chunk size: _X_ Automatic (default) ___ 64 blocks ___ 128 blocks ___ 256 blocks
Save Configuration: ___ No (default) _X_ Yes
Metadata: _X_ Destroy (default) ___ Retain

Unit Switches:
Caching: Read caching _X_  Read-ahead caching ___  Write-back caching _X_  Write-through caching ___
Access by following hosts enabled: _ALL_
Planning Considerations for Storageset
This section contains the guidelines for choosing the storageset type needed for your subsystem:
■ "Stripeset Planning Considerations", page 78
■ "Mirrorset Planning Considerations", page 80
■ "RAIDset Planning Considerations", page 82
■ "Striped Mirrorset Planning Considerations", page 84
■ "Storageset Expansion Considerations", page 86
■ "Partition Planning Considerations", page 86

Stripeset Planning Considerations
Stripes
The relationship between the chunk size and the average request size determines whether striping maximizes the request rate or the data-transfer rate. You can set the chunk size or use the default setting (see "Chunk Size", page 91, for information about setting the chunk size). Figure 25 shows another example of a three-member RAID 0 stripeset. A major benefit of striping is that it balances the I/O load across all of the disk drives in the storageset.
■ Striping does not protect against data loss. In fact, because the failure of one member is equivalent to the failure of the entire stripeset, the likelihood of losing data is higher for a stripeset than for a single disk drive. For example, if the mean time between failures (MTBF) for a single disk is l hours, then the MTBF for a stripeset that comprises N such disks is l/N hours.
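The MTBF rule quoted above is simple enough to state as code. This is an illustrative sketch of that rule, not an HP-supplied calculation:

```python
def stripeset_mtbf(member_mtbf_hours: float, members: int) -> float:
    """Illustrative sketch of the rule above: the failure of any one
    member fails the whole stripeset, so N disks of MTBF l yield an
    aggregate MTBF of l/N hours."""
    if members < 1:
        raise ValueError("a stripeset needs at least one member")
    return member_mtbf_hours / members

# Three disks, each rated at 9,000 hours MTBF, striped together:
assert stripeset_mtbf(9_000, 3) == 3_000
```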
Figure 26: Mirrorsets maintain two copies of the same data (each member drive, for example Disk10000, has a mirror partner, for example Disk20000, holding a copy of the same data)

Figure 27: Mirrorset example 2 (the operating system sees a single virtual disk of blocks 0, 1, 2, and so on, while the actual device mapping writes each block to both member disks)
■ You can configure up to 20 RAID 3/5 mirrorsets per controller or pair of dual-redundant controllers. Each mirrorset may contain up to 6 members. Refer to "Configuration Rules for the Controller", page 63, for detailed information on maximum numbers. Up to 30 RAID 3/5 and RAID 1 mirrorsets combined are permitted; however, no more than 20 of them may be RAID 3/5 mirrorsets.
■ Both write-back cache modules must be the same size.
■ A RAIDset must include at least 3 disk drives, but no more than 14.
■ A storageset should only contain disk drives of the same capacity. The controller limits the capacity of each member to the capacity of the smallest member in the storageset. Thus, if you combine 9 GB disk drives with 4 GB disk drives in the same storageset, you waste 5 GB of capacity on each 9 GB member.
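The capacity rule above can be checked with a short sketch. The function name and return shape are ours, not from the guide:

```python
def storageset_capacity_gb(member_sizes_gb):
    """The controller limits each member to the smallest member's capacity.
    Returns (usable capacity, wasted capacity) in GB."""
    smallest = min(member_sizes_gb)
    usable = smallest * len(member_sizes_gb)
    wasted = sum(member_sizes_gb) - usable
    return usable, wasted

# The guide's example: mixing 9 GB and 4 GB drives wastes 5 GB per 9 GB member.
print(storageset_capacity_gb([9, 4, 9, 4]))  # (16, 10)
```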
[Figure 29: Striped mirrorset (example 1). Three two-member mirrorsets (Mirrorset1, Mirrorset2, Mirrorset3) are striped together, with each data chunk (A, B, C) mirrored across a pair of disk drives.]
The failure of a single disk drive has no effect on the ability of the storageset to deliver data to the host. Under normal circumstances, a single disk drive failure has very little effect on performance.
Plan the mirrorset members, and plan the stripeset that contains them. Review the recommendations in "Planning Considerations for Storageset", page 78, and "Mirrorset Planning Considerations", page 80.
Storageset Expansion Considerations
Storageset expansion joins two storage containers of the same kind (RAIDsets, stripesets, or individual disks) by concatenation, forming a larger virtual disk that is presented as a single unit.
Planning Storage Configurations unpartitioned storageset or device. Partitions are separately addressable storage units; therefore, you can partition a single storageset to service more than one user group or application. Defining a Partition Partitions are expressed as a percentage of the storageset or single disk unit that contains them: ■ Mirrorsets and single disk units—the controller allocates the largest whole number of blocks that are equal to or less than the percentage you specify.
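The allocation rule above amounts to rounding down, as in this sketch. The function is our illustration, and the block count in the example is chosen arbitrarily:

```python
def partition_blocks(total_blocks: int, percent: int) -> int:
    """Largest whole number of blocks <= the requested percentage of the
    container (integer math, so the result always rounds down)."""
    return total_blocks * percent // 100

# Assumed example: a 25% partition of a 17,769,177-block mirrorset.
print(partition_blocks(17_769_177, 25))  # 25% exactly would be 4,442,294.25 blocks
```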
Planning Storage Configurations Changing Characteristics Through Switches CLI command switches allow you another level of command options. There are three types of switches that modify the storageset and unit characteristics: ■ Storageset switches ■ Initialization switches ■ Unit switches The following sections describe how to enable/modify switches. They also contain a description of the major CLI command switches.
Planning Storage Configurations Specifying Storageset and Partition Switches The characteristics of a particular storageset can be set by specifying switches when the storageset is added to the controllers’ configuration. Once a storageset has been added, the switches can be changed by using a SET command. Switches can be set for partitions and the following types of storagesets: ■ RAIDset ■ Mirrorset Stripesets have no specific switches associated with their ADD and SET commands.
Partition Switches
The following switches are available when creating a partition:
■ Size
■ Geometry
For details on the use of these switches, refer to the CREATE_PARTITION command in the HP StorageWorks HSG60 and HSG80 Array Controller and Array Controller Command Line Interface Reference Guide.
Specifying Initialization Switches
Initialization switches set characteristics for established storagesets before they are made into units. The following kinds of switches affect the format of a disk drive or storageset:
■ Chunk Size (for stripesets and RAIDsets only)
■ Save Configuration
■ Destroy/Nodestroy
■ Geometry
Each of these switches is described in the following sections.
Planning Storage Configurations Increasing the Request Rate A large chunk size (relative to the average request size) increases the request rate by enabling multiple disk drives to respond to multiple requests. If one disk drive contains all of the data for one request, then the other disk drives in the storageset are available to handle other requests. Thus, separate I/O requests can be handled in parallel, which increases the request rate. This concept is shown in Figure 32.
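The tradeoff above can be sketched numerically: the number of members one request spans determines whether requests are served in parallel (higher request rate) or by many members at once (higher transfer rate). The ceiling formula is our simplification, not a controller algorithm:

```python
import math

def members_touched(request_sectors: int, chunk_sectors: int) -> int:
    """Members an aligned request spans: ceil(request / chunk).
    (A simplification; an unaligned request may touch one more member.)"""
    return math.ceil(request_sectors / chunk_sectors)

# Large chunk vs. a small request: one member serves it, the rest stay free.
print(members_touched(16, 256))   # 1
# Small chunk vs. a large request: every transfer engages many members.
print(members_touched(256, 16))   # 16
```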
Planning Storage Configurations ■ If you have mostly sequential reads or writes (like those needed to work with large graphic files), make the chunk size for RAID 0 and RAID 0+1 a small number (for example: 67 sectors). For RAID 5, make the chunk size a relatively large number (for example: 253 sectors). Table 10 shows a few examples of chunk size selection.
Planning Storage Configurations Note: HP recommends that you DO NOT use SAVE_CONFIGURATION on every unit and device on the controller. Destroy/Nodestroy Specify whether to destroy or retain your data and metadata when a disk is initialized after it has been used in a mirrorset or as a single-disk unit. Note: The DESTROY and NODESTROY switches are only valid for mirrorsets and striped mirrorsets. ■ DESTROY (default) overwrites your data and forced-error metadata when a disk drive is initialized.
Planning Storage Configurations Specifying Unit Switches Several switches control the characteristics of units. The unit switches are described under the SET unit-number command in the HP StorageWorks HSG60 and HSG80 Array Controller and Array Controller Command Line Interface Reference Guide. One unit switch, ENABLE/DISABLE_ACCESS_PATH, determines which host connections can access the unit, and it is described in the larger topic of matching units to specific hosts.
Planning Storage Configurations Creating Storage Maps Configuring a subsystem is easier if you know how the storagesets, partitions, and JBODs correspond to the disk drives in your subsystem. You can more easily see this relationship by creating a hardcopy representation, also known as a storage map. To make a storage map, fill out the templates provided in Appendix A as you add storagesets, partitions, and JBOD disks to the configuration and assign them unit numbers.
Planning Storage Configurations Example Storage Map–Model 4310R Disk Enclosure Table 11 shows an example of four Model 4310R disk enclosures (single-bus I/O). ■ Unit D100 is a 4-member RAID 3/5 storageset named R1. R1 consists of Disk10000, Disk20000, Disk30000, and Disk40000. ■ Unit D101 is a 2-member striped mirrorset named S1. S1 consists of M1 and M2: — M1 is a 2-member mirrorset consisting of Disk10100 and Disk20100. — M2 is a 2-member mirrorset consisting of Disk30100 and Disk40100.
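The unit-to-container-to-disk relationships in this example can be written down as plain data, which is a useful cross-check while filling out the storage map templates. The nesting structure is ours; the names come from the example above:

```python
# Units D100 and D101 from the Table 11 example, exactly as described above.
storage_map = {
    "D100": {"R1": ["Disk10000", "Disk20000", "Disk30000", "Disk40000"]},
    "D101": {"S1": {"M1": ["Disk10100", "Disk20100"],
                    "M2": ["Disk30100", "Disk40100"]}},
}

def disks_of(container):
    """Flatten a unit's container tree into its member disks."""
    if isinstance(container, list):
        return list(container)
    return [d for sub in container.values() for d in disks_of(sub)]

# Every disk should appear exactly once across all units.
all_disks = [d for unit in storage_map.values() for d in disks_of(unit)]
print(len(all_disks), len(set(all_disks)))  # 8 8
```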
Table 11: Model 4310R disk enclosure, example of storage map (Shelves 1 through 4, single-bus). Each shelf table maps bays 1-10 (SCSI IDs 00-05, 08, 10-12) to a unit (D1-D4, D100-D108), its container (RAIDsets R1-R3, stripesets S1-S5, mirrorsets M1-M7), and the disk in that bay (Disk10000 through Disk41200); bay 10 of shelves 3 and 4 holds a spare.
Preparing the Host System 3 This chapter describes how to prepare your Novell NetWare host computer to accommodate the HSG80 controller storage subsystem.
Preparing the Host System Installing RAID Array Storage System WARNING: A shock hazard exists at the backplane when the controller enclosure bays or cache module bays are empty. Be sure the enclosures are empty, then mount the enclosures into the rack. DO NOT use the disk enclosure handles to lift the enclosure. The handles cannot support the weight of the enclosure. Only use these handles to position the enclosure in the mounting brackets.
Preparing the Host System 5. Install the elements. Install the disk drives. Make sure you install blank panels in any unused bays. Fibre Channel cabling information is shown to illustrate supported configurations. In a dual-bus disk enclosure configuration, disk enclosures 1, 2, and 3 are stacked below the controller enclosure—two SCSI Buses per enclosure (see Figure 33).
Figure 33: Dual-bus enterprise storage RAID array storage system
1 SCSI Bus 1 Cable    2 SCSI Bus 2 Cable
3 SCSI Bus 3 Cable    4 SCSI Bus 4 Cable
5 SCSI Bus 5 Cable    6 SCSI Bus 6 Cable
7 AC Power Inputs     8 Fibre Channel Ports
Figure 34: Single-bus enterprise storage RAID array storage system
1 SCSI Bus 1 Cable    2 SCSI Bus 2 Cable
3 SCSI Bus 3 Cable    4 SCSI Bus 4 Cable
5 SCSI Bus 5 Cable    6 SCSI Bus 6 Cable
7 AC Power Inputs     8 Fibre Channel Ports
Preparing the Host System Making a Physical Connection To attach a host computer to the storage subsystem, install one or more host bus adapters into the computer. A Fibre Channel (FC) cable goes from the host bus adapter to an FC switch or hub. Preparing to Install Host Bus Adapter Before installing the host bus adapter, perform the following steps: 1. Perform a complete backup of the entire system. 2.
Verifying/Installing Required Versions
Installing Novell NetWare Driver
To install the Novell NetWare driver on a server from the Enterprise Storage RAID Array Solution Software V8.8-1 kit for Novell NetWare (the preferred method), perform the following procedure.
Note: The QL2300 HAM Driver may not be compatible with NetWare versions earlier than 5.x.
1. Enter the following command from the console screen:
LOAD HDETECT
2.
Note: If you have a Proliant server, ensure that CPQSHD.CDM is later than 2.0.
Logintmo Parameter
load cpqfc.ham logintmo=<seconds>
where <seconds> is the number of seconds that the driver waits for the Enterprise Storage RAID Array to log back in with the initiator after a loop/link reset.
In some cases, Enterprise Storage RAID Array configurations using large caches took longer during transparent failovers than the driver's default login timeout of 15 seconds allowed.
Completing Your Configuration under Novell NetWare
Novell NetWare recognizes new Enterprise Storage RAID Array Fibre Channel storage subsystem devices or changes to existing configurations. You do not need to restart the NetWare server. Refer to the Novell documentation that came with your system to determine whether to use NWCONFIG, ConsoleOne, or NSSMU to configure NSS or Legacy storage types. These options also depend on the operating system version you have installed.
Installing and Configuring HSG Agent 4 StorageWorks Command Console (SWCC) enables real-time configuration of the storage environment and permits you to monitor and configure the storage connected to the HSG80 controller. The following information is included in this chapter: ■ "Controller Configuration with NetWare", page 112
Installing and Configuring HSG Agent Controller Configuration with NetWare StorageWorks Command Console (SWCC) provides a graphical user interface that can be used to configure and monitor your storage system. There is no Agent for the NetWare server (see second bullet below). Solution Software V8.8-1 for NetWare no longer provides SWCC but the controller can be configured in other ways.
FC Configuration Procedures 5 This chapter describes procedures to configure a subsystem that uses Fibre Channel (FC) fabric topology. In fabric topology, the controller connects to its hosts through switches.
Establishing a Local Connection
A local connection is required to configure the controller until a command console LUN (CCL) is established using the CLI. Communication with the controller can be through the CLI. The maintenance port, shown in Figure 35, provides a way to connect a maintenance terminal, which can be an EIA-423 compatible terminal or a computer running a terminal emulator program. The maintenance port accepts a standard RS-232 jack.
FC Configuration Procedures Setting Up a Single Controller Powering On and Establishing Communication 1. Connect the computer or terminal to the controller, as shown in Figure 35. The connection to the computer is through the COM1 or COM2 port. 2. Turn on the computer or terminal. 3. Apply power to the storage subsystem. 4. Verify that the computer or terminal is configured as follows: — 9600 baud — 8 data bits — 1 stop bit — no parity — no flow control 5. Press Enter.
Figure 36: Single controller cabling
1 Controller    2 Host port 1    3 Host port 2
4 Cable from the switch to the host Fibre Channel adapter    5 FC switch
Configuring a Single Controller Using CLI
To configure a single controller using CLI involves the following processes:
■ "Verifying the Node ID and Check for Any Previous Connections", page 116
■ "Configuring Controller Settings", page 117
■ "Restarting the Controller", page 118
■ "Setting T
FC Configuration Procedures The node ID is located in the third line of the SHOW THIS result: HSG80> SHOW THIS Controller: HSG80 ZG80900583 Software V8.8, Hardware E11 NODE_ID = 5000-1FE1-0001-3F00 ALLOCATION_CLASS = 0 If the node ID is present, go to step 5. If the node ID is all zeroes, enter node ID and checksum, which are located on a sticker on the controller enclosure.
FC Configuration Procedures Note: If SCSI-2 is selected, you must disable CCL using the command: SET THIS NOCOMMAND_CONSOLE_LUN 6. Assign an identifier for the communication LUN (also called the command console LUN, or CCL). The CCL must have a unique identifier that is a decimal number in the range 1 to 32767, and which is different from the identifiers of all units.
FC Configuration Procedures 10. Assign an identifier for the communication LUN (also called the command console LUN, or CCL). The CCL must have a unique identifier that is a decimal number in the range 1 to 32767, and which is different from the identifiers of all units. Use the following syntax: SET THIS IDENTIFIER=N Identifier must be unique among all the controllers attached to the fabric within the specified allocation class. Setting Time and Verifying All Commands 1.
FC Configuration Procedures The following sample is a result of a SHOW THIS command, with the areas of interest in bold. Controller: HSG80 ZG94214134 Software V8.
FC Configuration Procedures ....... 5. Turn on the switches, if not done previously. If you want to communicate with the Fibre Channel switches through Telnet, set an IP address for each switch. See the manuals that came with the switches for details. Plugging in the FC Cable and Verifying Connections 6. Plug the Fibre Channel cable from the first host bus adapter into the switch. Enter the SHOW CONNECTIONS command to view the connection table: SHOW CONNECTIONS 7.
FC Configuration Procedures Setting Up a Controller Pair The following procedures describe how to set up a controller pair. Powering Up and Establishing Communication 1. Connect the computer or terminal to the controller as shown in Figure 35. The connection to the computer is through the COM1 or COM2 ports. 2. Turn on the computer or terminal. 3. Apply power to the storage subsystem. 4. Configure the computer or terminal as follows: — 9600 baud — 8 data bits — 1 stop bit — no parity — no flow control 5.
FC Configuration Procedures Figure 37 shows a controller pair with failover cabling showing one HBA per server with HSG80 controller in transparent failover mode.
FC Configuration Procedures The node ID is located in the third line of the SHOW THIS result: HSG80> show this Controller: HSG80 ZG80900583 Software V8.8, Hardware E11 NODE_ID = 5000-1FE1-0001-3F00 ALLOCATION_CLASS = 0 If the node ID is present, go to step 5. If the node ID is all zeroes, enter the node ID and checksum, which are located on a sticker on the controller enclosure.
FC Configuration Procedures 6. Assign an identifier for the communication LUN (also called the command console LUN, or CCL). The CCL must have a unique identifier that is a decimal number in the range 1 to 32767, and which is different from the identifiers of all units. Use the following syntax: SET THIS IDENTIFIER=N Identifier must be unique among all the controllers attached to the fabric within the specified allocation class. 7. Set the topology for the controller.
FC Configuration Procedures When FRUTIL asks if you intend to replace the battery, answer Y: Do you intend to replace this controller's cache battery? Y/N [N] Y FRUTIL prints out a procedure, but does not give you a prompt. Ignore the procedure and press Enter. 12. Set up any additional optional controller settings, such as changing the CLI prompt.
FC Configuration Procedures 14. Verify node ID, allocation class, SCSI version, failover mode, identifier, and port topology. The following display is a sample result of a SHOW THIS command, with the areas of interest in bold. Controller: HSG80 ZG94214134 Software V8.
FC Configuration Procedures 15. Turn on the switches if not done previously. If you want to communicate with the FC switches through Telnet, set an IP address for each switch. See the manuals that came with the switches for details. Plugging in the FC Cable and Verifying Connections 16. Plug the FC cable from the first host adapter into the switch. Enter a SHOW CONNECTIONS command to view the connection table: SHOW CONNECTIONS The first connection has one or more entries in the connection table.
Verifying Installation
To verify installation for your Novell NetWare host, enter the following command from the NetWare Console:
SHOW DEVICES
Configuring Devices
The disks on the device bus of the HSG80 can be configured manually or with the CONFIG utility; the CONFIG utility is easier. Invoke CONFIG with the following command:
RUN CONFIG
WARNING: HP recommends that you use the CONFIG utility only at reduced I/O loads. The CONFIG utility takes about two minutes to discover and map the configuration of a completely populated storage system.
FC Configuration Procedures Configuring Storage Containers For a technology refresher on this subject, refer to "Choosing a Container Type", page 74. In choosing a container, you choose between independent disks (JBODs) or one of several storageset types, as shown in Figure 38. The independent disks and the selected storageset may also be partitioned.
FC Configuration Procedures Configuring a Stripeset 1. Create the stripeset by adding its name to the controller's list of storagesets and by specifying the disk drives it contains. Use the following syntax: ADD STRIPESET STRIPESET-NAME DISKNNNNN DISKNNNNN....... 2. Initialize the stripeset, specifying any desired switches: INITIALIZE STRIPESET-NAME SWITCHES See "Specifying Initialization Switches", page 91, for a description of the initialization switches. 3.
FC Configuration Procedures 3. Verify the mirrorset configuration: SHOW MIRRORSET-NAME 4. Assign the mirrorset a unit number to make it accessible by the hosts. See "Assigning Unit Numbers and Unit Qualifiers", page 137. For example: The commands to create Mirr1, a mirrorset with two members (DISK10000 and DISK20000), and to initialize it using default switch settings: ADD MIRRORSET MIRR1 DISK10000 DISK20000 INITIALIZE MIRR1 SHOW MIRR1 Configuring a RAIDset 1.
4. Assign the RAIDset a unit number to make it accessible by the hosts. See "Assigning Unit Numbers and Unit Qualifiers", page 137.
For example: The commands to create RAID1, a RAIDset with three members (DISK10000, DISK20000, and DISK30000), and to initialize it with default values:
ADD RAIDSET RAID1 DISK10000 DISK20000 DISK30000
INITIALIZE RAID1
SHOW RAID1
Configuring a Striped Mirrorset
1. Create, but do not initialize, at least two mirrorsets.
FC Configuration Procedures Configuring a Single-Disk Unit (JBOD) 1. Initialize the disk drive, specifying any desired switches: INITIALIZE DISK-NAME SWITCHES See "Specifying Initialization Switches", page 91, for a description of the initialization switches. 2. Verify the configuration by entering the following command: SHOW DISK-NAME 3. Assign the disk a unit number to make it accessible by the hosts. See "Assigning Unit Numbers and Unit Qualifiers", page 137. Configuring a Partition 1.
FC Configuration Procedures or SHOW DISK-NAME The partition number is displayed in the first column, followed by the size and starting block of each partition. 4. Assign the partition a unit number to make it accessible by the hosts. See "Assigning Unit Numbers and Unit Qualifiers", page 137. For example: The commands to create RAID1, a three-member RAIDset, then partition it into two storage units are shown below.
FC Configuration Procedures Assigning Unit Numbers and Unit Qualifiers Each storageset, partition, or single (JBOD) disk must be assigned a unit number for the host to access. As the units are added, their properties can be specified through the use of command qualifiers, which are discussed in detail under the ADD UNIT command in the HP StorageWorks HSG60 and HSG80 Array Controller and Array Controller Software Command Line Interface Reference Guide.
FC Configuration Procedures Preferring Units In multiple-bus failover mode, individual units can be preferred to a specific controller. For example, to prefer unit D102 to “this controller,” use the following command: SET D102 PREFERRED_PATH=THIS RESTART commands must be issued to both controllers for this command to take effect: RESTART OTHER_CONTROLLER RESTART THIS_CONTROLLER Note: The controllers need to restart together for the preferred settings to take effect.
FC Configuration Procedures Configuration Options There are multiple options that allow you to configure your system. Changing the CLI Prompt To change the CLI prompt, enter a 1- to 16- character string as the new prompt, according to the following syntax: SET THIS_CONTROLLER PROMPT = “NEW PROMPT” If you are configuring dual-redundant controllers, also change the CLI prompt on the “other controller.
FC Configuration Procedures Note: This procedure assumes that the disks that you are adding to the spareset have already been added to the controller's list of known devices. To add the disk drive to the controller's spareset list, use the following syntax: ADD SPARESET DISKNNNNN Repeat this step for each disk drive you want to add to the spareset: For example: The following example shows the syntax for adding DISK11300 and DISK21300 to the spareset.
FC Configuration Procedures To disable autospare, use the following command: SET FAILEDSET NOAUTOSPARE During initialization, AUTOSPARE checks to see if the new disk drive contains metadata. Metadata is information the controller writes on the disk drive when the disk drive is configured into a storageset. Therefore, the presence of metadata indicates that the disk drive belongs to, or has been used by, a storageset. If the disk drive contains metadata, initialization stops.
FC Configuration Procedures or SHOW DEVICE-NAME Note: FULL is not required when showing a particular device. It is used when showing all devices, for example, SHOW DEVICES FULL. Changing RAIDset and Mirrorset Switches Use the SET storageset-name command to change the RAIDset and Mirrorset switches associated with an existing storageset.
Using CLI for Configuration 6 This chapter presents an example of how to configure a storage subsystem using the Command Line Interpreter (CLI). The CLI configuration example shown assumes: ■ A normal, new controller pair, which includes: — NODE ID set — No previous failover mode — No previous topology set ■ Full array with no expansion cabinet ■ PCMCIA cards installed in both controllers A storage subsystem example is shown in Figure 39.
[Figure 39: Example storage subsystem map (device ports 1 through 6). Unit D102 is RAIDset R1 (Disk10000 through Disk60000); unit D120 is RAIDset R2 (Disk10100 through Disk60100); unit D0 is stripeset S1 of mirrorsets M1 (Disk10200, Disk20200) and M2 (Disk30200, Disk40200); unit D1 is mirrorset M3 (Disk50200, Disk60200); units D2 and D101 are partitions of Disk50300; Disk60300 is a spareset member.]
[Figure 40: Example configuration. Hosts RED, GREY, and BLUE, each with two Fibre Channel adapters (FCA1, FCA2), attach through two switches or hubs to a dual-controller subsystem. One switch carries connections RED1B1, GREY1B1, BLUE1B1 and RED1A1, GREY1A1, BLUE1A1; the other carries RED2A2, GREY2A2, BLUE2A2 and RED2B2, GREY2B2, BLUE2B2. Both host ports on controller A and controller B are active, and all units (D0, D1, D2, D101, D102, D120) are visible to all ports. NOTE: FCA = Fibre Channel Adapter.]
Using CLI for Configuration "RED" "GREY" "BLUE" D1 D0 D2 D101 D102 D120 CXO7110B Figure 41: Example, logical or virtual disks comprised of storagesets CLI Configuration Example Text conventions used in this example are listed below: ■ Text in italics indicates an action you take. ■ Text in THIS FORMAT, indicates a command you type. Be certain to press Enter after each command. ■ Text enclosed within a box, indicates information that is displayed by the CLI interpreter.
Using CLI for Configuration Plug serial cable from maintenance terminal into top controller.
Using CLI for Configuration Note: This command causes the controllers to restart. SET THIS PROMPT=“BTVS BOTTOM” SET OTHER PROMPT=“BTVS TOP” SHOW THIS SHOW OTHER Plug in the Fibre Channel cable from the first adapter in host “RED.” SHOW CONNECTIONS RENAME !NEWCON00 RED1B1 SET RED1B1 OPERATING_SYSTEM=NETWARE RENAME !NEWCON01 RED1A1 SET RED1A1 OPERATING_SYSTEM=NETWARE SHOW CONNECTIONS Note: Connection table sorts alphabetically.
Connection Name  Operating System  Controller  Port  Address  Status    Unit Offset
!NEWCON02        NETWARE           THIS        2     XXXXXX   OL this   0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
!NEWCON03        NETWARE           OTHER       2     XXXXXX   OL other  0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED1A1           NETWARE           OTHER       1     XXXXXX   OL other  0
...
Connection Name  Operating System  Controller  Port  Address  Status    Unit Offset
RED1A1           NETWARE           OTHER       1     XXXXXX   OL other  0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED1B1           NETWARE           THIS        1     XXXXXX   OL this   0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED2A2           NETWARE           OTHER       2     XXXXXX   OL other  0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED2B2           NETWARE           THIS        2     XXXXXX   OL this   0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
Connection Name  Operating System  Controller  Port  Address  Status    Unit Offset
GREY1A1          NETWARE           OTHER       1     XXXXXX   OL other  0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
GREY1B1          NETWARE           THIS        1     XXXXXX   OL this   0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
GREY2A2          NETWARE           OTHER       2     XXXXXX   OL other  0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
GREY2B2          NETWARE           THIS        2     XXXXXX   OL this   0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
BLUE1A1          NETWARE           OTHER       1     XXXXXX   OL other  0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
BLUE1B1          NETWARE           THIS        1     XXXXXX   OL this   0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
BLUE2A2          NETWARE           OTHER       2     XXXXXX   OL other  0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
BLUE2B2          NETWARE           THIS        2     XXXXXX   OL this   0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED1A1           NETWARE           OTHER       1     XXXXXX   OL other  0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED1B1           NETWARE           THIS        1     XXXXXX   OL this   0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED2A2           NETWARE           OTHER       2     XXXXXX   OL other  0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED2B2           NETWARE           THIS        2     XXXXXX   OL this   0
  HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
Using CLI for Configuration SET CONNECTION BLUE1A1 UNIT_OFFSET=100 SET CONNECTION BLUE1B1 UNIT_OFFSET=100 SET CONNECTION BLUE2A2 UNIT_OFFSET=100 SET CONNECTION BLUE2B2 UNIT_OFFSET=100 RUN CONFIG ADD RAIDSET R1 DISK10000 DISK20000 DISK30000 DISK40000 DISK50000 DISK60000 INITIALIZE R1 ADD UNIT D102 R1 DISABLE_ACCESS_PATH=ALL SET D102 ENABLE_ACCESS_PATH=(RED1A1, RED1B1, RED2A2, RED2B2) ADD RAIDSET R2 DISK10100 DISK20100 DISK30100 DISK40100 DISK50100 DISK60100 INITIALIZE R2 ADD UNIT D120 R2 DISABLE_ACCESS_PATH
SHOW UNITS FULL
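The unit-offset scheme in the example above can be sketched as follows. The mapping (host LUN = unit number minus the connection's unit offset, presented only when the result lands in the connection's 0-99 window) is our reading of the example, so treat it as an assumption:

```python
def host_lun(unit_number: int, unit_offset: int):
    """LUN a connection sees for a unit, or None if the unit falls
    outside the connection's window (assumed to be offset..offset+99)."""
    lun = unit_number - unit_offset
    return lun if 0 <= lun <= 99 else None

# BLUE connections use UNIT_OFFSET=100, so D120 appears to BLUE as LUN 20,
# while D0 is outside BLUE's window.
print(host_lun(120, 100), host_lun(0, 100))  # 20 None
```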
Backing Up, Cloning, and Moving Data 7 This chapter includes the following topics: ■ "Backing Up Subsystem Configurations", page 156 ■ "Creating Clones for Backup", page 157 ■ "Moving Storagesets", page 161 HSG80 ACS Solution Software V8.
Backing Up, Cloning, and Moving Data Backing Up Subsystem Configurations The controller stores information about the subsystem configuration in its nonvolatile memory. This information could be lost if the controller fails or when you replace a module in the subsystem. Use the following command to produce a display that shows if the save configuration feature is active and which devices are being used to store the configuration.
Backing Up, Cloning, and Moving Data Creating Clones for Backup Use the Clone utility to duplicate the data on any unpartitioned single-disk unit, stripeset, mirrorset, or striped mirrorset in preparation for backup. When the cloning operation is complete, you can back up the Clones rather than the storageset or single-disk unit, which can continue to service its I/O load. When you are cloning a mirrorset, Clone does not need to create a temporary mirrorset.
Backing Up, Cloning, and Moving Data Unit Unit Temporary mirrorset Disk10300 Disk10300 New member Unit Temporary mirrorset Unit Copy Disk10300 Disk10300 New member Clone Unit Clone of Disk10300 CXO5510A Figure 42: Clone utility steps for duplicating unit members To Clone a single-disk unit, stripeset, or mirrorset: 1. Establish a connection to the controller that accesses the unit you want to Clone. 2. Start Clone using the following command: RUN CLONE 3.
Backing Up, Cloning, and Moving Data The following example shows the commands you would use to Clone storage unit D6. The Clone command terminates after it creates storage unit D33, a Clone or copy of D6. HSG80 >run clone Clone Local Program Invoked NOTE: The number of existing storagesets plus the storagesets added for the cloned unit is limited. CLONE will fail if these limits are exceeded. Refer to the User's Guide for further information.
Backing Up, Cloning, and Moving Data copy from DISK30400 to DISK60200 is 2% complete copy from DISK30400 to DISK60200 is 6% complete ...... ...... ...... copy from DISK30400 to DISK60200 is 99% complete copy from DISK30400 to DISK60200 is 100% complete Press RETURN when you want the new unit to be created reduce DISK60200 set M2 policy=BEST_PERFORMANCE add mirrorset C_M2 init C_M2 DISK60200 nodestroy add unit D33 C_M2 D33 has been created. It is a clone of D6.
Backing Up, Cloning, and Moving Data Moving Storagesets You can move a storageset from one subsystem to another without destroying its data. You also can follow the steps in this section to move a storageset to a new location within the same subsystem. Caution: Move only normal storagesets. Do not move storagesets that are reconstructing or reduced, or data corruption results. See the release notes for the version of your controller software for information on which drives can be supported.
Backing Up, Cloning, and Moving Data 5. Delete each disk drive, one at a time, that the storageset contained. Use the following syntax: DELETE DISK-NAME DELETE DISK-NAME DELETE DISK-NAME 6. Remove the disk drives and move them to their new PTL locations. 7. Again add each disk drive to the controller's list of valid devices. Use the following syntax: ADD DISK DISK-NAME PTL-LOCATION ADD DISK DISK-NAME PTL-LOCATION ADD DISK DISK-NAME PTL-LOCATION 8.
New cabinet:

   ADD DISK DISK10000
   ADD DISK DISK10100
   ADD DISK DISK20000
   ADD DISK DISK20100
   ADD RAIDSET RAID99 DISK10000 DISK10100 DISK20000 DISK20100
   ADD UNIT D100 RAID99
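Reading the procedure above end to end, the whole move can be sketched as a pair of command sequences. This is an illustrative sketch, not part of the original example: it assumes the storageset was presented as unit D100 and named RAID99 in the old cabinet as well, and that the unit and the storageset are deleted before the member disks, as the procedure requires.

```
Old cabinet:
   DELETE D100
   DELETE RAID99
   DELETE DISK10000
   DELETE DISK10100
   DELETE DISK20000
   DELETE DISK20100

(physically move the drives to their new PTL locations)

New cabinet:
   ADD DISK DISK10000
   ADD DISK DISK10100
   ADD DISK DISK20000
   ADD DISK DISK20100
   ADD RAIDSET RAID99 DISK10000 DISK10100 DISK20000 DISK20100
   ADD UNIT D100 RAID99
```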
Appendix A: Subsystem Profile Templates

This appendix contains storageset profiles you can copy and use to create your own profiles. It also contains an enclosure template to help you keep track of the location of devices and storagesets in your shelves. Four (4) templates are needed for the subsystem.

Note: The storage map templates for the Model 4310R and Model 4214R or 4314R reflect the physical location of the disk enclosures in the rack.
Storageset Profile

Type of Storageset:
   ___ Mirrorset   _X_ RAIDset   ___ Stripeset   ___ Striped Mirrorset   ___ JBOD

Storageset Name ______________________
Disk Drives __________________________
Unit Number __________________________

Partitions:
   Unit # ____   Unit # ____   Unit # ____   Unit # ____
   Unit # ____   Unit # ____   Unit # ____   Unit # ____

RAIDset Switches:

   Reconstruction Policy       Reduced Membership        Replacement Policy
   ___ Normal (default)        ___ No (default)          ___ Best performance (default)
   ___ Fast                    ___ Yes, missing: ____    ___ Best fit
                                                         ___ None

Mirrorset Switches:

   Replacement Policy   C
Unit Switches:

   Caching:
      Read caching __________
      Read-ahead caching __________
      Write-back caching __________
      Write-through caching __________

   Access by following hosts enabled:
      __________________________________________________
      __________________________________________________
      __________________________________________________
      __________________________________________________
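Once a profile is filled in, it maps almost line for line onto CLI commands. The sketch below is a hedged illustration only, reusing the RAID99 and D100 names from the example elsewhere in this guide; the POLICY= and WRITEBACK_CACHE switch names are assumptions that should be checked against the CLI reference for your ACS version:

```
ADD RAIDSET RAID99 DISK10000 DISK10100 DISK20000 DISK20100
SET RAID99 POLICY=BEST_PERFORMANCE
INITIALIZE RAID99
ADD UNIT D100 RAID99
SET D100 WRITEBACK_CACHE
```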
Storage Map Template 1 for the BA370 Enclosure

Use this template for:
■ BA370 single-enclosure subsystems
■ first enclosure of multiple BA370 enclosure subsystems

                                  Port
               1        2        3        4        5        6
   Targets
      3     D10300   D20300   D30300   D40300   D50300   D60300   Power Supply   Power Supply
      2     D10200   D20200   D30200   D40200   D50200   D60200   Power Supply   Power Supply
      1     D10100   D20100   D30100   D40100   D50100   D60100   Power Supply   Power Supply
      0     D10000   D20000   D30000   D40000   D50000   D60000   Power Supply   Power Supply
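The device names in these maps encode the PTL (Port-Target-LUN) address of each drive, so the name alone tells you where the disk sits in the map. The breakdown below is a reading aid inferred from the naming pattern used throughout this guide, not a table from the original:

```
DISK10300 = Port 1, Target 03, LUN 00
DISK41200 = Port 4, Target 12, LUN 00
```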
Storage Map Template 2 for the Second BA370 Enclosure

Use this template for the second enclosure of multiple BA370 enclosure subsystems.
Storage Map Template 3 for the Third BA370 Enclosure

Use this template for the third enclosure of multiple BA370 enclosure subsystems.
Storage Map Template 4 for the Model 4214R Disk Enclosure

Use this template for a subsystem with a three-shelf Model 4214R disk enclosure (single-bus). You can have up to six Model 4214R disk enclosures per controller shelf.
Model 4214R Disk Enclosure Shelf 3 (Single-bus)

   Bay   SCSI ID   DISK ID
    1      00      Disk30000
    2      01      Disk30100
    3      02      Disk30200
    4      03      Disk30300
    5      04      Disk30400
    6      05      Disk30500
    7      08      Disk30800
    8      09      Disk30900
    9      10      Disk31000
   10      11      Disk31100
   11      12      Disk31200
   12      13      Disk31300
   13      14      Disk31400
   14      15      Disk31500
Storage Map Template 5 for the Model 4254 Disk Enclosure

Use this template for a subsystem with a three-shelf Model 4254 disk enclosure (dual-bus). You can have up to three Model 4254 disk enclosures per controller shelf.
Model 4254 Disk Enclosure Shelf 3 (Dual-bus)

   Bay   Bus   SCSI ID   DISK ID
    1     A      00      Disk50000
    2     A      01      Disk50100
    3     A      02      Disk50200
    4     A      03      Disk50300
    5     A      04      Disk50400
    6     A      05      Disk50500
    7     A      08      Disk50800
    8     B      00      Disk60000
    9     B      01      Disk60100
   10     B      02      Disk60200
   11     B      03      Disk60300
   12     B      04      Disk60400
   13     B      05      Disk60500
   14     B      08      Disk60800
Storage Map Template 6 for the Model 4310R Disk Enclosure

Use this template for a subsystem with a six-shelf Model 4310R disk enclosure (single-bus). You can have up to six Model 4310R disk enclosures per controller shelf.
Model 4310R Disk Enclosure Shelf 4 (Single-bus)

   Bay   SCSI ID   DISK ID
    1      00      Disk40000
    2      01      Disk40100
    3      02      Disk40200
    4      03      Disk40300
    5      04      Disk40400
    6      05      Disk40500
    7      08      Disk40800
    8      10      Disk41000
    9      11      Disk41100
   10      12      Disk41200

Model 4310R Disk Enclosure Shelf 1 (Single-bus)

   Bay   SCSI ID   DISK ID
    1      00      Disk10000
    2      01      Disk10100
    3      02      Disk10200
    4      03      Disk10300
    5      04      Disk10400
    6      05      Disk10500
    7      08      Disk10800
    8      10      Disk11000
    9      11      Disk11100
   10      12      Disk11200
Model 4310R Disk Enclosure Shelf 3 (Single-bus)

   Bay       1    2    3    4    5    6    7    8    9   10
   SCSI ID   00   01   02   03   04   05   08   10   11   12
   DISK ID   ___  ___  ___  ___  ___  ___  ___  ___  ___  ___
Storage Map Template 7 for the Model 4350R Disk Enclosure

Use this template for a subsystem with a three-shelf Model 4350R disk enclosure (single-bus). You can have up to three Model 4350R disk enclosures per controller shelf.
Model 4350R Disk Enclosure Shelf 4 (Single-bus)

   Bay       1    2    3    4    5    6    7    8    9   10
   SCSI ID   00   01   02   03   04   05   08   10   11   12
   DISK ID   ___  ___  ___  ___  ___  ___  ___  ___  ___  ___
Storage Map Template 8 for the Model 4314R Disk Enclosure

Use this template for a subsystem with a six-shelf Model 4314R disk enclosure. You can have a maximum of six Model 4314R disk enclosures with each Model 2200 controller enclosure.
Model 4314R Disk Enclosure Shelf 4 (Single-bus)

   Bay   SCSI ID   DISK ID
    1      00      Disk40000
    2      01      Disk40100
    3      02      Disk40200
    4      03      Disk40300
    5      04      Disk40400
    6      05      Disk40500
    7      08      Disk40800
    8      09      Disk40900
    9      10      Disk41000
   10      11      Disk41100
   11      12      Disk41200
   12      13      Disk41300
   13      14      Disk41400
   14      15      Disk41500

Model 4314R Disk Enclosure Shelf 1 (Single-bus)

   Bay   SCSI ID   DISK ID
    1      00      Disk10000
    2      01      Disk10100
    3      02      Disk10200
    4      03      Disk10300
    5      04      Disk10400
    6      05      Disk10500
    7      08      Disk10800
    8      09      Disk10900
    9      10      Disk11000
   10      11      Disk11100
   11      12      Disk11200
   12      13      Disk11300
   13      14      Disk11400
   14      15      Disk11500
Model 4314R Disk Enclosure Shelf 3 (Single-bus)

   Bay   SCSI ID   DISK ID
    1      00      Disk30000
    2      01      Disk30100
    3      02      Disk30200
    4      03      Disk30300
    5      04      Disk30400
    6      05      Disk30500
    7      08      Disk30800
    8      09      Disk30900
    9      10      Disk31000
   10      11      Disk31100
   11      12      Disk31200
   12      13      Disk31300
   13      14      Disk31400
   14      15      Disk31500
Storage Map Template 9 for the Model 4354R Disk Enclosure

Use this template for a subsystem with a three-shelf Model 4354R disk enclosure (dual-bus). You can have up to three Model 4354R disk enclosures per controller shelf.
Model 4354R Disk Enclosure Shelf 3 (Dual-bus)

   Bay   SCSI Bus   SCSI ID   DISK ID
    1       A         00      Disk50000
    2       A         01      Disk50100
    3       A         02      Disk50200
    4       A         03      Disk50300
    5       A         04      Disk50400
    6       A         05      Disk50500
    7       A         08      Disk50800
    8       B         00      Disk60000
    9       B         01      Disk60100
   10       B         02      Disk60200
   11       B         03      Disk60300
   12       B         04      Disk60400
   13       B         05      Disk60500
   14       B         08      Disk60800
Appendix B: Installing, Configuring, and Removing the Client

The following information is included in this appendix:

■ "Why Install the Client?"
■ "Before You Install the Client"
■ "Installing the Client"
■ "Installing the Integration Patch"
■ "Troubleshooting Client Installation"
■ "Adding Storage Subsystem and its Host to Navigation Tree"
■ "Removing Command Console Client"
■ "Where to Find Additional Information"
Why Install the Client?

The Client monitors and manages a storage subsystem by performing the following tasks:

■ Create mirrored device group (RAID 1)
■ Create striped device group (RAID 0)
■ Create striped mirrored device group (RAID 0+1)
■ Create striped parity device group (RAID 3/5)
■ Create an individual device (JBOD)
■ Monitor many subsystems at once
■ Set up pager notification
Before You Install the Client

1. Verify that you are logged into an account that is a member of the administrator group.

2. Check the software product description that came with the software for a list of supported hardware.

3. Verify that you have the SNMP service installed on the computer. SNMP must be installed on the computer for this software to work properly. The Client software uses SNMP to receive traps from the Agent.
Installing the Client

Observe the following restriction when installing SWCC on Windows NT 4.0 workstations: if you select all of the applets during installation, the installation fails on the HSG60 applet and again on one of the HSG80 applets. The workaround is to install all of the applets you want except the HSG60 applet and the HSG80 ACS 8.5 applet. You can then return to the setup program and install the one that you need.

1.
Installing the Integration Patch

The integration patch determines which version of firmware the controller is using and launches the appropriate StorageWorks Command Console (SWCC) Storage Window within Insight Manager (CIM) V4.23.

Should I Install the Integration Patch?

Install this patch if your HSG80 controller uses ACS 8.6 or later. This patch enables you to use the controller's SWCC Storage Window within CIM to monitor and manage the controller.
Caution: If you remove the integration patch, HSG80 Storage Window V2.1 no longer works and you need to reinstall it. The integration patch uses some of the same files as HSG80 Storage Window V2.1.

Integrating the Controller's SWCC Storage Window with CIM

You can open the controller's Storage Window from within the Windows-based CIM V4.23 by doing the following:

1.
"Insight Manager Unable to Find Controller's Storage Window"

If you installed Insight Manager before SWCC, Insight Manager is unable to find the controller's Storage Window. To find the controller's Storage Window, perform the following procedure:

1. Double-click the Insight Agents icon (Start > Settings > Control Panel). A window is displayed showing the active and inactive Agents under the Services tab.

2.
Troubleshooting Client Installation

This section provides information on how to resolve some of the problems that may occur when installing the Client software:

■ Invalid Network Port Assignments During Installation
■ "There is no disk in the drive" Message

Invalid Network Port Assignments During Installation

SWCC Clients and Agents communicate by using sockets.
   spagent     4999/tcp    #HS-Series Client and Agent
   spagent3    4994/tcp    #HSZ22 Client and Agent
   ccagent     4997/tcp    #RA200 Client and Agent
   spagent2    4995/tcp    #RA200 Client and Agent

"There is no disk in the drive" Message

When you install the Command Console Client, the software checks the shortcuts on the desktop and in the Start menu. The installation checks the shortcuts of all users for that computer, even if they are not currently logged on.
Adding Storage Subsystem and its Host to Navigation Tree

The Navigation Tree enables you to manage storage over the network by using the Storage Window. If you plan to use pager notification, you must add the storage subsystem to the Navigation Tree.

1. Verify that you have properly installed and configured the HS-Series Agent on the storage subsystem host.

2. Click Start > Programs > Command Console > StorageWorks Command Console.
Figure 44: Navigation window showing storage host system "Atlanta"

6. Click the plus sign to expand the host icon. When expanded, the Navigation Window displays an icon for the storage subsystem. To access the Storage Window for the subsystem, double-click the Storage Window icon.

Figure 45: Navigation window showing expanded "Atlanta" host icon
Note: You can create virtual disks by using the Storage Window. For more information on the Storage Window, refer to the HP StorageWorks Command Console V2.5 User Guide.

Removing Command Console Client

Before you remove the Command Console Client from the computer, remove AES (the asynchronous event service). This prevents the system from reporting that a service failed to start every time the system is restarted. Steps 2 through 5 describe how to remove the Command Console Client.
Note: This procedure removes only the Command Console Client (SWCC Navigation Window). You can remove the HSG80 Client by using Add/Remove Programs.
Where to Find Additional Information

You can find additional information about SWCC by referring to the online Help and to the HP StorageWorks Command Console V2.5 User Guide.

About the User Guide

The HP StorageWorks Command Console V2.5 User Guide contains additional information on how to use SWCC.
Glossary

This glossary defines terms pertaining to the ACS solution software. It is not a comprehensive glossary of computer terms.

8B/10B
A type of byte encoding and decoding used to reduce errors in data transmission, patented by the IBM Corporation. This process of encoding and decoding data for transmission has been adopted by ANSI.

adapter
A device that converts the protocol and hardware interface of one bus type into another without changing the function of the bus.
association set
A group of remote copy sets that share selectable attributes for logging and failover. Members of an association set transition to the same state simultaneously. For example, if one association set member assumes the failsafe locked condition, then other members of the association set also assume the failsafe locked condition. An association set can also be used to share a log between a group of remote copy set members that require efficient use of the log space.
built-in self-test
A diagnostic test performed by the array controller software on the controller policy processor.

byte
A binary character string made up of 8 bits operated on as a unit.

cache memory
A portion of memory used to accelerate read and write operations.

cache module
A fast storage buffer.

CCL
Command Console LUN; a SCSI logical unit number virtual device used for communicating with the Command Console Graphical User Interface (GUI) software.
controller
A hardware device that, with proprietary software, facilitates communications between a host and one or more devices organized in an array. The HSG80 family controllers are examples of array controllers.

copying
A state in which data to be copied to the mirrorset is inconsistent with other members of the mirrorset. See also normalizing.

copying member
Any member that joins the mirrorset after the mirrorset is created is regarded as a copying member.
DOC
DWZZA-On-a-Chip. A SCSI bus extender chip used to connect a SCSI bus in an expansion cabinet to the corresponding SCSI bus in another cabinet (see DWZZA).

driver
A hardware device or a program that controls or regulates another device. For example, a device driver is a driver developed for a specific device that allows a computer to operate with the device, such as a printer or a disk drive.
ESD
Electrostatic discharge. The discharge of potentially harmful static electrical voltage as a result of improper grounding.

extended subsystem
A subsystem in which two cabinets are connected to the primary cabinet.

external cache battery
See ECB.

F_Port
A port in a fabric where an N_Port or NL_Port may attach.

fabric
A group of interconnections between ports that includes a fabric element.
FCC
Federal Communications Commission. The federal agency responsible for establishing standards and approving electronic devices within the United States.

FCC Class A
This certification label is on electronic devices that can only be used in a commercial environment within the United States.

FCC Class B
This certification label is on electronic devices that can be used in either a home or a commercial environment within the United States.
FRU
Field replaceable unit. A hardware component that can be replaced at the customer location by service personnel or qualified customer service personnel.

FRUTIL
Field Replacement utility.

full duplex (n)
A communications system in which there is a capability for 2-way transmission and acceptance between two sites at the same time.

full duplex (adj)
Pertaining to a communications method in which data can be transmitted and received at the same time.
host compatibility mode
A setting used by the controller to provide optimal controller performance with specific operating systems. This improves the controller performance and compatibility with the specified operating system.

hot disks
A disk containing multiple hot spots. Hot disks occur when the workload is poorly distributed across storage devices, which prevents optimum subsystem performance. See also hot spots.

hot spots
A portion of a disk drive frequently accessed by the host.
interface
A set of protocols used between components, such as cables, connectors, and signal levels.

I/O
Refers to input and output functions.

I/O driver
The set of code in the kernel that handles the physical I/O to a device. This is implemented as a fork process. Same as driver.

I/O interface
See interface.

I/O module
A 16-bit SBB shelf device that integrates the SBB shelf with either an 8-bit single-ended, 16-bit single-ended, or 16-bit differential SCSI bus (see SBB).
logical unit
A physical or virtual device addressable through a target ID number. LUNs use their target bus connection to communicate on the SCSI bus.

logical unit number
LUN. A value that identifies a specific logical unit belonging to a SCSI target ID number. A number associated with a physical device unit during a task's I/O operations. Each task in the system must establish its own correspondence between logical unit numbers and physical devices.

logon
Also called login.
mirrored write-back caching
A method of caching data that maintains two copies of the cached data. The copy is available if either cache module fails.

mirrorset
See RAID level 1.

MIST
Module Integrity Self-Test.

multibus failover
Allows the host to control the failover process by moving the units from one controller to another.

N_port
A port attached to a node for use with point-to-point topology or fabric topology.

NL_port
A port attached to a node for use in all topologies.
normalizing
Normalizing is a state in which, block-for-block, data written by the host to a mirrorset member is consistent with the data on other normal and normalizing members. The normalizing state exists only after a mirrorset is initialized. Therefore, no customer data is on the mirrorset.

normalizing member
A mirrorset member whose contents are the same as all other normal and normalizing members for data that has been written since the mirrorset was created or lost cache data was cleared.
partition
A logical division of a container, represented to the host as a logical unit.

PCMCIA
Personal Computer Memory Card International Association. An international association formed to promote a common standard for PC card-based peripherals to be plugged into notebook computers. The card commonly known as a PCMCIA card is about the size of a credit card.

PDU
Power distribution unit. The power entry device for HP StorageWorks cabinets.
private NL_Port
An NL_Port which does not attempt login with the fabric and only communicates with NL_Ports on the same loop.

program card
The PCMCIA card containing the controller operating software.

protocol
The conventions or rules for the format and timing of messages sent and received.

PTL
Port-Target-LUN. The controller method of locating a device on the controller device bus.

PVA module
Power Verification and Addressing module.
RAID level 3/5
A RAID storageset that stripes data and parity across three or more members in a disk array. A RAIDset combines the best characteristics of RAID level 3 and RAID level 5. A RAIDset is the best choice for most applications with small to medium I/O requests, unless the application is write intensive. A RAIDset is sometimes called parity RAID.

RAIDset
See RAID level 3/5.

RAM
Random access memory.
remote copy set
A bound set of two units, one located locally and one located remotely, for long-distance mirroring. The units can be a single disk, or a storageset, mirrorset, or RAIDset. A unit on the local controller is designated as the "initiator" and a corresponding unit on the remote controller is designated as the "target".

request rate
The rate at which requests are arriving at a servicing entity.

RFI
Radio frequency interference.
SCSI ID number
The representation of the SCSI address that refers to one of the signal lines numbered 0 through 15.

SCSI-P cable
A 68-conductor (34 twisted-pair) cable generally used for differential bus connections.

SCSI port
(1) Software: The channel controlling communications to and from a specific SCSI bus in the system. (2) Hardware: The name of the logical socket at the back of the system unit to which a SCSI device is connected.
StorageWorks
A family of modular data storage products that allows customers to design and configure their own storage subsystems. Components include power, packaging, cabling, devices, controllers, and software. Customers can integrate devices and array controllers in HP StorageWorks enclosures to form storage subsystems. HP StorageWorks systems include integrated SBBs and array controllers to form storage subsystems.
tape inline exerciser (TILX)
The controller diagnostic software used to test the data transfer capabilities of tape drives in a way that simulates a high level of user activity.

topology
An interconnection scheme that allows multiple Fibre Channel ports to communicate with each other. For example, point-to-point, Arbitrated Loop, and switched fabric are all Fibre Channel topologies.
warm swap
A device replacement method that allows the complete system to remain online during device removal or insertion. The system bus may be halted, or quiesced, for a brief period of time during the warm-swap procedure.

Wide Ultra SCSI
Fast/20 on a Wide SCSI bus.

Worldwide name
A unique 64-bit number assigned to a subsystem by the Institute of Electrical and Electronics Engineers (IEEE) and set by manufacturing prior to shipping. This name is referred to as the node ID within the CLI.