hp StorageWorks HSG80 ACS Solution Software Version 8.7 for Compaq Tru64 UNIX
Installation and Configuration Guide
Part Number: AA-RFAUE-TE
Fifth Edition (August 2002)
Product Version: 8.7
This guide provides installation and configuration instructions and reference material for operation of the HSG80 ACS Solution Software Version 8.7 for Compaq Tru64 UNIX.
© Hewlett-Packard Company, 2002. All rights reserved. Hewlett-Packard Company makes no warranty of any kind with regard to this material, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. Hewlett-Packard shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this material. This document contains proprietary information, which is protected by copyright.
Contents
About this Guide
    Intended Audience, xv
    Related Documentation, xv
    Document Conventions, xvii
    Configuration Flowchart
Determining Connections
    Naming Connections
    Numbers of Connections
Assigning Unit Numbers
Defining a Partition
    Guidelines for Partitioning Storagesets and Disk Drives
Changing Characteristics through Switches
    Enabling Switches
    Changing Switches
HSG80 Units and Tru64 UNIX Utilities, 3–9
    Add recognition for LP8000 adapter, 3–9
    File Utility, 3–10
        For V4.0G, 3–10
        For V5.1x
Error Threshold for Drives, 3–34
4 Installing and Configuring HSG Agent
    Why Use StorageWorks Command Console (SWCC)?, 4–1
    Installation and Configuration Overview, 4–2
    About the Network Connection for the Agent
Repeat Procedure for Each Host Adapter Connection
Verify Installation
    For V4.0G Use
    For V5.1x Use
Configuring Devices
7 Backing Up, Cloning, and Moving Data
    Backing Up Subsystem Configurations, 7–1
    Creating Clones for Backup, 7–2
    Moving Storagesets, 7–5
A Subsystem Profile Templates
    Storageset Profile
C SWCC Agent in TruCluster Environment
    SWCC Overview, C–1
    Running the SWCC Agent on a V4.0G Cluster, C–2
    Running the SWCC Agent under ASE Services, C–2
    Creating the Start/Stop Script
Figures
    1 General configuration flowchart (panel 1), xx
    2 General configuration flowchart (panel 2), xxi
    3 Configuring storage with SWCC
    4–1 An example of a network connection, 4–4
    5–1 Maintenance port connection, 5–2
    5–2 Single controller cabling, 5–4
    5–3 Controller pair failover cabling
Tables
    1 Document Conventions, xvii
    2 Summary of Chapter Contents, xviii
    1–1 Unit Assignments and SCSI_VERSION, 1–21
    2–1 PTL addressing, single-bus configuration, six Model 4310R enclosures
About this Guide
This guide describes how to install and configure the HSG80 ACS Solution Software Version 8.7 for Compaq Tru64 UNIX. This guide describes:
• How to plan the storage array subsystem
• How to install and configure the storage array subsystem on individual operating system platforms
This book does not contain information about the operating environments to which the controller may be connected, nor does it contain detailed information about subsystem enclosures or their components.
• Installation and Configuration Guide (platform-specific) - the guide you are reading
• Solution Software Release Notes (platform-specific)
• FC-AL Application Note (AA-RS1ZA-TE)
Solution software host support includes the following platforms:
— IBM AIX
— HP-UX
— Linux (Red Hat x86/Alpha, SuSE x86/Alpha, Caldera x86)
— Novell NetWare
— OpenVMS
— Sun Solaris
— Tru64 UNIX
— Windows NT/2000
Additional support required by HSG80 ACS Solution Software Version 8.7
Document Conventions
The conventions included in Table 1 apply.
Table 1: Document Conventions
Element | Convention
Cross-reference links | Blue text: Figure 1
Key names, menu items, buttons, and dialog box titles | Bold
File names, application names, and text emphasis | Italics
User input, command names, system responses (output and messages) | Monospace font
Variables | Monospace, italic font
Website addresses | Sans serif font (http://www.compaq.com)
Configuration Flowchart
A three-part flowchart (Figures 1-3) is shown on the following pages. Refer to these charts while installing and configuring a new storage subsystem. All references in the flowcharts pertain to pages in this guide, unless otherwise indicated. Table 2 below summarizes the content of the chapters.
Table 2: Summary of Chapter Contents
Table 2: Summary of Chapter Contents (Continued)
Appendix A. Subsystem Profile Templates: This appendix contains storageset profiles to copy and use to create your system profiles. It also contains an enclosure template to use to help keep track of the location of devices and storagesets in your shelves. Four (4) templates will be needed for the subsystem.
Appendix B. The Client monitors and manages a storage subsystem.
Unpack subsystem (see the unpacking instructions on the shipping box)
Plan a Subsystem (Chapter 1)
Plan Storage Configurations (Chapter 2)
Prepare Host System (Chapter 3)
Make Local Connection (Page 5-2)
Single controller: Cable Controller (Page 5-3), then Configure Controller (Page 5-4)
Controller pair: Cable Controllers (Page 5-10), then Configure Controllers (Page 5-11)
Installing SWCC? Yes: go to B (see Figure 3 on page xxii). No: go to A (see continuation of figure on next page).
Figure 1: General configuration flowchart (panel 1)
A: Configure devices (Page 5-16)
Create Storagesets and Partitions: Stripeset, Page 5-17; Mirrorset, Page 5-18; RAIDset, Page 5-19; Striped Mirrorset, Page 5-19; Single (JBOD) Disk, Page 5-20; Partition, Page 5-20
Continue creating units until you have completed your planned configuration.
Assign Unit Numbers (Page 5-22)
Configuration Options (Page 5-23)
Verify Storage Setup
Figure 2: General configuration flowchart (panel 2)
B: Install Agent (Chapter 4)
Install Client (Appendix B)
Create Storage (see SWCC online help)
Verify Storage Set Up
Figure 3: Configuring storage with SWCC
Symbols in Text
These symbols may be found in the text of this guide. They have the following meanings.
WARNING: Text set off in this manner indicates that failure to follow directions in the warning could result in bodily harm or loss of life.
CAUTION: Text set off in this manner indicates that failure to follow directions could result in damage to equipment or data.
IMPORTANT: Text set off in this manner presents clarifying information or specific instructions.
About this Guide Any surface or area of the equipment marked with these symbols indicates the presence of a hot surface or hot component. Contact with this surface could result in injury. WARNING: To reduce the risk of injury from a hot component, allow the surface to cool before touching. Power supplies or systems marked with these symbols indicate the presence of multiple sources of power.
About this Guide Getting Help If you still have a question after reading this guide, contact an authorized service provider or access our website. Technical Support In North America, call technical support at 1-800-OK-COMPAQ, available 24 hours a day, 7 days a week. NOTE: For continuous quality improvement, calls may be recorded or monitored. Outside North America, call technical support at the nearest location.
1 Planning a Subsystem This chapter provides information that helps you plan how to configure the storage array subsystem. This chapter focuses on the technical terms and knowledge needed to plan and implement storage subsystems. NOTE: This chapter frequently references the command line interface (CLI). For the complete syntax and descriptions of the CLI commands, see the StorageWorks HSG80 Array Controller ACS Version 8.7 CLI Reference Guide.
Defining Subsystems
This section describes the terms this controller and other controller. It also presents graphics of the Model 2200 and BA370 enclosures.
NOTE: The HSG80 controller uses the BA370 or Model 2200 enclosure.
Controller Designations A and B
The terms A, B, “this controller,” and “other controller” are used to distinguish one controller from the other in a two-controller (also called dual-redundant) subsystem.
BA370 Enclosure
Figure 1–2: Location of controllers and cache modules in a BA370 enclosure (CXO6283B)
1 EMU   2 PVA   3 Controller A   4 Controller B   5 Cache module A   6 Cache module B
Controller Designations “This Controller” and “Other Controller”
Some CLI commands use the terms “this” and “other” to identify one controller or the other in a dual-redundant pair. These designations are a shortened form of “this controller” and “other controller.”
Model 2200 Enclosure
Figure 1–3: “This controller” and “other controller” for the Model 2200 enclosure (CXO7366A)
1 This controller   2 Other controller
BA370 Enclosure
Figure 1–4: “This controller” and “other controller” for the BA370 enclosure (CXO6468D)
1 Other controller   2 This controller
Planning a Subsystem What is Failover Mode? Failover is a way to keep the storage array available to the host if one of the controllers becomes unresponsive. A controller can become unresponsive because of a controller hardware failure or, in multiple-bus mode only, due to a failure of the link between host and controller or host-bus adapter. Failover keeps the storage array available to the hosts by allowing the surviving controller to take over total control of the subsystem.
Planning a Subsystem enabling the standby port to take over for the active one. If one controller fails, its companion controller (known as the surviving controller) takes control by making both its host ports active, as shown in Figure 1–6. Units are divided between the host ports: • Units 0-99 are on host port 1 of both controllers (but accessible only through the active port). • Units 100-199 are on host port 2 of both controllers (but accessible only through the active port).
Figure 1–6: Transparent failover—after failover from controller B to controller A (both host ports of controller A are active and serve units D0-D1 and D100-D120; both host ports of controller B are not available)
Multiple-Bus Failover Mode
Multiple-bus failover mode has the following characteristics:
• Host controls the failover process by moving the units from one controller to another
Planning a Subsystem All hosts must have operating system software that supports multiple-bus failover mode. With this software, the host sees the same units visible through two (or more) paths. When one path fails, the host can issue commands to move the units from one path to another. A typical multiple-bus failover configuration is shown in Figure 1–7. In multiple-bus failover mode, you can specify which units are normally serviced by a specific controller of a controller pair.
Planning a Subsystem Host 1 "RED" Host 2 "GREY" Host 3 "BLUE" FCA1 FCA2 FCA1 FCA2 FCA1 FCA2 Switch or hub Switch or hub Host port 1 active D0 Host port 2 active Controller A D1 D2 D100 D101 D120 All units visible to all ports Host port 1 active Controller B Host port 2 active NOTE: FCA = Fibre Channel Adapter CXO7094B Figure 1–7: Typical multiple-bus configuration Selecting a Cache Mode The cache module supports read, read-ahead, write-through, and write-back caching techniques.
Planning a Subsystem Read Caching When the controller receives a read request from the host, it reads the data from the disk drives, delivers it to the host, and stores the data in its cache module. Subsequent reads for the same data will take the data from cache rather than accessing the data from the disks. This process is called read caching. Read caching can improve response time to many of the host’s read requests. By default, read caching is enabled for all units.
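Read caching is turned on or off per unit with a unit switch. The following is a minimal sketch; the unit name D101 is illustrative, and the switch names should be confirmed against the StorageWorks HSG80 Array Controller ACS Version 8.7 CLI Reference Guide:
SET D101 READ_CACHE
SET D101 NOREAD_CACHE
NOREAD_CACHE disables read caching for the unit; READ_CACHE restores the default behavior.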
The total memory available for cached data is reduced by half, but the level of protection is greater.
Figure 1–8: Mirrored caching (each cache module holds its own cache plus a copy of the other module's cache)
Before enabling mirrored caching, make sure the following conditions are met:
• Both controllers support the same size cache.
• Diagnostics indicate that both caches are good.
Planning a Subsystem • Serves as a communications device for the HS-Series Agent. The CCL identifies itself to the host by a unique identification string. In dual-redundant controller configurations, the commands described in the following sections alter the setting of the CCL on both controllers. The CCL is enabled only on host port 1. At least one storage device of any type must be configured on host port 2 before installing the Agent on a host connected to host port 2.
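As a sketch of how the CCL is controlled from the CLI in SCSI-2 mode (in SCSI-3 mode the CCL is presented as LUN 0 to all connections, as described later in this chapter); the exact switch names should be confirmed in the ACS Version 8.7 CLI Reference Guide:
SET THIS_CONTROLLER COMMAND_CONSOLE_LUN
SET THIS_CONTROLLER NOCOMMAND_CONSOLE_LUN
SHOW THIS_CONTROLLER
SHOW THIS_CONTROLLER reports the CCL state; the SHOW THIS examples later in this guide include a line such as “Command Console LUN is lun 0 (IDENTIFIER = 99)”.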
Planning a Subsystem Determining Connections The term “connection” applies to every path between a Fibre Channel adapter in a host computer and an active host port on a controller. NOTE: In ACS Version 8.7, the maximum number of supported connections is 96. Naming Connections It is highly recommended that you assign names to connections that have meaning in the context of your particular configuration.
Planning a Subsystem • If a controller pair is in transparent failover mode and port 1 and port 2 are on the same link (that is, all ports are on the same loop or fabric), each adapter will have two connections, as shown in Figure 1–10. • If a controller pair is in multiple-bus failover mode, each adapter has two connections, as shown in Figure 1–11.
Planning a Subsystem Host 1 "GREEN" Host 2 "ORANGE" Host 3 "PURPLE" FCA1 FCA1 FCA1 Switch or hub Connections GREEN1A1 ORANGE1A1 PURPLE1A1 Host port 1 active D0 Host port 2 standby Controller A D1 Host port 1 standby Connections GREEN1B2 ORANGE1B2 PURPLE1B2 D100 Controller B D101 D120 Host port 2 active NOTE: FCA = Fibre Channel Adapter CXO7079B Figure 1–10: Connections in single-link, transparent failover mode configurations HSG80 ACS Solution Software Version 8.
Planning a Subsystem Host 1 "VIOLET" FCA1 FCA2 Switch or hub Connection VIOLET1B1 Switch or hub Connection VIOLET1A1 Connection VIOLET2A2 Host port 1 active D0 Host port 2 active Controller A D1 D2 D100 Connection VIOLET2B2 D101 D120 All units visible to all ports Host port 1 active Controller B Host port 2 active NOTE: FCA = Fibre Channel Adapter CXO7080B Figure 1–11: Connections in multiple-bus failover mode Assigning Unit Numbers The controller keeps track of the unit with the unit n
Planning a Subsystem • The UNIT_OFFSET switch in the ADD CONNECTIONS (or SET connections) commands • The controller port to which the connection is attached • The SCSI_VERSION switch of the SET THIS_CONTROLLER/OTHER_CONTROLLER command The considerations for assigning unit numbers are discussed in the following sections.
Figure 1–12: LUN presentation to hosts, as determined by offset (host connection 1, offset 0, sees units D0-D3 as LUNs 0-3; host connection 2, offset 20, sees units D20-D21 as LUNs 0-1; host connection 3, offset 100, sees units D100-D102 as LUNs 0-2 and D130-D131 as LUNs 30-31)
Offsets other than the default values can be specified.
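For example, an offset is assigned to a connection with the UNIT_OFFSET switch of the SET connection command (the connection name and offset below are illustrative):
SET RED1A1 UNIT_OFFSET=20
With this offset, unit D20 is presented to connection RED1A1 as LUN 0, D21 as LUN 1, and so on.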
Planning a Subsystem Matching Units to Host Connections in Multiple-Bus Failover Mode In multiple-bus failover mode, the ADD UNIT command creates a unit for host connections to access. All unit numbers (0 through 199) are potentially visible on all four controller ports, but are accessible only to those host connections for which access path is enabled and which have offsets in the unit's range.
Planning a Subsystem Assigning Host Connection Offsets and Unit Numbers in SCSI-3 Mode If SCSI_VERSION is set to SCSI-3, the CCL is presented as LUN 0 to all connections. The CCL supersedes any other unit assignment. Therefore, in SCSI-3 mode, a unit that would normally be presented to a connection as LUN 0 is not visible to that connection at all.
Table 1–1: Unit Assignments and SCSI_VERSION
SCSI_VERSION | Offset | Unit Assignment | What the connection sees LUN 0 as
SCSI-2 | Divisible by 10 | At offsets | Unit whose number matches offset
SCSI-3 | Divisible by 10 | Not at offsets | CCL
What is Selective Storage Presentation?
Selective Storage Presentation is a feature of the HSG80 controller that enables the user to control the allocation of storage space and shared access to storage across multiple hosts.
Planning a Subsystem Restricting Host Access by Separate Links In transparent failover mode, host port 1 of controller A and host port 1 of controller B share a common Fibre Channel link. Host port 2 of controller A and host port 2 of controller B also share a common Fibre Channel link. If the host 1 link is separate from the host 2 link, the simplest way to limit host access is to have one host or set of hosts on the port 1 link, and another host or set of hosts on the port 2 link.
Planning a Subsystem Restricting Host Access by Disabling Access Paths If more than one host is on a link (that is, attached to the same port), host access can be limited by enabling the access of certain host connections and disabling the access of others. This is done through the ENABLE_ACCESS_PATH and DISABLE_ACCESS_PATH switches of the ADD UNIT (or SET unit) commands. The access path is a unit switch, meaning it must be specified for each unit.
Planning a Subsystem Host BROWN cannot see units lower than its offset, so it cannot access units D100 and D101. However, host BLACK can still access D120 as LUN 20 if the operating system permits. To restrict access of D120 to only host BROWN, enable only host BROWN’s access, as follows: SET D120 DISABLE_ACCESS_PATH=ALL SET D120 ENABLE_ACCESS_PATH=BROWN1B2 NOTE: StorageWorks recommends that you provide access to only specific connections, even if there is just one connection on the link.
Planning a Subsystem Host 1 "RED" Host 2 "GREY" Host 3 "BLUE" FCA1 FCA2 FCA1 FCA2 FCA1 FCA2 Switch or hub Connections RED1B1 GREY1B1 BLUE1B1 Switch or hub Connections RED1A1 GREY1A1 BLUE1A1 Connections RED2A2 GREY2A2 BLUE2A2 Host port 1 active Host port 2 active Controller A D0 D1 D2 D100 Connections RED2B2 GREY2B2 BLUE2B2 D101 D120 All units visible to all ports Host port 1 active Controller B Host port 2 active NOTE: FCA = Fibre Channel Adapter CXO7078B Figure 1–14: Limiting host a
Planning a Subsystem For example: Figure 1–14 shows a representative multiple-bus failover configuration. Restricting the access of unit D101 to host BLUE can be done by enabling only the connections to host BLUE. At least two connections must be enabled for multiple-bus failover to work. For most operating systems, it is desirable to have all connections to the host enabled.
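A sketch of the commands for this example, following the same pattern shown earlier for host BROWN and using the connection names from Figure 1–14:
SET D101 DISABLE_ACCESS_PATH=ALL
SET D101 ENABLE_ACCESS_PATH=BLUE1A1
SET D101 ENABLE_ACCESS_PATH=BLUE1B1
SET D101 ENABLE_ACCESS_PATH=BLUE2A2
SET D101 ENABLE_ACCESS_PATH=BLUE2B2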
Planning a Subsystem For example: In Figure 1–14, assume all host connections initially have the default offset of 0. Giving all connections access to host BLUE, an offset of 120 will present unit D120 to host BLUE as LUN 0. Enter the following commands: SET BLUE1A1 UNIT_OFFSET=120 SET BLUE1B1 UNIT_OFFSET=120 SET BLUE2A2 UNIT_OFFSET=120 SET BLUE2B2 UNIT_OFFSET=120 Host BLUE cannot see units lower than its offset, so it cannot access any other units.
Planning a Subsystem • Controller A, port 1—worldwide name + 3, for example 5000-1FE1-FF0C-EE03 • Controller A, port 2—worldwide name + 4, for example 5000-1FE1-FF0C-EE04 Use the CLI command, SHOW THIS_CONTROLLER/OTHER_CONTROLLER to display the subsystem’s worldwide name.
Figure 1–16: Placement of the worldwide name label on the BA370 enclosure (CXO6873B)
1 Node ID (worldwide name)   2 Checksum
The label shows WWN INFORMATION with fields P/N, WWN (NNNN-NNNN-NNNN-NNNN), S/N, and Checksum (NN).
CAUTION: Each subsystem has its own unique worldwide name (node ID). If you attempt to set the subsystem worldwide name to a name other than the one that came with the subsystem, the data on the subsystem will not be accessible.
2 Planning Storage Configurations This chapter provides information to help you plan the storage configuration of your subsystem. Storage containers are individual disk drives (JBOD), storageset types (mirrorsets, stripesets, and so on), and/or partitioned drives. Use the guidelines found in this section to plan the various types of storage containers needed.
Where to Start
The following procedure outlines the steps to follow when planning your storage configuration. See Appendix A to locate the blank templates for keeping track of the containers being configured.
1. Determine your storage requirements. Use the questions in “Determining Storage Requirements,” page 2–3, to help you.
2. Review configuration rules. See “Configuration Rules for the Controller,” page 2–3.
Planning Storage Configurations — Use the Command Line Interpreter (CLI) commands. This method allows you flexibility in defining and naming your storage containers. See the StorageWorks HSG80 Array Controller ACS Version 8.7 CLI Reference Guide. Determining Storage Requirements It is important to determine your storage requirements.
Planning Storage Configurations NOTE: For the previous two storageset configurations, this is a combined maximum, limited to no more than 20 RAID 3/5 storagesets in the individual combination.
The HSG80 controller identifies devices based on a Port-Target-LUN (PTL) numbering scheme, shown in Figure 2–2. The physical location of a device in its enclosure determines its PTL.
• P—Designates the controller's SCSI device port number (1 through 6).
• T—Designates the target ID number of the device. Valid target ID numbers for a single-controller configuration and dual-redundant controller configuration are 0 - 3 and 8 - 15, respectively.
The controller operates with BA370 enclosures that are assigned ID numbers 0, 2, and 3. These ID numbers are set through the PVA module. Enclosure ID number 1, which assigns devices to targets 4 through 7, is not supported. Figure 2–3 shows how data is laid out on disks in an extended configuration (the blocks of the virtual disk seen by the operating system are mapped across the actual devices).
Planning Storage Configurations Examples - Model 2200 Storage Maps, PTL Addressing The Model 2200 controller enclosure can be combined with the following: • Model 4214R disk enclosure — Ultra2 SCSI with 14 drive bays, single-bus I/O module. • Model 4254 disk enclosure — Ultra2 SCSI with 14 drive bays, dual-bus I/O module. NOTE: The Model 4214R uses the same storage maps as the Model 4314R, and the Model 4254 uses the same storage maps as the Model 4354R disk enclosures.
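As a reading aid for the disk IDs used in the following storage maps: the name encodes the PTL. For example, Disk30500 is the device on controller device port 3, target 05, LUN 00, and Disk61200 is the device on port 6, target 12, LUN 00.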
Planning Storage Configurations Table 2–1: PTL addressing, single-bus configuration, six Model 4310R enclosures Model 4310R Disk Enclosure Shelf 6 (single-bus) 10 SCSI ID 00 01 02 03 04 05 08 10 11 12 DISK ID Disk61200 9 Disk61100 8 Disk61000 7 Disk60800 6 Disk60500 5 Disk60400 4 Disk60300 3 Disk60200 2 Disk60100 1 Disk60000 Bay Model 4310R Disk Enclosure Shelf 5 (single-bus) 10 SCSI ID 00 01 02 03 04 05 08 10 11 12 DISK ID Disk51200 9 Disk51100 8 Disk51000
Planning Storage Configurations Model 4310R Disk Enclosure Shelf 2 (single-bus) 10 SCSI ID 00 01 02 03 04 05 08 10 11 12 DISK ID Disk21200 9 Disk21100 8 Disk21000 7 Disk20800 6 Disk20500 5 Disk20400 4 Disk20300 3 Disk20200 2 Disk20100 1 Disk20000 Bay Model 4310R Disk Enclosure Shelf 3 (single-bus) 10 SCSI ID 00 01 02 03 04 05 08 10 11 12 DISK ID HSG80 ACS Solution Software Version 8.
Planning Storage Configurations Table 2–2: PTL addressing, dual-bus configuration, three Model 4350R enclosures Model 4350R Disk Enclosure Shelf 1 (single-bus) SCSI Bus A SCSI Bus B 10 SCSI ID 00 01 02 03 04 00 01 02 03 04 DISK ID Disk20400 9 Disk20300 8 Disk20200 7 Disk20100 6 Disk20000 5 Disk10400 4 Disk10300 3 Disk10200 2 Disk10100 1 Disk10000 Bay Model 4350R Disk Enclosure Shelf 2 (single-bus) SCSI Bus A SCSI Bus B 10 SCSI ID 00 01 02 03 04 00 01 02 03 04
Planning Storage Configurations Table 2–3: PTL addressing, single-bus configuration, six Model 4314R enclosures Model 4314R Disk Enclosure Shelf 6 (single-bus) 14 SCSI ID 00 01 02 03 04 05 08 09 10 11 12 13 14 15 DISK ID Disk61500 13 Disk61400 12 Disk61300 11 Disk61200 10 Disk61100 9 Disk61000 8 Disk60900 7 Disk60800 6 Disk60500 5 Disk60400 4 Disk60300 3 Disk60200 2 Disk60100 1 Disk60000 Bay Model 4314R Disk Enclosure Shelf 5 (single-bus) 14 SCSI ID 00 01 02
Planning Storage Configurations Model 4314R Disk Enclosure Shelf 2 (single-bus) 14 SCSI ID 00 01 02 03 04 05 08 09 10 11 12 13 14 15 DISK ID Disk21500 13 Disk21400 12 Disk21300 11 Disk21200 10 Disk21100 9 Disk21000 8 Disk20900 7 Disk20800 6 Disk20500 5 Disk20400 4 Disk20300 3 Disk20200 2 Disk20100 1 Disk20000 Bay Model 4314R Disk Enclosure Shelf 3 (single-bus) 14 SCSI ID 00 01 02 03 04 05 08 09 10 11 12 13 14 15 DISK ID 2–12 Disk31500 13 Disk
Planning Storage Configurations Table 2–4: PTL addressing, dual-bus configuration, three Model 4354A enclosures.
Planning Storage Configurations Choosing a Container Type Different applications may have different storage requirements. You probably want to configure more than one kind of container within your subsystem. In choosing a container, you choose between independent disks (JBODs) or one of several storageset types, as shown in Figure 2–4. The independent disks and the selected storageset may also be partitioned. The storagesets implement RAID (Redundant Array of Independent Disks) technology.
Planning Storage Configurations Table 2–5 compares the different kinds of containers to help you determine which ones satisfy your requirements.
Planning Storage Configurations Creating a Storageset Profile Creating a profile for your storagesets, partitions, and devices can simplify the configuration process. Filling out a storageset profile helps you choose the storagesets that best suit your needs and to make informed decisions about the switches you can enable for each storageset or storage device that you configure in your subsystem. For an example of a storageset profile, see Table 2–6.
Table 2–6: Example of Storageset Profile
Type of Storageset:   _____ Mirrorset   __X__ RAIDset   _____ Stripeset   _____ Striped Mirrorset   _____ JBOD
Storageset Name: R1
Planning Storage Configurations Planning Considerations for Storageset This section contains the guidelines for choosing the storageset type needed for your subsystem: • “Stripeset Planning Considerations,” page 2–18 • “Mirrorset Planning Considerations,” page 2–21 • “RAIDset Planning Considerations,” page 2–22 • “Striped Mirrorset Planning Considerations,” page 2–24 • “Storageset Expansion Considerations,” page 2–26 • “Partition Planning Considerations,” page 2–26 Stripeset Planning Considerat
Planning Storage Configurations The relationship between the chunk size and the average request size determines if striping maximizes the request rate or the data-transfer rate. You can set the chunk size or use the default setting (see “Chunk Size,” page 2–30, for information about setting the chunk size). Figure 2–6 shows another example of a three-member RAID 0 stripeset. A major benefit of striping is that it balances the I/O load across all of the disk drives in the storageset.
Planning Storage Configurations • Striping does not protect against data loss. In fact, because the failure of one member is equivalent to the failure of the entire stripeset, the likelihood of losing data is higher for a stripeset than for a single disk drive. For example, if the mean time between failures (MTBF) for a single disk is l hour, then the MTBF for a stripeset that comprises N such disks is l/N hours.
Planning Storage Configurations Mirrorset Planning Considerations Mirrorsets (RAID 1) use redundancy to ensure availability, as illustrated in Figure 2–7. For each primary disk drive, there is at least one mirror disk drive. Thus, if a primary disk drive fails, its mirror drive immediately provides an exact copy of the data. Figure 2–8 shows a second example of a Mirrorset.
Planning Storage Configurations Keep these points in mind when planning mirrorsets • Data availability with a mirrorset is excellent but comes with a higher cost—you need twice as many disk drives to satisfy a given capacity requirement. If availability is your top priority, consider using dual-redundant controllers and redundant power supplies. • You can configure up to a maximum of 20 RAID 3/5 mirrorsets per controller or pair of dual-redundant controllers. Each mirrorset may contain up to 6 members.
RAIDset Planning Considerations
Planning Storage Configurations • A RAIDset must include at least 3 disk drives, but no more than 14. • A storageset should only contain disk drives of the same capacity. The controller limits the capacity of each member to the capacity of the smallest member in the storageset. Thus, if you combine 9 GB disk drives with 4 GB disk drives in the same storageset, you waste 5 GB of capacity on each 9 GB member.
Figure 2–10: Striped mirrorset (example 1) (data blocks A, B, and C are striped across three two-member mirrorsets: Mirrorset1 with Disk10000 and Disk20000, Mirrorset2 with Disk10100 and Disk20100, and Mirrorset3 with Disk10200 and Disk20200)
The failure of a single disk drive has no effect on the ability of the storageset to deliver data to the host. Under normal circumstances, a single disk drive failure has very little effect on performance.
Planning Storage Configurations Plan the mirrorset members, and plan the stripeset that will contain them. Review the recommendations in “Planning Considerations for Storageset,” page 2–18, and “Mirrorset Planning Considerations,” page 2–21. Storageset Expansion Considerations Storageset Expansion allows for the joining of two of the same kind of storage containers by concatenating RAIDsets, Stripesets, or individual disks, thereby forming a larger virtual disk which is presented as a single unit.
Planning Storage Configurations Defining a Partition Partitions are expressed as a percentage of the storageset or single disk unit that contains them: • Mirrorsets and single disk units—the controller allocates the largest whole number of blocks that are equal to or less than the percentage you specify. • RAIDsets and stripesets—the controller allocates the largest whole number of stripes that are less than or equal to the percentage you specify.
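As a sketch of how a partition is created and presented with the CLI (the container name R1, the percentage, the partition number, and the unit number are illustrative; see the ACS Version 8.7 CLI Reference Guide for the full syntax):
CREATE_PARTITION R1 SIZE=25
ADD UNIT D110 R1 PARTITION=1
SIZE is the percentage of the container to allocate to the partition; SHOW R1 lists the partition numbers that can then be given to ADD UNIT.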
Planning Storage Configurations The following sections describe how to enable/modify switches. They also contain a description of the major CLI command switches. Enabling Switches If you use SWCC to configure the device or storageset, you can set switches from SWCC during the configuration process, and SWCC automatically applies them to the storageset or device. See the SWCC online help for information about using SWCC.
Planning Storage Configurations • Replacement policy • Reconstruction policy • Remove/replace policy For details on the use of these switches refer to SET RAIDSET and SET RAIDset-name commands in the StorageWorks HSG80 Array Controller ACS Version 8.7 CLI Reference Guide.
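For example, a replacement policy and a reconstruction policy might be changed on an existing RAIDset as follows (a sketch; the storageset name R1 is illustrative, and the valid values for each switch are listed in the CLI Reference Guide):
SET R1 POLICY=BEST_PERFORMANCE
SET R1 RECONSTRUCT=FAST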
Planning Storage Configurations • Destroy/Nodestroy • Geometry Each of these switches is described in the following sections. NOTE: After initializing the storageset or disk drive, you cannot change these switches without reinitializing the storageset or disk drive. Chunk Size With ACS software, a parameter for chunk size (chunksize=default or n) on some storagesets can be set. However, unit performance may be negatively impacted if a non-default value is selected as the chunksize.
Figure 2–13: Large chunk size increases request rate (with a chunk size of 128K, or 256 blocks, requests A, B, C, and D each fall within a single chunk)
Large chunk sizes also tend to increase the performance of random reads and writes. StorageWorks recommends that you use a chunk size of 10 to 20 times the average request size, rounded to the closest prime number.
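As a worked example of that guideline: an average transfer of 8 KB is 16 blocks, so 10, 15, and 20 times the request size give 160, 240, and 320 blocks. The nearby prime numbers 157, 239, and 317 are the values that appear in the 8 KB row of Table 2–7 below.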
Table 2–7 shows a few examples of chunk size selection.
Table 2–7: Example Chunk Sizes
Transfer Size (KB) | Small Area of I/O Transfers | Unknown | Random Areas of I/O Transfers
2 | 41 | 59 | 79
4 | 79 | 113 | 163
8 | 157 | 239 | 317
Increasing Sequential Data Transfer Performance
RAID 0 and RAID 0+1 sets intended for high data transfer rates should use a relatively low chunk size (for example: 67 sectors).
Planning Storage Configurations Destroy/Nodestroy Specify whether to destroy or retain the user data and metadata when a disk is initialized after it has been used in a mirrorset or as a single-disk unit. NOTE: The DESTROY and NODESTROY switches are only valid for mirrorsets and striped mirrorsets. • DESTROY (default) overwrites the user data and forced-error metadata when a disk drive is initialized. • NODESTROY preserves the user data and forced-error metadata when a disk drive is initialized.
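These switches are given on the INITIALIZE command. A minimal sketch (the container names and chunk size are illustrative):
INITIALIZE DISK10300 NODESTROY
INITIALIZE R1 CHUNKSIZE=256
The first form preserves existing user data on a disk that had been a mirrorset member or single-disk unit; the second initializes a storageset with an explicit chunk size.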
Planning Storage Configurations Creating Storage Maps Configuring a subsystem will be easier if you know how the storagesets, partitions, and JBODs correspond to the disk drives in your subsystem. You can more easily see this relationship by creating a hardcopy representation, also known as a storage map. To make a storage map, fill out the templates provided in Appendix A as you add storagesets, partitions, and JBOD disks to the configuration and assign them unit numbers.
Planning Storage Configurations Example Storage Map - Model 4310R Disk Enclosure Table 2–8 shows an example of four Model 4310R disk enclosures (single-bus I/O).
Planning Storage Configurations Model 4310R Disk Enclosure Shelf 2 (single-bus) 3 4 5 6 7 8 9 10 SCSI ID 00 01 02 03 04 05 08 10 11 12 D1 S4 M5 D2 R3 D3 S5 D4 M7 Disk20800 Disk20500 Disk20400 Disk20300 Disk20200 Disk20100 DISK ID Disk20000 D100 D101 D102 D104 D106 D108 R1 S1 M3 S2 R2 S3 M1 Disk21200 2 Disk21100 1 Disk21000 Bay Model 4310R Disk Enclosure Shelf 3 (single-bus) 4 5 6 7 8 9 10 SCSI ID 00 01 02 03 04 05 08 10 11 12 D1 S4 M6 D2 R3 D3 S5 s
• Unit D104 is a 3-member stripeset named S2. S2 consists of Disk10300, Disk20300, and Disk30300.
• Unit D105 is a single (JBOD) disk named Disk40300.
• Unit D106 is a 3-member RAID 3/5 storageset named R2. R2 consists of Disk10400, Disk20400, and Disk30400.
• Unit D107 is a single (JBOD) disk named Disk40400.
• Unit D108 is a 4-member stripeset named S3. S3 consists of Disk10500, Disk20500, Disk30500, and Disk40500.
3 Preparing the Host System This chapter describes how to prepare your Tru64 UNIX host computer to accommodate the HSG80 controller storage subsystem.
Preparing the Host System CAUTION: Controller and disk enclosures have no power switches. Make sure the controller enclosures and disk enclosures are physically configured before turning the PDU on and connecting the power cords. Failure to do so can cause equipment damage. 1. Be sure the enclosures are empty before mounting them into the rack.
Preparing the Host System 4. Connect the six VHDCI UltraSCSI bus cables between the controller and disk enclosures as shown in Figure 3–1 for a dual bus system and Figure 3–2 for a single bus system. Note that the supported cable lengths are 1, 2, 3, 5, and 10 meters. 5. Connect the AC power cords from the appropriate rack AC outlets to the controller and disk enclosures. HSG80 ACS Solution Software Version 8.
Figure 3–1: Dual-Bus Enterprise Storage RAID Array Storage System (CXO7383A)
1 SCSI Bus 1 Cable   2 SCSI Bus 2 Cable   3 SCSI Bus 3 Cable   4 SCSI Bus 4 Cable   5 SCSI Bus 5 Cable   6 SCSI Bus 6 Cable   7 AC Power Inputs   8 Fibre Channel Ports
Figure 3–2: Single-Bus Enterprise Storage RAID Array Storage System (CXO7382A)
1 SCSI Bus 1 Cable   2 SCSI Bus 2 Cable   3 SCSI Bus 3 Cable   4 SCSI Bus 4 Cable   5 SCSI Bus 5 Cable   6 SCSI Bus 6 Cable   7 AC Power Inputs   8 Fibre Channel Ports
Preparing the Host System Making a Physical Connection To attach a host computer to the storage subsystem, install one or more host bus adapters into the computer. A Fibre Channel (FC) cable goes from the host bus adapter to an FC switch. Preparing to Install Host Bus Adapter Before installing the host bus adapter, perform the following steps: 1. Perform a complete backup of the entire system. 2. Shut down the computer system or perform a hot addition of the adapter based upon directions for that server.
Preparing the Host System • Create the partitions on the LUN using disklabel • Create a filesystem on the LUN • Mount the filesystem to be able to access it Creating Partitions on a LUN Using disklabel Create the partitions on a LUN by issuing a disklabel command. The disklabel command partitions the LUN for access by the Tru64 UNIX Operating System. Tru64 UNIX defines only partitions a, b, c, and g for the HSG80 controller.
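A minimal sketch of the disklabel step on V5.1x (the device name dsk3 is illustrative, and the disk type argument, shown here as HSG80, is an assumption that must match an entry your system recognizes; on V4.0x the rz-style device name is used instead):
# disklabel -rw dsk3 HSG80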
Preparing the Host System Creating a Filesystem on a LUN NOTE: The newfs command is given here as an example. For Advanced File System (ADVFS) and for making devices available for Logical Storage Manager (LSM), similar types of commands exist. For additional information, consult the related documentation.
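As a sketch for V5.1x, a UFS file system could be created on the c partition and mounted as follows (the device name dsk3 and the mount point are illustrative; V4.0x uses rz-style names such as /dev/rrzb17c and /dev/rzb17c):
# newfs /dev/rdisk/dsk3c
# mkdir /mnt1
# mount /dev/disk/dsk3c /mnt1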
Preparing the Host System The LUN is now accessible to the filesystem just as a disk device would be. The filesystem can not see the RAID functionality and number of physical devices attached to the HSG80 controller. This device appears as a single LUN or “disk” to the user as viewed by the filesystem.
File Utility
You can use the Tru64 UNIX file utility to determine if a Controller Unit can be accessed from the host. The unit that you want to test must already have a character mode device special file and the correct disk label. The following example uses the HSG80 unit D101 on SCSI Bus-2. Run the file command and specify the character mode device special file, in the format described below.
NOTE: SCSI-3 is NOT supported on V4.0x.
For V4.0G
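A representative invocation for this example (the device special file name is illustrative for a unit on bus 2, target 1, LUN 1; the numbers discussed in the list that follows appear in the output of this command):
# file /dev/rrzb17c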
Preparing the Host System • 19 is the major number • 150 represents the minor number • 2 is the SCSI host-side bus number • 0 is the drive number as listed in the Configuration File • 1 is the Controller Target ID • 1 is the LUN number If the only output that is returned from the file command is the major and minor number, then either the device is not answering or the device special file does not have the correct minor number.
Preparing the Host System The scu command, scan edt, polls all devices on the host-side SCSI buses. This allows you to show what devices are available from all host-side SCSI buses. The device special files do not have to exist for scu to see the devices. For example, scan SCSI bus 2, where your Enterprise Storage RAID Array is connected: For V4.0G and V5.1x # /sbin/scu scan edt bus 2 # /sbin/scu show edt bus 2 V4.
Preparing the Host System and to get attributes: # hwmgr -get attribute. See the UNIX online help for more information. iostat utility You can use the iostat utility to view performance statistics on Enterprise Storage RAID Array storage units. (Set your terminal screen to 132 columns before running iostat.) The output from iostat shows the number of devices (LUNs) that have been defined in the configuration file.
Preparing the Host System For V5.1x # iostat dsk3 s t Where: • rznn or dsk3 is the device name • The s is optional and denotes the amount of time, in seconds, between screen updates • The t is optional and denotes the total number of screen updates The output from iostat shows all devices that have device name rznn. The information for LUN 0 is in the first column, the information for LUN 1 is in the second column, and so forth.
Solution Software Upgrade Procedures
Use the following procedures for upgrades to your Solution Software. It is considered best practice to follow this order of procedures:
1. Perform backups of data prior to upgrade.
2. Verify operating system versions; upgrade operating systems to supported versions and patch levels.
3. Quiesce all I/O and unmount all file systems before proceeding.
4. Upgrade switch firmware.
5. Upgrade Solution Software.
3. Select HSG80 Controller for ACS85 new and click Next. Follow the instructions on the screen.
Installing Agent
1. Insert the Solution Software CD-ROM into the host computer.
2. Type one of the following commands at the command prompt, depending on your operating system version.
For Tru64 UNIX Version 4.0x, type:
# mount -r -t cdfs -o rrip /dev/rz6c /mnt
Substitute for rz6c, if necessary, for your CD-ROM.
For Tru64 UNIX Version 5.
d. From the SWCC Agent Configuration Utility, choose 7) to disable the Agent (Steamd).
2. To delete the Agent configuration files, type the following:
# setld -d SWCCXXX
At the prompt:
Answer Y - Your configuration files will be deleted.
Answer N - The old configuration files will be kept.
3. To install the new Agent, follow the steps listed in "Installing Agent" above.
NOTE: If in a Cluster, for Version 8.7, the Agent may be controlled by the caa daemon.
Installing the Client
NOTE: You must have the SNMP service installed on your client computer before the installation.
1. Insert the Solution Software CD-ROM into a computer running Windows 2000 or Windows NT 4.0 with Service Pack 4 or later.
2. Using Microsoft Windows Explorer, go to the SWCC directory on the CD-ROM and double-click setup.exe. The SWCC Setup window will appear.
3. Select HSG80 Controller for ACS85 new and click Next. Follow the instructions on the screen.
Upgrading the Agent and ACS on Standalone Servers
1. For any Tru64 UNIX version supported, stop the Agent by using the following steps:
a. Stop I/O to your FC drives.
b. Unmount your FC drives using the umount command.
c. Stop the Agent by typing the following commands:
# cd /usr/opt/SWCCXXX/scripts
# execute swcc_config
d. From the SWCC Agent Configuration Utility, choose 7) to disable the Agent (Steamd).
To rescan the bus on Tru64 Version 4.x systems, type the following:
# scu scan edt
# scu show edt
9. Restart I/O (applications).
10. Start Client.
New Features, ACS 8.7 for Tru64
The following are new features implemented in ACS 8.7.
Preparing the Host System The lock is maintained in the failover information (fi) section of each controller's NV. When the state of the lock is changed on one controller, the other controller is updated as well. The existing CLI command to ADD CONN is not affected by the state of the lock.
Example of Host Connection Table Unlock: (new output shown in bold)
AP_Bot> show this
Controller:
    HSG80 (C) DEC CX00000001 Software V87 Hardware 0000
    NODE_ID = 5000-1FE1-FF00-0090
    ALLOCATION_CLASS = 1
    SCSI_VERSION = SCSI-3
    Configured for dual-redundancy with ZG02804912
        In dual-redundant configuration
    Device Port SCSI address 6
    Time: 10-SEP-2001 15:45:54
    Command Console LUN is lun 0 (IDENTIFIER = 99)
    Host Connection Table is NOT locked
Host PORT_1:
    Reported PORT_ID = 5000-1FE
Example of Host Connection Table Locked: (new output shown in bold)
AP_Bot> show this
Controller:
    HSG80 (C) DEC CX00000001 Software XC21P-0, Hardware 0000
    NODE_ID = 5000-1FE1-FF00-0090
    ALLOCATION_CLASS = 1
    SCSI_VERSION = SCSI-3
    Configured for dual-redundancy with ZG02804912
        In dual-redundant configuration
    Device Port SCSI address 6
    Time: 10-SEP-2001 15:48:24
    Command Console LUN is lun 0 (IDENTIFIER = 99)
    Host Connection Table is LOCKED
Host PORT_1:
    Reported PORT_ID = 5000-1F
The state of the connection can be displayed using:
CLI> SHOW CONN
<<< LOCKED >>> appears in the title area when the connection table is locked. If unlocked, or not supported (HOST_FC only), the title area looks the same as it did for ACS version 8.6. The full switch displays the rejected hosts, with an index.
Adding Rejected Host Connections to Locked Host Connection Table
With ACS version 8.7
Preparing the Host System • To Add a new Host to a SAN - A new host is added to the fabric that needs connectivity to the HSG80. Attempts to login are rejected because the connection table is locked. The system administrator is called, and manually adds an entry for the new host by creating a new connection from the rejected host. • To Delete a Host - While the connection table is locked, delete the connection for the selected host.
Preparing the Host System Display Enabled Management Agents The following command displays a list of the systems currently enabled to perform management functions.
Preparing the Host System In the event that all connections are enabled the display appears as follows.
Preparing the Host System Linking WWIDs for Snap and Clone Units LUN WWIDs (World Wide Identifiers) for snap and clone units are different each time they are created. This causes more system data records to keep track of the WWIDs as well as script changes at the customer sites. To eliminate this issue, a linked WWID scheme has been created, which keeps the WWIDs of these units constant each time they are created.
Preparing the Host System Implementation Notes Add Snap with Linked WWID - The user has a script that runs every night to create a snapshot, run a backup to tape from the snapshot, then delete the snapshot. Each time this is done, a new WWID is allocated. When the operating system runs out of room for all of these “orphaned” WWIDs, the host system must be rebooted.
Preparing the Host System SMART Error Eject When a SMART notification is received from a device, it is currently treated as a soft error - the notification is passed to the host and operations continue. A new CLI switch at the controller level changes this behavior. When this switch is enabled, drives in a normalized and redundant set that report a smart error are removed from that set.
Preparing the Host System CLI output - feature disabled: AP_TOP> show this Controller: HSG80 ZG02804912 Software V87S-0, Hardware E12 NODE_ID = 5000-1FE1-FF00-0090 ALLOCATION_CLASS = 1 SCSI_VERSION = SCSI-3 Configured for MULTIBUS_FAILOVER with ZG02804288 In dual-redundant configuration Device Port SCSI address 7 Time: 22-NOV-2001 01:14:32 Command Console LUN is lun 0 (IDENTIFIER = 99) Host Connection Table is NOT locked Smart Error Eject Disabled Host PORT_1: Reported PORT_ID = 5000-1FE1-FF00-0093 POR
Preparing the Host System Battery: NOUPS FULLY CHARGED Expires: WARNING: UNKNOWN EXPIRATION DATE! WARNING: AN UNKNOWN NUMBER OF DEEP DISCHARGES HAVE OCCURRED! 3–32 HSG80 ACS Solution Software Version 8.
Preparing the Host System CLI Output - feature enabled: AP_TOP> show this Controller: HSG80 ZG02804912 Software V87S-0, Hardware E12 NODE_ID = 5000-1FE1-FF00-0090 ALLOCATION_CLASS = 1 SCSI_VERSION = SCSI-3 Configured for MULTIBUS_FAILOVER with ZG02804288 In dual-redundant configuration Device Port SCSI address 7 Time: 22-NOV-2001 01:17:47 Command Console LUN is lun 0 (IDENTIFIER = 99) Host Connection Table is NOT locked Smart Error Eject Enabled Host PORT_1: Reported PORT_ID = 5000-1FE1-FF00-0093 PORT_
    NOUPS FULLY CHARGED Expires:
    WARNING: UNKNOWN EXPIRATION DATE!
    WARNING: AN UNKNOWN NUMBER OF DEEP DISCHARGES HAVE OCCURRED!
Error Threshold for Drives
A new limit for drive errors can be set. Once the limit is reached, the drive is removed from any redundant sets to which it belongs and put into the failed set. Errors counted are medium and recovered errors - there is no need to add hardware errors to this count, as the drive fails immediately if a hardware error is encountered.
4 Installing and Configuring HSG Agent StorageWorks Command Console (SWCC) enables real-time configuration of the storage environment and permits the user to monitor and configure the storage connected to the HSG80 controller.
Installing and Configuring HSG Agent To receive information about the devices connected to your HSG80 controller over a TCP/IP network, you must install the Agent on a computer that is connected to a controller. The Agent can also be used as a standalone application without Client. In this mode, which is referred to as Agent only, Agent monitors the status of the subsystem and provides local and remote notification in the event of a failure. A subsystem includes the HSG80 controller and its devices.
Table 4–2: Installation and Configuration Overview (Continued)
Step 3: Verify that there is a LUN for communications. This can be either the CCL or a LUN that was created with the CLI. See “What is the Command Console LUN?” on page 1–11 in Chapter 1.
Step 4: Install the Agent (TCP/IP network connections) on a system connected to the HSG80 controller. See Chapter 3 for agent installation.
Figure 4–1: An example of a network connection (CXO7240A)
1 Agent system (has the Agent software)   2 TCP/IP Network   3 Client system (has the Client software)   4 Fibre Channel cable   5 Hub or switch   6 HSG80 controller and its device subsystem   7 Servers
Installing and Configuring HSG Agent Before Installing the Agent The Agent requires the minimum system requirements, as defined in the release notes for your operating system. The program is designed to operate with the Client version 2.5 on Windows 2000 or Windows NT. 1. Login as root (superuser). Agent installations on Tru64 UNIX must be done locally. Do not attempt to install the Agent over the network. 2. Remove previous versions of the Agent from your computer. 3. Read the release notes.
(Substituting cdrom0c if necessary for your CD-ROM)
4. Change directories on the CD-ROM by entering:
# cd /mnt/agent
CAUTION: The version of Compaq Tru64 UNIX that you are using, either V4.0x or V5.1x, will determine which Device Special File Name format you will need to enter.
5. To run the installation program, enter the following at the command prompt:
# setld -l
NOTE: The -l is a lowercase L. You are asked if you want to install the listed subsets.
Table 4–3: Client System Access Options
Options | SWCC Function
1 = Detailed Status | Can use the Client to open a Storage Window, but you cannot make modifications in that window
2 = Configuration and Status | Can use the Client to make changes in a Storage Window to modify a subsystem configuration
10. Press the Enter key. A menu for selecting a client system notification scheme appears:
Installing and Configuring HSG Agent You are asked for a password, which is required to do configurations within the Client software. If an old password is found, you are asked if you want to use it. Enter Subsystem Information 15. Enter your case-sensitive password that has 4 to 16 characters, and press the Enter key. You are asked to retype the password. 16. Retype the password and press the Enter key. Once the password has been entered, the system scans for subsystems.
Table 4–5: Definitions of Email Notification Options
Term | Definition
Information | Provides messages, but they do not indicate that something is broken. Examples of informational messages are the following: an Agent startup message or a message saying that an error has been resolved.
23. Press the Enter key. The software asks if the displayed information is correct.
24. If the displayed information is correct, select option y and press the Enter key.
Reconfiguring the Agent
You can change your configuration using the SWCC Agent Configuration menu. To access this menu, enter the following command:
# /usr/opt/SWCC520/scripts/swcc_config
The following is an example of the menu:
SWCC Agent Configuration Utility
---------------------------------------
Options Available Are:
1) Add/Delete Client PC Information.
2) Modify Storage Subsystem Information.
Installing and Configuring HSG Agent Table 4–6: Information Needed to Configure Agent Term/Procedure Description Adding a Client system entry For a client system to receive updates from the Agent, you must add it to the Agent’s list of client system entries. The Agent will only send information to client system entries that are on this list. In addition, adding a client system entry allows you to access the Agent system from the Navigation Tree on that Client system.
Installing and Configuring HSG Agent Table 4–6: Information Needed to Configure Agent (Continued) Term/Procedure Description Client system notification options 0 = No Error Notification−No error notification is provided over network. Note: For all of the client system notification options, local notification is available through an entry in the system error log file and Email (provided that Email notification in PAGEMAIL.COM has not been disabled).
Installing and Configuring HSG Agent Table 4–6: Information Needed to Configure Agent (Continued) Term/Procedure Password Description It must be a text string that has 4 to 16 characters. It can be entered from the client system to gain configuration access. You can change it by accessing the SWCC Agent Configuration menu. Removing the Agent CAUTION: Do not uninstall the Agent if you want to preserve configuration information.
5 FC Configuration Procedures This chapter describes procedures to configure a subsystem that uses Fibre Channel (FC) fabric topology. In fabric topology, the controller connects to its hosts through switches.
FC Configuration Procedures Establishing a Local Connection A local connection is required to configure the controller until a command console LUN (CCL) is established using the CLI. Communication with the controller can be through the CLI or SWCC. The maintenance port, shown in Figure 5–1, provides a way to connect a maintenance terminal. The maintenance terminal can be an EIA-423 compatible terminal or a computer running a terminal emulator program. The maintenance port accepts a standard RS-232 jack.
FC Configuration Procedures Setting Up a Single Controller Power On and Establish Communication 1. Connect the computer or terminal to the controller as shown in Figure 5–1. The connection to the computer is through the COM1 or COM2 port. 2. Turn on the computer or terminal. 3. Apply power to the storage subsystem. 4. Verify that the computer or terminal is configured as follows: — 9600 baud — 8 data bits — 1 stop bit — no parity — no flow control 5. Press Enter.
FC Configuration Procedures Figure 5–2: Single controller cabling (CXO6881B). Key: 1 Controller; 2 Host port 1; 3 Host port 2; 4 Cable from the switch to the host Fibre Channel adapter; 5 FC switch. Configuring a Single Controller Using CLI Configuring a single controller using the CLI involves the following processes: • Verify the Node ID and Check for Any Previous Connections. • Configure Controller Settings. • Restart the Controller. • Set Time and Verify all Commands.
FC Configuration Procedures The node ID is located in the third line of the SHOW THIS result: HSG80> SHOW THIS Controller: HSG80 ZG80900583 Software V8.7, Hardware E11 NODE_ID = 5000-1FE1-0001-3F00 ALLOCATION_CLASS = 0 If the node ID is present, go to step 5. If the node ID is all zeroes, enter node ID and checksum, which are located on a sticker on the controller enclosure.
FC Configuration Procedures 6. Tru64 UNIX V5.1x can use either SCSI-2 or SCSI-3. Assign an identifier for the communication LUN (also called the command console LUN, or CCL). The CCL must have a unique identifier that is a decimal number in the range 1 to 32767, and which is different from the identifiers of all units. Use the following syntax: SET THIS IDENTIFIER=N The identifier must be unique among all the controllers attached to the fabric within the specified allocation class. 7.
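For example, assuming an arbitrary, otherwise unused identifier of 88 (any unique decimal value in the range described above will do), the setting might be entered as:
SET THIS IDENTIFIER=88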
FC Configuration Procedures 3. Set up any additional optional controller settings, such as changing the CLI prompt. See the SET THIS CONTROLLER/OTHER CONTROLLER command in the StorageWorks HSG80 Array Controller ACS Version 8.7 CLI Reference Guide for the format of optional settings. 4. Verify that all commands have taken effect. Use the following command: SHOW THIS Verify node ID, allocation class, SCSI version, failover mode, identifier, and port topology. HSG80 ACS Solution Software Version 8.
FC Configuration Procedures The following sample is a result of a SHOW THIS command, with the areas of interest in bold. Controller: HSG80 ZG94214134 Software V8.
FC Configuration Procedures 5. Turn on the switches, if not done previously. If you want to communicate with the Fibre Channel switches through Telnet, set an IP address for each switch. See the manuals that came with the switches for details. Plug in the FC Cable and Verify Connections 6. Plug the Fibre Channel cable from the first host bus adapter into the switch. Enter the SHOW CONNECTIONS command to view the connection table: SHOW CONNECTIONS 7.
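New connections are given default names of the form !NEWCONxx. If you prefer descriptive names such as those used in the Chapter 6 example, you can rename a connection with the RENAME command; a sketch, assuming SHOW CONNECTIONS reported the new connection as !NEWCON01 and that you want to call it RED1B1 (both names are illustrations only):
RENAME !NEWCON01 RED1B1
SHOW CONNECTIONS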
FC Configuration Procedures Your host computer should report one CCL device special file for each HSG80 configured. Setting Up a Controller Pair Power Up and Establish Communication 1. Connect the computer or terminal to the controller as shown in Figure 5–1. The connection to the computer is through the COM1 or COM2 ports. 2. Turn on the computer or terminal. 3. Apply power to the storage subsystem. 4.
FC Configuration Procedures Figure 5–3 shows failover cabling for a controller pair, with one HBA per server and the HSG80 controllers in transparent failover mode. Figure 5–3: Controller pair failover cabling (CXO6887B). Key: 1 Controller A; 2 Controller B; 3 Host port 1; 4 Host port 2; 5 Cable from the switch to the host FC adapter; 6 FC switch. Configuring a Controller Pair Using CLI Configuring a controller pair using the CLI involves the following processes: • Configure Controller Settings.
FC Configuration Procedures The node ID is located in the third line of the SHOW THIS result: HSG80> show this Controller: HSG80 ZG80900583 Software V8.7, Hardware E11 NODE_ID = 5000-1FE1-0001-3F00 ALLOCATION_CLASS = 0 If the node ID is present, go to step 5. If the node ID is all zeroes, enter the node ID and checksum, which are located on a sticker on the controller enclosure.
FC Configuration Procedures 6. Set the topology for the controller. If both ports are used, set topology for both ports: SET THIS PORT_1_TOPOLOGY=FABRIC SET THIS PORT_2_TOPOLOGY=FABRIC If the controller is not factory-new, it may have another topology set, in which case these commands will result in an error message.
FC Configuration Procedures 12. Verify node ID, allocation class, SCSI version, failover mode, identifier, and port topology. The following display is a sample result of a SHOW THIS command, with the areas of interest in bold. Controller: HSG80 ZG94214134 Software V8.
FC Configuration Procedures 13. Turn on the switches if not done previously. If you want to communicate with the FC switches through Telnet, set an IP address for each switch. See the manuals that came with the switches for details. Plug in the FC Cable and Verify Connections 14. Plug the FC cable from the first host adapter into the switch. Enter a SHOW CONNECTIONS command to view the connection table: SHOW CONNECTIONS The first connection will have one or more entries in the connection table.
FC Configuration Procedures Verify Installation To verify installation for your Tru64 UNIX host, enter one of the following commands: For V4.0G Use file /dev/rrz*c | grep HSG80 For V5.1x Use file /dev/cport/* Your host computer should report one CCL device special file for each HSG80 configured. Configuring Devices The disks on the device bus of the HSG80 can be configured manually or with the CONFIG utility. The CONFIG utility is easier.
FC Configuration Procedures • “Configuring a Mirrorset” on page 5–18 • “Configuring a RAIDset” on page 5–19 • “Configuring a Striped Mirrorset” on page 5–19 • “Configuring a Single-Disk Unit (JBOD)” on page 5–20 • “Configuring a Partition” on page 5–20 Figure 5–4: Storage container types (CXO6677A). The container types shown are single devices (JBOD), stripesets (R0), mirrorsets (R1), striped mirrorsets (R0+1), RAIDsets (R3/5), storagesets, and partitions. Configuring a Stripeset 1.
FC Configuration Procedures 4. Assign the stripeset a unit number to make it accessible by the hosts. See “Assigning Unit Numbers and Unit Qualifiers” on page 5–22. For example, the following commands create Stripe1, a stripeset consisting of three disks (DISK10000, DISK20000, and DISK30000) with a chunk size of 128: ADD STRIPESET STRIPE1 DISK10000 DISK20000 DISK30000 INITIALIZE STRIPE1 CHUNKSIZE=128 SHOW STRIPE1 Configuring a Mirrorset 1.
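The mirrorset steps parallel the stripeset procedure above. A minimal sketch, assuming a two-member mirrorset named MIRR1 built from DISK10100 and DISK20100 and presented as unit D3 (all of these names are illustrative, not values taken from this guide):
ADD MIRRORSET MIRR1 DISK10100 DISK20100
INITIALIZE MIRR1
SHOW MIRR1
ADD UNIT D3 MIRR1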
FC Configuration Procedures Configuring a RAIDset 1. Create the RAIDset by adding its name to the controller's list of storagesets and by specifying the disk drives it contains. Optionally, you can specify RAIDset switch values: ADD RAIDSET RAIDSET-NAME DISKNNNNN DISKNNNNN DISKNNNNN SWITCHES NOTE: See the ADD RAIDSET command in the StorageWorks HSG80 Array Controller ACS Version 8.7 CLI Reference Guide for a description of the RAIDset switches. 2.
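By analogy with the stripeset example, the remaining RAIDset steps might look like the following sketch (the RAIDset name RAID1, its member disks, and the unit number D103 are assumptions for illustration):
ADD RAIDSET RAID1 DISK10000 DISK20000 DISK30000
INITIALIZE RAID1
SHOW RAID1
ADD UNIT D103 RAID1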
FC Configuration Procedures See “Specifying Initialization Switches” on page 2–29 for a description of the initialization switches. 4. Verify the striped mirrorset configuration: SHOW STRIPESET-NAME 5. Assign the striped mirrorset a unit number to make it accessible by the hosts. See “Assigning Unit Numbers and Unit Qualifiers” on page 5–22.
FC Configuration Procedures See “Specifying Initialization Switches” on page 2–29 for a description of the initialization switches. 2. Create each partition in the storageset or disk drive by indicating the partition's size. Also specify any desired switch settings: CREATE_PARTITION STORAGESET-NAME SIZE=N SWITCHES or CREATE_PARTITION DISK-NAME SIZE=N SWITCHES where N is the percentage of the disk drive or storageset that will be assigned to the partition.
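For example, assuming a storageset named R1 that you want to divide into partitions of 25 percent each (the name and sizes are illustrations only), you would repeat the command for each partition and then display the result:
CREATE_PARTITION R1 SIZE=25
CREATE_PARTITION R1 SIZE=25
SHOW R1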
FC Configuration Procedures Assigning Unit Numbers and Unit Qualifiers Each storageset, partition, or single (JBOD) disk must be assigned a unit number for the host to access. As the units are added, their properties can be specified through the use of command qualifiers, which are discussed in detail under the ADD UNIT command in the StorageWorks HSG80 Array Controller ACS Version 8.7 CLI Reference Guide.
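As a brief illustration (the container names STRIPE1 and R1, the partition number, and the unit numbers are assumptions), a whole storageset and a single partition might be presented to the host as follows:
ADD UNIT D0 STRIPE1
ADD UNIT D1 R1 PARTITION=1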
FC Configuration Procedures Preferring Units In multiple-bus failover mode, individual units can be preferred to a specific controller. For example, to prefer unit D102 to “this controller,” use the following command: SET D102 PREFERRED_PATH=THIS RESTART commands must be issued to both controllers for this command to take effect: RESTART OTHER_CONTROLLER RESTART THIS_CONTROLLER NOTE: The controllers need to restart together for the preferred settings to take effect.
FC Configuration Procedures • To add one new disk drive to the list of known devices, use the following syntax: ADD DISK DISKNNNNN P T L • To add several new disk drives to the list of known devices, enter the following command: RUN CONFIG Adding a Disk Drive to the Spareset The spareset is a collection of spare disk drives that are available to the controller should it need to replace a failed member of a RAIDset or mirrorset.
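For example, to place DISK60300 into the spareset and confirm the result (DISK60300 is an arbitrary illustration; use any eligible disk in your subsystem):
ADD SPARESET DISK60300
SHOW SPARESET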
FC Configuration Procedures Enabling Autospare With AUTOSPARE enabled on the failedset, any new disk drive that is inserted into the PTL location of a failed disk drive is automatically initialized and placed into the spareset. If initialization fails, the disk drive remains in the failedset until you manually delete it from the failedset.
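A minimal sketch of enabling, and later disabling, this behavior:
SET FAILEDSET AUTOSPARE
SET FAILEDSET NOAUTOSPARE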
FC Configuration Procedures Displaying the Current Switches To display the current switches for a storageset or single-disk unit, enter a SHOW command, specifying the FULL switch: SHOW STORAGESET-NAME or SHOW DEVICE-NAME NOTE: FULL is not required when showing a particular device. It is used when showing all devices, for example, SHOW DEVICES FULL.
FC Configuration Procedures Verifying Storage Configuration from Host This section briefly describes how to verify that multiple paths exist to virtual disk units under Tru64 UNIX V5.1x. After configuring units (virtual disks) through either the CLI or SWCC, access the new storage by using one of the following methods: • Issue the following commands to rescan the bus: # hwmgr -scan scsi # dsfmgr -k • Restart the host. After the host restarts, verify that the disk is correctly presented to the host.
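One quick check from a V5.1x host (a sketch; the exact output depends on your configuration) is to list the hardware devices and look for HSG80 entries:
# hwmgr -view devices | grep HSG80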
6 Using CLI for Configuration This chapter presents an example of how to configure a storage subsystem using the Command Line Interpreter (CLI). The CLI configuration example shown assumes: • A normal, new controller pair, which includes: — NODE ID set — No previous failover mode — No previous topology set • Full array with no expansion cabinet • PCMCIA cards installed in both controllers A storage subsystem example is shown in Figure 6–1.
Using CLI for Configuration Figure 6–1 shows an example storage system map for the BA370 enclosure. Details on building your own map are described in Chapter 2. Templates to help you build your storage map are supplied in Appendix A.
Using CLI for Configuration TRU64_UNIX hosts. Port 1 link is separate from port 2 link (that is, ports 1 of both controllers are on one loop or fabric, and port 2 of both controllers are on another); therefore, each adapter has two connections.
Using CLI for Configuration Figure 6–4: Example, logical or virtual disks comprised of storagesets (CXO7110B), showing hosts "RED," "GREY," and "BLUE" and units D1, D0, D2, D101, D102, and D120. CLI Configuration Example Text conventions used in this example are listed below: • Text in italics indicates an action you take. • Text in THIS FORMAT indicates a command you type. Be certain to press Enter after each command. • Text enclosed within a box indicates information that is displayed by the CLI interpreter.
Using CLI for Configuration
SET THIS SCSI_VERSION=SCSI-3
SET THIS PORT_1_TOPOLOGY=FABRIC
SET THIS PORT_2_TOPOLOGY=FABRIC
SET OTHER PORT_1_TOPOLOGY=FABRIC
SET OTHER PORT_2_TOPOLOGY=FABRIC
SET THIS ALLOCATION_CLASS=0
RESTART OTHER
RESTART THIS
SET THIS TIME=10-Mar-2001:12:30:34
RUN FRUTIL
Do you intend to replace this controller's cache battery? Y/N [Y]
Y
Plug serial cable from maintenance terminal into bottom controller.
NOTE: Bottom controller (B) becomes “this” controller.
Using CLI for Configuration NOTE: Connection table sorts alphabetically. Connection Name Operating System RED1A1 TRU64_UNIX Controll er Port OTHER 1 HOST_ID=XXXX-XXXX-XXXX-XXXX RED1B1 TRU64_UNIX THIS HOST_ID=XXXX-XXXX-XXXX-XXXX Address Status XXXXX OL other X Unit Offset 0 ADAPTER_ID=XXXX-XXXX-XXXX-XX XX 1 XXXXX X OL this 0 ADAPTER_ID=XXXX-XXXX-XXXX-XX XX Mark or tag both ends of Fibre Channel cables. Plug in the Fibre Channel cable from the second adapter in host “RED.
Using CLI for Configuration Connection Name Operating System Controlle r !NEWCON0 TRU64_U 2 NIX THIS Port Address Status Unit Offset 2 XXXXXX OL this 0 HOST_ID=XXXX-XXXX-XXXX-XX ADAPTER_ID=XXXX-XXXX-XXXX-XXX XX X !NEWCON0 TRU64_U OTHER 3 NIX 2 XXXXXX OL other 0 HOST_ID=XXXX-XXXX-XXXX-XX ADAPTER_ID=XXXX-XXXX-XXXX-XXX XX X RED1A1 TRU64_U OTHER NIX 1 XXXXXX OL other 0 ...
Using CLI for Configuration Connection Name Operating System Controll er RED1A1 TRU64_U OTHER NIX Port 1 Address Status XXXXX OL other X Unit Offset 0 HOST_ID=XXXX-XXXX-XXXX-XX ADAPTER_ID=XXXX-XXXX-XXXX-XX XX XX RED1B1 TRU64_U NIX THIS 1 XXXXX X OL this 0 HOST_ID=XXXX-XXXX-XXXX-XX ADAPTER_ID=XXXX-XXXX-XXXX-XX XX XX RED2A2 TRU64_U OTHER NIX 2 XXXXX OL other X 0 HOST_ID=XXXX-XXXX-XXXX-XX ADAPTER_ID=XXXX-XXXX-XXXX-XX XX XX RED2B2 TRU64_U NIX THIS 2 XXXXX X OL this 0 HOST_ID=XXXX-XX
Using CLI for Configuration Connection Name Operating System Controll er GREY1A1 TRU64_U OTHER NIX Port 1 Address Status XXXXX OL other X Unit Offset 0 HOST_ID=XXXX-XXXX-XXXX-XX ADAPTER_ID=XXXX-XXXX-XXXX-XX XX XX GREY1B1 TRU64_U NIX THIS 1 XXXXX X OL this 0 HOST_ID=XXXX-XXXX-XXXX-XX ADAPTER_ID=XXXX-XXXX-XXXX-XX XX XX GREY2A2 TRU64_U OTHER NIX 2 XXXXX OL other X 0 HOST_ID=XXXX-XXXX-XXXX-XX ADAPTER_ID=XXXX-XXXX-XXXX-XX XX XX GREY2B2 TRU64_U NIX THIS 2 XXXXX X OL this 0 HOST_ID=XXX
Using CLI for Configuration
RED1A1           TRU64_UNIX        OTHER       1     XXXXXX   OL other  0
    HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED1B1           TRU64_UNIX        THIS        1     XXXXXX   OL this   0
    HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED2A2           TRU64_UNIX        OTHER       2     XXXXXX   OL other  0
    HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED2B2           TRU64_UNIX        THIS        2     XXXXXX   OL this   0
    HOST_ID=XXXX-XXXX-XXXX-XXXX  ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
Using CLI for Configuration
RUN CONFIG
ADD RAIDSET R1 DISK10000 DISK20000 DISK30000 DISK40000 DISK50000 DISK60000
INITIALIZE R1
ADD UNIT D102 R1 DISABLE_ACCESS_PATH=ALL
SET D102 ENABLE_ACCESS_PATH=(RED1A1, RED1B1, RED2A2, RED2B2)
ADD RAIDSET R2 DISK10100 DISK20100 DISK30100 DISK40100 DISK50100 DISK60100
INITIALIZE R2
ADD UNIT D120 R2 DISABLE_ACCESS_PATH=ALL
SET D120 ENABLE_ACCESS_PATH=(BLUE1A1, BLUE1B1, BLUE2A2, BLUE2B2)
ADD MIRRORSET M1 DISK10200 DISK20200
ADD MIRRORSET M2 DISK30200 DISK40200
ADD STRIPESE
7 Backing Up, Cloning, and Moving Data This chapter includes the following topics: • “Backing Up Subsystem Configurations,” page 7–1 • “Creating Clones for Backup,” page 7–2 • “Moving Storagesets,” page 7–5 Backing Up Subsystem Configurations The controller stores information about the subsystem configuration in its nonvolatile memory. This information could be lost if the controller fails or when you replace a module in the subsystem.
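One simple way to keep a record of the configuration, assuming you capture the terminal session to a log file, is to display the full subsystem information from the CLI:
SHOW THIS_CONTROLLER FULL
SHOW DEVICES FULL
SHOW STORAGESETS FULL
SHOW UNITS FULL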
Backing Up, Cloning, and Moving Data Creating Clones for Backup Use the CLONE utility to duplicate the data on any unpartitioned single-disk unit, stripeset, mirrorset, or striped mirrorset in preparation for backup. When the cloning operation is complete, you can back up the clones rather than the storageset or single-disk unit, which can continue to service its I/O load. When you are cloning a mirrorset, CLONE does not need to create a temporary mirrorset.
Backing Up, Cloning, and Moving Data Use the following steps to clone a single-disk unit, stripeset, or mirrorset: 1. Establish a connection to the controller that accesses the unit you want to clone. 2. Start CLONE using the following command: RUN CLONE 3. When prompted, enter the unit number of the unit you want to clone. 4. When prompted, enter a unit number for the clone unit that CLONE will create. 5.
Backing Up, Cloning, and Moving Data The following example shows the commands you would use to clone storage unit D98. The clone command terminates after it creates storage unit D99, a clone or copy of D98.
RUN CLONE
CLONE LOCAL PROGRAM INVOKED
UNITS AVAILABLE FOR CLONING:
98
ENTER UNIT TO CLONE? 98
CLONE WILL CREATE A NEW UNIT WHICH IS A COPY OF UNIT 98.
ENTER THE UNIT NUMBER WHICH YOU WANT ASSIGNED TO THE NEW UNIT? 99
THE NEW UNIT MAY BE ADDED USING ONE OF THE FOLLOWING METHODS:
1.
Backing Up, Cloning, and Moving Data USE AVAILABLE DEVICE DISK20300(SIZE=832317) FOR MEMBER DISK10000(SIZE=832317) (Y,N) [Y]? Y MIRROR DISK10000 C_MB SET C_MB NOPOLICY SET C_MB MEMBERS=2 SET C_MB REPLACE=DISK20300 COPY IN PROGRESS FOR EACH NEW MEMBER. PLEASE BE PATIENT... . .
Backing Up, Cloning, and Moving Data CAUTION: Never initialize any container or this procedure will not protect data in the storageset. Use the following procedure to move a storageset, while maintaining the data the storageset contains: 1. Show the details for the storageset you want to move. Use the following command: SHOW STORAGESET-NAME 2. Label each member with its name and PTL location.
Backing Up, Cloning, and Moving Data 8. Recreate the storageset by adding its name to the controller's list of valid storagesets and by specifying the disk drives it contains. (Although you have to recreate the storageset from its original disks, you do not have to add the storagesets in their original order.) Use the following syntax to recreate the storageset: ADD STORAGESET-NAME DISK-NAME DISK-NAME 9. Represent the storageset to the host by giving it a unit number the host can recognize.
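As a compact illustration of steps 8 and 9 (the mirrorset name MIRR1, its member disks, and unit number D4 are assumptions), recreating and re-presenting a moved mirrorset might look like the following; note that, as the caution above states, the storageset is not initialized:
ADD MIRRORSET MIRR1 DISK10300 DISK20300
ADD UNIT D4 MIRR1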
A Subsystem Profile Templates This appendix contains storageset profiles to copy and use to create your profiles. It also contains an enclosure template to use to help keep track of the location of devices and storagesets in your shelves. Four (4) templates will be needed for the subsystem. NOTE: The storage map templates for the Model 4310R and Model 4214R or 4314R reflect the physical location of the disk enclosures in the rack.
Subsystem Profile Templates Storageset Profile Type of Storageset: _____ Mirrorset __X_ RAIDset _____ Stripeset _____ Striped Mirrorset ____ JBOD Storageset Name Disk Drives Unit Number Partitions: Unit # Unit # Unit # Unit # Unit # Unit # Unit # Unit # RAIDset Switches: Reconstruction Policy ___Normal (default) Reduced Membership __ _No (default) Replacement Policy ___Best performance (default) ___Fast ___Yes, missing: ___Best fit ___None Mirrorset Switches: Replacement Policy Copy
Subsystem Profile Templates Unit Switches: Caching Read caching__________ Read-ahead caching_____ Write-back caching______ Write-through caching____ Access by following hosts enabled _________________________________________________ ___________ _________________________________________________ ___________ _________________________________________________ ___________ _________________________________________________ ___________ HSG80 ACS Solution Software Version 8.
Subsystem Profile Templates Storage Map Template 1 for the BA370 Enclosure Use this template for: • BA370 single-enclosure subsystems • first enclosure of multiple BA370 enclosure subsystems 1 2 Port 3 4 5 6 Power Supply Power Supply 3 D10300 D20300 D30300 D40300 D50300 D60300 Power Supply Power Supply 2 D20200 D30200 D40200 D50200 Targets D10200 D60200 Power Supply Power Supply 1 D10100 D20100 D30100 D40100 D50100 D60100 Power Supply Power Supply 0 D10000 A–4 D20000
Subsystem Profile Templates Storage Map Template 2 for the second BA370 Enclosure Use this template for the second enclosure of multiple BA370 enclosure subsystems.
Subsystem Profile Templates Storage Map Template 3 for the third BA370 Enclosure Use this template for the third enclosure of multiple BA370 enclosure subsystems.
Subsystem Profile Templates Storage Map Template 4 for the Model 4214R Disk Enclosure Use this template for a subsystem with a three-shelf Model 4214R disk enclosure (single-bus). You can have up to six Model 4214R disk enclosures per controller shelf.
Subsystem Profile Templates A–8 Bay 1 2 3 4 5 6 7 8 9 1 0 1 1 1 2 1 3 1 4 SCSI ID 0 0 0 1 0 2 0 3 0 4 0 5 0 8 0 9 1 0 1 1 1 2 1 3 1 4 1 5 DISK ID Disk30000 Disk30100 Disk30200 Disk30300 Disk30400 Disk30500 Disk30800 Disk30900 Disk31000 Disk31100 Disk31200 Disk31300 Disk31400 Disk31500 Model 4214R Disk Enclosure Shelf 3 (single-bus) HSG80 ACS Solution Software Version 8.
Subsystem Profile Templates Storage Map Template 5 for the Model 4254 Disk Enclosure Use this template for a subsystem with a three-shelf Model 4254 disk enclosure (dual-bus). You can have up to three Model 4254 disk enclosures per controller shelf.
Subsystem Profile Templates continued from previous page Model 4254 Disk Enclosure Shelf 3 (dual-bus) A–10 Bay 1 2 3 4 5 6 7 8 9 1 0 1 1 1 2 1 3 1 4 SCSI ID 0 0 0 1 0 2 0 3 0 4 0 5 0 8 0 0 0 1 0 2 0 3 0 4 0 5 0 8 DISK ID Disk50100 Disk50200 Disk50300 Disk50400 Disk50500 Disk50800 Disk60000 Disk60100 Disk60200 Disk60300 Disk60400 Disk60500 Disk60800 Bus B Disk50000 Bus A HSG80 ACS Solution Software Version 8.
Subsystem Profile Templates Storage Map Template 6 for the Model 4310R Disk Enclosure Use this template for a subsystem with a six-shelf Model 4310R disk enclosure (single-bus). You can have up to six Model 4310R disk enclosures per controller shelf.
Subsystem Profile Templates Model 4310R Disk Enclosure Shelf 4 (single-bus) 10 SCSI ID 00 01 02 03 04 05 08 10 11 12 DISK ID Disk41200 9 Disk41100 8 Disk41000 7 Disk40800 6 Disk40500 5 Disk40400 4 Disk40300 3 Disk40200 2 Disk40100 1 Disk40000 Bay Model 4310R Disk Enclosure Shelf 1 (single-bus) 10 SCSI ID 00 01 02 03 04 05 08 10 11 12 DISK ID Disk11200 9 Disk11100 8 Disk11000 7 Disk10800 6 Disk10500 5 Disk10400 4 Disk10300 3 Disk10200 2 Disk10100
Subsystem Profile Templates Model 4310R Disk Enclosure Shelf 3 (single-bus) 10 SCSI ID 00 01 02 03 04 05 08 10 11 12 DISK ID HSG80 ACS Solution Software Version 8.
Subsystem Profile Templates Storage Map Template 7 for the Model 4350R Disk Enclosure Use this template for a subsystem with a three-shelf Model 4350R disk enclosure (single-bus). You can have up to three Model 4350R disk enclosures per controller shelf.
Subsystem Profile Templates Model 4350R Disk Enclosure Shelf 4 (single-bus) 10 SCSI ID 00 01 02 03 04 05 08 10 11 12 DISK ID HSG80 ACS Solution Software Version 8.
Subsystem Profile Templates Storage Map Template 8 for the Model 4314R Disk Enclosure Use this template for a subsystem with a six-shelf Model 4314R disk enclosure. You can have a maximum of six Model 4314R disk enclosures with each Model 2200 controller enclosure.
Subsystem Profile Templates Model 4314R Disk Enclosure Shelf 4 (single-bus) 14 SCSI ID 00 01 02 03 04 05 08 09 10 11 12 13 14 15 DISK ID Disk41500 13 Disk41400 12 Disk41300 11 Disk41200 10 Disk41100 9 Disk41000 8 Disk40900 7 Disk40800 6 Disk40500 5 Disk40400 4 Disk40300 3 Disk40200 2 Disk40100 1 Disk40000 Bay continued from previous page Model 4314R Disk Enclosure Shelf 1 (single-bus) 14 SCSI ID 00 01 02 03 04 05 08 09 10 11 12 13 14 15 DISK ID
Subsystem Profile Templates Model 4314R Disk Enclosure Shelf 3 (single-bus) 14 SCSI ID 00 01 02 03 04 05 08 09 10 11 12 13 14 15 DISK ID A–18 Disk31500 13 Disk31400 12 Disk31300 11 Disk31200 10 Disk31100 9 Disk31000 8 Disk30900 7 Disk30800 6 Disk30500 5 Disk30400 4 Disk30300 3 Disk30200 2 Disk30100 1 Disk30000 Bay HSG80 ACS Solution Software Version 8.
Subsystem Profile Templates Storage Map Template 9 for the Model 4354R Disk Enclosure Use this template for a subsystem with a three-shelf Model 4354R disk enclosure (dual-bus). You can have up to three Model 4354R disk enclosures per controller shelf.
Subsystem Profile Templates Model 4354R Disk Enclosure Shelf 3 (dual-bus) SCSI Bus A SCSI Bus B 14 SCSI ID 00 01 02 03 04 05 08 00 01 02 03 04 05 08 DISK ID A–20 Disk60800 13 Disk60500 12 Disk60400 11 Disk60300 10 Disk60200 9 Disk60100 8 Disk60000 7 Disk50800 6 Disk50500 5 Disk50400 4 Disk50300 3 Disk50200 2 Disk50100 1 Disk50000 Bay HSG80 ACS Solution Software Version 8.
B Installing, Configuring, and Removing the Client The following information is included in this appendix: • “Why Install the Client?,” page B–2 • “Before You Install the Client,” page B–2 • “Installing the Client,” page B–4 • “Installing the Integration Patch,” page B–5 • “Troubleshooting Client Installation,” page B–8 • “Adding Storage Subsystem and its Host to Navigation Tree,” page B–10 • “Removing Command Console Client,” page B–12 • “Where to Find Additional Information,” page B–13 HS
Installing, Configuring, and Removing the Client Why Install the Client? The Client monitors and manages a storage subsystem by performing the following tasks: • Create mirrored device group (RAID 1) • Create striped device group (RAID 0) • Create striped mirrored device group (RAID 0+1) • Create striped parity device group (3/5) • Create an individual device (JBOD) • Monitor many subsystems at once • Set up pager notification Before You Install the Client 1.
Installing, Configuring, and Removing the Client 7. If you have Command Console Client version 1.1b or earlier, remove the program with the Windows Add/Remove Programs utility. 8. If you have a previous version of Command Console, you can save the Navigation Tree configuration by copying the SWCC2.MDB file to another directory. After you have installed the product, move SWCC2.MDB to the directory to which you installed SWCC. 9. Install the HS-Series Agent. For more information, see Chapter 4.
Installing, Configuring, and Removing the Client Installing the Client The following restriction should be observed when installing SWCC on Windows NT 4.0 Workstations. If you select all of the applets during installation, the installation will fail on the HSG60 applet and again on one of the HSG80 applets. The workaround is to install all of the applets you want except for the HSG60 applet and the HSG80 ACS 8.5 applet. You can then return to the setup program and install the one that you need. 1.
Installing, Configuring, and Removing the Client Installing the Integration Patch The integration patch determines which version of firmware the controller is using and launches the appropriate StorageWorks Command Console (SWCC) Storage Window within Insight Manager (CIM) version 4.23. Should I Install the Integration Patch? Install this patch if your HSG80 controller uses ACS 8.6 or later. This patch enables you to use the controller’s SWCC Storage Window within CIM to monitor and manage the controller.
Installing, Configuring, and Removing the Client Integrating Controller’s SWCC Storage Window with CIM You can open the controller’s Storage Window from within the Windows-based CIM version 4.23 by doing the following: 1. Verify that you have installed the following by looking in Add/Remove Programs in Control Panel: • The HSG80 Storage Window for ACS 8.6 or later (Required to open the correct Storage Window for your firmware). • The HSG80 Storage Window version 2.1 (StorageWorks HSG80 V2.
Installing, Configuring, and Removing the Client Insight Manager Unable to Find Controller’s Storage Window If you installed Insight Manager before SWCC, Insight Manager will be unable to find the controller’s Storage Window. To find the controller’s Storage Window, perform the following procedure: 1. Double-click the Insight Agents icon (Start > Settings > Control Panel). A window appears showing you the active and inactive Agents under the Services tab. 2.
Installing, Configuring, and Removing the Client Troubleshooting Client Installation This section provides information on how to resolve some of the problems that may appear when installing the Client software: • Invalid Network Port Assignments During Installation • “There is no disk in the drive” Message Invalid Network Port Assignments During Installation SWCC Clients and Agents communicate by using sockets.
Installing, Configuring, and Removing the Client The following shows how the network port assignments appear in the services file:
spgui              4998/tcp    #Command Console
ccdevmgt           4993/tcp    #Device Management Client and Agent
kzpccconnectport   4991/tcp    #KZPCC Client and Agent
kzpccdiscoveryport 4985/tcp    #KZPCC Client and Agent
ccfabric           4989/tcp    #Fibre Channel Interconnect Agent
spagent            4999/tcp    #HS-Series Client and Agent
spagent3           4994/tcp    #HSZ22 Client and Agent
ccagent            4997/tcp    #RA200 Client
Installing, Configuring, and Removing the Client Adding Storage Subsystem and its Host to Navigation Tree The Navigation Tree enables you to manage storage over the network by using the Storage Window. If you plan to use pager notification, you must add the storage subsystem to the Navigation Tree. 1. Verify that you have properly installed and configured the HS-Series Agent on the storage subsystem host. 2. Click Start > Programs > Command Console > StorageWorks Command Console.
Installing, Configuring, and Removing the Client Figure B–2: Navigation window showing storage host system “Atlanta” 6. Click the plus sign to expand the host icon. When expanded, the Navigation Window displays an icon for the storage subsystem. To access the Storage Window for the subsystem, double-click the Storage Window icon. Figure B–3: Navigation window showing expanded “Atlanta” host icon HSG80 ACS Solution Software Version 8.
Installing, Configuring, and Removing the Client NOTE: You can create virtual disks by using the Storage Window. For more information on the Storage Window, refer to StorageWorks Command Console Version 2.5, User Guide. Removing Command Console Client Before you remove the Command Console Client (CCL) from the computer, remove AES. This will prevent the system from reporting that a service failed to start every time the system is restarted. Steps 2 through 5 describe how to remove the CCL.
Installing, Configuring, and Removing the Client Where to Find Additional Information You can find additional information about SWCC by referring to the online Help and to StorageWorks Command Console Version 2.5, User Guide. About the User Guide StorageWorks Command Console Version 2.5, User Guide contains additional information on how to use SWCC.
C SWCC Agent in TruCluster Environment This appendix describes how to set up two different versions of SWCC Agent on TruClusters: • Tru64 UNIX Version 4.0G • Tru64 UNIX Version 5.X SWCC Overview The SWCC is a graphical user interface (GUI) for managing StorageWorks Redundant Array of Independent Disks (RAID) array products from a client (console) running on Microsoft Windows NT 4.0 with Service pack 4 or later, or Windows 2000.
SWCC Agent in TruCluster Environment Running the SWCC Agent on a V4.0G Cluster This first section presents an example of how to set up the StorageWorks Command Console (SWCC) Agent on Tru64 UNIX V4.0G running TruCluster V1.6. It acts as a guide for setting up the Agent for High Availability. The best way to provide high-availability failover capability is to create an ASE service that handles the failover. StorageWorks recommends that the Agent run on only one node in a cluster.
SWCC Agent in TruCluster Environment 2. Create the service. You can accomplish both steps by creating the Start/Stop scripts and then running the asemgr to finish creating the service. NOTE: When the ASE Director is started (for example during boot up) ASE Services run the STOP scripts for all defined services by default. It then runs the Start scripts for all defined services.
SWCC Agent in TruCluster Environment An example of the Stop script, which can be located in the /usr/opt/SWCC520/scripts directory, follows. The keyword “steam” is searched for and, if found, the stream editor “sed” is invoked to remove it. The init daemon is then sent the “q” option to reload the inittab file. Since the “steam” line has been removed, initd will kill the steamd program.
/usr/opt/SWCC520/scripts/steamstop
#!/bin/sh
#
# Simple stop script for steamd.
#
PATH=.
SWCC Agent in TruCluster Environment TruCluster Production Server (ASE) Enter your choice: m / Select “m” to create a new service. / ASE Main Menu a) Managing the ASE --> m) Managing ASE Services --> s) Obtaining ASE Status --> x) Exit ?) Help Managing ASE Services Enter your choice [q]: c / Select “c” for a new service. / Service Configuration Enter your choice [q]: a Adding a service Enter your choice [1]: 3 HSG80 ACS Solution Software Version 8.
SWCC Agent in TruCluster Environment When adding a service, pick “3” even though the tendency would be to pick “2,” a disk service, which reserves a specific disk device. Since the Communication LUN may not always be the same, reserving a specific LUN would not be a good idea.
SWCC Agent in TruCluster Environment You are now adding a new user-defined service to ASE. User-defined Service Name The name of a user-defined service must be a unique service name within the ASE environment. Enter your choice [x]: 1 As a minimum this is needed.
SWCC Agent in TruCluster Environment Modifying the start action script for `steam`: f) Replace the start action script e) Edit the start action script g) Modify the start action script arguments [steam] t) Modify the start action script timeout [60] r) Remove the start action script x) Exit - done with changes Modifying user-defined scripts for `steam`: 1) Start action 2) Stop action 3) Add action 4) Delete action 5) Check action x) Exit - done with changes Enter your choice [x]: f Enter the full pathnam
SWCC Agent in TruCluster Environment Modifying the stop action script for `steam`: f) Replace the stop action script e) Edit the stop action script g) Modify the stop action script arguments [steam] t) Modify the stop action script timeout [60] r) Remove the stop action script x) Exit - done with changes Modifying user-defined scripts for `steam`: 1) Start action 2) Stop action 3) Add action 4) Delete action 5) Check action x) Exit - done with changes HSG80 ACS Solution Software Version 8.
SWCC Agent in TruCluster Environment Selecting an Automatic Service Placement (ASP) Policy Enter your choice [b]: b Selecting the Balanced option relocates steam to the least busy node should it be necessary. You may want to specify favored nodes. Do you want ASE to consider relocating this service to another member if one becomes available while this service is running (y/n/?): n If “b” was NOT picked for the previous question, “y” should be picked for this one.
SWCC Agent in TruCluster Environment If everything was entered correctly, steamd should be running on one of the nodes in the cluster. If that node should go down, steamd will automatically be relocated to another node in the cluster. NOTE: If the SWCC Agent is running on a node which has a LUN reserved to it, and that Agent fails over to another node, you will no longer be able to monitor that LUN.
SWCC Agent in TruCluster Environment Running the SWCC Agent on a V5.x Cluster This section is intended for users of Tru64 UNIX V5.x. Running SWCC on a V5.x Cluster has the following requirements: Table C–1: Configuration Requirements Requirement Description Operating System Tru64 UNIX Version 5.0 or higher and TruCluster Server Version 5.0A or higher.
SWCC Agent in TruCluster Environment In addition, the SWCC configuration script (/usr/opt/SWCCx/scripts/swcc_config) and initialization file (/usr/opt/SWCCx/etc/storage.ini) are not context-dependent symbolic links (CDSLs) and are shared by all cluster members. Therefore, installing and configuring the SWCC agent on one cluster member configures the agent using the storage information for that cluster member, but the configuration information is then used on all cluster members.
SWCC Agent in TruCluster Environment
# hostname
rye.zk3.dec.com
# hwmgr -view devices
HWID: Device Name          Mfg    Model    Location
----------------------------------------------------------------------
4:    /dev/kevm
44:   /dev/disk/floppy0c          3.
SWCC Agent in TruCluster Environment Problems with Running the Agent on Multiple Clusters Running the agent on multiple cluster members can cause problems in operational ease-of-use. For example, if you make changes with the agent on one cluster member, those changes are not immediately reflected by the agents on other cluster members, so what you see in the Navigation Window might not be what you expect.
SWCC Agent in TruCluster Environment • Configure the SWCC client to use a cluster alias as the address for the SWCC agent system to avoid having to change the client if the CAA swcc resource fails over to another member. You can use the default cluster alias, or any alias to which all of the cluster members belong. Configure the Controller The Hardware Configuration manual describes how to configure the hardware in a TruCluster Server environment.
SWCC Agent in TruCluster Environment NOTE: When you enter the command to set multiple-bus failover and copy the configuration information to the other controller, the other controller will restart. The restart may set off the audible alarm (which is silenced by pressing the button on the environmental monitoring unit (EMU)). The command line interpreter (CLI) will display an event report, and continue reporting the condition until you clear the condition with the clear cli command.
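A sketch of the failover command referred to in the note, entered on the controller whose configuration you want to keep (verify the exact form against the CLI Reference Guide for your ACS version):
SET MULTIBUS_FAILOVER COPY=THIS_CONTROLLER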
SWCC Agent in TruCluster Environment Verify That the HSG80/HSG60 Unit Offsets Are Zero In multiple-bus failover mode, the default offset is 0 for all host connections. However, if a controller pair is switched from transparent failover mode to multiple-bus failover mode, the unit offsets for transparent mode remain in effect, so a connection can have a non-zero unit offset. For each connection to your cluster, verify that the unit offset is 0.
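To check the offsets and, if necessary, reset one to zero (the connection name RED1B1 is only an example; use the names shown by SHOW CONNECTIONS):
SHOW CONNECTIONS
SET RED1B1 UNIT_OFFSET=0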
SWCC Agent in TruCluster Environment Use SCSI-3 Mode A logical unit number (LUN) is an address of a logical unit on a virtual disk. The Command Console LUN (CCL), also called a communications LUN, is a special logical unit number that is used to communicate with the controller to set up the SWCC. We recommend that you use the controller's SCSI-3 mode, which forces the CCL to be always enabled, at LUN 0, and does not allow it to float.
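A minimal sketch of selecting SCSI-3 mode from the CLI (in a dual-controller configuration the setting is shared by both controllers; see the CLI Reference Guide for any restart requirements):
SET THIS SCSI_VERSION=SCSI-3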
SWCC Agent in TruCluster Environment Install and Run the Agent on One Cluster Member We recommend that you run the SWCC agent on only one cluster member and use CAA for failover. The SWCC Navigation Window treats cluster members as standalone systems. That is, the Navigation Window does not include a cluster management object; you initiate all management tasks at the system (member) level.
SWCC Agent in TruCluster Environment Table C–2: Required Configuration Details (Continued) Property Notification schemes Description The mechanism by which to notify you of a change in status. The possible options are: 0 No notification over the network. Local notification through e-mail and an entry in the system error log file. 1 Notification sent to the client via TCP/IP. Local notification sent through e-mail and an entry in the system error log file.
SWCC Agent in TruCluster Environment Table C–2: Required Configuration Details (Continued) Property E-mail destination for notification Description An e-mail account that will receive error and status information. Specifying an e-mail account is optional, and requires that mail be already configured on the TruCluster Server cluster. See the Cluster Administration manual for information on configuring mail.
SWCC Agent in TruCluster Environment Example of Installing the Agent on a Cluster Member The following example, using version SWCC5xx, shows a complete log of the Agent installation on a TruCluster Server cluster member. Example of Installing the Agent, Using Version SWCC5xx
# mount -r -t cdfs /dev/disk/cdrom0c /mnt
# cd /mnt/agent
# ls
SWCC5xx instctrl
# setld -l .
SWCC Agent in TruCluster Environment Checking file system space required to install selected subsets: File system space checked OK. 1 subsets will be installed. Loading subset 1 of 1 ... SWCC Agent 2.3 For Tru64 Unix Copying from . (disk) Verifying 1 of 1 subsets installed successfully. Setting up daemon for V5.x. NOTE: 'catman' is currently running, so I can't update your 'whatis' database. After this installation run 'catman -w' if you want to update 'whatis'. ------------------ CLIENT.
SWCC Agent in TruCluster Environment The Agent server can notify a client when an error condition occurs. Notification schemes available are: 0 = No Error Notification 1 = Notification via a TCP/IP Socket 2 = Notification via the SNMP protocol 3 = Notification via both TCP/IP and SNMP Enter Error Notification Level (0, 1, 2, 3) : 1 Review Client Information-name: adminpc.zk3.dec.
SWCC Agent in TruCluster Environment ------------------ STORAGE.INI -----------------The 'storage.ini' file stores information about RAID devices connected to your server. The SWCC agent reads this file at startup. If this installation supersedes an older version and the storage.ini file is found; you will be given the option to use it. If not; the following information will be needed for each RAID subsystem you want the agent to interact with. * RAID subsystem name. * Monitoring Interval in seconds.
SWCC Agent in TruCluster Environment ------------------ NOTIFY.INI -----------------The 'notify.ini' file stores mail notification information. If you and/or anyone else should be notified via Email when a RAID subsystem event occurs, enter their Email address in the next screen. If this installation supersedes an older version and an existing notify.ini is found; you will be given the option to use it. If not; the following information will be needed for each person you want notified.
SWCC Agent in TruCluster Environment installing this Agent in a TruCluster environment answer 'Y' for yes. If you are not running TruCluster answer 'N' for no. REMEMBER: Only one instance of the Agent should run in a cluster unless all HSZs and HSGs in the cluster are running firmware versions that support the SWCC lock bit. See your HSx documentation for more information. For more information on switches see man page steamd.8.
SWCC Agent in TruCluster Environment Create the CAA Action Script You can have CAA control the steamd daemon and fail it over to another member when necessary. The CAA action script invokes the startup and shutdown functions in the shell script. If the cluster member on which the steamd daemon is running fails or is shut down, CAA relocates the daemon entry to another member. The action script is called swcc.scr here but you can name it after the CAA resource name of your choice.
SWCC Agent in TruCluster Environment Complete CAA Action Script
#!/usr/bin/ksh -p
#
svcName="swcc"                       # Servicename
CAA_ADMIN="root"                     # Account to receive CAA mail
CAALOGDIR="/var/cluster/caa/log"     # Directory for logfiles
ACTION=$1                            # Action (either start or stop)
LOG="${CAALOGDIR}/${ACTION}_${svcName}.
SWCC Agent in TruCluster Environment
export START_APPCMD STOP_APPCMD APPDIR ADVFSDIRS PROBE_PROCS
#
# 8<--------------8<----------- End Custom variables 8<----------8<----------
#
# Static variables
#
PATH=/sbin:/usr/sbin:/usr/bin:/usr/lbin
TERM=vt100
SHELL=/usr/bin/sh
HOME=/
USER=root
LOGNAME=root
HOST=`/bin/hostname`
umask 117
cd ${DIR}
OLDPWD=`pwd`
export ACTION DIR PATH TERM SHELL HOME USER LOGNAME HOST OLDPWD
#
# Frequently used procedures
#
checkdaemon () {
    R=`ps -o command -A | grep $1 | grep -v grep
SWCC Agent in TruCluster Environment
# Kill (-9) a given process using brutal force
#
zapdaemon () {
    for i in ${1}
    do
        kill -9 `ps -o pid,command -A | grep ${i} | grep -v grep | awk '{print $1}'`
        checkdaemon ${i}
        if [ $? -ne 0 ]; then
            echo "Retrying to kill process ${i} "
            kill -9 `ps -o pid,command -A | grep ${i} | \
                grep -v grep | awk '{print $1}'`
            checkdaemon ${i}
            if [ $? -ne 0 ]; then
                echo "Process ${i} (PID: `ps -o pid,command -A | grep ${i} | grep -v grep | awk '{print $1}'`) \
                    seems to be stubborn, pl
SWCC Agent in TruCluster Environment
# Probe for a running process/application
#
probeapp () {
    ps -o command -A | grep $1 | grep -v grep > /dev/null 2>&1
    if [ $? -ne 0 ]; then
        echo "Cannot probe process ${1} . Posting EVM event.
SWCC Agent in TruCluster Environment # All done ... # ${EVMPOST} "Start action script for service ${svcName} DONE" echo ""Start action script for service ${svcName} DONE, \ `/bin/date +"%A %d %B %H:%M:%S"` "" >> ${LOG} echo "" >> ${LOG} exit 0 # ;; # # Stop section # 'stop') echo "" >> ${LOG} echo ""Stop action script for service : ${svcName} \ `/bin/date +"%A %d %B %H:%M:%S"` "" >> ${LOG} # # Stop SWCC # echo "Stopping SWCC ...
SWCC Agent in TruCluster Environment echo "" >> ${LOG} exit 0 ;; # # Probe if application is still alive # 'check') echo ""Probing SWCC daemons at \ `/bin/date +"%A %d %B %H:%M:%S"`"" >> ${LOG} for i in ${PROBE_PROCS} do probeapp ${i} >> ${LOG} done echo ""Probing SWCC daemons DONE at \ `/bin/date +"%A %d %B %H:%M:%S"`"" >> ${LOG} exit 0 ;; *) echo "usage: $0 {start|stop|check}" exit 1 ;; esac HSG80 ACS Solution Software Version 8.
SWCC Agent in TruCluster Environment Create the CAA Resource Use the caa_profile command to create a CAA resource profile, and use the caa_register command to register the resource after you have created it. This procedure uses swcc as the resource name for the purpose of the example. Use application as the resource type and specify the location of the action script.
SWCC Agent in TruCluster Environment If the caa_profile -create command completes successfully, use the caa_profile -print resource_name command to verify the profile is as you intended: # caa_profile -print swcc NAME=swcc TYPE=application ACTION_SCRIPT=swcc.
SWCC Agent in TruCluster Environment Example of Creating, Registering, and Starting the CAA Resource # caa_profile -create swcc -a swcc.scr -t application -o as=1 # caa_profile swcc -print NAME=swcc TYPE=application ACTION_SCRIPT=swcc.
SWCC Agent in TruCluster Environment See caa_profile(8) and caa_register(8) for additional information. Viewing Events Posted by the Action Script The action script shown in Complete CAA Action Script posts events to indicate state transitions and failures. You can use the event viewer to monitor these events. You can launch the event viewer through SysMan Menu or SysMan Station. See sysman(8) for additional information.
SWCC Agent in TruCluster Environment Edit the Startup Script After you are satisfied that swcc is operating correctly under CAA control, edit the beginning of the /sbin/init.d/swcc file to test whether the swcc resource is registered with CAA, and exit if it is. (If you did not name the resource swcc when you created it, use the name you chose.)
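A minimal sketch of such a test, assuming the resource is named swcc and that caa_stat returns a nonzero status when the resource is not registered (add it near the top of /sbin/init.d/swcc):
# Exit if the swcc resource is registered with CAA; CAA then manages steamd.
if /usr/sbin/caa_stat swcc > /dev/null 2>&1; then
    exit 0
fi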
SWCC Agent in TruCluster Environment 2. If the program does not start automatically, run the setup file D:\SWCC\CLIENT\INTEL\SETUP.EXE, where D: is the location of your CD-ROM drive. The installation program is self-extracting and by default stores the client software in the C:\Program Files\Compaq\SWCC directory. 3. You are presented with a list of available client applications. Choose the applications appropriate for your environment.
SWCC Agent in TruCluster Environment NOTE: The recommended edit to the /sbin/init.d/swcc file prevents the /usr/opt/SWCCx/scripts/swcc_config utility from starting, stopping, or restarting SWCC when the swcc resource is registered with CAA. However, swcc_config is not aware of this change, and still attempts to start, stop, and restart SWCC if you choose these options. This action results in a problem-status message. If you did not modify the /sbin/init.
SWCC Agent in TruCluster Environment To verify that swcc was removed, use the ps command to determine if the steamd daemon is running on the member. There should not be a steamd process. # ps agx | grep steamd 540850 pts/1 S + 0:00.00 grep steamd To start the CAA resource on another member, use the caa_start swcc command with the -c option. To verify that the swcc agent is accessible from the client, use the client running on a PC to display information about the storage configuration.
Glossary This glossary defines terms pertaining to the ACS solution software. It is not a comprehensive glossary of computer terms. 8B/10B A type of byte definition encoding and decoding to reduce errors in data transmission patented by the IBM Corporation. This process of encoding and decoding data for transmission has been adopted by ANSI. adapter A device that converts the protocol and hardware interface of one bus type into another without changing the function of the bus.
Glossary array controller See controller. array controller software Abbreviated ACS. Software contained on a removable ROM program card that provides the operating system for the array controller. association set A group of remote copy sets that share selectable attributes for logging and failover. Members of an association set transition to the same state simultaneously.
Glossary block Also called a sector. The smallest collection of consecutive bytes addressable on a disk drive. In integrated storage elements, a block contains 512 bytes of data, error codes, flags, and the block address header. bootstrapping A method used to bring a system or device into a defined state by means of its own action. For example, a machine routine whose first few instructions are enough to bring the rest of the routine into the computer from an input device.
Glossary command line interface CLI. A command line entry utility used to interface with the HS-series controllers. CLI enables the configuration and monitoring of a storage subsystem through textual commands. concat commands Concat commands implement storageset expansion features. configuration file A file that contains a representation of a storage subsystem configuration. container 1) Any entity that is capable of storing data, whether it is a physical device or a group of physical devices.
Glossary data striping The process of segmenting logically sequential data, such as a single file, so that segments can be written to multiple physical devices (usually disk drives) in a round-robin fashion. This technique is useful if the processor is capable of reading or writing data faster than a single disk can supply or accept the data. While data is being transferred from the first disk, the second disk can locate the next segment. DDL Dual data link.
Glossary DWZZA A StorageWorks SCSI bus signal converter used to connect 8-bit single-ended devices to hosts with 16-bit differential SCSI adapters. This converter extends the range of a single-ended SCSI cable to the limit of a differential SCSI cable. DWZZB A StorageWorks SCSI bus signal converter used to connect a variety of 16-bit single-ended devices to hosts with 16-bit differential SCSI adapters. ECB External cache battery.
Glossary failedset A group of failed mirrorset or RAIDset devices automatically created by the controller. failover The process that takes place when one controller in a dual-redundant configuration assumes the workload of a failed companion controller. Failover continues until the failed controller is repaired or replaced. The ability for HSG80 controllers to transfer control from one controller to another in the event of a controller failure. This ensures uninterrupted operation.
Glossary FCC Class B This certification label appears on electronic devices that can be used in either a home or a commercial environment within the United States. FCP The mapping of SCSI-3 operations to Fibre Channel. FDDI Fiber Distributed Data Interface. An ANSI standard for 100 megabaud transmission over fiber optic cable. FD SCSI The fast, narrow, differential SCSI bus with an 8-bit data transfer rate of 10 MB/s. See also FWD SCSI and SCSI. fiber A fiber or optical strand.
Glossary FRU Field replaceable unit. A hardware component that can be replaced at the customer location by service personnel or qualified customer service personnel. FRUTIL Field Replacement utility. full duplex (n) A communications system in which there is a capability for 2-way transmission and acceptance between two sites at the same time. full duplex (adj) Pertaining to a communications method in which data can be transmitted and received at the same time.
Glossary host adapter A device that connects a host system to a SCSI bus. The host adapter usually performs the lowest layers of the SCSI protocol. This function may be logically and physically integrated into the host system. HBA Host bus adapter host compatibility mode A setting used by the controller to provide optimal controller performance with specific operating systems. This improves the controller performance and compatibility with the specified operating system.
Glossary initiator A SCSI device that requests an I/O process to be performed by another SCSI device, namely, the SCSI target. The controller is the initiator on the device bus. The host is the initiator on the host bus. instance code A four-byte value displayed in most text error messages and issued by the controller when a subsystem error occurs. The instance code indicates when during software processing the error was detected.
Glossary link A connection between two Fibre Channel ports consisting of a transmit fibre and a receive fibre. local connection A connection to the subsystem using either its serial maintenance port or the host SCSI bus. A local connection enables you to connect to one subsystem controller within the physical range of the serial or host SCSI cable. local terminal A terminal plugged into the EIA-423 maintenance port located on the front bezel of the controller. See also maintenance terminal.
Glossary Mbps Approximately one million (106) bits per second—that is, megabits per second. maintenance terminal An EIA-423-compatible terminal used with the controller. This terminal is used to identify the controller, enable host paths, enter configuration information, and check the controller status. The maintenance terminal is not required for normal operations. See also local terminal. member A container that is a storage element in a RAID array.
Glossary nonparticipating mode A mode within an L_Port that inhibits the port from participating in loop activities. L_Ports in this mode continue to retransmit received transmission words but are not permitted to arbitrate or originate frames. An L_Port in non-participating mode may or may not have an AL_PA. See also participating mode. nominal membership The desired number of mirrorset members when the mirrorset is fully populated with active devices.
Glossary offset A relative address referenced from the base element address. Event Sense Data Response Templates use offsets to identify various information contained within one byte of memory (bits 0 through 7). other controller The controller in a dual-redundant pair that is connected to the controller serving the current CLI session. See also this controller. outbound fiber One fiber in a link that carries information away from a port.
Glossary pluggable A replacement method that allows the complete system to remain online during device removal or insertion. The system bus must be halted, or quiesced, for a brief period of time during the replacement procedure. See also hot-pluggable. point-to-point connection A network configuration in which a connection is established between two, and only two, terminal installations. The connection may include switching facilities.
RAID
Redundant Array of Independent Disks. Represents multiple levels of storage access developed to improve performance or availability or both.

RAID level 0
A RAID storageset that stripes data across an array of disk drives. A single logical disk spans multiple physical disks, enabling parallel data processing for increased I/O performance. While the performance characteristics of RAID level 0 are excellent, this RAID level is the only one that does not provide redundancy.
read ahead caching
A caching technique for improving performance of synchronous sequential reads by prefetching data from disk.

read caching
A cache management method used to decrease the subsystem response time to a read request by allowing the controller to satisfy the request from the cache memory rather than from the disk drives.

reconstruction
The process of regenerating the contents of a failed member's data.
RFI
Radio frequency interference. The disturbance of a signal by an unwanted radio signal or frequency.

replacement policy
The policy specified by a switch with the SET FAILEDSET command indicating whether a failed disk from a mirrorset or RAIDset is to be automatically replaced with a disk from the spareset. The two switch choices are AUTOSPARE and NOAUTOSPARE.

SBB
StorageWorks building block.
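As a brief illustration of the replacement policy entry above, the two switch choices named there would be entered from a CLI session on the maintenance terminal. This is a sketch only; confirm the exact syntax for your controller in the ACS CLI reference:

   SET FAILEDSET AUTOSPARE
   SET FAILEDSET NOAUTOSPARE

The first command enables automatic replacement of a failed mirrorset or RAIDset disk with a disk from the spareset; the second disables automatic replacement.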
SCSI-P cable
A 68-conductor (34 twisted-pair) cable generally used for differential bus connections.

SCSI port
(1) Software: The channel controlling communications to and from a specific SCSI bus in the system. (2) Hardware: The name of the logical socket at the back of the system unit to which a SCSI device is connected.

serial transmission
A method of transmission in which each bit of information is sent sequentially on a single channel, rather than simultaneously as in parallel transmission.
storage unit
The general term that refers to storagesets, single-disk units, and all other storage devices that are installed in your subsystem and accessed by the host. A storage unit can be any entity that is capable of storing data, whether it is a physical device or a group of physical devices.

StorageWorks
A family of modular data storage products that allow customers to design and configure their own storage subsystems.
tape
A storage device supporting sequential access to variable-sized data records.

target
(1) A SCSI device that performs an operation requested by an initiator. (2) Designates the target identification (ID) number of the device.

target ID number
The address a bus initiator uses to connect with a bus target. Each bus target is assigned a unique target address.

this controller
The controller that is serving your current CLI session through a local or remote terminal.
UPS
Uninterruptible power supply. A battery-powered power supply guaranteed to provide power to an electrical device in the event of an unexpected interruption to the primary power supply. Uninterruptible power supplies are usually rated by the amount of voltage supplied and the length of time the voltage is supplied.

VHDCI
Very high-density cable interface. A 68-pin interface. Required for Ultra-SCSI connections.
write hole
The period of time in a RAID level 1 or RAID level 5 write operation when an opportunity emerges for undetectable RAIDset data corruption. Write holes occur under conditions such as power outages, where the writing of multiple members can be abruptly interrupted. A battery backed-up cache design eliminates the write hole because data is preserved in cache and unsuccessful write operations can be retried.
Index

A
ADD CONNECTIONS
  multiple-bus failover 1–19
  transparent failover 1–17
ADD UNIT
  multiple-bus failover 1–19
  transparent failover 1–17
adding virtual disks B–13
adding a disk drive to the spareset
  configuration options 5–24
adding disk drives
  configuration options 5–23
Agent
  functions 4–1
  installing 4–5
array of disk drives 2–15
assigning unit numbers 1–16
assignment
  unit numbers
    fabric topology 5–22
  unit qualifiers
    fabric topology 5–22
assignment of unit numbers
  fabric topology
    partition 5–22
    single d
  using to increase request rate 2–30
  using to increase write performance 2–32
CHUNKSIZE 2–30
CLI commands
  installation verification 5–9, 5–16
CLI configuration example 6–4
CLI configurations 6–1
CLI prompt
  changing
    fabric topology 5–23
Client
  removing B–12
  uninstalling B–12
CLONE utility
  backup 7–2
cloning
  backup 7–2
command console LUN 1–11
  SCSI-2 mode 1–20
  SCSI-3 mode 1–20
comparison of container types 2–15
configuration
  backup 7–1
  fabric topology
    devices 5–16
    multiple-bus failover cabling 5–10
    mult
    fabric topology 5–26
devices
  changing switches
    fabric topology 5–25
  configuration
    fabric topology 5–16
  creating a profile 2–16
disk drives
  adding
    fabric topology 5–23
  adding to the spareset
    fabric topology 5–24
  array 2–15
  corresponding storagesets 2–34
  dividing 2–26
  removing from the spareset
    fabric topology 5–24
disklabel
  creating partitions on a LUN 3–7
displaying the current switches
  fabric topology 5–26
dividing storagesets 2–26

E
enabling switches 2–28
erasing metadata 2–33
establishing a local
  geometry 2–33
  NOSAVE_CONFIGURATION 2–32
  SAVE_CONFIGURATION 2–32
Insight Manager B–13
installation
  controller verification 5–9, 5–16
  invalid network port assignments B–8
  there is no disk in the drive message B–9
installation verification
  CLI commands 5–9, 5–16
installing
  Agent 4–5
integrating SWCC B–13
invalid network port assignments B–8
iostat
  Tru64 UNIX utility 3–13

J
JBOD 2–15

L
LOCATE
  find devices 2–34
location
  cache module 1–2, 1–3
  controller 1–2, 1–3
LUN IDs
  general description 1–29
LUN prese
  transparent failover 1–23
  SCSI version factor 1–18
online help
  SWCC B–13
options
  for mirrorsets 2–29
  for RAIDsets 2–28
  initialize 2–30
other controller 1–3

P
preferring units
  multiple-bus failover
    fabric topology 5–23
profiles
  creating 2–16
  description 2–16
  storageset A–1
  example A–2

R
RAIDset switches
  changing
    fabric topology 5–26
RAIDsets
  choosing chunk size 2–30
  maximum membership 2–24
  planning considerations 2–22
    important points 2–23
  switches 2–28
read caching
  enabled for all storage units 1–1
saving configuration 2–32
SCSI version
  offset 1–18
SCSI-2
  assigning unit numbers 1–19
  command console lun 1–20
SCSI-3
  assigning unit numbers 1–19
  command console lun 1–20
scu
  Tru64 UNIX utility 3–11
Second enclosure of multiple-enclosure subsystem
  storage map template 2 A–5
selective storage presentation 1–21
SET CONNECTIONS
  multiple-bus failover 1–19
  transparent failover 1–17
SET UNIT
  multiple-bus failover 1–19
setting
  controller configuration handling 2–32
single disk (JBOD)
  assigning a unit number
stripesets
  distributing members across buses 2–20
  planning 2–19
  planning considerations 2–18
    important points 2–19
subsystem
  saving configuration 2–32
subsystem configuration
  backup 7–1
SWCC 4–1
  additional information B–13
  integrating B–13
  online help B–13
SWCC Agent
  set up C–1
switches
  changing 2–28
  changing characteristics 2–27
  CHUNKSIZE 2–30
  enabling 2–28
  mirrorsets 2–29
  NOSAVE_CONFIGURATION 2–32
  RAIDset 2–28
  SAVE_CONFIGURATION 2–32
switches for storagesets
  overview 2–28

T
templates
  subsystem pro
W
where to start 1–1
worldwide names 1–27
  NODE_ID 1–27
  REPORTED PORT_ID 1–27
  restoring 1–28
write performance 2–32
write requests
  improving the subsystem response time with write-back caching 1–10
  placing data with write-through caching 1–10
write-back caching
  general description 1–10
write-through caching
  general description 1–10