Acer | HDS AMS200 User and Reference Guide MK-95DF713-03
2006 Hitachi Data Systems Corporation, ALL RIGHTS RESERVED Notice: No part of this publication may be reproduced or transmitted in any form or by any electronic or mechanical means, including photocopying and recording, or stored in a database or retrieval system for any purpose without the express written permission of Hitachi Data Systems Corporation (hereinafter referred to as “Hitachi Data Systems”).
VERITAS is a trademark of VERITAS Software Corp. All other brand or product names are or may be registered trademarks, trademarks, or service marks of and are used to identify products or services of their respective owners. Notice of Export Controls Export of technical data contained in this document may require an export license from the United States government and/or the government of Japan. Please contact the Hitachi Data Systems Legal Department for any export compliance questions.
Changed Table 4.1
Changed Figure 4.16
Changed Table 4.11
Changed Figure 4.25
Changed Table 4.20
Changed the introduction to Chapter 5
Changed section 4.5.1
Changed section 5.4
Added section 5.10
Changed Figure 5.1
Added Figure 5.2
Changed Figure 5.3
Changed section 5.5.2
Changed section 5.6.2
Changed the introduction to Chapter 6
Changed section 6.1
Changed section 6.1.1
Changed Figure 6.1
Added section 6.1.4
Added Figure 6.
Preface

This document describes the physical, functional, and operational characteristics of the AMS200 subsystem. It also provides operating instructions, installation details, and configuration planning information for the AMS200 subsystem. This User and Reference Guide assumes:
- The user is familiar with the Acer | HDS AMS200™ array subsystem, and
- The user is familiar with the Windows® 95, Windows® 98, Windows® 2000, or Windows NT® operating systems.
If trouble occurs in a different configuration, the user may be requested to take appropriate preventive measures.
Contents

Chapter 1  Overview of the AMS200 Subsystem ............................ 1
Chapter 2  Planning for Installation and Operation ..................... 9
           2.1  User Responsibilities
           2.2  Safety Precautions
Chapter 3  Powering On/Off Procedure
Chapter 4  Subsystem Architecture and Components
           4.1.1  AMS200 Rack-Mount Model
           4.1.2  AMS200 Floor Model
           4.2  Redundant Power Supplies
Chapter 5  Functional and Operational Characteristics ................. 93
           5.10  iSCSI Features and Functions ....................... 107
                 5.10.1  CHAP Authentication ........................ 107
                 5.10.2  iSNS Client ................................ 107
Chapter 6  Configuring the AMS200 Subsystem .......................... 109
Chapter 7  Overview of Configuration
           7.7.3  Deleting the CHAP User ............................ 167
           7.7.4  Changing the Two-Way Authentication Information ... 168
           Transferring Configurations from One Array to Another
Chapter 8  Troubleshooting ........................................... 205
           8.5  Determining the Failure of the Network Side in the NAS System ... 254
           8.6  Connecting Failure in Connection with the Web ....... 256
                8.6.1  Collecting Simple Trace ...................... 256
                8.6.2  NAS Log Collection ........................... 258
                8.6.3  NAS Dump Generation
Chapter 9
List of Figures

Figure 5.1  Logical Units (Without the FC interface board addition to the control unit)
List of Tables

Table 7.5  Capacity Restriction of System LU ......................... 193
Table 8.1  Web Operational Environment ............................... 214
Table 8.2  AMS200 WEB Function Supported Browser/Version ............. 215
Table 8.3  Network Parameters ........................................ 217
Table 8.4  Message Code Types
Chapter 1 Overview of the AMS200 Subsystem

This chapter includes the following:
- Overview Features (see section 1.1)
- Rack-Mount Model (see section 1.2)
- Floor Model (see section 1.3)

This chapter provides information on the Fibre, NAS, and iSCSI models:
- Fibre model: Connects the disk array subsystem to a host computer with a Fibre Channel interface.
- NAS model: Connects the NAS Unit, which is connected to the disk array subsystem, to a host computer with a LAN interface.
- iSCSI model: Connects the disk array subsystem to a host computer with an iSCSI interface.
1.1 Overview Features
The Acer | HDS Adaptable Modular Storage AMS200 subsystem (hereafter referred to as the AMS200) is available in two models: the floor model and the rack-mount model. There are two types of the AMS200 floor model. The first type is a combination of the DF700-RKS (hereafter referred to as the RKS) and the floor standing kit DF-F700-H1J (hereafter referred to as the Floor [RKS+H1J] Model). The second type is a combination of the RKS, the RKAJ expansion unit, and the H2J floor standing kit (hereafter referred to as the Floor [RKS+RKAJ+H2J] Model).
Ethernet: With the 1 Gbps Ethernet connection, the subsystem can transfer data between the host computer and the subsystem at a maximum speed of 100 Mbytes/s per port via a network. Adequate throughput can be obtained even when multiple devices connected to the same network are accessed simultaneously.
Cable length
Fibre Channel: With Fibre Channel, the subsystem can be located up to 300 meters from the host.
Ethernet: With Ethernet, the subsystem can be located up to 100 meters from the host.
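The 100 Mbytes/s figure follows from the link rate. A minimal sketch of the arithmetic, assuming 8 bits per byte and decimal units, with the remainder attributed to protocol overhead:

```python
# Rough check of the per-port Ethernet figures quoted above.
# A 1 Gbps link carries at most 10**9 bits per second on the wire.
link_bps = 10**9
raw_mbytes_per_s = link_bps / 8 / 10**6   # raw payload ceiling before overhead
print(raw_mbytes_per_s)                   # 125.0

# Frame and TCP/IP overhead reduce usable throughput; the guide's
# 100 Mbytes/s corresponds to roughly 80% link efficiency.
print(100 / raw_mbytes_per_s)             # 0.8
```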
1.1.5 Reliability, Availability, and Serviceability The AMS200 subsystem is not expected to fail in any way that would interrupt user access to data. The AMS200 can sustain single component failures and still continue to provide full access to all stored user data. Note: While access to user data will not normally be compromised, the failure of any single key component may degrade performance.
1.1.6 Hitachi Freedom Storage™ and Hitachi Freedom Data Networks™ Hitachi Data Systems’ end-to-end Storage Solutions give you the freedom to locate storage wherever it makes the greatest business sense to do so and protect your investment in currently installed components. Made possible by the advent and proliferation of high-speed technologies, storage area networks break the traditional server/storage bond and enable total connectivity.
1.2 Rack-Mount Model The rack-mount model is composed of a single RKS or a combination of the RKS, RKAJ/RKAJAT, and RKNAS mounted on a rack frame. The RKS is capable of mounting up to 15 disk drives; a controller to perform RAID control on the drives is included. The RKAJ/RKAJAT is capable of mounting up to 15 disk drives and controls the drives through a connection with an RKS. The RKAJ/RKAJAT does not include a controller of its own.
1.3 Floor Model There are two floor model styles: the Floor (RKS+H1J) Model and the Floor (RKS+RKAJ+H2J) Model. The Floor (RKS+H1J) Model is capable of mounting up to 15 disk drives and includes a controller to perform RAID control on the drives. The Floor (RKS+RKAJ+H2J) Model is capable of mounting up to 30 disk drives and includes a controller to perform RAID control on the drives. Note: For the specifications of the Floor model, refer to Chapter 2.
Chapter 2 Planning for Installation and Operation This chapter provides information for planning and preparing a site before and during installation of the Acer | HDS AMS200 subsystem. Please read this chapter carefully before beginning your installation planning. Note: The general information in this chapter is provided to assist in installation planning and is not intended to be complete.
This chapter provides information on the Fibre, NAS, and iSCSI models. The following table shows the sections that apply to each model.
- Fibre model: Connects the disk array subsystem to a host computer with a Fibre Channel interface.
- NAS model: Connects the NAS Unit, which is connected to the disk array subsystem, to a host computer with a LAN interface.
- iSCSI model: Connects the disk array subsystem to a host computer with an iSCSI interface.

Section                      Fibre  NAS  iSCSI
2.1  User Responsibilities     ○     ○     ○
2.2  Safety Precautions        ○     ○     ○
2.2 Safety Precautions When using the AMS200 disk array subsystem, follow these cautionary procedures: Perform operations in accordance with the instructions or procedures described in this manual. Follow the cautionary notes written on labels affixed to the equipment. Follow the cautionary notes written in this manual. This disk array is a class 1 laser system which does not emit a hazardous laser beam.
2.2.1 Symbol Marks The warning labels which appear on the subsystem and/or in this guide indicate potential safety hazards. When you see these symbols, observe the safety instructions that follow: This is the safety alert symbol. It is used to alert you to potential personal injury hazards. Obey all safety messages that follow this symbol to avoid possible injury or death. DANGER Indicates an imminently hazardous situation which, if not avoided, will result in death or serious injury.
2.2.3 Precautions for Using Equipment
Use special precautions for the following:
- Equipment
- Cables
- Air vents
- Battery unit
- Nickel-hydride rechargeable battery instructions
- Miscellaneous and other equipment

2.2.3.1 Equipment
If you notice unusual heat generation, odors, or smoke emission, shut off the power feed to the equipment and contact the Customer Engineer. Leaving such conditions unattended may result in hazardous physical conditions and equipment failure.
2.2.3.4 Battery Unit Observe the following when handling the battery: Do not disassemble or tamper with the battery. Do not allow the battery to be physically damaged. If the battery is physically damaged, have it replaced as soon as possible. Do not connect the two terminals of the battery directly to each other; this will create a short circuit. Do not tamper with cable insulation. Do not connect the battery to any equipment other than the AMS200 subsystem.
2.2.3.5 Nickel-Hydride Rechargeable Battery Instructions
These instructions explain what you must observe when you use a nickel-hydride rechargeable battery (hereafter referred to as the battery). If you use the battery incorrectly, it can overheat, ignite, burst, or explode, and its performance and service life can deteriorate. Read and follow the instructions below:
Danger
1. Do not disassemble the case; do not modify it or peel off the label.
11. Do not drive a nail into the battery or strike it with a hammer. The battery may be broken or dented, and a short circuit may occur inside. As a result, the battery may overheat, burst, or ignite.
12. Do not solder directly to the battery. The heat will melt the insulator and damage the safety fuse/mechanism. As a result, the battery may leak, overheat, burst, or ignite.
Warning
1.
2.2.4 Inspection and Cleaning Precautions
- If a maintenance activity requires that the unit be powered off, make sure that the power-off sequence described in the manual is performed before proceeding with maintenance.
- Do not work on the unit in a damp or flooded environment.
- Do not obstruct access to the unit with parts or tools.
- When performing work with the door open, take off metal watches or jewelry to prevent electric shock.
2.2.5 Emergency Precautions
Follow these emergency precautions for the following:
- Electric shock
- Fire

2.2.5.1 Electric Shock
Do NOT immediately touch a person struck by electricity; you could become the second victim. To shut off the electric flow to the victim, disconnect the power feed cable of the equipment. Even after this action, electricity may not be completely shut off. Separate the victim from the current source by using a non-conductive material such as a dry wooden bar.
2.2.6 Warning Notices
2.2.6.1 Caution Statements
Caution statements described in this manual and the pages where they appear are listed below. Caution statements are indicated by the caution symbol.

Table 2.1  Caution Statements
Warning Statement                                                      Corresponding Page
Cooling fans rotate at a high speed. Keep body parts and loose
clothing away from the cooling fans.                                   18
When cleaning, take care not to touch electrically charged parts.
Electric shock may result.

2.2.7
2.3 General Specifications and Requirements
This section describes the general specifications and requirements for the AMS200 subsystem. The following are included:
- Dimensions and weight
- Service clearance requirements
- Floor load rating
- Internal logic specifications
- Cable requirements

2.3.1 Dimensions and Weight
The following table shows the dimensions and weight of the AMS200 rack-mount model and the AMS200 floor model.

Table 2.3  Physical Specifications
2.3.2 Service Clearance Requirements The following figure shows the floor area required for installing the equipment. Install the equipment in a place with the area shown in the figure to avoid problems such as inadequate service clearance or insufficient ventilation. All distances in the following figure are stated in millimeters (mm).
2.3.3 Floor Load Rating
This section includes:
- Floor load rating for the AMS200 rack-mount model
- Floor load rating for the AMS200 floor model

2.3.3.1 Floor Load Rating for AMS200 Rack-Mount Model
In the maximum configuration, the rack-mount model can be configured with 1 RKS and 6 additional units (RKAJs/RKAJATs and RKNAS). The total weight of the subsystem in this configuration is 530 kg.
2.3.4 Internal Logic Specifications
The following table lists the internal logic specifications of the AMS200.

Table 2.6  Internal Logic Specifications of AMS200 Rack-Mount Model
Item         RKS                       RKNAS
Control CPU  PowerPC 7447A (500 MHz)   Intel LV-Xeon 2.
2.4 Environmental Specifications and Requirements
To maintain optimal AMS200 performance, the AMS200 subsystem must be installed in a proper environment. This section discusses the following environmental specifications and requirements:
- Temperature and humidity requirements
- Input power specifications
- Air flow requirements
- Vibration and shock tolerances
- Reliability
2.4.2 Temperature and Humidity Requirements
Table 2.8 lists the temperature and humidity requirements for the AMS200 subsystem.

Table 2.8  Environmental Specifications
Item          Specification
Temperature
Humidity
Altitude
Table 2.10  Input Power and Insulation Performance Specifications for the Floor Model
Item                                   Floor (RKS+H1J) Model / Floor (RKS+RKAJ+H2J) Model
Input voltage (V)                      AC 100/200 (100-120/200-240)
Frequency (Hz)                         50/60 ± 1
Number of phases, cabling              Single-phase with protective grounding
Steady-state current (A) (Notes 1, 2)  4.0×2 / 2.0×2
Breaking current (A)                   16.
2.4.4 Air Flow Requirements The AMS200 subsystem is air-cooled. Air must enter the subsystem through the airflow intakes at the front of each subsystem and must be exhausted out of the back. 2.4.5 Vibration and Shock Tolerances Table 2.12 lists the vibration and shock tolerance data for the AMS200 subsystem. The AMS200 can tolerate vibration and shock within these limits and continue to perform normally.
2.4.6 Reliability
The reliability of the AMS200 is described in the following tables. These reliability figures do not change even when the RKNAS is connected to the system.
Chapter 3 Powering On/Off Procedure
The disk drives may emit audible mechanical sounds when they are started (spun up) immediately after the subsystem is powered on, and when they are stopped (spun down) after it is powered off. This does not indicate a problem if the WARNING and ALARM LEDs of the basic frame are off; you may use the subsystem.
3.1 AMS200 Rack-Mount Model The following steps describe power on/off procedures for the AMS200 rack-mount model. Note: For information about the global rack-mount model, refer to the Acer | HDS AMS200 and WMS100 Global 19-Inch Rack Reference Guide. 3.1.1 Subsystem Power On Note: The EALM lamp (red) of the controller (on the rear side of the subsystem) may come on between subsystem power-on and Ready status. However, it is not a problem if the EALM lamp (red) goes out during this period of time.
3.1.2 Subsystem Power Off To power off the subsystem: 1. Turn off the main switch. 2. Verify that the POWER LED (green) on the panel of the RKS is off. 3. Turn off the AC power unit switch of the power unit. 4. When the RKNAS is mounted on the rack, turn off the AC Power Unit Switch of the power unit on RKNAS. 5. Turn off the circuit breaker (CB1) of the PDB.
NNC Status: Displays the status of the NAS OS.
○: Operation enabled  -: Operation disabled

Image Status  Description                                          Stop  Start  Restart
NEW           The NAS OS is not installed.                          -     -      -
INST          The NAS OS is in the installation process.            -     -      -
ACTIVE        The NAS OS is in operation and the Node is running.   ○     ○      ○
STOP          The NAS OS is normally stopped.                       -     ○      ○
DOWN          The NAS OS is abnormally stopped.                     -     ○      ○
BOOT          The NAS OS is in the start process.                   ○     -      ○
SHUTDOWN      The NAS OS is in the stop process.
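The status/operation matrix above can be sketched as a simple lookup table, which is handy when scripting checks around the subsystem. The dictionary below is an illustration transcribed from the table, not an API of the product; the SHUTDOWN row is truncated in the source and is therefore omitted.

```python
# Allowed operations per NAS OS image status, transcribed from the
# table above (True = operation enabled).
ALLOWED = {
    "NEW":    {"stop": False, "start": False, "restart": False},
    "INST":   {"stop": False, "start": False, "restart": False},
    "ACTIVE": {"stop": True,  "start": True,  "restart": True},
    "STOP":   {"stop": False, "start": True,  "restart": True},
    "DOWN":   {"stop": False, "start": True,  "restart": True},
    "BOOT":   {"stop": True,  "start": False, "restart": True},
}

def can(status: str, operation: str) -> bool:
    """Return True if the operation is enabled for the given image status."""
    return ALLOWED.get(status, {}).get(operation, False)

print(can("ACTIVE", "stop"))  # True
print(can("DOWN", "stop"))    # False
```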
3.2 AMS200 Floor Model The following steps describe power on/off procedures for the AMS200 floor model. 3.2.1 Subsystem Power On Note: The EALM lamp (red) of the controller (on the rear side of the subsystem) may come on between subsystem power-on and Ready status. However, it is not a problem if the EALM lamp (red) goes out during this period of time. To power on the subsystem: 1. Verify that the main switch is turned off. 2. Verify that the AC power unit switch of the power unit is turned off. 3.
3.2.2 Subsystem Power Off To power off the subsystem: 1. Turn off the main switch. 2. Verify that the POWER LED (green) on the panel is off. 3. Turn off the AC power unit switch of the power unit. Note: When the subsystem is to be stored without power for a long period, ask the Customer Engineer to turn off the battery of the subsystem. For details on how to store the subsystem, refer to Chapter 9.
Chapter 4 Subsystem Architecture and Components
This chapter includes the following:
- Configuration Block Diagrams
- Redundant Power Supplies
- Fibre Channel Interface
- NAS Interface
- iSCSI Interface
- Array Frames
- Component Names, Locations, and Functions

This chapter provides information on the Fibre, NAS, and iSCSI models. The following table shows the sections that apply to each model.
4.1 Configuration Block Diagrams
This section includes block diagrams for the following:
- AMS200 rack-mount model
- AMS200 floor model

4.1.1 AMS200 Rack-Mount Model
The configuration block diagrams of the rack-mount models are shown below. The RKS/RKAJ/RKAJAT can mount up to 15 disk drives. (The RKS has a controller that can control up to 105 disk drives as RAID.) The disk drives can be assigned as data disks, parity disks, or mirror disks, depending on the RAID level.
[Block diagrams: AMS200 rack-mount system configurations — the Fibre model (RKS with FC interfaces) and the NAS models (RKS with NAS interfaces and RKNAS units)]
Note: Disk drive: DF-F700-AGF72, DF-F700-AGH72, DF-F700-AGF146, DF-F700-AGH146, DF-F700-AGF300.
Figure 4.5  RKAJ System Configuration [block diagram]
Note: Disk drive: DF-F700-AGF72, DF-F700-AGH72, DF-F700-AGF146, DF-F700-AGH146, DF-F700-AGF300.
Figure 4.6  RKAJAT System Configuration [block diagram]
Note: Disk drive: DF-F700-ATE250R and DF-F700-ATE400R.
4.1.2 AMS200 Floor Model
The configuration block diagrams of the floor models are shown below. The Floor (RKS+H1J) Model accommodates up to 15 disk drives. The Floor (RKS+RKAJ+H2J) Model accommodates up to 30 disk drives. The disk drives can be assigned as data disks, parity disks, or mirror disks, depending on the RAID level. Up to 15 spare disks (Floor [RKS+H1J] Model: up to 1) can be mounted in any location within the configuration.
[Block diagrams: Floor (RKS+H1J) model system configurations (Fibre and other interface variants) and Floor (RKS+RKAJ+H2J) model system configurations]
4.2 Redundant Power Supplies Each AMS200 unit is powered by its own set of redundant power supplies, and each power supply is able to provide power for the entire RKS unit, should it become necessary. Because of this redundancy, the AMS200 subsystem can sustain the loss of a power supply and still continue operation.
4.3 Fibre Channel Interface The AMS200 subsystem supports open system operations. The AMS200 subsystem supports up to 2 Fibre Channel ports. Each AMS200 Fibre Channel interface is capable of operating at data transfer speeds of up to 200 MB/sec. The AMS200 extends to up to 4 Fibre Channel ports by adding an optional FC interface board. The AMS200 supports shortwave multimode optical cables.
4.3.2 Connection Specifications
4.3.2.1 When the FC Interface Board is Not Added
The host connector that can be used varies depending on the topology setting of the AMS200 and the destination of the Fibre Channel cable connection. The following table shows the available host connector for each topology setting and connection method.

Table 4.1
4.3.2.2 When the FC Interface Board is Added
The available Fibre Channel connection configuration varies depending on the topology setting of the AMS200 and the destination of the Fibre Channel cable connection. The following table shows the available Fibre Channel connection for each topology setting and connection method.

Table 4.2
4.3.3 Fibre Channel Configuration
4.3.3.1 When the FC Interface Board is Not Added
The following Fibre Channel information is not set separately for each host connector that connects to the AMS200 Fibre Channel interface:
- Port Address
- Topology
- Transfer Rate
- Adding a Host Group
- Host Group Options
- LU Mapping Information

4.3.3.2 When the FC Interface Board is Added
The host connectors that connect the AMS200 Fibre Channel interface cables each configure an independent port.
4.3.4.2 When the FC Interface Board is Added
One host connector configures one port.
Exclusive access to logical units: LU mapping can be processed for each port. Therefore, set the accessible logical units for each port using the LU mapping function.
Transfer rate of host: The transfer rate for the AMS200 is set for each port. The host connectors (side A and side B) can be connected respectively to hosts with different transfer rates.
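As an illustration of per-port LU mapping, the accessible logical units can be modeled as a simple table keyed by port. This is a conceptual sketch only — the port names and LU numbers below are hypothetical examples, not values taken from the product:

```python
# Hypothetical per-port LU mapping: each port exposes only the logical
# units explicitly mapped to it, mirroring the behavior described above.
lu_mapping = {
    "0A": {0, 1, 2},   # LUs visible through port 0A
    "0B": {3, 4},      # LUs visible through port 0B
}

def host_can_access(port: str, lu: int) -> bool:
    """A host on `port` can access `lu` only if it is mapped to that port."""
    return lu in lu_mapping.get(port, set())

print(host_can_access("0A", 1))  # True
print(host_can_access("0B", 1))  # False
```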
4.4 NAS Interface The AMS200 provides up to 8 LAN ports and supports 1000BASE-T (Gigabit LAN) and 100BASE-TX. The AMS200 supports transfer rates of up to 100 Mbytes/s (1000BASE-T) and 10 Mbytes/s (100BASE-TX), and controls data transmission using the CSMA/CD method. Note: Refer to D.2, Ethernet Connection Specifications, for the switches and other equipment supported with this subsystem.
4.5 iSCSI Interface The AMS200 provides 4 iSCSI ports when an optional iSCSI interface board is added. The iSCSI interface is capable of operating at a data transfer speed of up to 100 Mbytes/s. The AMS200 supports Ethernet (1000BASE-T). With an iSCSI HBA, or a generic NIC with a software initiator, and a network switch, the AMS200 subsystem can be located up to 100 meters from the host. Connect the switch in 1000BASE-T full-duplex mode. Use LAN cables of the following types and shapes.
4.6.1 AMS200 Rack-Mount Model Each RKS unit contains the physical disk drives, including the disk array groups and the dynamic spare disk drives. Each rack frame has dual power plugs, which should be attached to two different power sources or power panels. The AMS200 can be configured with 1 RKS and up to 6 RKAJ units for a total of 105 disk drives, with a maximum capacity of 28.1 Tbytes in RAID5 (14D+1P) (using the 287.6 Gbyte disk drive).
Note 1: This value of storage capacity is calculated as 1 Gbyte = 1,000,000,000 bytes. (This definition is different from 1 Kbyte = 1,024 bytes.) Note 2: When FC interface board is not added, one port configures one Mini-HUB, and extends to two host connectors. When FC interface board is added, control unit implements two ports and two host connectors. One port configures FC interface independent of another port, and implements one host connector.
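As a sketch of where the 28.1 Tbyte maximum comes from, assuming the 105 drives form seven RAID5 (14D+1P) groups and using the decimal gigabyte defined in Note 1:

```python
# Rough check of the maximum rack-mount capacity quoted above.
# 105 drives form seven RAID5 (14D+1P) groups; only the 14 data
# drives per group contribute usable capacity.
drive_gbytes = 287.6          # per-drive capacity, 1 Gbyte = 10**9 bytes
groups = 105 // 15            # 14 data + 1 parity drives per group
data_drives = groups * 14
total_tbytes = data_drives * drive_gbytes / 1000
print(groups, data_drives)            # 7 98
print(round(total_tbytes, 4))         # 28.1848 — the guide rounds this down to 28.1
```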
4.6.2 Floor Model Each floor model contains physical disk drives, including the disk array groups and the dynamic spare disk drives. Additionally, each floor model has dual power plugs, which should be attached to two different power sources or power panels. The Floor (RKS+H1J) Model can be configured with 15 disk drives at a maximum of 4.0 Tbytes RAID5 (using the 287.6 Gbyte disk drive). The Floor (RKS+RKAJ+H2J) Model can be configured with 30 disk drives at a maximum of 8.0 Tbytes RAID5 (using the 287.6 Gbyte disk drive).
Note 2: When the FC interface board is not added, one port configures one Mini-HUB, and extends to two host connectors. When the FC interface board is added, control unit implements two ports and two host connectors. One port configures FC interface independent of another port, and implements one host connector. Note 3: When the FC interface board is added, the interface type supports 4 Gbps Fibre Channel Optical (Non-OFC).
4.7 Component Names, Locations, and Functions
4.7.1 Front Bezel Component Locations and Functions
This section illustrates and describes the locations and functions of the front bezel components.

Figure 4.14  RKS, RKA, and RKA/RKAJAT Front Bezel Component Locations
[Front bezel labels: READY LED (green), POWER LED (green), WARNING LED (orange), BUZZER OFF SW, ALARM LED (red), main switch ON/OFF]
Note 1: Low-speed blinking: one blink per second.
Note 2: High-speed blinking: eight blinks per second (four blinks per 500 ms, then off for 500 ms).

Figure 4.15  RKNAS Front Bezel Component Locations
[Front bezel labels: WARNING LED (orange), READY LED (green), ALARM LED (red), POWER LED (green)]

Table 4.6  RKNAS Front Bezel Component Functions
Name             Function
ALARM LED (red)  Indicates that a failure has occurred which makes the RKNAS inoperable.
4.7.2 Component Locations
The locations of the RKS, RKAJ, and RKAJAT components are shown in the following diagrams:

Figure 4.16  RKS Component Locations
[Labels: Fan Assembly, Disk Drive, Backup Battery Unit, Control Unit, Panel Assembly, Power Unit; RKS front and rear views]

[Figure: RKAJ component locations — Disk Drive, Subsystem Identification Switch (Note 2), ENC Unit, ID Switch (Note 1), Power Unit (RKAJ); RKAJ front and rear views]
Note2: The switch has been set on the S side. Note3: Sets the device ID of the RKAJAT.
4.7.3 Switch Locations and Functions
This section illustrates and describes the locations and functions of switches in the following hardware components: panel assembly, backup battery unit, power unit, and RKNAS.
4.7.3.1 Panel Assembly
Figure 4.19 Panel Assembly Switch Location: BUZZER OFF SW, mode switch, main switch.
4.7.3.2 Backup Battery Unit
Figure 4.20 Backup Battery Unit Switch Location: battery switch.
Table 4.8 Backup Battery Unit Switch Functions: Battery Switch: turns the battery power on/off. When this switch is set to off, the WARN LED comes on and the buzzer sounds.
4.7.3.3 Power Unit
Figure 4.21 Power Unit Switch Locations: AC power unit switch on the Power Unit (RKS) and on the Power Unit (RKAJ/RKAJAT).
4.7.3.4 RKNAS
Figure 4.22 RKNAS Switch Locations: AC power unit switch, RESET.
Table 4.10 RKNAS Switch Functions: AC Power Unit Switch: controls the power applied to the RKNAS. RESET: used to reset the RKNAS.
4.7.4 Connector Locations and Functions This section illustrates and describes the locations and functions for connectors in the following hardware components: 4.7.4.
4.7.4.2 Power Unit
Figure 4.24 Power Unit Connector Locations: Receptor (J1) on the Power Unit (RKS) and on the Power Unit (RKAJ/RKAJAT) (Note). Note: The additional battery unit is only available in Japan.
4.7.4.3 Control Unit
Figure 4.25 Control Unit Connector Locations: FC connector (Port 0A-1/Port 1A-1), FC connector (Port 0A-0/Port 1A-0), LAN connector, NAS interface board, and FC interface board with FC connector (Port 0B-0/Port 1B-0) and FC connector (Port 0A-0/Port 1A-0).
4.7.4.4 RKNAS
Figure 4.26 RKNAS Connector Locations: FC port, PCI-E, gbe 1 through gbe 4, mng 1, mtp 1, Receptor (J1), CTRL connection to the other NNC, and the disk array connection.
Table 4.14 RKNAS Connector Functions: Receptor (J1): power cable receptacle on the RKNAS side. gbe 1 through gbe 4: connectors used to connect the LAN cables gbe 1 to gbe 4.
4.7.5 LED Locations and Functions
This section illustrates and describes the locations and functions of LEDs in the following hardware components: disk drive display, battery backup unit, ENC unit, SENC unit, power unit, fan assembly, control unit, and RKNAS.
4.7.5.1 Disk Drive Display (RKS)
Figure 4.27 Disk Drive Display (RKS) LED Locations: WARN LED (orange), PWR LED (green), HDD ACTIVE LED (green), HDD ALARM LED (red).
Table 4.15 Disk Drive Display LED Functions LED Function HDD ACTIVE LED (green) When on or flashing, it indicates that the disk drive is operational. HDD ALARM LED (red) When on, it indicates that a failure occurred in the disk drive; the disk drive is inoperable. ALARM LED (red) Lighting: When on, it indicates that a failure occurred in the unit; the unit is inoperable. Blinking: Low-speed blinking (Note 1): Indicates that a serious failure has occurred while the power is on.
4.7.5.2 Disk Drive Display (RKAJ, RKAJAT)
Figure 4.28 Disk Drive Display (RKAJ, RKAJAT) LED Locations: HDD ACTIVE LED (green), PWR LED (green), WARN LED (orange), HDD ALARM LED (red).
Table 4.16 Disk Drive Display (RKAJ, RKAJAT) LED Functions: HDD ACTIVE LED (green): when on or flashing, it indicates that the disk drive is operational. HDD ALARM LED (red): when on, it indicates that a failure occurred in the disk drive; the disk drive is inoperable.
4.7.5.4 ENC Unit
Figure 4.30 ENC Unit (RKAJ) LED Locations: CHK LED (red), P1 LED (green), P0 LED (green), ALM LED (red).
Table 4.18 ENC Unit LED Functions: P1 LED (green): when on, it indicates that the link status of FC-AL (loop 1 side) is normal. P0 LED (green): when on, it indicates that the link status of FC-AL (loop 0 side) is normal. ALM LED (red): when on, it indicates that a failure has occurred in the ENC Unit, so the ENC Unit is inoperable.
4.7.5.5 SENC Unit
Figure 4.31 SENC Unit LED Locations: CHK LED (red), P0 LED/P1 LED (green), ALM LED (red).
Table 4.19 SENC Unit LED Functions: P0 LED/P1 LED (green): when on, it indicates that the link status of FC-AL (loop 0 or loop 1 side) is normal. ALM LED (red): when on, it indicates that a failure has occurred in the SENC Unit. CHK LED (red): after the power is turned on, it blinks for about 10 seconds (while CUDG is being executed).
4.7.5.6 Power Unit
Figure 4.32 Power Unit LED Locations: READY LED (green) and ALARM LED (red) on the Power Unit (RKS) and the Power Unit (RKAJ/RKAJAT).
Table 4.20 Power Unit LED Functions: READY LED (green): when on, it indicates that the power unit is operating normally. ALARM LED (red): when on, it indicates that the power unit is abnormal or in a stopped state.
4.7.5.7 Fan Assembly
Figure 4.33 Fan Assembly LED Locations: ALARM LED (red).
4.7.5.8 Control Unit
Figure 4.34 Control Unit LED Locations: CACHE POWER LED (green), GP1 LED (green), P1 LED (green), CHKSTP LED (red), CHK LED (red), CALM LED (red), P0 LED (green), RST LED (orange), GP0 LED (green), EALM LED (red); FC interface board: GP1 LED (green); iSCSI interface board: Active LED (yellow).
Table 4.22 Control Unit LED Functions: CHKSTP LED (red): when on, it indicates that a failure has occurred in the controller (CTL side), so the controller is inoperable. Link (green): when on, it indicates that the link status is normal. Active (yellow): when on, it indicates that data is being transferred. Note 1: Normal blinking: on (500 ms), off (500 ms). Note 2: High-speed blinking (EALM LED): on (100 ms), off (100 ms). Note 3: Low-speed blinking: on (500 ms), off (500 ms), blinks n times.
4.7.5.9 RKNAS
Figure 4.35 RKNAS LED Locations (front): READY LED (green), POWER LED (green), WARNING LED (orange), ALARM LED (red), Mask A (Note), Mask B (Note). Note: Mask A or Mask B has been affixed.
Table 4.23 RKNAS LED Functions (front): ALARM LED (red): indicates that a failure has occurred which makes the RKNAS inoperable.
Figure 4.36 RKNAS LED Locations (rear): LINK/ACT and 10/100/1000 LEDs for the GbE ports, LINK/ACT and 10/100 LEDs for the management port, GP1 LED (green), GP0 LED (green), READY LED (green), ALARM LED (red). Note: Mask A or Mask B has been affixed.
Table 4.24 RKNAS LED Functions (rear): LINK/ACT: indicates that the LAN for management is linked or transferring data.
92 Chapter 4 Subsystem Architecture and Components
Chapter 5 Functional and Operational Characteristics
This chapter includes a description of the following:
– New AMS200 Features and Capabilities
– RAID Implementations
– Cache Management
– Logical Units
– Open System Features and Functions
– Data Management Features and Functions
– Copy Solution Features and Functions
– Performance Management Features and Functions
– NAS Features and Functions
– iSCSI Features and Functions
This chapter provides information on the Fibre, NAS, and iSCSI models. The following table lists the sections that apply to each model.
– Fibre model: connects the disk array subsystem to a host computer with a Fibre Channel interface.
– NAS model: connects the NAS unit, which is attached to the disk array subsystem, to a host computer with a LAN interface.
– iSCSI model: connects the disk array subsystem to a host computer with an iSCSI interface.
5.1 New AMS200 Features and Capabilities
The Hitachi AMS200 subsystem offers the following new or improved features and capabilities, which distinguish the AMS200 subsystem from the 9200 subsystem:
– Up to 15 spare disks installable (floor RKS+H1J model: up to 1).
– 512 logical unit numbers maximum.
– Multiple parity groups allocatable for one RAID group.
– 25 RAID groups maximum.
– The drive interface supports 2 Gbps Fibre Channel.
5.2 RAID Implementations
The AMS200 subsystem supports RAID 0, RAID 1, RAID 5, RAID 6, and RAID 1+0. The RKAJAT does not support RAID 0. A RAID 0 group stripes data across all disk drives in the group to attain higher throughput. There is no spare disk drive function with this configuration.
The RAID specifications are shown in the following table:
Table 5.1 Rack-Mount and Floor Model RAID Specifications
RAID level: RKS and RKAJ: 0/1/5/6/1+0. RKAJAT: 1/5/6/1+0.
RAID configuration (unit of addition):
– RAID 0: 2D~16D
– RAID 1: 1D+1D
– RAID 5: 2D+1P~15D+1P
– RAID 6: 2D+2P~15D+2P
– RAID 1+0: 2D+2D~8D+8D
Note: For information about the global rack-mount model, refer to the Acer | HDS AMS200 and WMS100 Global 19-Inch Rack Reference Guide.
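As an illustrative aid (not part of the original guide), the nD+mP layouts in Table 5.1 determine how many disks a group consumes and how much capacity remains usable. The sketch below encodes those layouts; the function names and the 287.6 GB default drive size are assumptions for illustration only:

```python
def raid_layout(level, data_disks):
    """Return (total_disks, usable_disks) for the nD(+mP) layouts in Table 5.1."""
    if level == "RAID0":
        return data_disks, data_disks          # 2D~16D, striping, no redundancy
    if level == "RAID1":
        return 2 * data_disks, data_disks      # 1D+1D mirror pair
    if level == "RAID5":
        return data_disks + 1, data_disks      # nD+1P: one disk's worth of parity
    if level == "RAID6":
        return data_disks + 2, data_disks      # nD+2P: two disks' worth of parity
    if level == "RAID1+0":
        return 2 * data_disks, data_disks      # nD+nD: mirrored stripes
    raise ValueError(f"unknown RAID level: {level}")

def usable_gb(level, data_disks, drive_gb=287.6):
    """Approximate usable capacity in GB for a group of the given layout."""
    _total, usable = raid_layout(level, data_disks)
    return usable * drive_gb
```

For example, a 15-disk RAID 5 group (14D+1P) yields roughly 14 x 287.6 GB, about 4.0 TB, matching the floor-model maximum quoted in section 4.6.2.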
5.3 Cache Management
Cache management features include the following: Data is stored in cache when reading and writing; it is dynamically managed, depending on the read and write I/O characteristics of the workload. A high cache hit rate is expected, due to transaction processing (data is updated after it is referenced). System throughput is increased by the reduced data writing time.
5.4 Logical Units (LUs)
The AMS200 supports up to 512 LUNs. Each LU is identified by a Fibre Channel port ID and a LUN. However, up to 256 LUs can be assigned to a host group. Each port ID must be unique and within the range from 0 to EF (hexadecimal). LUN 0 to LUN 255 can be presented per host group and Fibre Channel port (Mini-HUB), drawn from LU 0 to 511 at the time of LU mapping.
Figure 5.1 Logical Units (host and other Fibre subsystems connected to the Fibre Channel ports).
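The addressing constraints described above (unique port IDs in the 0 to EF hexadecimal range, at most 256 LUs per host group drawn from LUN 0 to 511) can be expressed as a small validation sketch. These helper names are hypothetical and not part of any AMS200 tool:

```python
def valid_port_id(port_id_hex):
    """Port IDs must be in the range 0x00-0xEF (hexadecimal), per the text above."""
    try:
        value = int(port_id_hex, 16)
    except ValueError:
        return False
    return 0x00 <= value <= 0xEF

def valid_host_group_mapping(luns):
    """A host group may map at most 256 LUs, each drawn from LU 0-511."""
    luns = set(luns)
    return len(luns) <= 256 and all(0 <= lu <= 511 for lu in luns)
```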
Figure 5.3 Logical Units (with the iSCSI interface board added to the control unit): hosts and other subsystems connect by IP address to the iSCSI ports. Each port ID must be unique and within the range from 0 to EF (hexadecimal). LUN 0 to LUN 255 can be presented per host group and port, drawn from LU 0 to 511 at the time of LU mapping.
When the AMS200 is used as a NAS model, nine logical units need to be assigned as system LUs. A maximum of 503 logical units can be set as user LUs.
5.5 Open Systems Features and Functions
The AMS200 subsystem offers many features and functions specifically for the open-systems environment. The AMS200 subsystem also supports important open-system functions, such as Fibre Channel arbitrated loop (FC-AL) and fabric topologies, command tag queuing, multi-initiator I/O, and most industry-standard software and middleware products, which provide host fail-over, I/O path fail-over, and logical volume management functions.
5.6 Data Management Features and Functions
These features include:
– Cache Residency Manager Function
– LUN Manager Function
– Data Retention Utility Function
– LUN Expansion Function
– Password Protection Function
5.6.1 Cache Residency Manager Function
The Cache Residency Manager function ensures that all data in an LU is stored in cache memory. All read/write commands to the LU can be executed with a 100% cache hit rate, without accessing the drive.
5.6.4 LUN Expansion Function
The LUN Expansion function expands the size of a logical unit (volume) accessed by a host computer by internally combining multiple logical units (volumes).
5.6.5 Password Protection Function
The Password Protection function restricts the Storage Navigator Modular users who are allowed to access a disk array subsystem; it also prevents simultaneous access from multiple users.
5.7.3 NAS Backup Restore Modular Function
The NAS Backup Restore Modular function protects data that is shared in the NAS Modular system. It provides the following functions to protect data:
– Snapshot Function
– Backup Restore Function
5.7.4 NAS SyncImage Modular Function
The NAS SyncImage Modular function creates a snapshot that enables data to be recovered to the state that existed prior to changes.
5.8 Performance Management Features and Functions
These features include:
– Performance Monitor function
– Cache Partition Manager function
5.8.1 Performance Monitor Function
The Performance Monitor acquires information about the performance of RAID groups, logical units, and other resources of the subsystem. It also acquires utilization rates of resources such as the hard disk drives and processors built into the subsystem. This information is displayed as line graphs in the monitor.
5.9 NAS Features and Functions
The AMS200 and RKNAS combination enables different servers connected via the LAN to share data easily, using the NFS/CIFS protocols over the LAN (GbE) interface. The AMS200 supports the following functions:
5.10 iSCSI Features and Functions
1 Gbps iSCSI is supported by adding the iSCSI interface board to the AMS200. The AMS200 supports the following functions:
– CHAP Authentication
– iSNS Client
5.10.1 CHAP Authentication
User authentication is performed for each target.
5.10.2 iSNS Client
The iSNS client function enables you to easily use iSCSI device discovery and state change notification on the network.
Chapter 6 Configuring the AMS200 Subsystem
This chapter includes the following:
– Overview of Configuration
– Configuring the LAN Interface of the AMS200 Subsystem
– Configuring the AMS200 Subsystem
– Registering the AMS200 Subsystem for Control by Storage Navigator-Modular
– Configuring the AMS200 Subsystem for the Desired Application
– General Configuration of the AMS200 Subsystem
This chapter provides information on the Fibre, NAS, and iSCSI models.
6.1 Overview of Configuration This section includes the following information on configuration: 6.1.
6.1.3 Fibre Channel Interface Addressing
Each Fibre Channel port is assigned a target ID by addressing its port ID. The AMS200 can address up to 256 logical unit numbers for one port. The host computer accesses a logical unit by identifying the disk array subsystem port with its target ID and specifying the required logical unit number. The following figure illustrates Fibre Channel port addressing and logical unit number assignment.
6.1.3.2 When the FC Interface Board is Added
The AMS200 host interface implements two ports and two host connectors. One Fibre Channel port configures the FC interface independently of the other Fibre Channel port, and implements one host connector. Each port ID must be unique and within the range from 0 to EF (hexadecimal).
6.1.4 iSCSI Interface Addressing
The AMS200 supports four iSCSI ports when the iSCSI interface board is added. Each iSCSI port is assigned a target ID by addressing its port ID. The AMS200 can address up to 256 logical unit numbers for one port. The host computer accesses a logical unit by identifying the disk array subsystem port with its target ID and specifying the required logical unit number. The following figure illustrates iSCSI port addressing and logical unit number assignment.
6.1.5 Alternate Pathing
The user should plan for alternate pathing to ensure the highest data availability. The AMS200 provides up to two Fibre Channel ports to accommodate alternate pathing for host attachment. The following figure shows a sample of alternate pathing.
6.1.5.2 When the FC Interface Board is Added to the Control Unit
Figure 6.5 shows automatic path switching: Host A (active) and Host B (standby) connect over Fibre Channel adapters 0 and 1 to ports 0A/0B and 1A/1B of the AMS200 (LU0, LU1). When a failure occurs on one Fibre cable, the path is switched automatically, so host switching is not required for a host capable of switching the path.
6.1.6 NAS Configuration
NAS Modular system operation management software includes management software on the PC (Storage Navigator Modular, NAS Setup, and a Web browser) and NAS Manager Modular. Operation management on the disk array side (NAS OS, disk array subsystem) from the NAS OS (NAS File Sharing Modular, NAS Data Control Modular) is performed by maintenance software on the PC. Operation management on the host side (file system, fail-over function) from the NAS OS is performed by NAS Manager Modular.
6.2 Configuring LAN Interfaces of the AMS200 Subsystem
The negotiation mode (10M/100M, half-duplex/full-duplex) of the LAN ports for user management (used by Hi-Command, Storage Navigator Modular, etc.) and for maintenance of the control unit supports auto-negotiation mode only. Therefore, connect the control unit to a network device that supports auto-negotiation mode, and set auto-negotiation mode on the network card and the network switch connected to the control unit.
6.3 Configuring the AMS200 Subsystem The following steps must be performed to configure the disk array: 1. Verify that the subsystem is connected to the LAN. 2. Install Storage Navigator-Modular on the system that will be used as the management PC/Server. See the Acer | HDS Adaptable Modular Storage and Workgroup Modular Storage Storage Navigator Modular Graphical User Interface (GUI) User’s Guide to use a GUI interface.
6.4 Registering the AMS200 Subsystem for Control by Storage Navigator-Modular To operate the array unit from Storage Navigator-Modular, register the array unit. You cannot temporarily register a non-existing array unit: 1. From the Edit menu, click Add Automatically. 2. On the Add Array Unit Automatically dialog box, enter the IP address for the From: and To: boxes of the IP Addresses to Search of Search Array Unit. Click Start. 3. The result of the search displays.
6.5 Configuring the AMS200 Subsystem for the Desired Application
Before configuring the AMS200, make sure that you know the following:
– The required RAID level, based on performance and pricing criteria.
– The number and size of LUNs you wish to create.
– The controller path you wish to use to access the data on the LUNs.
– Whether any special options need to be set that are specific to the host platform(s) being used.
6.6 AMS200 Subsystem General Configuration
Activating Management mode in Storage Navigator-Modular enables you to perform a general configuration of the AMS200 subsystem. Before it is possible to configure the AMS200, Management mode must be enabled in Storage Navigator-Modular; otherwise, it is only possible to monitor the status of the AMS200. To enable Management Mode: 1. From the Tools menu, click Operation Mode, and then click Set Password on the Main screen. 2.
Chapter 7 Configuring Storage on the AMS200 Subsystem
The process of configuring storage on the AMS200 subsystem involves the following sub-processes:
– Software Composition
– Setting Fibre Channel Information
– Setting iSCSI Information
– Determining Space and RAID Level Requirements
– Setting Host Group Information
– Setting Target Information
– Setting CHAP Authentication Information
– Transferring Configurations from One Array to Another
– Storing Configuration Data
– Applying Configuration Data
This chapter provides information on the Fibre, NAS, and iSCSI models. The following table lists the sections that apply to each model.
– Fibre model: connects the disk array subsystem to a host computer with a Fibre Channel interface.
– NAS model: connects the NAS unit, which is attached to the disk array subsystem, to a host computer with a LAN interface.
– iSCSI model: connects the disk array subsystem to a host computer with an iSCSI interface.
Section 7.1.1 Microprogram applies to the Fibre, NAS, and iSCSI models.
7.1 Software Composition
This section includes the following:
– Microprogram
– System parameters
– Configuration information
– SNMP information
– Storage for parameters
7.1.1 Microprogram
A microprogram controls the basic hardware operations that accompany the execution of instructions performed by a CPU. The version of the microprogram is given in the numerical format xxxxx/xx. Microprogram version 07xxx/xx (where x is an arbitrary digit) is available.
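The xxxxx/xx version format can be checked mechanically. The sketch below is hypothetical; the guide does not define the character classes precisely, so alphanumeric placeholders are assumed:

```python
import re

# Assumed shape for the "xxxxx/xx" format described above: five alphanumeric
# characters, a slash, then two alphanumeric characters.
VERSION_RE = re.compile(r"^([0-9A-Za-z]{5})/([0-9A-Za-z]{2})$")

def parse_microprogram_version(text):
    """Split a version string such as '07000/00' into (base, revision)."""
    m = VERSION_RE.match(text)
    if not m:
        raise ValueError(f"not an xxxxx/xx version string: {text!r}")
    return m.group(1), m.group(2)

def is_07_family(text):
    """True when the base version starts with '07', as described above."""
    base, _rev = parse_microprogram_version(text)
    return base.startswith("07")
```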
7.1.5 Storage for Parameters
The storage areas where the parameters on the controller are stored are described in the following table:
Table 7.1 Storage for Parameters
No. 1: Parameter: Fixed Part Program. Storage: Flash Memory (and backup FD). Description: The parameters are stored in flash memory. No provision of storage against a power shut-off is required for these parameters, because flash memory retains its contents when power is shut off.
7.2 Setting Fibre Channel Information Follow the steps below to set and display fibre channel information: The Fibre Channel information setting is performed in Management mode of the Storage Navigator Modular. Therefore, the operation mode of Storage Navigator Modular needs to be switched from Normal mode to Management Mode. (Refer to section 6.6.) In Normal mode, you can only monitor the status of AMS200, but you cannot change the settings. Back up all data before performing this procedure.
7.3 Setting iSCSI Information
7.3.1 Setting iSCSI Port Information
To set iSCSI port information, follow these steps: Back up all data before performing this procedure. (If a mistake in operation is made, user data in the subsystem can be lost.) 1. Turn on the power supply. Note: If the power supply has already been turned on, proceed to the next step. 2. Start the Storage Navigator - Modular program and set the operation mode in Management Mode. 3.
8. Click the Apply button. 9. A confirmation message appears. After verifying that the I/O operation initiated by the host has stopped, click the OK button. 10. A message appears, stating that the setting is completed. Click the OK button. 7.3.2 Setting the iSNS Server Information iSNS (Internet Storage Name Service) provides the same function as the Name Server of the Fabric Switch on the Fibre Channel interface. The disk array subsystem registers the iSCSI port information on the iSNS Server.
4. A confirmation message is displayed. Click the OK button.
7.3.3 Sending a Ping
To send a ping to the initiator (host) and display the result, follow these steps: 1. On the Tools menu, click Configuration Settings, or click the Configuration Settings button on the toolbar. 2. Click the Ping tab.
– Port: Select the port from which to send the ping.
– Destination IP Address: Specify the IP address of the initiator.
3. Click the Start button.
4. The following message appears. Click the OK button. The result is displayed. 5. As necessary, click the Refresh button to display the latest information.
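The Ping tab performs a reachability check against the initiator. An equivalent check can be run from any management host with the standard ping utility; the sketch below only builds the command line (the -n/-c count flags differ between Windows and Unix-like systems) and is not an AMS200 interface:

```python
import platform
import subprocess

def build_ping_cmd(ip, count=4):
    """Build a ping command for the current OS (-n on Windows, -c elsewhere)."""
    flag = "-n" if platform.system() == "Windows" else "-c"
    return ["ping", flag, str(count), ip]

def initiator_reachable(ip, count=4):
    """Return True when the initiator answers the echo requests."""
    result = subprocess.run(build_ping_cmd(ip, count),
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return result.returncode == 0
```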
7.4 Determining Space and RAID Level Requirements
This process depends on the customer requirements; however, Acer | HDS recommends certain configuration guidelines that provide good performance and adequate protection of data integrity in most circumstances. This function can be used in the device ready state (Read/Write cannot be executed from the host during the operation; when a host command is received, Not Ready is reported to the host computer).
7.4.1 Setting a Spare Disk To set a spare disk, follow these steps: 1. Turn on the power supply. Note: If the power supply has been turned on, proceed to the next step. 2. Start the Storage Navigator-Modular program, and set the operation mode to Management Mode. 3. Double-click the icon of an array unit in the Main window. Once the array unit information displays, select the Settings menu. 4. On the Settings menu, select Display Details or click Display Details on the toolbar. 5.
7. Click Set. The Spare Drives dialog box displays: 8. Select the HDU that you want to set as a spare drive from the Available Drives list, and then click the button. The selected HDU is moved to the Drives to Set list: 9. Click OK. 10. A message indicating that the setting is complete displays. Click OK.
11. A Result window displays indicating the setting is completed.
7.4.2 Canceling a Spare Disk Setting To cancel a spare disk setting: 1. Select the Logical Unit tab on the Unit window. 2. Select the Spare Drives. 3. Select the spare drive to be canceled, and then click Release. 4. The confirmation message for spare drive canceled displays. Click OK. 5. A message displays stating the setting is complete. Click Close.
7.4.3 Setting a RAID Group Note: It is recommended that you set at least four RAID Groups for the RAID group used when the NAS unit is connected. This sets the NAS system LU for the usual operation, the NAS system LU for backup, and the NAS user LU into another RAID Group. To set a RAID group, follow these steps: 1. Turn on the power supply. Note: If the power supply has been turned on, proceed to the next step. 2. Start the Storage Navigator-Modular program, and set the operation mode to Management Mode.
Figure 7.3 Logical Status Tab (NAS)
Figure 7.4 RAID Group Dialog Box
7.4.4 Deleting a RAID Group All user data on all LUNs will be lost if all RAID groups are deleted. Back up the user data before performing this operation. The unified LU cannot be unified or split unless the LU unifying function (a priced option) is validated. When a unified LU is defined, the RAID group cannot be deleted. Delete the RAID group after splitting all the unified LUs in the RAID group. For the procedure for splitting a unified LU, refer to the LU Unifying Function User's Guide.
When a logical unit exists in the RAID Group:
7.4.5 Setting a Logical Unit
Note: When the AMS200 is connected to the NAS unit, you can create settings for the system LU only, in accordance with the following restrictions:
2. A created logical unit number displays for the Logical Unit No., and the RAID group number in which logical units are defined displays for the RAID Group. Additionally, the logical unit capacity that can be created displays. Note: To specify a size explicitly in figures, select a unit (GB, MB, or Block) and specify the size to be allocated as a decimal number. The subsystem can be divided into a maximum of 2,048 logical units.
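Converting between the GB, MB, and Block units mentioned above can be sketched as follows; the 512-byte block size is an assumption for illustration, since the guide does not state it explicitly:

```python
BLOCK_BYTES = 512  # assumed block size; not stated in this guide

def size_to_blocks(size, unit):
    """Convert a decimal size in GB, MB, or Block to a block count."""
    factors = {
        "Block": 1,
        "MB": (1024 * 1024) // BLOCK_BYTES,        # 2,048 blocks per MB
        "GB": (1024 * 1024 * 1024) // BLOCK_BYTES, # 2,097,152 blocks per GB
    }
    if unit not in factors:
        raise ValueError(f"unit must be GB, MB, or Block: {unit}")
    return int(size * factors[unit])
```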
When no formatted logical unit exists:
When formatted logical unit exists:
7.4.7 Formatting a Logical Unit
Note 1: Operations for host installation can be performed on a logical unit that is being formatted in the background. However, if data in cache memory is lost because the subsystem is powered off during formatting, the logical unit becomes unformatted and data can be lost. In that case, the host installation operation should be performed again from the first step for that logical unit.
To format a logical unit, follow these steps: 1. Click the icon of a logical unit in the Unit window. On the Settings menu, select Logical Unit, and then click Format. Note 1: When you select multiple logical units, hold down the Ctrl key and click the icons of the logical units to format. When a logical unit is incorrectly specified, press the Cancel button and redo processing by selecting a logical unit to be reformatted.
The progress rate of the formatting process in the background displays in the Status box. The progress rate of the formatting process is not displayed automatically.
To confirm the latest progress rate, refresh the display by clicking Refresh. 3. Normal displays in the Status box. (When formatting is in progress, the progress status displays.) If formatting terminates abnormally, review the results. The formatted logical unit information is updated and the window displays.
Table 7.2 Formatting Messages
Message Displayed: 02-xxxx, 03-xxxx, 04-xxxx, or 0B-xxxx. Action to be Taken: For these codes, a hardware fault is assumed.
7.4.8 Changing the Format Mode
This mode sets the priority between host access and formatting for formats performed in the background. To set the Format Priority Mode, follow these steps: 1. On the Tools menu, select Configuration Settings, or click the Configuration Settings button on the toolbar. 2. Click the Format Mode tab. 3. Click the desired radio button for the Format Priority Mode. The following table lists and describes the operation of each mode.
In the following cases, do not set the Format Priority Mode to Format; it may cause a significant deterioration in host access performance or a command time out.
7.4.9 Changing the Default Controller in Charge of an LU Note: The controller in charge of a default LU can be changed only for the dual active mode configuration of a dual system. To change the controller in charge of a default LU, follow these steps: 1. Turn on the power supply. Note: If the power supply has already been turned on, proceed to the next step. 2. Start the Storage Navigator-Modular program and set the operation mode in Management Mode. 3.
7.5 Setting Host Group Information In the AMS200, the Host Connection Mode, the mapping information of Logical Unit, and LUN security information are set to the group of hosts, not to the host. This enables you to select the host computer to which the subsystem is connected depending on each group of hosts. For host groups, only the 000:G000 is supported. Up to 128 host groups can be set when the LUN Manager is used. The host group Information does not need to be set for the NAS system. 7.5.
5. A confirmation message displays. Click OK. 6. On the Unit window, double-click Host Groups, and then double-click the Port that you want to set for the connection mode with the host. Display 000:G000 by double-clicking the Port.
8. Click Modify Mapping. The Mapping dialog box displays: 9. Select one H-LUN from the H-LUN list, select the LUN that you want to map to the H-LUN from the Available Logical Units list, and then click the button. The selected H-LUN and LUN will be moved to the Mapping Information list. 10. Repeat step 9 to complete the Mapping Information list. 11. A confirmation message displays. Click OK.
The mapping information is updated and the following window displays: 12. Make the setting for the other ports in the same way as described previously.
7.6 Setting Target Information
In the AMS200, when the iSCSI interface is added, the Host Connection Mode, the mapping information of the logical units, LUN security, and the iSCSI user information for authentication are set for the targets, not for the ports. This enables you to select the host computer to which the subsystem is connected, depending on each target. For targets, only 000:T000 is supported. Up to 128 targets can be set when the LUN Manager (an extra-cost optional feature) is used.
The Target dialog is displayed. 5. In the Target dialog, enter the Alias and iSCSI Name. 6. Select the authentication method from the drop-down list.
– Alias: Enter the alias of the target, using 32 or fewer alphanumeric characters (excluding \, /, :, ,, ;, *, ?, ", <, >, |, and '). Spaces at the beginning or end are ignored. An identical name cannot be used within an identical port.
– Authentication Method: Select CHAP, None, or CHAP, None (the last setting permits both).
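The alias rules quoted above (32 characters or fewer, excluded symbols, leading and trailing spaces ignored, unique within a port) can be captured in a small validation sketch; the helper names are hypothetical and not part of Storage Navigator Modular:

```python
# Symbols the Target dialog excludes from aliases, per the rules above.
FORBIDDEN = set('\\/:,;*?"<>|\'')

def normalize_alias(alias):
    """Leading/trailing spaces are ignored, per the dialog rules above."""
    return alias.strip()

def valid_alias(alias, existing_on_port=()):
    """Check the target-alias rules quoted in the text (a sketch)."""
    alias = normalize_alias(alias)
    if not 1 <= len(alias) <= 32:
        return False
    if any(ch in FORBIDDEN for ch in alias):
        return False
    return alias not in existing_on_port  # must be unique within a port
```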
7.6.2 Initializing the Target
1. Click the Logical Status tab on the Unit screen. 2. Click the Port. 3. Select the Target to be initialized from the Target list. 4. Select the Initialize button. 5. The confirmation message is displayed. Select the OK button.
7.6.3 Setting Mapping Information
1. Click the Logical Status tab on the Unit screen. 2. Double-click the Access Mode, and select the Mapping Mode. 3. On the Mode list, select Disable, and click the Modify button. The Mapping Mode dialog box is displayed.
4. On the Mapping Mode dialog, click the Enable radio button, and click the OK button. 5. A confirmation message appears; click the OK button. 6. On the Unit window, double-click the Target, and double-click the Port that you want to set for the connection mode with the host. Display 000:T000 by double-clicking the Port.
7. Display the Options and Logical Unit by clicking 000:T000, then click the Logical Unit. 8. Click the Modify Mapping button. The Mapping dialog is displayed. 9. Select one H-LUN from the H-LUN list, select the LUN that you want to map to the H-LUN from the Available Logical Units list, and then click the button. The selected H-LUN and LUN will be moved to the Mapping Information list. 10. Repeat step 9 to complete the Mapping Information list.
11. A confirmation message appears; click the OK button. The mapping information is updated and the following window is displayed. 12. Make the settings for the other ports using the same procedure.
7.7 Setting CHAP Authentication
The disk array subsystem can perform both initiator authentication of the iSCSI user and two-way authentication (target authentication) with CHAP (Challenge Handshake Authentication Protocol). For initiator authentication, set the same iSCSI user information (user name/secret) on both the host side and the disk array subsystem side.
The CHAP User dialog is displayed.
4. In the CHAP User dialog, enter the User Name, Secret, and Secret Confirmation.
– User Name: Enter the name of the User using 256 or fewer alphanumeric characters. The following symbols can be used: (. - + @ _ = : / [ ] , ~ (space))
– Secret: Enter the Secret using 12 through 32 alphanumeric characters. The following symbols can be used: (. - + @ _ = : / [ ] , ~ (space))
– Secret Confirmation: Re-enter the characters entered in the Secret field.
5.
4. Select the Modify button. The CHAP User dialog is displayed. 5. As necessary, enter the User Name, Secret, and Secret Confirmation. 6. As necessary, change the assigned Target, and then select the OK button.
7. The confirmation message is displayed. Select the OK button.
7.7.3 Deleting the CHAP User
1. Click the Logical Status tab on the Unit screen.
2. Double-click the Port from which you want to delete the CHAP User, and select CHAP User.
3. Select the CHAP User to be deleted from the CHAP User list.
4. Select the Delete button.
5. The confirmation message is displayed. Select the OK button.
7.7.4 Changing the Two-Way Authentication Information
1. Click the Logical Status tab on the Unit screen.
2. Click the Port.
3. Select the Target whose Two-Way Authentication information is to be changed from the Target list.
4. Select the Modify button.
The Target dialog is displayed.
5. In the Target dialog, select the Two-Way Authentication radio button.
– User Name: Enter the name of the User using 256 or fewer alphanumeric characters. The following symbols can be used: (. - + @ _ = : / [ ] , ~ (space))
– Secret: Enter the Secret using 12 through 32 alphanumeric characters. The following symbols can be used.
6. Select the OK button.
7. The confirmation message is displayed. Select the OK button.
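The User Name and Secret rules above (alphanumerics plus a fixed symbol set; a 256-character limit for the name, 12 through 32 characters for the secret) can be checked locally before entry. A minimal sketch with illustrative function names:

```python
# Illustrative local checks for the CHAP User Name and Secret rules
# described above; not part of the array software.
ALLOWED_SYMBOLS = set(".-+@_=:/[],~ ")

def _allowed(text: str) -> bool:
    return all(ch.isalnum() or ch in ALLOWED_SYMBOLS for ch in text)

def valid_chap_user_name(name: str) -> bool:
    # User Name: 256 or fewer characters from the allowed set.
    return _allowed(name) and 1 <= len(name) <= 256

def valid_chap_secret(secret: str) -> bool:
    # Secret: 12 through 32 characters from the allowed set.
    return _allowed(secret) and 12 <= len(secret) <= 32
```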
7.8 Transferring Configurations from One Array to Another
You can output the configuration information of the array unit to a text file, or set the configuration using a text file. The configuration information output to a text file includes the status of the system parameters, the RAID group/logical unit, and the constituent parts of the array unit. The configuration that can be set includes the system parameters and the RAID group/logical unit. The status of the constituent parts of the array unit cannot be set.
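The guide does not reproduce the text-file format here, so the following round-trip sketch assumes a simple, purely hypothetical key = value layout for illustration; the real format is whatever the array unit outputs in section 7.9.

```python
# Purely hypothetical key = value reading of an exported configuration
# file; the real layout is defined by the array unit (see section 7.9).
def parse_config(text: str) -> dict:
    cfg = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        cfg[key.strip()] = value.strip()
    return cfg

def dump_config(cfg: dict) -> str:
    return "\n".join(f"{k} = {v}" for k, v in cfg.items())
```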
7.9 Storing Configuration Data This section includes the following: 7.9.1 System parameter information RAID group/LU information Port/host group information System Parameter Information To output the setting of the system parameters for an array unit in text form to a specified file: 1. On the Tools menu, select Configuration Settings, or click Configuration Settings on the toolbar. 2. Click the Constitute tab. 3. Check the System Parameters in the Select Configuration Information box: 4.
7.9.2 RAID Group/LU information To output the RAID group/logical unit definition information already set in an array unit to a specified file in a text format: 1. From the Tools menu, select Configuration Settings or click Configuration Settings on the toolbar. 2. Click the Constitute tab. 3. Check the RAID Group/Logical Unit in the Select Configuration Information box: 4. Click Browse, and then specify the directory and file name to output the file of the configuration. 5. Click Apply. 6.
7.9.3 Port/Host Group Information This setting is not required for the NAS system. To output Port/Host group definition information previously set in an array unit to a specified file in a text format: 1. From the Tools menu, select Configuration Settings or click Configuration Settings on the toolbar. 2. Click the Constitute tab. 3. Check the Port/Host Group in the Select Configuration Information box: 4.
7.9.4 NAS System LU/User LU Information This setting is required only for the NAS system. To output NAS System LU/User LU information already set in an array unit to a specified file in a text format: 1. From the Tools menu, select Configuration Settings or click Configuration Settings on the toolbar. 2. Click the Constitute tab. 3. Check the NAS: System LU/User LU in the Select Configuration Information box. 4.
7.10 Applying Configuration Data to another AMS200 Subsystem
This section includes the following: System parameters; RAID group/logical unit; Port/host group.
7.10.1 System Parameters
Use the modes discussed in this section only when recommended by an Acer | HDS Host or Optional Product installation guide. Set the system parameters in the array unit with the information described in the file.
Note: To validate the set system parameters, restart the array unit. The previous settings stay valid until restarting. The array unit cannot access the host until the reboot is completed and the system restarts. Therefore, be certain the host has stopped accessing data before beginning the restart process. 7.10.2 RAID Group/Logical unit Ensure you back up all data before performing this procedure. All user data is lost when the logical unit is deleted.
7.10.3 Port/Host Group
This setting is not required for the NAS system.
1. Edit the file that contains the settings to be applied to the array unit. This file has a specified format, the same as that of the file output by the array unit. For the format, refer to the file output in section 7.9.
2. From the Tools menu, select Configuration Settings or click Configuration Settings on the toolbar.
3. Click the Constitute tab.
4. Select the Input radio button in the Operation box.
7.11 Setting Host Connection Parameters There are two methods for setting options: Simple Setting for Connecting to the Host Computer When using the simple setting, select the environmental elements of the host computer to be connected. When the selection is made, the host group options (host connection mode 1 and 2) necessary for the host computer to be connected are set automatically.
4. Click Simple Setting. The Options (Simple) dialog box displays: 5. Select Platform, Alternative Path, and Fail-Over according to an environment of the host to be connected. 6. Click Additional Parameters. The Additional Parameters Property dialog box displays.
7. Click the Detail button as needed. Select Host Connection Mode 1 and Host Connection Mode 2, and then click OK.
8. On the Option (Simple) dialog box, click OK.
9. A confirmation message box displays. Click OK.
10. A message box displays requesting you to verify that an I/O requested by the host has been stopped. Stop it and click OK. (If the system administrator has not stopped I/O on the host side, clicking this button will stop all I/O processes.)
11. A message box displays stating that the setting is complete. Click OK.
12. The setting displays. Verify that the selected host environment (platform, alternative path, and fail-over) and the Additional Parameter are reflected in the display. When you have set host connection modes 1 and 2 directly, verify that the modes that have been set are reflected in the display and that the necessary host connection modes 1 and 2 have been selected.
13.
Table 7.3
Table 7.4
7.11.2 Detailed Setting for Each Host Connection This operation is performed using the Storage Navigator-Modular. The following describes the procedure for setting a subsystem when the host group option setting is required for a combination that simple setting does not have. Follow these steps: 1. On the unit window, click the Logical Status tab. 2. Double-click the Host Groups or Target (when iSCSI interface board is added).
7.12 Setting the Subsystem when using Special Mode This operation (using the subsystem in drive blockade mode) is performed using Storage Navigator-Modular. Note: If the special mode setting operation is performed for the array unit connected to the NAS unit, the cluster between the NAS Units stops. When the special mode setting operation for the array subsystem is unavoidably performed, execute it after stopping the cluster between the NAS Units and stopping the NAS OS of both NAS Units.
When restarting the array unit, the time required for the restart is displayed. It takes approximately four to 15 minutes to restart the array unit. Note: Depending on the status of the array unit, it may take time to respond. If the array unit does not respond after 15 minutes or more, check the status of the array unit.
7.13 Changing the Network Parameter Set a network parameter from the Storage Navigator-Modular. Note: If the network parameter is changed for the array unit connected to the NAS unit, the cluster between the NAS Units stops. When the network parameter for the array subsystem is unavoidably changed, execute it after stopping the cluster between the NAS Units and stopping the NAS OS of both NAS Units.
4. From the Tools menu, select Configuration Settings or click Configuration Settings on the toolbar.
5. On the Configuration Settings screen, click the LAN tab.
6. Set the network parameter for the Network. Note: The first octet of the IP Address cannot be 0, 127, or 255. If any one of these values is set, an error occurs when you click the Apply button in the Parameter window.
7. Click Apply.
8. A confirmation message box displays. Click OK.
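The note above restricts the first octet of the IP address. A minimal pre-check of that rule (the function name is ours, for illustration):

```python
# Check of the rule above: the first octet of the IP address must not
# be 0, 127, or 255. (Illustrative helper only.)
def valid_first_octet(ip: str) -> bool:
    head = ip.split(".", 1)[0]
    return head.isdigit() and int(head) not in (0, 127, 255)
```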
7.14 Changing the IP Address for the Maintenance Port
This operation is performed using the Storage Navigator Modular. Note: The IP address for the maintenance port is used for maintenance work performed by maintenance personnel when a failure occurs. Reserve one of the following network addresses as an address for maintenance work: "10.0.0.xxx", "192.168.0.xxx", "192.168.233.xxx", "172.23.211.xxx", "10.197.181.xxx". Do not use the designated address for other purposes.
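Reading each "xxx" pattern above as a /24 network (an interpretation on our part, since the guide lists only the first three octets), membership can be checked with the standard library:

```python
import ipaddress

# The reserved maintenance networks above, read as /24 networks with
# "xxx" standing for the host part (our interpretation of the notation).
MAINTENANCE_NETS = [ipaddress.ip_network(n) for n in (
    "10.0.0.0/24", "192.168.0.0/24", "192.168.233.0/24",
    "172.23.211.0/24", "10.197.181.0/24",
)]

def is_maintenance_address(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in MAINTENANCE_NETS)
```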
4. From the Tools menu, select Configuration Settings or click Configuration Settings on the toolbar.
5. Click the Maintenance LAN tab on the Configuration Settings screen.
6. Set the Maintenance LAN Information. Note: An IP address on the same network as the IP address currently set for the user-managed port or the NNC management port cannot be set. If such an IP address is set, an error occurs when you click Apply on the Parameter screen.
7.
8. A confirmation message box displays. Click OK.
9. When all the Current values on the Configuration Settings screen match the set values and Normal is displayed in the Result column, the setting is complete. If Setting is displayed in the Result column, wait briefly and then click Refresh on the Configuration Settings screen. If the setting does not terminate correctly, the following messages are displayed in the Result column.
No. | Display in the Result | Failure and Measure
5 | Not Specified | The setting has not been completed because the NAS unit is starting up. Wait until the NAS unit becomes normal, then press the Refresh button again to refresh the display. If the same information is displayed no matter how many times the update is repeated, contact the maintenance personnel.
6 | Setting Reserved | The setting has not been completed because the NAS unit is stopped. The setting will be made when the NAS unit is restarted.
7.15 Setting the System LU and User LU in the NAS System
This operation is performed using the Storage Navigator Modular. Refer to the Acer | HDS Adaptable Modular Storage and Workgroup Modular Storage Navigator Modular Graphical User Interface (GUI) User’s Guide. The capacity of the system LU is restricted to the capacity listed in the following table:
Table 7.5
7.15.1 Setting the System LU
To set the System LU:
1. On the Unit screen, click the Logical Status tab.
2. Display the NNC0/2 by double-clicking the NAS. Display the System and User by double-clicking the NNC0/2, and select the System. A list of system LUs is displayed.
3. Click the Set button in the lower right portion of the screen. The System LU dialog box is displayed.
4. Click the Select button for the system LU that you want to set.
5. The Select Logical Unit dialog box is displayed. Select the LUNs to be assigned, and click the OK button.
6. Verify that the selected LU(s) are reflected in the System LU dialog box, and click the OK button.
7. A confirmation message appears; click the OK button.
7.15.2 Setting the User LU
To set the User LU:
1. On the Unit screen, click the Logical Status tab.
2. Display the NNC0/2 by double-clicking the NAS. Display the System and User by double-clicking the NNC0/2, and select the User. A list of user LUs is displayed.
3. Click the Set button in the lower right portion of the screen. The User LU dialog box is displayed.
4. Select one H-LUN from the H-LUN list in the User LU dialog box, select the LUN that you want to assign to the H-LUN from the Available Logical Units list, and click the button. The selected H-LUN and LUN are moved to the Logical Unit for User Volume list.
5. Repeat step 4 until all the LUNs that you want to assign have been moved to the Logical Unit for User Volume list, then click the OK button.
6. A confirmation message appears; click the OK button.
7.16 Setting the NNC Management LAN Port Information in the NAS System
This operation is performed using the Storage Navigator Modular. To set the NNC management port in the network, follow these steps:
1. Turn on the power supply.
2. Start the Storage Navigator Modular and set the operation mode to Management Mode. (Refer to the Acer | HDS Adaptable Modular Storage and Workgroup Modular Storage Navigator Modular Graphical User Interface (GUI) User’s Guide.)
3.
5. Click the NNC LAN tab on the Configuration Settings screen.
6. Set the LAN Information. LAN Information: Refer to and set the network settings of the NNC management port.
– IP Address: Displays the current value of the IP address and specifies the setting value.
– Subnet Mask: Displays the current value of the subnet mask and specifies the setting value.
– MTU: Displays the current value of the MTU and specifies the setting value. The setting value can be specified in the range of 1500 to 16110.
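The MTU bound above (1500 to 16110) is simple to validate before applying; the helper name is illustrative:

```python
# Range check for the NNC management port MTU described above
# (1500 through 16110). Illustrative helper only.
def valid_mtu(mtu: int) -> bool:
    return 1500 <= mtu <= 16110
```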
Chapter 7 Configuring Storage on the Thunder 9530™ V Series Subsystem
7.17 Setting the Time Zone
This operation is performed using the Storage Navigator Modular. This operation is necessary when first connecting the NAS unit; it is not necessary after the NAS unit has been connected. If an NTP server is onsite and you wish to synchronize the clock of the array unit to the NTP server, execute this operation. To set the time zone, follow these steps:
1. Turn on the power supply.
2. Start the Storage Navigator Modular and set the operation mode to Management Mode.
4. On the Tools menu, select Configuration Settings or click Configuration Settings on the toolbar.
5. Click the Time Zone tab on the Configuration Settings screen.
6. Set the Time Zone and the IP address of the NTP Server.
– Time Zone: Refer to and set the time zone. Default value: (GMT+09:00) Osaka/Sapporo/Tokyo. Automatically adjust clock for daylight saving changes: Specifies whether to use daylight saving time.
– NTP Server: Refer to and set the IP address of the NTP server.
Note: When the NAS OS stops, or is not installed, click the Cancel button.
Chapter 8 Troubleshooting
This chapter includes the following: Troubleshooting Based on LED Indications; Web Overview; Web Operational Procedures; Troubleshooting Using a Web Connection; Determining Failure of Network Side in the NAS System; Collecting Failure Information in Connection with Web; Determining Failure on the Network Side of an iSCSI System; Calling the Acer | HDS Support Center.
This chapter provides information on the Fibre, NAS, and iSCSI models. The following table shows which sections apply to each model.
Fibre model: Connects the disk array subsystem to a host computer with a Fibre Channel interface.
NAS model: Connects a NAS Unit attached to the disk array subsystem to a host computer with a LAN interface.
iSCSI model: Connects the disk array subsystem to a host computer with an iSCSI interface.
Sections Fibre NAS iSCSI 8.1.
8.1 Troubleshooting Based on LED Indications This section includes the following: The POWER LED does not turn on The POWER LED is turned off The READY LED does not turn on or the READY LED has turned on once, and then turned off The ALARM LED has turned on The WARNING LED has turned on or blinks Note: If the array subsystem connected to the NAS Unit is restarted, the cluster between the NAS Units stops.
9. Is the READY LED on the RKS (and RKNAS) on?
Yes: Continue to use the equipment in its current operational state. When the READY LED (green) blinks continuously, it blinks for up to 15 minutes while the ENC/SENC firmware download is executed. The subsystem is operational even though the READY LED (green) is blinking.
No: Refer to: The READY LED does not turn on, or the READY LED has turned on once and then turned off.
10.
8.1.3 If the READY LED Does Not Turn On or has Turned On Once then Off If the READY LED does not turn on, or the READY LED has turned on once and then turned off, follow these steps: 1. Is the POWER LED on the RKS on? Yes: Go to step 2. No: Go to step 4 in The POWER LED Does Not Turn on. 2. Is the ALARM LED on the RKS on? Yes: Refer to: The ALARM LED Has Turned on. No: Go to step 3. 3. Is the RKNAS connected? Yes: Go to step 4. No: Go to step 5. 4.
11. Call your Customer Engineer.
12. End of the procedure.
8.1.4 The ALARM LED Has Turned On
When the ALARM LED has turned on, follow these steps:
1. Identify in which components the failure resides. Note: Refer to section 4.7.5 to identify failed components whose LEDs indicate their respective failures.
2. Call your Customer Engineer and allow the equipment to remain in its present state.
8.1.5 The WARNING LED Has Turned On or Blinks
When the WARNING LED has turned on or blinks, follow these steps:
1.
8. Is the RKNAS connected? Yes: Identify in which components a failure resides in the RKNAS. No: Go to step 10. 9. Continue to use the equipment, and contact the Customer Engineer. 10. End of the procedure.
8.1.6 The WARNING LED Has Turned On or Blinks When the WARNING LED has turned on or blinks, follow these steps: 1. Does the WARNING LED blink? Yes: Call your Customer Engineer. Go to step 5. No: Go to step 2. 2. Is the READY LED on? Yes: Go to step 3. No: Refer to section 8.1.3. 3. Identify in which components a failure resides. Note: Refer to section 4.7.5 to identify failed components whose LEDs indicate their respective failures. 4. Continue to use the equipment and contact the Customer Engineer. 5.
8.2 Web Overview
This section includes the following: Operational environment; Characteristics of network functions.
8.2.1 Operational Environment
The Web operational environment and the necessary requirements are shown in the following tables.
Table 8.1 Web Operational Environment
No. | Item | Description
1 | OS | Microsoft® Windows® 98/NT 4.0/2000/XP/2003, Solaris™ 8, IRIX 6.5.
Table 8.2 AMS200 Web Function Supported Browser/Version (System Version: 0730/A-S)
Platform | OS Version | Browser Version (see Note) | Supported or Not Supported | Java™ Applet (see Notes 2 and 3)
WS | IRIX 6.5 | Netscape Navigator® 4.76 | Supported | Not supported
WS | Solaris™ 2.6 | Netscape Navigator® 4.76 | Supported | Not supported
WS | Solaris™ 8 | Netscape Navigator® 4.76 | Supported | Not supported
PC | 98 | Internet Explorer 6.0 | Supported | Supported
PC | NT/2000 | Internet Explorer 6.0 | Supported | Supported
PC | XP | Internet Explorer 6.
Notes on the Supported Browser: For Windows® 2003, the strict security level is set by default; therefore, the Web function is disabled. Change the security setting to enable the browser by following these steps:
1. On the browser (Internet Explorer), click the Tools menu and click Internet Options.
2. Click the Security tab, and then click Custom Level.
3. For the custom settings, specify Medium or lower and click Reset. Specifying a Medium setting solves the problem.
When collecting a memory dump (Full Dump) using Netscape Navigator® 4.7x, pay attention to free space on the PC, because the information to be downloaded will not be compressed.
8.2.2 Characteristics of Network Functions When Connecting with the Web
This section discusses the following network function characteristics when connecting with the Web:
LAN Interface: The controller is equipped with a connector for 10Base-T/100Base-TX. 10Base-T/100Base-TX is selected automatically.
8.3 Web Operational Procedures
This section contains the following information: Connecting to the network using a LAN interface; Screen outlines; Main screen in normal mode; Status display of replaceable components; Information message; Setting the buzzer sound volume.
8.3.1 Connecting to the Network Using a LAN Interface
The controller is equipped with a connector for 10Base-T/100Base-TX. To use the LAN interface, follow these steps:
8.3.2 Screen Outlines
When a Normal Mode function displayed in the menu screen is clicked, the chosen function is executed. The following figure displays the main screen outline of Normal Mode (version frame, main frame, and menu frame).
Figure 8.
Figure 8.3 Main Screen Outline (NAS): version frame, main frame, and menu frame.
Figure 8.
This section includes the following: Menu frame; Main frame; Version frame.
8.3.2.1 Menu Frame
When a Normal Mode function displayed in the menu frame is clicked, the proper function is executed. The main frame displays the following information:
Main: The main screen of Normal Mode is displayed.
Parts Information: The status of exchange parts is displayed.
Disk drive: The status of disk drives is displayed.
8.3.2.2 Main Frame
The main frame displays the following information:
Subsystem Status: The device status and the exchange parts status are displayed.
Progress Condition: The progress condition while the device is booting is displayed.
8.3.2.3 Version Frame
The version frame displays the following information:
Web title: The Web title set by a user is displayed. When it is not set, nothing is displayed.
Serial No: The subsystem serial number is displayed.
8.3.3 Main Screen in Normal Mode
The main screen of normal mode consists of the following: Patrol lamp; Summary of exchange parts status; Progress condition display; Page refresh button.
Figure 8.
8.3.3.1 Patrol Lamp
While the device is monitored, its status is displayed.
Table 8.
8.3.3.2 Summary of Exchange Parts Status
The summary of exchange parts status displays the condition of the exchange parts by changing their color. Detailed information for a specific part is displayed by clicking the part icon.
Table 8.
8.3.3.3 Progress Condition Display
The progress condition while the device is booting is displayed in the progress condition display box.
8.3.3.4 Page Refresh Button
This button turns the automatic redisplay function on and off. Clicking it toggles the mode:
OFF display: The screen is not refreshed.
ON display: The main frame screen is refreshed every 5 seconds. The current refresh time (RTC) is displayed at the top right.
8.3.4 Status Display of Replaceable Components
Figure 8.9 Component Status Screen (Controller/Battery/Cache/Loop/Host Computer) Figure 8.
Figure 8.
Disk Drive (FC):
Normal.
A fault has occurred in the disk drive.
The disk drive port where the fault occurred has no disk drive installed.
No display: The disk drive is not installed (the disk drive where the fault occurred was removed).
Disk Drive (S-ATA):
blue: Normal.
red: A fault has occurred in the disk drive.
red and black: The disk drive port where the fault occurred has no disk drive installed.
No display: The disk drive is not installed (the disk drive where the fault occurred was removed).
Cache Unit:
Normal.
Fault (including the status in which the unit is not installed and the removed faulty cache unit is included).
Battery Backup Unit:
Normal.
There is a fault, or the unit is not installed.
Fan Assembly:
Normal.
red: There is a fault, or the fan assembly is not installed (including the condition in which AC power is not supplied).
Power Unit:
Normal.
A fault occurred, or the unit is not installed.
SENC Unit:
Normal.
A fault occurred, or the unit is not installed.
Fibre Channel Loop:
Normal.
Fault.
Host Connector:
gray: Normal.
red: Fault.
Patrol Lamp: While the device is monitored, its status is displayed.
NAS OS Condition: The NAS OS condition is displayed.
NEW: NAS OS has not been installed.
INST: NAS OS is being installed.
ACTIVE: NAS OS is in operation, and the node is in operation.
STOP: NAS OS has stopped normally.
DOWN: NAS OS has stopped abnormally.
BOOT: NAS OS is in boot processing.
SHUTDOWN: NAS OS is in stop processing.
INACTIVE: NAS OS is in operation, and the node is stopped.
DUMP: NAS Dump is being collected.
HUNGUP: Hung-up status.
NNC FAN:
black: Normal.
red: Fault.
NNC Host Connector:
white: Normal.
red: Fault.
To check the parts status by message, select “Warning Information” from the menu frame in the main screen.
Figure 8.
8.3.5 Information Message
Fault and status information detected during device operation is displayed. Fault and status information after the device completes booting is displayed in the Controller 0/1 Common box. Fault and status information while the device is booting is displayed in the Controller 0 and Controller 1 boxes.
Figure 8.
8.3.6 Setting the Buzzer Sound Volume
Note: Set the buzzer volume in an environment in which no I/Os are issued from a host, such as while the system is being maintained or before the host is started up. The buzzer volume can be adjusted in five stages. Click the Buzzer Volume of the menu frame to enter the buzzer volume-setting screen. When the buzzer volume is designated with the radio button and the OK button is clicked, the buzzer volume is changed.
Figure 8.
8.3.7 Clear Specified Factors of NNC Partial Alarm
For some specific NNC partial alarms, the Warning status of the control unit may not be released, and the WARNING LEDs on the array subsystem and the NAS unit remain lit even after the recovery work. Release the Warning status of the control unit and turn off each WARNING LED according to the following procedures. The “Clear specified factors of NNC partial alarm” operation can be executed only on the NNC (NAS unit) connected to the control unit.
1.
3. Verify in which of the lists the failure factors are displayed.
4. Clear the specified factors of the NNC partial alarm.
5. Click the Recovery button.
6. A message asking you to verify the setting is displayed. Click the OK button.
7. Check the NNC partial alarm recovery.
When “Clear specified factors of NNC partial alarm” is completed normally:
8. Click the OK button.
9. Click Warning Information on the menu window, and check that the indication of the partial alarm is turned off.
If the array subsystem was booting at the time the Recovery button was clicked:
10. Click the OK button.
8.4 Troubleshooting Using a Web Connection This section includes the following: 8.4.1 Checking subsystem status Checking the progress condition display Checking component status Checking log messages Troubleshooting by using messages Reading failure information Checking Subsystem Status Check the position of the failed part of the unit on the main window in the normal mode of the Web. Subsystem Status Figure 8.
8.4.2 Checking the Progress Condition Display
If Booting... is indicated in the window (the controller is being started up), the progress of the start-up operation can be confirmed according to the following procedure:
1. Turn on the page refresh mode (click the ON button). The window is updated automatically at 5-second intervals. (If the page refresh mode is already on, this operation is not necessary.)
8.4.3 Checking Component Status Click each part of Replace Part Summary in the main window; the following window displays and the state of the part is displayed. In this example, the selected (clicked) part is at the head of the window. You can also select this window by clicking the Parts Information menu in the main window. In this window, you can confirm the state of each part in detail. If a part fails, its corresponding icon turns red. Figure 8.
The following screen is displayed only when used in the NAS unit. Figure 8.
8.4.4 Checking Log Messages
To check log messages:
1. Click the Information Message menu in the main window. The Information Message window displays.
2. In the Information Message window, identify the cause of the failure and confirm the recovery measures. Information about detected failures and the state of the unit is displayed in this window. Information about failures and the state at the start-up time of the unit is displayed for each controller in the Controller 0 and Controller 1 boxes.
The contents of each message are shown in the following examples: CUDG (Self-test at power-on) Detection Message.
In this example, the latest message is also indicated at the top.
8.4.5.2 Flash Detected Messages
When the following Flash detected messages are displayed, follow the instructions to resolve the problem.
Table 8.5 Flash Detected Messages
Message Code | Message Text | Recovery Measures
RA00xx | Microprogram error [FLS] | Restart the equipment.
RA7000 | Microprogram revision mismatch |
RB0000 | Upload system error | Check the microprogram you want to install and install it over again.
RB0600 | No micro program | Perform the new installation upgrade.
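Tables 8.5 through 8.8 pair message codes with recovery measures, and that pairing can be mechanized when scanning saved logs. The entries below are copied from Table 8.5, with the guide's general advice as the fallback; the helper itself is ours, not part of the subsystem.

```python
# Lookup of recovery measures, with entries copied from Table 8.5 and
# the guide's general advice as the fallback. (Illustrative helper.)
FLASH_RECOVERY = {
    "RB0000": "Check the microprogram you want to install and install it again.",
    "RB0600": "Perform the new installation upgrade.",
}

def flash_recovery(code: str) -> str:
    if code.startswith("RA00"):
        return "Restart the equipment."
    return FLASH_RECOVERY.get(
        code, "Inform your Customer Engineer of the message code.")
```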
8.4.5.3 Progress Messages
When the following Progress messages are displayed, follow the instructions to resolve the problem.
Table 8.6 Progress Messages (continues on next page)
Message Code | Message Text | Recovery Measures
I031xy | Path recovered automatically | The path recovered automatically. (x: Remote DF# (0), y: Path # (0 or 1))
I10000 | Subsystem is ready | The unit is ready.
I11000 | All raid group initialized | All RAID groups were deleted.
Table 8.6 Progress Messages (continued)
Message Code | Message Text | Recovery Measures
I1B100 | Forced parity correction completed | Forced parity recovery processing was finished.
I1C0xy | Loop diagnostic start (Path-x, Loop-y) | Loop diagnosis was started. (x: Path# (0 or 1), y: Loop# (0 or 1))
I1C1xy | Loop diagnostic end (Path-x, Loop-y) | Loop diagnosis was finished.
Table 8.6 Progress Messages (continued)
Message Code | Message Text | Recovery Measures
IA2V00 (Note 1) | NNC Some integrated link of Data LAN failed [xy] (NNC-z) | Identify the failed part by checking the LED beside the data LAN port. Confirm that the LAN cable is firmly connected to the LAN port and that there is no failure in the network switch. If there are failures, correct them.
8.4.5.4 Warning Messages
When the following Warning messages are displayed, follow the instructions to resolve the problem.
Table 8.7 Warning Messages
Message Code | Message Text | Recovery Measures
W03200 | Battery SW off | Turn on the battery unit switch.
When a Warning message other than those shown above is displayed, inform your Customer Engineer of the message code.
8.4.5.5 Failure Messages
When the following Failure messages are displayed, follow the instructions and resolve the problem.
Table 8.
8.4.6 Reading Failure Information The history of the unit, after it is turned on, is displayed in the Information Message. The Subsystem is Ready message displays the time when the unit is ready. Messages sent after the power is turned on until the unit is ready are displayed prior to this message. Messages sent after the unit is ready are displayed after this message. Carefully observe the following: Wxxxxx (Warning message), Hxxxxx (Failure message), and Rxxxxx (Flash detection message).
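The prefix convention above (together with the Ixxxxx progress codes of Table 8.6) allows simple triage of a saved message log; a sketch, with illustrative helper names:

```python
# Triage of message codes by leading letter, per the prefixes above:
# W = Warning, H = Failure, R = Flash detection, I = progress/information.
SEVERITY = {"W": "warning", "H": "failure", "R": "flash", "I": "info"}

def classify(code: str) -> str:
    return SEVERITY.get(code[:1], "unknown")

def needs_attention(codes):
    """Codes that warrant contacting the Customer Engineer."""
    return [c for c in codes if classify(c) in ("warning", "failure", "flash")]
```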
Table 8.9 How to Read Failure Information
Table 8.9 How to Read Failure Information (continued)
8.5 Determining the Failure of the Network Side in the NAS System
When a failure occurs in the LAN environment between the host computer and the NAS Modular subsystem, or in the NAS Modular subsystem itself, determine whether there is a failure in the NAS Modular subsystem according to the following flow:
START
Is the READY LED of the NAS Unit lit?
No: Start the NAS OS. Was the failure recovered?
Yes: Is there any response to a ping from the host computer to the optional PC, the router, etc.?
Continued from the previous page. Acquire a "Network Info" file from NAS Manager Modular (Note). From the messages in the "Network Info" output file, determine whether the problem is attributable to the network environment. If it is, take action on the connection or setting that has the problem. (Note: Refer to NAS Manager Modular User's Guide, Appendix D.)
8.6 Connecting Failure in Connection with the Web
8.6.1 Collecting Simple Trace
This function is used to download the current trace information. To perform the download, free capacity of approximately 20 Mbytes is required on the PC. The Simple Trace of both Control Units can be collected through one Control Unit; it is not necessary to collect from both controllers. (When it is collected from Control Unit #0, the file name is "smpl_trc0.dat"; when collected from Control Unit #1, the file name is "smpl_trc1.dat".)
3. When the OK button is clicked, the following window is displayed.
4. The following window is displayed. Click the Download button.
5. Click Save to continue, or Cancel to stop.
6. When the following window is displayed, set the file name and click Save to continue, or click Cancel to stop. Note: Depending on the PC settings, the default file name may be given as "ctla_trc0.dat.dat". In this case, reset the file name to "ctla_trc0.dat".
7. The following window is displayed while the download is executing.
8. The progress message window closes when the download is completed.
9. Click the Close button.
8.6.2 NAS Log Collection
This function downloads the log information of the present NAS OS. Free capacity of approximately 4 Mbytes in Normal Mode, 12 Mbytes in Detail Mode, and 150 Mbytes in Full Mode is required on the PC for downloading.
The NAS Log can collect only the information on the NNC (NAS unit) connected to the Control Unit. When no special instruction is given, collect the NAS log in Detail Mode when a failure occurs.
Table 8.10 Collection Mode
Detail Mode: collect in Detail Mode uniformly unless otherwise instructed.
Normal Mode and Full Mode: collect only when there is a special instruction (Normal Mode is limited to cases where remote collection in Detail Mode is impossible because of capacity).
3. The confirmation message is displayed. Click the OK button. The following dialog is displayed.
4. The following dialog is displayed when the collection ends. Click the Download button. Note: Do not close this dialog while downloading the NAS Log to the service PC; if it is closed, the NAS Log may not be collected.
5. The following dialog is displayed. Click Save.
6. Specify the storage location of the file and the file name, and click Save. The file name can be changed to any name of the form "optional file name.tar.gz". The default file name for each collection mode is as follows:
Normal Mode: fast_naslog_nnc0[1].tar.gz (NNC 0), fast_naslog_nnc2[1].tar.gz (NNC 2)
Detail Mode: naslog_nnc0[1].tar.gz (NNC 0), naslog_nnc2[1].tar.gz (NNC 2)
Full Mode: full_naslog_nnc0[1].tar.gz (NNC 0), full_naslog_nnc2[1].tar.gz (NNC 2)
The download starts and the progress-indicating message window is displayed.
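The default file names and per-mode free-capacity requirements described above follow a simple pattern. The sketch below summarizes them (Python is used purely for illustration; the function and dictionary names are invented for this example, and the sizes come from the text above):

```python
# Required free PC capacity (Mbytes) and default file-name prefix for each
# NAS Log collection mode, as described in section 8.6.2 above.
NAS_LOG_MODES = {
    "Normal": {"free_mb": 4,   "prefix": "fast_"},
    "Detail": {"free_mb": 12,  "prefix": ""},
    "Full":   {"free_mb": 150, "prefix": "full_"},
}

def default_naslog_name(mode: str, nnc: int) -> str:
    """Return the default download file name for a collection mode and NNC.

    Only NNC 0 and NNC 2 appear in the table above.
    """
    if mode not in NAS_LOG_MODES:
        raise ValueError(f"unknown collection mode: {mode}")
    if nnc not in (0, 2):
        raise ValueError("NNC number must be 0 or 2")
    return f"{NAS_LOG_MODES[mode]['prefix']}naslog_nnc{nnc}[1].tar.gz"
```

For example, `default_naslog_name("Detail", 0)` reproduces the Detail Mode name for NNC 0 shown in the table.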
8.6.3 NAS Dump Generation
This function generates the full memory information of the present NNC (NAS unit) and stores it on the Disk Drive. The full memory information of the NNC (NAS unit) is not downloaded to the PC when the NAS Dump is generated. The NAS Dump can be generated only on the NNC (NAS unit) connected to the Control Unit.
8.6.3.1 Generating NAS Dump
1. Click NAS Dump in Trace of the menu frame.
2. Select Collecting the NAS Dump.
3.
4. Input the registered password, and click the OK button. (The default password is “user=NAS”) 5. The following window is displayed. Click the OK button.
8.6.3.2 Suspension of the NAS Dump Generation
1. Select Canceling collection of the NAS Dump.
2. Specify the NNC (NAS Unit) for which the NAS Dump generation is to be suspended in "NNC Number", and click the Set button.
3. The confirmation message is displayed. Click the OK button. 4. A suspension completion window is displayed. Click the OK button.
8.6.3.3 Registration and Change of the Password
1. Select Change password.
2. Enter the Old Password, New Password, and Re-enter New Password (one to eight single-byte alphanumeric characters), and then click the OK button.
3. The window, indicating the completion of the password registration, is displayed. Click the OK button.
8.7 Determining Failure on the Network Side of an iSCSI System
One or more of the following items may be the cause when the host computer cannot communicate with the disk array subsystem. Check each item, and take the necessary action if there is a problem.
The link status of the LAN port in the host computer is normal.
All the network peripherals (switch, router, NIC, etc.) are powered on. If not, turn on the power.
The Target name (example: [000:T000]) of the Target is registered in the CHAP User of the Initiator on the disk array subsystem when Initiator authentication with CHAP is applied to the iSCSI system. The User Name and its Secret for the Target are set correctly in the host computer when Initiator authentication with CHAP is applied to the iSCSI system.
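As a quick first-pass check of the network items in this list, basic TCP reachability of the subsystem's iSCSI port can be probed from the host. The following is a minimal sketch, not part of the subsystem's tooling; port 3260 is the IANA-registered iSCSI port and is an assumption here, so substitute the port actually configured on the disk array subsystem if it differs:

```python
import socket

def tcp_port_reachable(host: str, port: int = 3260, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    Port 3260 is the IANA default for iSCSI; this checks only basic
    reachability, not iSCSI login or CHAP authentication.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A False result points at the physical or IP-level items in the checklist above (cabling, link status, addressing), while a True result with a failing login points at the CHAP settings.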
Chapter 9 Periodic Maintenance
If the subsystem is not energized for more than three months, the battery may overdischarge and unrecoverable damage may result. To prevent this, the battery must be charged for at least six hours at least once every three months, or, alternatively, the subsystem can be stored with the battery switch turned off. However, even when the switch is turned off, the battery discharges naturally.
The following Appendices provide information on the Fibre, NAS, and iSCSI models. The following table shows the sections that apply to each model. Fibre model: connects the disk array subsystem to a host computer with a Fibre Channel interface. NAS model: connects a NAS Unit attached to the disk array subsystem to a host computer with a LAN interface. iSCSI model: connects the disk array subsystem to a host computer with an iSCSI interface.
Appendix A Glossary
Cache backup: Because the cache memory uses DRAM, information stored in it is lost when the subsystem power is shut off. To guard against unexpected power failure, the subsystem has a mechanism to maintain data in the cache memory with batteries. Cache backup is a state in which the data is protected by the batteries.
CHAP (Challenge Handshake Authentication Protocol): One of the authentication methods.
FC-AL: Fibre Channel Arbitrated Loop
FC-SW: Fibre Channel-Switch Topology
Fibre Channel (FC): A set of interface standards for devices connected through optical fibre, etc., to achieve high-speed data transfer between them.
Fibre Channel HBA: Fibre Channel Host Bus Adapter
Fibre Channel HUB: An apparatus that connects and relays Fibre Channel cables, each connected to a Fibre Channel device, in order to form an arbitrated loop of the Fibre Channel.
Remote Maintenance Function (SNMP): The SNMP agent support function reports failures, via SNMP on the open platform, to the workstation that monitors the network.
R/W: Read/Write.
SATA (Serial-ATA): S-ATA is an abbreviation for Serial Advanced Technology Attachment, one of the extended specifications of the IDE (Integrated Drive Electronics) standard for connecting storage devices such as HDDs.
Write cache: When data is written from a host computer to the disk array subsystem, it is not written directly to the disk drive but is first written to cache memory. In this way, the disk array subsystem can return a write-completion report promptly. This writing method using cache memory is called write cache.
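The write-cache behavior described in this glossary entry can be illustrated with a deliberately simplified sketch. This is not the AMS200's implementation; the class and its names are invented for illustration of the general write-back idea: writes are acknowledged as soon as they land in cache, and dirty data is destaged to the backing store later.

```python
class WriteCacheSketch:
    """Toy illustration of write-back caching: acknowledge on cache write,
    destage to the (slow) backing store later. Not the subsystem's code."""

    def __init__(self, backing: dict):
        self.backing = backing      # stands in for the disk drives
        self.cache = {}             # dirty data, protected by battery backup

    def write(self, block: int, data: bytes) -> str:
        self.cache[block] = data    # completion is reported immediately
        return "write complete"

    def flush(self) -> None:
        self.backing.update(self.cache)   # destage dirty blocks to "disk"
        self.cache.clear()
```

The prompt acknowledgment before `flush()` is exactly why the cache-backup batteries described above matter: between the acknowledgment and the destage, the cache holds the only copy of the data.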
Appendix B System Parameter Settings List The following table lists the parameter settings using the Storage Navigator-Modular.
Table B.1 Host Connection Parameters Host Group Option- Simple Setting HP-UX® Platforms Alternate Path None PV Link VxVM (see Note1) Fail Over None MC/ Service Guard None MC/ Service Guard None 9 9 Detail Setting: The following parameters will be selected automatically according to simple setting.
Note: When making the simple setting of the host group options, select the items shown on gray backgrounds. Only when using a combination not described in the simple setting, select the required parameter from the detail settings. Note 1: When using VERITAS™ Volume Manager (VxVM), the Array Support Library (ASL) for the AMS/WMS Series is required. Please download it from the VERITAS™ Web site. Note 2: Up to 256 logical units, from logical unit number 0 to logical unit number 255, can be mapped for each host group.
Table B.2 Host Connection Parameters (continues on the next page) Host Group Option- Simple Setting Solaris™ Platforms Alternate Path None Fail Over None Sun Cluster (Note4) VCS Note3 HDLM VxVM (Note1) (Note2) None Sun Cluster (Note4) MpxIO (Note5) None VCS Note3 None Sun Cluster (Note4) 9 9 9 9 Detail Setting: The following parameters will be selected automatically according to simple setting.
Table B.2 Host Connection Parameters (continued) Host Group Option- Simple Setting VERITAS™ Database Edition/Advanced Cluster for ® Oracle RAC (Solaris™) is used ® Not Selected Unique Reserve Mode 1 (Note6) ® Egenera BladeFrame Is used Not Selected Unique Reserve Mode 1 (Note6) Not Selected Not Selected 9: Parameter that is selected automatically by simple setting. blank: Parameter that is selected manually if needed.
Table B.3 Host Connection Parameters Host Group Option- Simple Setting ® Platforms AIX Alternate Path None Fail Over None VxVM (Note2) HDLM (Note1) HACMP None HACMP None VCS (Note3) Detail Setting: The following parameters will be selected automatically according to simple setting.
Note: When making the simple setting of the host group options, select the items shown on gray backgrounds. Only when using a combination not described in the simple setting, select the required parameter from the detail settings. Note 1: When using Hitachi Dynamic Link Manager (HDLM), note the following: (1) Vendor ID: if this text is changed, the subsystem cannot be managed by HDLM. Do not change this text.
Table B.4 Host Connection Parameters Host Group Option- Simple Setting ® Platforms Windows 2000/2003 Alternate Path None Fail Over None VxVM (Note2) HDLM (Note1) MSCS None MSCS None 9 9 Detail Setting: The following parameters will be selected automatically according to simple setting.
Note 1: When using Hitachi Dynamic Link Manager (HDLM), note the following: (1) Vendor ID: if this text is changed, the subsystem cannot be managed by HDLM. Do not change this text. (2) Serial Number: when multiple storages of the same type exist, a different Serial Number needs to be allocated to each disk array subsystem. Note 2: When using VERITAS™ Volume Manager (VxVM), the Array Support Library (ASL) for the AMS/WMS Series is required. Please download it from the VERITAS™ Web site.
Table B.5 Host Connection Parameters Host Group Option- Simple Setting ® Platforms ® Linux Alternate Path None Fail Over None Tru64 Others Not specified None None VxVM (Note1) VCS (Note2) None VCS (Note2) None Tru Cluster None 9 9 9 Detail Setting: The following parameters will be selected automatically according to simple setting.
Note: When making the simple setting of the host group options, select the items shown on gray backgrounds. Only when using a combination not described in the simple setting, select the required parameter from the detail settings. Note 1: When using VERITAS™ Volume Manager (VxVM), the Array Support Library (ASL) for the AMS/WMS Series is required. Please download it from the VERITAS™ Web site. Note 2: VERITAS™ Cluster Server.
Appendix C Basic Specifications of the Subsystem
The basic specifications of the AMS200 are shown in this appendix. The basic specifications of the AMS200 are listed and described in Table C.1, and the basic specifications of the RKNAS are listed and described in Table C.2. Table C.
Note 2: When the FC interface board is not added, one port configures one Mini-HUB and extends to two host connectors. When the FC interface board is added, the control unit implements two ports and two host connectors; each port configures an FC interface independently of the other port and implements one host connector. Note 3: When the FC interface board is added, the interface type supports 4 Gbps Fibre Channel Optical (Non-OFC).
Table C.
Note 2: Although a subsystem configured with RAID 6, RAID 5, RAID 1, or RAID 1+0 provides data reliability enhanced by redundancy, the possibility remains that user data is lost owing to an unexpected failure of the host computer or of the hardware/software of the subsystem itself. Therefore, users are requested to back up all data so that it can be restored if the original data is lost. RAID 0+1 is written in place of RAID 1+0 in some places; it has the same meaning as RAID 1+0.
Table C.
The following table lists and describes the basic specifications of RKNAS. Table C.2 Basic Specifications of RKNAS Items Configuration RKNAS Specifications Configuration 1 RKNAS Subsystem appearance Physical specifications Input power specifications Start-up time (min) Standard: 3 (Note 3) Chassis size (W×D×H) (mm) 483×650×43 Mass (kg) 15 approx. Acoustic noise (dB) (Note 1) 60 approx.
Appendix D Interfaces
Fibre Channel (Non-OFC) and Ethernet connections are used for the interface with the host computer. The AMS200 provides a Fibre Channel interface with the control unit as standard. The NAS unit and the iSCSI model provide an Ethernet interface as standard. D.1 D.1.
D.1.2 Cable
Table D.1 shows the specifications of the Fibre Channel interface cable. Figure D.1 shows the type of connector for the optical interface on the cable side.
Table D.1 Cable Specification
Cable type: 50/125 μm multimode optical fibre; wavelength: 850 nm.
SC-LC cable: equivalent to Sumitomo 3M 170AC-AAAA-XXX; SC connector (JIS C 5973) on one side, LC connector on the other side.
LC-LC cable: LC connector on both sides.
D.1.3 Connector on Subsystem Side
Figure D.1 displays the type of connector for the optical interface on the subsystem side.
LC Connector Type: LC duplex receptacle connector; interval: 6.25 mm, flat type, two rows. Tx: Transmitter; Rx: Receiver.
Figure D.
D.1.4 Ordered Set Table D.2 displays the Ordered Sets defined by the Fibre Channel interface. Table D.2 Ordered Set No.
Frame Delimiters
The Frame Delimiter is an Ordered Set that immediately precedes or follows a frame context, and consists of the SOF (Start of Frame) and the EOF (End of Frame).
SOF (Start of Frame)
The SOF delimiter is an Ordered Set that immediately precedes the context of a frame. The following SOF delimiters, shown in Table D.3, are defined based on the service class, etc. Table D.3 SOF Delimiters No.
Table D.4 EOF Delimiters
No. Name Meaning
1 EOFt Shows that the sequence of the SEQ_ID owned by the frame has completed.
2 EOFdt Used to cancel the exclusive connection. Identifies the final ACK of the sequence and shows that the sequence of the SEQ_ID owned by the frame has completed.
3 EOFn Used when no other EOF delimiter (EOFt or EOFdt) showing valid frame contents is required.
4 EOFdti When the EOFdt has illegal contents, it is replaced with the EOFdti.
CLS (Close) --- FC-AL: A CLS is sent by the L_Port. When the L_Port sends the CLS, it does not transfer frames or R_RDY on the current circuit. The CLS shows that control of the loop is ready to be abandoned or has already been abandoned. MRKtx (Mark) --- FC-AL: A MRKtx is a Primitive Signal transmitted on a Loop by a master control point to synchronize other Nodes.
D.1.5 Frames
Frame Format
Table D.5 displays the frame format used with Fibre Channel.
Table D.5 Frame Format
Start of Frame (SOF): 4 bytes; Frame Header: 24 bytes; Data Field: 0 to 2112 bytes; CRC: 4 bytes; End of Frame (EOF): 4 bytes.
Start of Frame: The Start of Frame (SOF) delimiter is an Ordered Set that immediately precedes the frame context. For the types of SOF, refer to section D.1.4.
Header
The format of the Frame Header is shown in Table D.6.
Table D.6 Frame Header Format
Word 0: R_CTL (bits 31-24), D_ID (bits 23-0)
Word 1: Reserved (bits 31-24), S_ID (bits 23-0)
Word 2: TYPE (bits 31-24), F_CTL (bits 23-0)
Word 3: SEQ_ID (bits 31-24), DF_CTL (bits 23-16), SEQ_CNT (bits 15-0)
Word 4: OX_ID (bits 31-16), RX_ID (bits 15-0)
Word 5: Parameter
R_CTL (Routing Control): The R_CTL field is used to categorize the frame function. Classification into the link control frame and data frame is done by the R_CTL.
Parameter: In the link control frame, the Parameter field is used to transmit information specific to the individual link control frame; in the data frame, it is used for the relative offset.
Optional Headers
The presence of the Optional Headers is indicated by the DF_CTL field. The treatment of the Optional Headers by the AMS200 is shown in Table D.7.
Table D.7 Optional Headers
No. Name / Usage / Treatment with the Disk Array / Remarks
1 Expiration Security Header: used to specify the expiration time, etc.
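The word layout of the 24-byte frame header in Table D.6 can be expressed as a short parsing sketch. Python is used here purely for illustration; the function name is invented, and the field widths follow the table above:

```python
import struct

def parse_fc_frame_header(header: bytes) -> dict:
    """Parse the 24-byte Fibre Channel frame header of Table D.6.

    Each big-endian 32-bit word packs an 8-bit control field in bits
    31-24 with a 24-bit address or qualifier in bits 23-0 (word 3 and
    word 4 subdivide further, per the table above).
    """
    if len(header) != 24:
        raise ValueError("FC frame header must be exactly 24 bytes")
    words = struct.unpack(">6I", header)
    return {
        "R_CTL":     (words[0] >> 24) & 0xFF,
        "D_ID":      words[0] & 0xFFFFFF,
        "S_ID":      words[1] & 0xFFFFFF,   # bits 31-24 of word 1 are reserved
        "TYPE":      (words[2] >> 24) & 0xFF,
        "F_CTL":     words[2] & 0xFFFFFF,
        "SEQ_ID":    (words[3] >> 24) & 0xFF,
        "DF_CTL":    (words[3] >> 16) & 0xFF,
        "SEQ_CNT":   words[3] & 0xFFFF,
        "OX_ID":     (words[4] >> 16) & 0xFFFF,
        "RX_ID":     words[4] & 0xFFFF,
        "Parameter": words[5],
    }
```

For instance, a frame with TYPE 0x08 (the FCP value given later in section D.1.7) is recognized here from bits 31-24 of word 2.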
Link Control Frames
Table D.8 displays the defined Link Control frames (FT-0) and the supported Link Control frames. The AMS200 supports the link service frames shown in Table D.9.
Table D.8 Link Control Frames
No. Name Meaning Support
1 ACK_1 (Acknowledge_1) Indicates that a single Data frame is being acknowledged. { (Note)
2 ACK_0 (Acknowledge_0) Indicates that all Data frames of a Sequence are being acknowledged.
D.1.6 Link Service Table D.9 No.
Table D.9 No. Classification Name Support Issue Receive PRLI (Process Login) × { PRLO (Process Logout) { { 28 SCN (State Change Notification) × × 29 TPLS (Test Process Login State) × × GAID (Get Alias_ID) × × FACT (Fabric Activate Alias_ID) × × 32 FDACT (Fabric Deactivate Alias_ID) × × 33 NACT (N_Port Activate Alias_ID) × × 34 NDACT (N_Port Deactivate Alias_ID) × × 26 27 30 31 Extended Link Service- Proc.
D.1.7 FCP Frame Format AMS200 supports the six Information Units (IU) shown in the following table. Table D.10 Information Unit No. Name Meaning Support 1 FCP_CMND Transfers SCSI Command or Task Management { 2 FCP_XFER_READY Notifies FCP_DATA will be transferred. { 3 FCP_DATA Transfers Data. { 4 FCP_RSP Transfers Status Information { 5 FCP_CMND+FCP_DATA Transfers SCSI Command and the first Data within a single Information Unit.
D_ID (Destination ID): This indicates the transmission destination of a frame. The D_ID of a frame from the SCSI command issuer side (Exchange originator) is the target ID of SCSI-3. S_ID (Source ID): This indicates the transmission source of a frame. The S_ID of a frame from the SCSI command issuer side (Exchange originator) is the initiator ID of SCSI-3. TYPE (Data structure type): In the TYPE field of all frames of the FCP sequence, 0x08 is set.
FCP_CMND
The FCP_CMND is sent from a host and is used for SCSI command issue and for task management instructions such as target reset. The payload of the FCP_CMND is shown in Table D.12. Table D.
FCP_LUN: The FCP_LUN field specifies the Logical Unit Number on which the issued SCSI Command is executed. Table D.13 shows the format of the FCP_LUN field.
Table D.13 FCP_LUN Format
Byte 0: 0x00
Byte 1: LUN (Max. 256)
Bytes 2 to 7: 0x00
FCP_CNTL: The FCP_CNTL field contains the following control information.
ABORT TASK SET: The ABORT TASK SET is used to clear all tasks in the specified Logical Unit for the Initiator. (Same as the SCSI-2 Abort message) The ABORT TASK (Same as the SCSI-2 Abort Tag message) is specified by the ABTS Link Service. Execution Management: The direction of the SCSI data transfer is specified in the Execution Management. The direction depends on the SCSI Command. FCP_CDB The SCSI CDB (Command Descriptor Block) is contained in the FCP_CDB field.
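The single-level FCP_LUN layout of Table D.13 (byte 0 fixed at 0x00, byte 1 carrying the LUN, bytes 2 through 7 zero) can be built as follows. This is an illustration of the table, not subsystem code, and the function name is invented:

```python
def build_fcp_lun(lun: int) -> bytes:
    """Build the 8-byte FCP_LUN field per Table D.13.

    Byte 0 is 0x00, byte 1 is the LUN (0-255, matching the table's
    'Max. 256'), and bytes 2-7 are 0x00.
    """
    if not 0 <= lun <= 255:
        raise ValueError("LUN must be in the range 0..255")
    return bytes([0x00, lun]) + bytes(6)
```

So `build_fcp_lun(5)` yields the 8-byte field addressing logical unit 5 in the FCP_CMND payload.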
D.1.9 Initialization Process
Link Initialization
When the array unit is turned on and becomes ready, the AMS200 performs the Link Initialization process. The LR, LRR, NOS, OLS, and IDLE are exchanged between the subsystem and the connected N_Port, and frames cannot be transmitted until the Active state is reached. The details of the Link Initialization process are shown in Table D.15. At the beginning, the AMS200 enters the OLS Transmit state, and the Link Initialization process continues until the Active state is reached.
LILP: Loop AL_PA position map. The AMS200 transmits LIP first. When LIP is detected by the AMS200, the array controller transmits LISM. When the array controller receives the same LISM that the AMS200 transmitted, the subsystem becomes the Loop Master; the subsystem then transmits and receives ARBx, LIFA, LIPA, LIHA, LISA, LIRP, and LILP with the address map, and determines the AL_PA of each L_Port. At the end of the Loop Initialization, the subsystem transmits and receives CLS.
D.1.
At the time of the Read, with Xfer Ready Disabled:
FCP_CMND (Initiator to Target)
FCP_XFER_DATA (Target to Initiator)
FCP_XFER_DATA (Target to Initiator)
FCP_RSP (Target to Initiator)
The FCP_XFER_RDY is not sent before sending the FCP_XFER_DATA.
At the time of the Xfer Ready Disabled (not supported):
FCP_CMND (Initiator to Target)
FCP_DATA (Initiator to Target)
FCP_DATA (Initiator to Target)
FCP_RSP (Target to Initiator)
The FCP_XFER_RDY is not sent before sending the first FCP_DATA.
Link service FLOGI, PLOGI, LOGO, PRLI, and PRLO.
When another loop master exists, the following Ordered Sets are transmitted and received in order: LIP, LISM, ARB (F0), LIFA, LIPA, LIHA, LISA, LIRP, LILP.
Fabric Connection Table D.16 displays the basic sequence of the frame at the time of start-up when the subsystem is in the fabric connection. Table D.16 Link Initialization Process No Opponent Party Frame 1 FAN Direction → ← ACC 2 ACC 3 FS_ACC 4 FS_ACC 5 FS_ACC 6 ACC 7 Logs in the name server. RCS_ID Registers the support class. RFT_ID Registers the FC-4 type. RPT_ID Registers the type of own port as the N/NL. SCR Receives and registers the RSCN.
Response when receiving the ELS without the PLOGI. Table D.17 displays the response made when receiving the ELS without the PLOGI. Table D.17 Response When Receiving ELS without PLOGI Frame Received Response In FC_AL In Point-to-Point (fabric) Connection FCP_CMND No response (frame is abandoned.) No response (frame is abandoned.
D.2 Ethernet Connection Specifications
D.2.1 System Configuration
To configure this NAS system, use switches compliant with the following standards: IEEE 802.1D STP, IEEE 802.1w RSTP, IEEE 802.3 CSMA/CD, IEEE 802.3u Fast Ethernet, IEEE 802.3z 1000BaseX, IEEE 802.1Q Virtual LANs, IEEE 802.
Cable
The following table lists and describes the cable specification for the LAN interface and the connector type. Figure D.4 shows the connector type on the cable side.
Table D.18 LAN Interface Cable Specification
Cable type: Category 6 UTP; corresponding transmission band: 1000BASE-TX; connector: RJ-45.
Connector Type on the Subsystem Side
The following figure shows the connector type for the LAN interface on the subsystem side. Figure D.
Appendix E E.1 Remote Adapter Specifications Remote Adapter Specifications Table E.1 Remote Adapter Specifications Model Item Remote Adapter (Main Unit) (DF-F700-VR4A) Remote Adapter (Hub) (DF-F700-VR4H) Physical Specifications Chassis size (W×D×H) (mm) 109×190×42 219×190×42 Mass (kg) 1 2 Input power Input voltage (V) AC 100-240 Specifications Frequency (Hz) 50/60 ±1 Number of phases, cabling Single-phase with protective grounding Steady-state current (A) 0.
E.2 Remote Adapter Dimensions
Remote adapter (Main unit) (DF-F700-VR4A): 109 mm (W) × 190 mm (D) × 42 mm (H); connectors POWER, OUT, J100, J101, and J1 (remote adapter cable).
Remote adapter (Hub) (DF-F700-VR4H): 219 mm (W) × 190 mm (D) × 42 mm (H); connectors POWER, J200-J205, IN, OUT, J100-J105, and J1 (remote adapter cable).
Figure E.
Appendix F List of Storage Capacities Corresponding to RAID Levels and Configurations
The upper and lower values in each cell show the number of mounted disk drives and the disk capacity, respectively. No spare disk is included. Note: All values of storage capacities in the following tables are calculated as 1 Gbyte = 1,000,000,000 bytes. (This definition is different from 1 Kbyte = 1,024 bytes.) Table F.
Table F.2 Disk Capacity Component Unit Range Total Range of Disk Drives 1D+1D 71.3 G bytes RKS 1 Min. 15 (Max) 2 71.31 14 499.17 RKAJ 1 30 2 45 3 60 4 75 5 90 6 105 30 1069.66 44 1568.84 60 2139.33 74 2638.51 90 3209.00 104 3708.18 Table F.3 List of Capacities Corresponding to RAID5 (72 Gbytes) Disk capacity Component unit Range Total range of Disk drives 2D+1P 71.
Table F.4 List of Capacities Corresponding to RAID6 (72 Gbytes) Disk capacity Component unit Range Total range of Disk drives 2D+2P 71.3 G bytes 3D+2P 4D+2P 5D+2P 6D+2P 7D+2P 8D+2P 9D+2P 10D+2P 11D+2P 12D+2P 13D+2P 14D+2P 15D+2P 16D+2P 17D+2P 18D+2P 19D+2P 20D+2P 21D+2P 22D+2P 23D+2P 24D+2P 25D+2P 26D+2P RKS 1 Min. 4 142.62 5 213.93 6 285.24 7 356.55 8 427.86 9 499.17 10 570.49 11 641.80 12 713.11 13 784.42 14 855.73 15 927.04 0 0.00 0 0.00 0 0.00 0 0.00 0 0.00 0 0.00 0 0.00 0 0.00 0 0.00 0 0.00 0 0.
Table F.4 List of Capacities Corresponding to RAID6 (72 Gbytes) (continued) Disk capacity Component unit Range Total range of Disk drives 27D+2P 71.3 G bytes 28D+2P 1 Min. 0 0.00 0 0.00 RKAJ 15 (Max) 0 0.00 0 0.00 1 30 2 45 3 60 4 75 5 90 6 105 29 1925.40 30 1996.71 29 1925.40 30 1996.71 58 3850.80 60 3993.43 58 3850.80 60 3993.43 87 5776.21 90 5990.14 87 5776.21 90 5990.14 Table F.
Table F.6 List of Capacities Corresponding to RAID0 (146 Gbytes) Disk capacity Component unit Range Total range of Disk drives 2D 143.3 G bytes 3D 4D 5D 6D 7D 8D 9D 10D 11D 12D 13D 14D 15D 16D Table F.7 Disk Capacity Component Unit Range Total Range of Disk Drives 1D+1D RKS RKAJ 1 Min. 15 (Max) 2 286.61 3 429.91 4 573.22 5 716.53 6 859.83 7 1003.14 8 1146.45 9 1289.75 10 1433.06 11 1576.37 12 1719.67 13 1862.98 14 2006.29 15 2149.59 0 0.00 14 2006.29 15 2149.59 12 1719.67 15 2149.59 12 1719.
Table F.8 List of Capacities Corresponding to RAID5 (146 Gbytes) Disk capacity Component unit Range Total range of Disk drives 2D+1P 143.3 G bytes 3D+1P 4D+1P 5D+1P 6D+1P 7D+1P 8D+1P 9D+1P 10D+1P 11D+1P 12D+1P 13D+1P 14D+1P 15D+1P 334 Appendix F RKS 1 Min. 3 286.61 4 429.91 5 573.22 6 716.53 7 859.83 8 1003.14 9 1146.45 10 1289.75 11 1433.06 12 1576.37 13 1719.67 14 1862.98 15 2006.29 0 0.00 RKAJ 15 (Max) 15 1433.06 12 1289.75 15 1719.67 12 1433.06 14 1719.67 8 1003.14 9 1146.45 10 1289.
Table F.9 List of Capacities Corresponding to RAID6 (146 Gbytes) Disk capacity Component unit Range Total range of Disk drives 2D+2P 143.3 G bytes 3D+2P 4D+2P 5D+2P 6D+2P 7D+2P 8D+2P 9D+2P 10D+2P 11D+2P 12D+2P 13D+2P 14D+2P 15D+2P 16D+2P 17D+2P 18D+2P 19D+2P 20D+2P 21D+2P 22D+2P 23D+2P 24D+2P 25D+2P 26D+2P RKS 1 Min. 4 286.61 5 429.91 6 573.22 7 716.53 8 859.83 9 1003.14 10 1146.45 11 1289.75 12 1433.06 13 1576.37 14 1719.67 15 1862.98 0 0.00 0 0.00 0 0.00 0 0.00 0 0.00 0 0.00 0 0.00 0 0.00 0 0.
Table F.9 List of Capacities Corresponding to RAID6 (146 Gbytes) (continued) Disk capacity Component unit Range Total range of Disk drives 27D+2P 143.3 G bytes 28D+2P Table F.10 Disk capacity Componen t unit Range Total range of Disk drives 2D+2D 3D+3D 4D+4D 5D+5D 6D+6D 7D+7D 8D+8D 336 Appendix F RKS 1 Min. 0 0.00 0 0.00 RKAJ 15 (Max) 0 0.00 0 0.00 1 30 2 45 3 60 4 75 5 90 6 105 29 3869.27 30 4012.58 29 3869.27 30 4012.58 58 7738.55 60 8025.17 58 7738.55 60 8025.17 87 11607.
Table F.11 Disk capacity Componen t unit Range Total range of Disk drives 2D 3D 4D 5D 6D 7D 8D 9D 10D 11D 12D 13D 14D 15D 16D Table F.12 Disk Capacity Component Unit Range Total Range of Disk Drives 1D+1D List of Capacities Corresponding to RAID0 (300 Gbytes) 287.6 G bytes RKS RKAJ 1 Min. 15 (Max) 2 575.25 3 862.88 4 1150.51 5 1438.14 6 1725.77 7 2013.40 8 2301.02 9 2588.65 10 2876.28 11 3163.91 12 3451.54 13 3739.17 14 4026.80 15 4314.42 0 0.00 14 4026.80 15 4314.42 12 3451.54 15 4314.42 12 3451.
Table F.13 Disk capacity Componen t unit Range Total range of Disk drives 2D+1P 3D+1P 4D+1P 5D+1P 6D+1P 7D+1P 8D+1P 9D+1P 10D+1P 11D+1P 12D+1P 13D+1P 14D+1P 15D+1P 338 Appendix F List of Capacities Corresponding to RAID5 (300 Gbytes) 287.6 G bytes RKS 1 Min. 3 575.25 4 862.88 5 1150.51 6 1438.14 7 1725.77 8 2013.40 9 2301.02 10 2588.65 11 2876.28 12 3163.91 13 3451.54 14 3739.17 15 4026.80 0 0.00 RKAJ 15 (Max) 15 2876.28 12 2588.65 15 3451.54 12 2876.28 14 3451.54 8 2013.40 9 2301.02 10 2588.
Table F.14 List of Capacities Corresponding to RAID6 (300 Gbytes) Disk capacity Component unit Range Total range of Disk drives 2D+2P 287.6 G bytes 3D+2P 4D+2P 5D+2P 6D+2P 7D+2P 8D+2P 9D+2P 10D+2P 11D+2P 12D+2P 13D+2P 14D+2P 15D+2P 16D+2P 17D+2P 18D+2P 19D+2P 20D+2P 21D+2P 22D+2P 23D+2P 24D+2P 25D+2P 26D+2P RKS 1 Min. 4 575.25 5 862.88 6 1150.51 7 1438.14 8 1725.77 9 2013.40 10 2301.02 11 2588.65 12 2876.28 13 3163.91 14 3451.54 15 3739.17 0 0.00 0 0.00 0 0.00 0 0.00 0 0.00 0 0.00 0 0.00 0 0.00 0 0.
Table F.14 List of Capacities Corresponding to RAID6 (300 Gbytes) (continued) Disk capacity Component unit Range Total range of Disk drives 27D+2P 287.6 G bytes 28D+2P RKAJ 1 Min. 15 (Max) 0 0.00 0 0 0 0.00 0 0.00 1 30 2 45 3 60 4 75 5 90 6 105 29 7765.97 30 8053.60 29 7765.97 30 8053.60 58 15531.94 60 16107.20 58 15531.94 60 16107.20 87 23297.91 90 24160.80 87 23297.91 90 24160.80 Table F.
Table F.17 List of Capacities Corresponding to RAID5 (250 Gbytes) Disk capacity Component unit Range Total range of Disk drives 2D+1P 245.7 G bytes 3D+1P 4D+1P 5D+1P 6D+1P 7D+1P 8D+1P 9D+1P 10D+1P 11D+1P 12D+1P 13D+1P 14D+1P 15D+1P RKAJAT 1 Min. 3 491.49 4 737.23 5 982.98 6 1228.72 7 1474.47 8 1720.22 9 1965.96 10 2211.71 11 2457.45 12 2703.20 13 2948.95 14 3194.69 15 3440.44 0 0.00 15 (Max) 15 2457.45 12 2211.71 15 2948.95 12 2457.45 14 2948.95 8 1720.22 9 1965.96 10 2211.71 11 2457.45 12 2703.
Table F.18 List of Capacities Corresponding to RAID6 (250 Gbytes) Disk capacity Component unit Range Total range of Disk drives 2D+2P 245.7 G bytes 3D+2P 4D+2P 5D+2P 6D+2P 7D+2P 8D+2P 9D+2P 10D+2P 11D+2P 12D+2P 13D+2P 14D+2P 15D+2P 16D+2P 17D+2P 18D+2P 19D+2P 20D+2P 21D+2P 22D+2P 23D+2P 24D+2P 25D+2P 26D+2P 342 Appendix F RKAJAT 1 Min. 4 491.49 5 737.23 6 982.98 7 1228.72 8 1474.47 9 1720.22 10 1965.96 11 2211.71 12 2457.45 13 2703.20 14 2948.95 15 3194.69 0 0.00 0 0.00 0 0.00 0 0.00 0 0.00 0 0.
Table F.18 List of Capacities Corresponding to RAID6 (250 Gbytes) (continued) Disk capacity Component unit Range Total range of Disk drives 27D+2P 245.7 G bytes 28D+2P Table F.19 Disk capacity Componen t unit Range Total range of Disk drives 2D+2D 3D+3D 4D+4D 5D+5D 6D+6D 7D+7D 8D+8D Table F.20 Disk Capacity Component Unit Range Total Range of Disk Drives 1D+1D RKAJAT 1 Min. 15 (Max) 0 0.00 0 0.00 0 0.00 0 0.00 2 30 3 45 4 60 5 75 6 90 29 6635.14 30 6880.88 29 6635.14 30 6880.88 58 13270.
Table F.21 Disk capacity Componen t unit Range Total range of Disk drives 2D+1P 3D+1P 4D+1P 5D+1P 6D+1P 7D+1P 8D+1P 9D+1P 10D+1P 11D+1P 12D+1P 13D+1P 14D+1P 15D+1P 344 Appendix F List of Capacities Corresponding to RAID5 (400 Gbytes) 393.4 G bytes RKAJAT 1 Min. 3 786.91 4 1180.37 5 1573.83 6 1967.29 7 2360.75 8 2754.21 9 3147.67 10 3541.13 11 3934.59 12 4328.05 13 4721.51 14 5114.97 15 5508.42 0 0.00 15 (Max) 15 3934.59 12 3541.13 15 4721.51 12 3934.59 14 4721.51 8 2754.21 9 3147.67 10 3541.
Table F.22  List of Capacities Corresponding to RAID6 (400 Gbytes)
(Disk capacity: 393.4 Gbytes per drive)

  Component unit     Min. drives (1 RKAJAT)   Capacity (Gbytes)
  2D+2P              4                        786.91
  3D+2P              5                        1180.37
  4D+2P              6                        1573.83
  5D+2P              7                        1967.29
  6D+2P              8                        2360.75
  7D+2P              9                        2754.21
  8D+2P              10                       3147.67
  9D+2P              11                       3541.13
  10D+2P             12                       3934.59
  11D+2P             13                       4328.05
  12D+2P             14                       4721.51
  13D+2P             15                       5114.97
  14D+2P - 26D+2P    0                        0.00
Table F.22  List of Capacities Corresponding to RAID6 (400 Gbytes) (continued)

  Component unit   RAID groups   Drives   Capacity (Gbytes)
  27D+2P           1             29       10623.40
                   2             58       21246.80
  28D+2P           1             30       11016.85
                   2             60       22033.71

Table F.23  (Component units: 2D+2D, 3D+3D, 4D+4D, 5D+5D, 6D+6D, 7D+7D, 8D+8D)
Appendix G  Port Address Mapping Table

Fibre Channel physical addresses are converted to target IDs (TIDs) using a conversion table. The following tables show the current limits for TIDs on various operating systems.
Port Addresses for Windows NT® (Fibre Board: Emulex®)
Appendix H  Power Cables

This section describes the following power cables: J1H, J2H, and J2H5 and J2H10.

Table H.1  J1H Power Cable (Cable Name: DF-F700-J1H)

  #  Name         Quantity   Model                 Applicable Safety Standard / Rating
  1  Cable        -          PVC cord              UL and CSA
  2  Connector A  1          NEMA Standard 5-15P   For AC 125 V (13 A or 15 A)
  3  Connector B  1          EN60320-C13           For standard use

Power cable, L = 2.5 m

Figure H.1  J1H Power Cable
Table H.2  J2H Power Cable (Cable Name: DF-F700-J2H)

  #  Name         Quantity   Model         Applicable Safety Standard / Rating
  1  Cable        -          PVC cord      UL and CSA
  2  Connector A  1          EN60320-C14   For AC 250 V (13 A or 15 A)
  3  Connector B  1          EN60320-C13   For rack frame

Power cable, L = 2.5 m

Figure H.2  J2H Power Cable
Appendix I  Number of Logical Blocks

Set the number of logical blocks for each logical unit using the following multiples in accordance with the RAID level.

Note: All storage capacities in the following tables are calculated as 1 Gbyte = 1,000,000,000 bytes. (This differs from the definition 1 Kbyte = 1,024 bytes.)

Each RAID group can be divided into up to 512 logical units.
Table I.1  Number of Logical Blocks and RAID Levels (continued)

  RAID Level   Configuration   Logical Block Number
  RAID1+0      2D+2D           4096
               3D+3D           6144
               4D+4D           8192
               5D+5D           10240
               6D+6D           12288
               7D+7D           14336
               8D+8D           16384
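The RAID1+0 entries above are all 2048 logical blocks per data drive (2D+2D is 2 × 2048 = 4096, and so on). The sketch below assumes that observation; the helper names are hypothetical and the 2048-block unit is read off Table I.1, not stated elsewhere in this guide.

```python
# Per Table I.1, a RAID1+0 group of n data drives (nD+nD) uses a logical
# block multiple of 2048 * n. These hypothetical helpers compute that
# multiple and round a requested LU size down to the nearest valid value.
def raid10_block_multiple(data_drives: int) -> int:
    return 2048 * data_drives

def align_blocks_down(requested_blocks: int, data_drives: int) -> int:
    multiple = raid10_block_multiple(data_drives)
    return (requested_blocks // multiple) * multiple

print(raid10_block_multiple(2))      # 4096 (the 2D+2D entry)
print(align_blocks_down(10000, 2))   # 8192
```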
When dividing a RAID group into multiple logical units, keep the sum of the logical block counts of the logical units at or below the number of logical blocks of one parity group, shown below. However, when creating multiple parity groups in a RAID group, keep the sum at or below the number of logical blocks of one parity group multiplied by the number of parity groups. The number of logical blocks of one parity group is shown below.
Table I.2  (Columns: RAID Configuration; Number of Logical Blocks for disk drive capacities of 71.3, 143.3, 245.7, 292, and 393.4 Gbytes)
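The sizing rule above can be checked mechanically. The following is a hypothetical validation sketch, assuming the per-parity-group block count is taken from Table I.2; none of these names come from the AMS200 software.

```python
# Hypothetical check: when a RAID group is split into logical units, the
# LU block counts must sum to no more than the number of logical blocks
# of one parity group multiplied by the number of parity groups.
def lus_fit(lu_blocks, blocks_per_parity_group, parity_groups=1):
    return sum(lu_blocks) <= blocks_per_parity_group * parity_groups

print(lus_fit([4096, 8192], 16384))      # True  (12288 <= 16384)
print(lus_fit([12288, 8192], 16384))     # False (20480 >  16384)
print(lus_fit([12288, 8192], 16384, 2))  # True  (20480 <= 32768)
```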
Appendix J  Using LUN Security or LUN Management on a Fabric Switch Connection

When using LUN Manager on a Fabric Switch connection:

J.1  When connecting servers (HBAs) or exchanging an HBA, connect the servers (HBAs) that access the Disk Array only after the LUN Security or LUN Management settings, including WWN registration, are completed. Zoning on the Fabric Switch must be set as shown below to prevent access from HBAs that are not permitted to access the Disk Array by LUN Manager.