6.2 HP IBRIX 9000 Storage Network Best Practices Guide (TA768-96069, December 2012)

Interconnect Module: Virtual Connect (VC)
The Virtual Connect modules are responsible for configuring and routing all of the network traffic
between the enclosure’s servers and the external customer network. The c7000 enclosure dedicates
interconnect bays 1 and 2 to the Virtual Connect modules. Two modules in a master-slave
relationship are used to achieve path redundancy between the server blades and the external
network.
Virtual Connect Flex-10 is a highly configurable, hardware-based technology that lets you divide
the bandwidth of a single 10GbE connection across the physical NICs presented to the enclosure's
servers. The module has eight external SFP connections labeled X1 through X8, and a single CX-4
connector which is shared with the X1 SFP.
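To make the idea of partitioning concrete, the following minimal sketch (ordinary Python, not part
of any HP or Virtual Connect tooling) checks that a proposed split of one 10GbE connection across
several server NICs does not exceed the available bandwidth. The NIC names and allocations are
hypothetical examples.

    # Minimal sketch (not VC firmware or tooling): validate a hypothetical
    # Flex-10 style split of one 10GbE connection across several server NICs.
    TOTAL_GBPS = 10.0  # bandwidth of a single 10GbE connection

    def validate_split(alloc_gbps):
        """alloc_gbps: mapping of NIC name -> allocated bandwidth in Gb/s."""
        total = sum(alloc_gbps.values())
        if total > TOTAL_GBPS:
            raise ValueError(f"allocations total {total} Gb/s, exceeding {TOTAL_GBPS} Gb/s")
        return total

    # Example: four NICs sharing one 10GbE connection (hypothetical values).
    print(validate_split({"eth0": 4.0, "eth1": 2.0, "eth2": 2.0, "eth3": 2.0}))  # 10.0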
The configuration of the Virtual Connect module determines which of the external connections are
active. Consult the topology descriptions in the next section to determine which of these external
connections need to be cabled to the customer network.
Because the VC module has multiple external ports, unexpected behavior can occur when more than
one port is active (SFP installed and cabled) and the VC configuration maps those ports internally
to the same physical connection. If the ports cannot be successfully formed into a trunk, the VC
module chooses the fastest available link as the active connection; slower links are placed in
standby and are used only if the faster connection is no longer available.
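The active/standby behavior described above can be pictured with the short sketch below. It assumes
only the simple rule stated in this section (the fastest connected link wins); the port names and
speeds are illustrative, and the real selection is made internally by the Virtual Connect firmware.

    # Illustrative sketch of the assumed selection rule (not VC firmware code):
    # the fastest available link becomes active, the remaining links go to standby.
    def select_active(links):
        """links: mapping of port name -> link speed in Gb/s (connected ports only)."""
        if not links:
            return None, []
        active = max(links, key=links.get)           # fastest available link
        standby = [p for p in links if p != active]  # slower links wait in standby
        return active, standby

    active, standby = select_active({"X1": 10.0, "X5": 1.0, "X6": 1.0})
    print(active, standby)  # X1 ['X5', 'X6']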
Each interconnect module has an Ethernet port that is connected to the management network via
the chassis midplane and OA module.
Interconnect Module: 6G SAS switch
There are four 6Gb SAS switch modules in Interconnect Bays 5-8. The SAS switch modules provide
the redundant SAS connection fabric between the server blades and the storage media. The switch
modules are connected to the storage using external SAS cables. All of the external connectors on
this module are for SAS cabling.
Each interconnect module has an Ethernet port that is connected to the management network via
the chassis midplane and OA module.
Recommended network topologies
The preferred network topologies for the IBRIX 9730 storage platform are:
• Unified network. The cluster, user, and management networks are combined onto a single IP
network. This is the default.
• Dedicated management network. Two distinct networks, with one network carrying the cluster
and user network traffic and a separate network dedicated to the management network traffic.
• Additional physical networks. A dedicated management network configuration with up to two
additional physical networks to carry IBRIX user network traffic.
The description of each topology includes the following information:
• Motivation. The motivations for choosing the topology.
• Logical description. The topology at the IP addressing layer. The description illustrates the
network-attached components that must be able to exchange packets and discusses the
preferred segregation of network traffic to match the expected use of each component. Example
IP addresses are used to show the relationships between components.
• Physical description. The actual hardware implementation, focusing on how the logical entities
map to physical hardware and how that hardware is physically wired together.
• Physical cabling. The patch cables that need to be connected between the enclosure network
components and the customer network.
• Verifying the network configuration. Sample commands for verifying that the topology is set
up correctly.
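As a flavor of what such verification might look like, the sketch below opens a TCP connection to
a few placeholder addresses to confirm that each network is reachable from a client. The addresses,
ports, and labels are assumptions for illustration only; the actual sample commands and addresses
for each topology appear in the sections that follow.

    # Hypothetical reachability check (placeholder TEST-NET addresses, not from this guide).
    # It simply confirms that a TCP connection can be opened to each component.
    import socket

    EXAMPLE_TARGETS = {
        "management network (OA)": ("192.0.2.10", 22),
        "cluster/user network":    ("192.0.2.50", 22),
    }

    def reachable(host, port, timeout=3.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for name, (host, port) in EXAMPLE_TARGETS.items():
        status = "ok" if reachable(host, port) else "UNREACHABLE"
        print(f"{name}: {host}:{port} {status}")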