Dell Integrated Systems for Oracle Database (DISOD) 2.0 Owner’s Guide
Revisions

Date            Description
August 2015     Document initial release, version 1.0
November 2015   Document version 1.1. Replaced S60 with S3048-ON as the Management Switch.

© 2015 Dell Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. Dell™ and the Dell logo are trademarks of Dell Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.
Contents
1 Overview―Dell Integrated Systems for Oracle Database (DISOD)
  1.1 DISOD 2.0
2 DISOD hardware components
1 Overview―Dell Integrated Systems for Oracle Database (DISOD)
Dell Integrated Systems for Oracle Database (DISOD) is a fully integrated hardware stack that is purpose-built as a high-performance Oracle Database 12c racked solution. DISOD provides an out-of-box experience to customers: everything up to the point of the Oracle Database software installation is preconfigured.
DISOD is a scale-out solution and is available in multiple configuration sizes in terms of database compute nodes and storage capacity. However, rather than covering every configuration size supported by the DISOD solution, this document presents the generic design principles that apply across configuration sizes. To learn about the DISOD configurations that are currently supported, contact your Dell Sales team.
[Figure: DISOD 2.0 (R730) – Small (2×2) architectural diagram, showing the 2 × S6000 40GbE ToR switches and the S3048-ON management switch.]
The architectural diagram of a medium (4×4) FC-based DISOD 2.0 configuration is shown in Figure 2. The configuration comprises the corporate or data center public network, two Dell S6000 40GbE Top-of-Rack switches (public and private traffic), four R730 database servers, two Brocade 6510 16 Gbps Fibre Channel SAN switches, and two active/active DAAD 2.0-HA (FC) storage arrays, connected over the public network (Ethernet), the private interconnect (Ethernet), and the SAN network (FC), with 2 × 40 GbE uplinks per switch.
Figure 2 Architectural diagram of a medium (4×4) FC-based DISOD 2.0 configuration
2 DISOD hardware components
This section provides information about the main hardware components in the DISOD 2.0 solution.
2.1 Database servers
DISOD 2.0 offers two different servers to be used as Oracle Database nodes:
- Two-socket Dell PowerEdge R730 rack server, or
- Four-socket Dell PowerEdge R930 rack server
The database servers are preinstalled with Oracle Linux 6 running Unbreakable Enterprise Kernel Release 3.
Figure 3 shows the rear side of the R730 chassis with the adapters populated according to Dell’s recommended PCIe adapter slot priority.
Figure 4 shows the rear side of the R930 chassis with the adapters populated according to Dell’s recommended PCIe adapter slot priority.
o 25.6 TB DAAD 2.0 that uses four 6.4 TB Fusion ioMemory flash adapters
Fabric: FC for front-end connectivity only
Each HA pair of DAAD in the DISOD solution is precabled to the SAN switches and preconfigured with the volumes required for Oracle Database 12c. For more information about the DAAD storage array configuration, see section 4.1 DAAD storage array configuration. For more information about DAAD as a generic storage appliance, see the DAAD product documentation.
Both S6000 switches are configured as Layer 2 switches. The switches ship with the following preconfigurations:
- Rapid Spanning Tree Protocol (RSTP)
- Virtual Link Trunking (VLT)
- Predefined port functions and configuration
- VLAN and Port Channels
For more information about the S6000 switch configurations, see section 3.2.1 S6000 switch configurations.
2.4 Management server
The Dell PowerEdge R320 server is set up as the management server that allows remote management and monitoring of all the DISOD hardware components. Table 3 lists the components in the management server.

Table 3  R320 management server components

Component    Description
Server       1S Dell PowerEdge R320 server
Processor    1 × Intel Xeon E5-2430L v2 6c 2.
3 DISOD network configuration
This section provides information about the hardware and software network configuration that DISOD ships with, covering the following network configurations:
Network address/subnet: 172.16.211.0/24

Public network ports                   IP address(es)                    Quantity
Management server public mgmt. port    172.16.211.8                      1
DAAD nodes (LOM – eth0)                172.16.211.41 – 172.16.211.50     10

Some components within the DISOD solution rack are fixed in quantity, such as the switches and the management server. However, because DISOD is a scalable solution, some components, such as the database nodes and the DAAD nodes, are variable in quantity.
address is assigned to VLAN30, the virtual DISOD private management network. Users can choose to configure the physical management port manually at a later time, if needed. Any direct management of these switches can be done by connecting to the IP addresses provided in Table 5.
3.1.2 Management server network configuration
As seen in Figure 5, the management server (R320) ships with a dual-port 1 GbE LOM and a dual-port 1 GbE PCIe network adapter.
Management server NIC teaming configuration

Teaming Name   Adapter 1 (LOM)               Adapter 2 (add-on)
               Physical Port #   OS Name     Physical Port #   OS Name
BOND-priv      1                 NIC1        0                 SLOT 1
BOND-pub       2                 NIC2        1                 SLOT 1 2

Management server virtual switches configuration

Virtual Switch Name   Teamed Adapter Name
Vswitch-priv          BOND-priv
Vswitch-pub           BOND-pub

After the virtual switches were created, Windows automatically created two more virtual network adapters at the base OS level, called vEthernet(Vswitch-priv) and vEthernet(Vswitch-pub).
Adapter Name   IP Address
eth1*          172.16.211.8

* IMPORTANT: eth1 in LinVM is preconfigured with a temporary publicly routable IP address, which was required to preconfigure the DAAD storage. Customers must change this IP address to match their public network before plugging in to the DISOD ToR switches, to avoid any IP conflicts.
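For illustration, changing the LinVM eth1 address typically means editing its interface configuration file and restarting networking. The values below are placeholders for customer-supplied settings, not values from this guide:

```
# Hypothetical edit of /etc/sysconfig/network-scripts/ifcfg-eth1 in LinVM.
# Replace the temporary factory IP with the customer's public network values.
DEVICE=eth1
BOOTPROTO=static
IPADDR=10.10.10.50        # placeholder: customer public IP
NETMASK=255.255.255.0     # placeholder: customer netmask
GATEWAY=10.10.10.1        # placeholder: customer gateway
ONBOOT=yes
```

On Oracle Linux 6, `service network restart` applies the change.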
3.2 LAN switch configurations This section describes the switch configuration details of the LAN switches supported by the DISOD solution rack. 3.2.1 S6000 switch configurations Dell Networking S6000 40 Gb Ethernet switches are used as the ToR or LAN switches in the DISOD solution rack. This section provides information about the switch settings that the S6000 switches ship with. Each S6000 switch is configured by using its own settings file before being shipped out to the customer.
The database node number increases from left to right. In each column, the top ports (0/4, 12, 20, 28, 36, 44, 52, and 60) connect to the Oracle RAC public network port, and the bottom ports (0/0, 8, 16, 24, 32, 40, 48, and 56) connect to the Oracle RAC private interconnect port on the same database node.
o The ports that are precabled from the factory depend on the customer’s DISOD configuration size.
3.2.1.2 S6000 VLAN and Port Channel configuration
Though it is recommended to have dedicated switches for the Oracle database’s public and private traffic, in this FC-based preintegrated solution the S6000 switches handle both types of traffic. In such configurations, it is recommended to segregate these two kinds of network traffic within the switch.
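As a hedged sketch of such segregation on a Dell Networking OS 9 switch (the VLAN IDs below are assumptions, not the shipped settings; the port roles follow the top/bottom convention described in section 3.2.1.1):

```
! Illustrative only: VLAN IDs are hypothetical, not the factory configuration.
! Public RAC traffic on one VLAN (top ports), private interconnect on another.
interface vlan 10
 description Oracle-RAC-public
 tagged fortyGigE 0/4,0/12
 no shutdown
!
interface vlan 20
 description Oracle-RAC-private
 tagged fortyGigE 0/0,0/8
 no shutdown
```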
3.2.1.3 S6000 VLT domain configuration
Virtual Link Trunking (VLT) is a proprietary link aggregation protocol from Dell Force10 that is available in its enterprise-class network switches. VLT provides a loop-free environment by presenting two physical switches as one logical switch, without losing bandwidth for the devices connecting to it over two separate links.
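For illustration, a minimal VLT domain on Dell Networking OS 9 can be sketched as follows (the domain ID, port-channel number, backup address, and priority are assumptions, not the factory-shipped values):

```
! Illustrative VLT sketch: the two S6000s form one logical switch over a
! peer-link port-channel, with a backup heartbeat over the management network.
vlt domain 1
 peer-link port-channel 100
 back-up destination 192.168.210.2
 primary-priority 4096
```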
3.2.1.4 S6000 Spanning Tree Protocol (STP) configuration
Though VLT offers a loop-free Layer 2 topology, as a best practice to guard against configuration and patching mistakes, Rapid Spanning Tree Protocol (RSTP) is enabled on both S6000 switches. Enabling RSTP also helps prevent any loops that could occur just before the VLT domain is established. Both switches are configured with the default S6000 RSTP switch settings. Some of the important ones are:
As shown in the example in Figure 10, the general switch configuration design is as follows:
- The two switch ports in the same column connect to two ports on either the same DAAD node or the same database node
- The DAAD or database node number increases from left to right
  o Port range 0–15 is designated and used for connecting to the DAAD nodes
  o Port range 16–31 is designated and used for connecting to the database nodes
- Port range 32–47 usage is undefined.
3.4 Management switch configurations
The Dell Networking S3048 1 Gb Ethernet switch is used as the management switch in the DISOD solution rack. This section provides information about the configuration that the S3048 management switch ships with.
3.4.1 S3048 switch configurations
This section provides information about the switch settings that the S3048 management switch ships with.
In each column, the top ports connect to the management port (LOM port 1 or em1), and the bottom ports connect to the iDRAC port on the same database node.
LAN switches connectivity: Two 10 GbE SFP+ ports are used to connect to the ToR LAN switches.
  o Port 49 is connected to the top LAN switch
  o Port 50 is connected to the bottom LAN switch
Undefined connectivity: Connectivity for ports 1, 41–48, and 51–52 is not predefined.
  o Refer to appendix section A.3 for the management infrastructure cabling diagram.
4 DISOD software configuration
This section describes the software configuration that the following hardware components in DISOD ship with:
- DAAD storage array configuration
- Database nodes configuration
- Management server configuration
4.1 DAAD storage array configuration
This section provides the details of the DAAD 2.0 storage array, which is configured by using Dell’s best practices for Oracle databases. As described in section 2.2 Shared storage array, DISOD 2.
As shown in Figure 13, each storage pool contains two ioMemory cards, one from each of the two DAAD nodes in the HA pair. Altogether, four storage pools are created per DAAD HA pair. Each volume that is preconfigured is assigned a primary storage node and a secondary storage node. DAAD ION clustering software uses synchronous replication protocol. Local ‘write’ operations on the primary node are considered complete only after both the local and the remote disk write operations have been confirmed.
The minimum DISOD configuration contains at least one DAAD HA pair because DISOD supports only clustered DAAD nodes. The cluster name of each DAAD HA pair and the individual hostnames of each DAAD node within that HA pair are preconfigured at the factory. The first DAAD HA pair is assigned the cluster name fcion1, and the individual DAAD nodes within that HA pair are assigned the hostnames fcion1a and fcion1b.
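The naming convention above can be sketched programmatically; note that the fcion2 names produced for a second HA pair are an extrapolation of the stated pattern, not values documented in this guide:

```shell
# Sketch of the DAAD naming convention: each HA pair i gets cluster name
# fcion<i>, and its two nodes get hostnames fcion<i>a and fcion<i>b.
# (Pair 2 names are extrapolated from the pattern, not confirmed values.)
for i in 1 2; do
  cluster="fcion${i}"
  echo "${cluster}: ${cluster}a ${cluster}b"
done
```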
4.2 Database nodes configuration This section describes the OS preconfiguration that is applied on the database nodes to facilitate faster deployment of Oracle Database 12c at the customer site. The configuration is done based on the best practices of Dell and Oracle.
To supplement the Oracle Database 12c preinstall RPM, a customized Dell deployment utility is also used. This utility builds on top of the preinstall RPM, and takes extra steps to ensure that the environment is set up properly.
provides three additional mirrored copies residing on separate disks on top of the Normal Redundancy, thus providing a total of six copies.
4.2.4 Disk ownership and permissions setup using UDEV rules
The ownership and permissions that are required on the storage disks before the Oracle 12c grid and database installation are preconfigured on the DISOD database nodes.
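For illustration, a rule of the kind described above could be sketched as follows in a udev rules file (the file name, device match, and owner/group values are assumptions; the actual shipped rules may differ):

```
# Hypothetical /etc/udev/rules.d/99-oracle-asmdevices.rules fragment:
# grant the grid infrastructure owner read/write access to a shared disk.
# The multipath WWID shown is a placeholder, not a real device ID.
KERNEL=="dm-*", ENV{DM_UUID}=="mpath-360000000000000000000000000000001", OWNER="grid", GROUP="asmadmin", MODE="0660"
```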
Oracle Private A: /etc/sysconfig/network-scripts/ifcfg-eth5
Oracle Private B: /etc/sysconfig/network-scripts/ifcfg-eth7
NOTE: For the preassigned IP addresses of the private interfaces in the database nodes, see section 3.1.3 Static IP addresses of Database and DAAD nodes. During Oracle grid installation, select the above two preconfigured network interfaces for the private interconnect.
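As a sketch of what such a preconfigured interface file contains (the address values below are illustrative placeholders; the actual preassigned IPs are those listed in section 3.1.3):

```
# Hypothetical /etc/sysconfig/network-scripts/ifcfg-eth5 (Oracle Private A).
# IPADDR/NETMASK values are placeholders only.
DEVICE=eth5
BOOTPROTO=static
IPADDR=192.168.11.1
NETMASK=255.255.255.0
ONBOOT=yes
```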
4.3.2 WinVM configuration settings
The WinVM VM on the management server is preconfigured with the following settings:
- As a ‘generation 2’ VM
- Windows Server 2012 R2 64-bit guest OS
- Four virtual processors and 8 GB dynamic memory
- All firewalls (Public, Private, and Domain) disabled
- Computer name set to WinVM-<ServiceTag>, where <ServiceTag> is the unique Service Tag of the management server
For network configuration details, see section 3.1.2 Management server network configuration.
the /etc/dhcpd/dhcpd.conf file with the MAC addresses of these network ports.
o To allow easy upgrade of the firmware, or to apply preconfigured switch settings on the S6000 and S3048 switches, by booting the switches in jumpstart mode.
DHCP is configured to use the DISOD private network so that it does not conflict with the customer’s own DHCP server.
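A host reservation of the kind described above might look like the following sketch in /etc/dhcpd/dhcpd.conf (the subnet, hostname, MAC, and fixed address are assumptions, not the shipped configuration):

```
# Illustrative dhcpd.conf fragment: pin a fixed DISOD-private address to a
# switch management port so the switch can fetch firmware/config in
# jumpstart mode. All values below are hypothetical.
subnet 192.168.210.0 netmask 255.255.255.0 {
  host s6000-top {
    hardware ethernet 00:01:e8:00:00:01;
    fixed-address 192.168.210.41;
  }
}
```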
5 DISOD management and monitoring
The DISOD solution rack has a built-in dedicated management infrastructure that consists of the following hardware components:
- Management switch (S3048)
- Management server (R320)
- iDRAC ports on the database servers and the DAAD nodes for GUI-based server accessibility
- Dedicated management port on the database servers (LOM port 1), on the DAAD nodes (LOM port 1), and on the switches for CLI-based hardware accessibility
Using the preconfigured management infrastructure,
[Figure: Connectivity between the data center public network, the ToR S6000 switches (U41/U42), and the S3048 management switch.]
VLAN 30 network. The DISOD solution thereby provides secure and segregated access and management. Users can log in to the management server:
- By using a browser on their client machine to log in to the management server’s iDRAC, or
- By using a Linux terminal or an application such as PuTTY.
the DISOD hardware components, all from a single OEM console. To completely set up the DISOD management system, see the Dell Integrated Systems for Oracle Databases Oracle Enterprise Manager Integration Guide at Dell.com/support/home.
A DISOD rack hardware cabling diagrams
DISOD components are precabled before they are shipped to the customer. For reference and as an example, this section provides the rack cabling diagrams only for a 4×4 R730-based FC DISOD solution and a 2×2 R930-based FC DISOD solution. Users can extrapolate the cabling methodology for other DISOD configurations based on the logical design explained in section 3 DISOD network configuration.
[Figure: LAN infrastructure cabling diagram of a 4×4 R730-based FC DISOD solution, showing the database nodes connected to ToR S6000 #1 (U42) and ToR S6000 #2 (U41).]
Figure 17 shows the LAN infrastructure cabling diagram of a 2×2 R930-based FC DISOD solution.
A.2 Cabling diagram - SAN infrastructure This section provides the cabling diagram of the hardware components that are part of the SAN infrastructure, showing connectivity between the database nodes and the SAN switches, and between the DAAD nodes and the SAN switches. Figure 18 shows the SAN infrastructure cabling diagram of a 4×4 R730-based FC DISOD solution.
Figure 19 shows the SAN infrastructure cabling diagram of a 2×2 R930-based FC DISOD solution.
A.3 Cabling diagram - management infrastructure This section provides the cabling diagram of the hardware components that are part of the management infrastructure, showing the connectivity between the various management ports and the management switches. Figure 20 shows the management infrastructure cabling diagram of a 4×4 R730-based FC DISOD solution.
Figure 21 shows the management infrastructure cabling diagram of a 2×2 R930-based FC DISOD solution.
B DISOD authentication settings
The hardware components in the DISOD solution rack ship with the following default authentication settings:
- root/oracle: Database nodes and LinVM
- Administrator/Oracle@123: Management server base OS and WinVM
- admin/oracle: DAAD storage array nodes
- admin/password: Management switch (S3048-ON), Top-of-Rack S6000 switches, and the SAN Brocade 6510 switches
- root/calvin: iDRAC
It is highly recommended to change these default settings before connecting the DISOD solution