Dell™ PowerEdge™ Systems Dell Oracle Database 10g R2 Standard Edition on Microsoft® Windows Server® 2003 R2 with SP2, Standard x64 Edition Deployment Guide Version 4.
Notes and Cautions NOTE: A NOTE indicates important information that helps you make better use of your computer. CAUTION: A CAUTION indicates potential damage to hardware or loss of data if instructions are not followed. ___________________ Information in this document is subject to change without notice. © 2008 Dell Inc. All rights reserved. Reproduction of these materials in any manner whatsoever without the written permission of Dell Inc. is strictly forbidden.
Contents
Terminology Used in this Document  7
Software and Hardware Requirements  8
    Minimum Software Requirements  8
    Minimum Hardware Requirements for Direct-Attached SAS or Fibre Channel Cluster Configurations  8
Installing and Configuring the Operating System  10
    Installing the Operating System Using the Deployment CD/DVDs  10
    Verifying the Temporary Directory Paths
Installing Oracle RAC 10g R2 Using ASM  33
    Installing Oracle Clusterware Version 10.2.0.1  33
    Installing Oracle10g Database With Real Application Clusters 10.2.0.1  35
    Installing Patchset 10.2.0.4  37
    Configuring the Listener  38
    Creating the Seed Database  39
Configuring and Deploying Oracle Database 10g (Single Node)  43
Index  69
This document provides information about installing, configuring, reinstalling, and using your Oracle Database 10g R2 software following Dell’s Supported Configurations for Oracle. Use this document in conjunction with the Dell Deployment CD to install your software. If you install your operating system using only the operating system CDs, the steps in this document may not be applicable.
The term LUN is commonly used in a Fibre Channel storage environment, and the term virtual disk is commonly used in a Direct-attached SAS (Dell MD3000/MD3000i and Dell MD3000/MD3000i with MD1000 expansion) storage environment. Software and Hardware Requirements The following sections describe the minimum software and hardware requirements for Dell’s Supported Configurations for Oracle. Minimum Software Requirements Table 1-1 lists the minimum software requirements. NOTE: Your Dell configuration includes a 30-day trial license of Oracle software.
Table 1-2. Minimum Hardware Requirements - Direct-Attached SAS or Fibre Channel Cluster Configurations Hardware Component Configuration Dell PowerEdge 1950, 2900, 2950, 1950 III, 2900 III, 2950 III system (up to 2 nodes) Intel® Xeon® processor family. 1 GB of RAM. Two 73-GB hard drives connected to an internal RAID controller. NOTE: Dell recommends two 73-GB hard drives (RAID 1) connected to an internal RAID controller based on your system. See your PowerEdge system documentation for more details.
Table 1-2. Minimum Hardware Requirements - Direct-Attached SAS or Fibre Channel Cluster Configurations (continued) Hardware Component Configuration For Fibre Channel: See the Dell | EMC system documentation for more details. Dell|EMC CX3-10C, CX3-20 (C/F), CX4-120, CX4-240, CX4-480, CX4-960, AX4-5F Fibre Channel storage system For Direct-attached SAS: Dell™ PowerVault™ MD3000 with MD1000 expansion storage system.
The boot menu screen appears. 6 In the Select Language Screen, select English. 7 On the Software License Agreement page, click Accept. The Systems Build and Update Utility home page appears. 8 From the Dell Systems Build and Update Utility home page, click Server OS Installation. The Server OS Installation screen appears. The Server Operating System Installation (SOI) module in the Dell™ Systems Build and Update Utility enables you to install Dell-supported operating systems on your Dell systems.
Enter OS Information: h Enter the appropriate User Name, Organization, and Product ID. i Enter all other necessary information. j Install SNMP (default). NOTE: If you have the Dell OpenManage CD and want to install it during your OS install, select Install Server Administrator. The Server Administrator can be installed anytime after the OS is installed. Installation Summary: k Eject CD/DVD Automatically (default).
CAUTION: Do not leave the administrator password blank. NOTE: To configure the public network properly, the computer name and public host name must be identical. NOTE: Record the password that you created in this step. You will need this information in step 14. When the installation procedure completes, the Welcome to Windows window appears. 12 Shut down the system, reconnect all external storage devices, and restart the system. 13 In the Welcome to Windows window, press <Ctrl><Alt><Delete> to continue.
22 Run install_drivers.bat NOTE: This procedure may take several minutes to complete. 23 Press any key to continue. 24 Check the logs to verify that all drivers were installed correctly. NOTE: Log information can be found at: C:\Dell_Resource_CD\logs 25 When installation is complete, remove the CD from the CD drive. 26 Reboot your system. Verifying the Temporary Directory Paths Verify that the paths to the Temp and Tmp directories have been set correctly.
• HBA drivers. • PowerVault MD3000 Resource CD (when using the PowerVault MD3000 as backend storage) The storage must be configured with a minimum of four virtual disks/LUNs (two for the redundant Voting Disk and Oracle Cluster Registry and two for the database and Flash Recovery area) assigned to cluster nodes. Table 1-3.
Figure 1-1. Hardware Connections for a SAN-attached Fibre Channel Cluster
[Figure: PowerEdge systems (Oracle database) connect to Gb Ethernet switches (private network) and to the public network over CAT 5e/6 (copper Gigabit NIC) cables, and to Dell | EMC Fibre Channel switches (SAN) over fiber optic cables, with additional fiber optic cables to the Dell | EMC CX3-10c, CX3-20, CX3-20F, CX3-40, CX3-40F, CX3-80, CX4-120, CX4-240, CX4-480, CX4-960, and AX4-5F Fibre Channel storage systems.]
Table 1-4. Fibre Channel Hardware Interconnections
Table 1-4. Fibre Channel Hardware Interconnections (continued) Cluster Component Connections Dell|EMC Fibre Channel storage system Two CAT 5e/6 cables connected to LAN (one from each storage processor) One to four optical connections to each Fibre Channel switch in a SAN-attached configuration See "Cabling Your Dell|EMC Fibre Channel Storage" on page 17 for more information.
Figure 1-2. Cabling in a Dell|EMC SAN-Attached Fibre Channel Cluster
[Figure: cluster nodes 1 and 2, each with two HBA ports, cabled to storage processors SP-A and SP-B of a CX3-20 storage system.]
Use the following procedure to configure your Oracle cluster storage system in a four-port, SAN-attached configuration. 1 Connect one optical cable from SP-A port 0 to Fibre Channel switch 0. 2 Connect one optical cable from SP-A port 1 to Fibre Channel switch 1.
8 Connect one optical cable from HBA 1 of each additional node to Fibre Channel switch 1. Setting Up Your SAS Cluster with a PowerVault MD3000 To configure your PowerEdge Systems and PowerVault MD3000 hardware and software to function in an Oracle Real Application Cluster environment, verify the following hardware connections and the hardware and software configurations as described in this section using Figure 1-3, Table 1-5, Figure 1-4 and Table 1-3.
Figure 1-3. Cabling the SAS Cluster and PowerVault MD3000
[Figure: PowerEdge systems connected to each other over the private network and cabled to the PowerVault MD3000 storage system.]
Table 1-5.
Before You Begin Verify that the following tasks have been completed for your cluster: • All hardware is installed in the rack. • All hardware interconnections are configured. • All virtual disks/LUNs, RAID groups, and storage groups are created on the storage system. • Storage groups are assigned to the cluster nodes. CAUTION: Before you perform the procedures in the following sections, ensure that the system hardware and cable connections are installed correctly.
Figure 1-4. Cabling in a Direct-attached SAS Cluster
[Figure: two dual-HBA host servers cabled to RAID controller modules 0 and 1 of the MD3000 RAID enclosure, with two MD1000 expansion enclosures attached.]
Configuring Networking and Storage for Oracle RAC 10g R2 This section provides the following information about network and storage configuration: • Configuring the public and private networks. • Verifying the storage configuration.
NOTE: Oracle RAC 10g R2 is a complex database configuration that requires an ordered list of procedures. To configure networking and storage in a minimal amount of time, perform the following procedures in order. Configuring the Public and Private Networks NOTE: Each node requires a unique public and private internet protocol (IP) address and an additional public IP address to serve as the virtual IP address for the client connections and connection failover.
Table 1-7. Network Configuration Example for a Two-Node Cluster
Host Name   Type      IP Address        Registered In
rac1        Public    155.16.170.1      %SystemRoot%\system32\drivers\etc\hosts
rac2        Public    155.16.170.2      %SystemRoot%\system32\drivers\etc\hosts
rac1-vip    Virtual   155.16.170.201    %SystemRoot%\system32\drivers\etc\hosts
rac2-vip    Virtual   155.16.170.202    %SystemRoot%\system32\drivers\etc\hosts
rac1-priv   Private   10.10.10.1        %SystemRoot%\system32\drivers\etc\hosts
rac2-priv   Private   10.10.10.2        %SystemRoot%\system32\drivers\etc\hosts
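The example configuration in Table 1-7 can be sketched as a small script that generates the hosts-file entries for both nodes. This is a portable illustration only: on Windows the generated lines would be appended to %SystemRoot%\system32\drivers\etc\hosts, and the subnets shown are the example values from the table, not requirements.

```shell
# Sketch: generate the Table 1-7 host entries for a two-node cluster.
# The subnets and node names mirror the table's example values.
PUBLIC_NET=155.16.170
PRIVATE_NET=10.10.10

hosts_entries() {
    i=1
    for node in rac1 rac2; do
        # Public, virtual (public subnet, .201/.202), and private addresses.
        printf '%s\t%s\n'      "$PUBLIC_NET.$i"           "$node"
        printf '%s\t%s-vip\n'  "$PUBLIC_NET.$((200 + i))" "$node"
        printf '%s\t%s-priv\n' "$PRIVATE_NET.$i"          "$node"
        i=$((i + 1))
    done
}

hosts_entries > hosts.example
cat hosts.example
```

Each node needs all six entries, because every cluster member must resolve every public, virtual, and private name.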
k In the Team Properties window, click OK. l In the Intel NIC's Properties window, click OK. m Close the Computer Management window. 4 If node 1 is configured with Broadcom NICs, configure NIC teaming by performing the following steps. If not, go to step 5. NOTE: Before you run the Broadcom Advanced Control Suite (BACS) to team the adapters, make sure your system has the Microsoft .NET Framework version 2.0 installed.
m In the Broadcom Advanced Control Suite 3 window, click File then Exit. 5 Repeat step 1 through step 4 on the remaining nodes. Configuring the IP Addresses for Your Public and Private Network Adapters NOTE: The TOE functionality of a TOE-capable NIC is not supported in this solution. 1 Update the adapter’s network interface name, if required. Otherwise, go to step 3. a On node 1, click Start→Settings→Control Panel→Network Connections.
f In the Properties window, click Close. g Repeat step a through step f for the Private NIC team. NOTE: Private NIC team does not require a default gateway address and DNS server entry. 3 Ensure that the public and private network adapters appear in the appropriate order for access by network services. a On the Windows desktop, click Start→Settings→Control Panel→ Network Connections. b In the Network Connections window, click Advanced and select Advanced Settings.
155.16.170.201    rac1-vip
155.16.170.202    rac2-vip
NOTE: Registering the private IP addresses with the DNS server is not required as the private network IP addresses are not accessible from the public network. 5 Repeat step 1 through step 4 on the remaining nodes. 6 Ensure that the cluster nodes can communicate with the public and private networks. a On node 1, open a command prompt window.
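The connectivity checks in step 6 can be enumerated by a short script. This is a hedged sketch: the node list is an example, and the `ping -n 3` form is the Windows syntax from this platform, so the script defaults to a dry run that only prints the commands to issue at each node's command prompt.

```shell
# Sketch: list the ping checks (public and private names) for each node.
# DRY_RUN=1 prints the Windows-style commands instead of executing them.
DRY_RUN=1
NODES="rac1 rac2"

check_connectivity() {
    for node in $NODES; do
        for suffix in "" "-priv"; do
            cmd="ping -n 3 ${node}${suffix}"   # Windows ping syntax
            if [ "$DRY_RUN" -eq 1 ]; then
                echo "$cmd"
            else
                $cmd || echo "WARNING: ${node}${suffix} unreachable"
            fi
        done
    done
}

check_connectivity > checks.txt
cat checks.txt
```

The virtual (-vip) names are deliberately omitted: the virtual IP addresses are not brought up until Oracle Clusterware is running.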
Verifying the Storage Assignment to the Nodes 1 On the Windows desktop, right-click My Computer and select Manage. 2 In the Computer Management window, click Device Manager. 3 Expand Disk drives. 4 Under Disk drives, ensure that four small computer system interface (SCSI) disk devices appear for each LUN/virtual disk assigned in the storage. 5 Expand Storage and click Disk Management. If the Welcome to the Initialize and Convert Disk Wizard appears, perform step a through step d. Otherwise, go to step 6.
NOTE: For more information, see the EMC PowerPath documentation that came with your Dell|EMC storage system. 2 When the installation procedure is complete, restart your system. 3 Repeat step 1 and step 2 on the remaining nodes. Installing Multi-Path driver software for MD3000 1 On node 1, install the Multi-Path driver software from the PowerVault MD3000 Resource CD. NOTE: For more information, see the documentation that came with your Dell MD3000 storage system.
To prepare the disks for Oracle Clusterware, identify the OCR, voting, data and flash recovery area disks. After you identify the appropriate disks, perform the following steps on node 1. Enabling the Automount Option for the Shared Disks 1 On node 1, click Start and select Run. 2 In the Run field, enter cmd and click OK. 3 At the command prompt, enter diskpart. 4 At the DISKPART command prompt, enter automount enable. The following message appears: Automatic mounting of new volumes enabled.
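The interactive diskpart steps above can also be captured in a script file and replayed on each node with `diskpart /s <file>`. A minimal sketch follows; the file name is an example, and diskpart itself runs only on Windows — the shell here merely writes the script file.

```shell
# Sketch: write the diskpart commands from steps 3-4 into a script file
# that could be run on each node with:  diskpart /s automount.txt
cat > automount.txt <<'EOF'
automount enable
exit
EOF
cat automount.txt
```

Scripting the command makes it easy to repeat the automount step identically on every cluster node.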
9 Create a logical drive for the OCR disk. a On the partition area of the disk identified for OCR and voting disk (1 GB LUN/virtual disk), right-click the free space and select New Logical Drive. The Welcome to the New Partition Wizard appears. b Click Next. c In the Select Partition Type window, select Logical drive and click Next. d In the Specify Partition Size window, enter 120 in the Partition size in MB field and click Next.
NOTE: If you are using Redundant Voting Disk and OCR, repeat the steps outlined in step 9 and step 10 for the redundant Voting Disk and OCR. Preparing the Database Disk and Flash Recovery Area for Database Storage This section provides information about creating logical drives that will be used to create ASM disk storage. ASM disk storage consists of one or more disk groups that can span multiple disks. 1 Create one logical drive for the Database.
3 If you find any drive letters assigned to the drives that you created in "Preparing the OCR and Voting Disks for Clusterware" on page 30 perform the following steps: a Right-click the logical drive and select Change Drive Letter and Paths. b In the Change Drive Letter and Paths window, select the drive letter and click Remove. c In the Confirm window, click Yes. d Repeat step a through step c for the remaining logical drives on the storage partition.
4 In the Specify Home Details window, accept the default settings and click Next. NOTE: Record the OraCR10g_home (CRS Home) path for later use. 5 In the Product-Specific Prerequisite Checks window, click Next. 6 In the Specify Cluster Configuration window, perform the following steps: a Verify the public, private, and virtual Host names for the primary node. b If you want to change these values, click Edit and enter the desired values, and click OK. c Click Add.
11 In the Cluster Configure Storage screen, perform the following steps for the Voting disk: a Locate the three 50 MB partitions that you created in the subsection "Preparing the OCR and Voting Disks for Clusterware" on page 30. b Select the first partition and click Edit. c In the Specify Disk Configuration window, select Place Voting Disk on this partition and click OK. d Repeat steps b and c on the remaining Voting Disk partitions. 12 Click Next. 13 Ignore the warning messages and click OK.
The OUI starts and the Welcome window appears. 3 Click Next. 4 In the Select Installation Type window, click Standard Edition and click Next. 5 In the Specify Home Details window under Destination, verify the following: • In the Name field, the Oracle database home name is OraDb10g_home1 • In the Path field, the complete Oracle home path is %SystemDrive%\oracle\product\10.2.0\db_1 where %SystemDrive% is the user’s local drive. NOTE: Record the path because you will need this information later.
1 Download the patchset 10.2.0.4 from the Oracle Metalink website located at metalink.oracle.com. 2 Unzip the patchset to the following location: %SystemDrive%. where %SystemDrive% is the user’s local drive. Installing Patchset 10.2.0.4 for Oracle 10g Clusterware Before You Begin 1 Stop nodeapps on all nodes. Enter the following: %SystemDrive%\%CRS_HOME%\bin> srvctl stop nodeapps -n <node_name> where %SystemDrive% is the user’s local drive and <node_name> is the name of the cluster node.
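Because `srvctl stop nodeapps -n` takes one node name at a time, stopping nodeapps "on all nodes" means issuing the command once per node. A sketch that expands the per-node commands (node names are examples; srvctl itself is the Oracle utility on the cluster nodes):

```shell
# Sketch: expand "stop nodeapps on all nodes" into one command per node.
# The node names are examples; substitute your cluster's node names.
NODES="rac1 rac2"
for node in $NODES; do
    echo "srvctl stop nodeapps -n $node"
done > stop_nodeapps.txt
cat stop_nodeapps.txt
```

Run the printed commands from %CRS_HOME%\bin on node 1 before starting the patchset installer.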
NOTE: You must install the patchset software from the node where the Oracle RAC 10g R2 software was installed. If this is not the node where you are running the OUI, exit and install the patchset from that node. Patchset Installation Steps 1 Start the OUI located in the patchset folder. 2 In the Welcome window, click Next. 3 In the Specify home details window, select the name as OraDb10g_home1 from the drop down list to install the patchset to Oracle home and click Next.
3 In the Real Application Clusters Configuration window, select Cluster configuration and click Next. 4 In the Real Application Clusters Active Nodes window, select Select All nodes and click Next. 5 In the Welcome window, select Listener configuration and click Next. 6 In the Listener Configuration Listener window, select Add and click Next. 7 In the Listener Configuration Listener Name window, select the default setting in the Listener name field and click Next.
3 In the Run field, enter dbca and click OK. The Database Configuration Assistant starts. 4 In the Welcome window, select Oracle Real Application Clusters database and click Next. 5 In the Operations window, click Create a Database and click Next. 6 In the Node Selection window, click Select All and click Next. 7 In the Database Templates window, click Custom Database and click Next. 8 In the Database Identification window, in the Global Database Name field, enter a name such as racdb and click Next.
b In the Redundancy box, select External. c Click Stamp Disks. d Select Add or change label and click Next. e In the Select disks screen, select the disks which you plan to use for the database files. Note that the Status is marked as Candidate device. f In the Generate stamps with this prefix field, keep the default settings and click Next. g In the Stamp disks window, click Next. h Click Finish to save your settings. i Select the check boxes next to the available disks and click OK.
19 In the Database File Locations window, select Use Oracle-Managed Files and Multiplex Redo Logs and Control Files and click Next. 20 In the Recovery Configuration window, perform the following steps: a Select Specify Flash Recovery Area. b Click Browse. c Select the FLASH disk group that you created in step 17 and click OK. d In the Flash Recovery Area Size text box enter the total size of the flash disk group created in step 17. e Select Enable Archiving. f Click Edit Archive Mode Parameters.
Configuring and Deploying Oracle Database 10g (Single Node) This section provides information about installing the Oracle 10g R2 software on a single node. This section covers the following topics: • Installing Oracle Clusterware Version 10.2.0.1 • Installing Oracle 10g Database with Real Application Clusters 10.2.0.1 • Installing the Oracle Database 10g 10.2.0.4 Patchset • Configuring the Listener • Creating the Seed Database Installing Oracle Clusterware Version 10.2.0.1
7 Click Next. The Specify Network Interface Usage window appears, displaying a list of cluster-wide network interfaces. 8 In the Interface Type drop-down menus, configure the public Interface Type as Public and the private Interface Type as Private (if required) by selecting the Interface Name and clicking Edit. Select the correct Interface Type and click OK. 9 Click Next.
15 Click Exit to finish the OUI session. 16 In the Exit window, click Yes. Installing Oracle10g Database With Real Application Clusters 10.2.0.1 1 Insert the Oracle Database 10g Release 2 CD into the CD drive. The OUI starts and the Welcome screen appears. If the Welcome screen does not appear: a Click Start→Run. b In the Run field, enter: %CD drive%\autorun\autorun.exe where %CD drive% is the drive letter of your CD drive. 2 Click OK to continue. The OUI starts and the Welcome window appears.
8 In the Product-Specific Prerequisite Checks window, click Next. 9 In the Select Configuration Option window, select Install Database Software only, and click Next. 10 In the Summary window, click Install. 11 In the End of Installation window, perform the steps as listed. NOTE: You should perform the steps as listed in the window before proceeding with the next step. 12 Click Exit. Installing Patchset 10.2.0.
Installing the Patchset NOTE: You must install the patchset software from the node where the Oracle RAC 10g R2 software was installed. If this is not the node where you are running the OUI, exit and install the patchset from that node. 1 Start the OUI located in the patchset folder. 2 In the Welcome window, click Next. 3 In the Specify home details window, select the name as OraCr10g_home and install the patchset to the Clusterware home and click Next.
5 In the Summary window, click Install. During the installation, the following error message may appear: Error in writing to file oci.dll. To work around this issue, perform the following steps: a Cancel the patchset installation. b Rename the %Oracle_home%\BIN directory to \bin_save. c Reboot the system. d After the reboot, rename the \bin_save directory back to \bin. e Run the setup.exe file from the patchset folder. Allow all the Oracle default services to run.
9 In the Listener Configuration More Listeners window, select No and click Next. 10 In the Listener Configuration Done window, click Next. 11 In the Welcome window, click Finish. Creating the Seed Database Perform the following steps to create the seed database using Oracle ASM: 1 Verify the Oracle Clusterware is running. a Open a command prompt window. Click Start→Run and enter cmd. b Enter crsctl check crs.
10 In the Storage Options window, select Automatic Storage Management (ASM) and click Next. 11 In the Create ASM Instance window, perform the following steps: a In the SYS password field, enter a new password in the appropriate fields. b Click Next. 12 In the Database Configuration Assistant window, click OK. The ASM Creation window appears, and the ASM Instance is created.
For example, FLASH. b In the Redundancy box, select External. c Click Stamp disks. d In the Select disks screen, select the disk which you plan to use for the Flash Recovery Area. Note that the Status is marked as Candidate device. e In the Generate stamps with this prefix field, enter FLASH, and click Next. f In the Stamp disks window, click Next. g Click Finish to save your settings. h Select the check boxes next to the available disks and click OK.
h Click Next. 20 In the Database Content window, click Next. 21 In the Database Services window, click Next. 22 In the Initialization Parameters window, click Next. 23 In the Database Storage window, click Next. 24 In the Creation Options window, click Finish. 25 In the Summary window, click OK. The Database Configuration Assistant window appears, and the Oracle software creates the database. NOTE: This procedure may take several minutes to complete.
NET USE \\host_name\C$ You have the required administrative privileges on each node if the operating system responds with: Command completed successfully. NOTE: If you are using ASM, then make sure that the new nodes can access the ASM disks with the same permissions as the existing nodes. NOTE: If you are using Oracle Cluster File Systems, then make sure that the new nodes can access the cluster file systems in the same way that the other nodes access them.
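The NET USE privilege check above can be scripted across all new nodes by testing each command's output for the success message. The sketch below is portable shell: `run_net_use` is a stand-in simulator (an assumption for this sketch), whereas on Windows it would actually invoke `NET USE \\host_name\C$`.

```shell
# Sketch: decide from each NET USE result whether the account has the
# required administrative privileges. run_net_use is a stand-in that
# simulates Windows output; node names are examples.
run_net_use() {
    case $1 in
        rac1|rac2) echo "Command completed successfully." ;;
        *)         echo "System error 5 has occurred."    ;;
    esac
}

for node in rac1 rac2 rac3; do
    if run_net_use "$node" | grep -q 'Command completed successfully'; then
        echo "$node: administrative privileges OK"
    else
        echo "$node: missing administrative privileges"
    fi
done > netuse_report.txt
cat netuse_report.txt
```

Any node reported as missing privileges must be fixed before the node-addition procedure continues.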
7 Execute the following command to identify the node names and node numbers that are currently in use: CRS home\bin\olsnodes -n 8 Execute the crssetup.exe command using the next available node names and node numbers to add CRS information for the new nodes. For example: crssetup.
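Step 8 needs the next available node number, which can be derived from the `olsnodes -n` output of step 7. The sketch below assumes the output format is one "name number" pair per line (the sample text is an assumption for illustration):

```shell
# Sketch: derive the next available node number from "olsnodes -n"
# output. The sample output below assumes one "name number" per line.
olsnodes_output="node1 1
node2 2"

next=$(printf '%s\n' "$olsnodes_output" |
    awk '{ if ($2 > m) m = $2 } END { print m + 1 }')
echo "next node number: $next" > next.txt   # -> next node number: 3
cat next.txt
```

Using max+1 rather than the line count guards against gaps left by previously removed nodes.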
vipca -nodelist Node1,Node2,Node3,...NodeN 6 Add a listener to the new node only by running the Net Configuration Assistant (NetCA). After completing the procedures in the previous section, the new nodes are defined at the cluster database layer. New database instances can now be added to the new nodes.
11 Review the information on the Summary dialog and click OK. The DBCA displays a progress dialog showing the DBCA performing the instance addition operation. When the DBCA completes the instance addition operation, the DBCA displays a dialog asking whether you want to perform another operation. 12 Click No and exit the DBCA, or click Yes to perform another operation.
%SystemDrive%\Oracle\product\10.2.0\crs\bin\vipca where %SystemDrive% is the user’s local drive. 3 Follow the steps in VIPCA by selecting the interface appropriate for the public interface, and specifying the correct VIP address to be used. 4 Click Finish. Uninstalling Oracle Clusterware NOTE: Copy the GUIOraObJman folder to a different location before uninstalling Clusterware. Utilities in this folder can be used to clean the shared disks later.
a Click Start→Run. b In the Run field, enter the following and click OK: services.msc The Services window appears. 2 Identify and delete any remaining Oracle services. To delete a service: a Click Start→Run. b In the Run field, enter cmd and click OK. c Open a command prompt and enter the following: sc delete <service_name> where <service_name> is the name of the Oracle service that you want to remove. d Repeat step c for each additional service that you need to remove. 3 Restart node 1 and log in as administrator.
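Step 2 can be expanded into one `sc delete` command per leftover service. The sketch below only prints the commands to issue at the Windows command prompt; the service names listed are examples of typical Oracle Clusterware services, not a definitive inventory of what remains on a given node.

```shell
# Sketch: emit one "sc delete" command per leftover Oracle service.
# The names below are examples; list the services actually remaining
# in the Services window on your node.
SERVICES="OracleCRService OracleCSService OracleEVMService"
for svc in $SERVICES; do
    echo "sc delete $svc"
done > delete_services.txt
cat delete_services.txt
```

Verify each name against the Services window first: `sc delete` removes the service registration immediately and without confirmation.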
If OCRCFG, OCRMIRRORCFG, Votedsk1, Votedsk2, and Votedsk3 do not appear in the file, assign OCRCFG, OCRMIRRORCFG, Votedsk1, Votedsk2, and Votedsk3 to the appropriate disk and save the file. Use the Oracle Symbolic Link Importer (ImportSYMLinks) to import the symbolic links into the assigned storage disks (OCRCFG, OCRMIRRORCFG, Votedsk1, Votedsk2, and Votedsk3). At the command prompt, enter the following: %SystemDrive%\oracle\product\10.2.
6 Delete the symlinks for the OCR (OCRCFG and OCRMIRRORCFG) and the voting disks (Votedsk1, Votedsk2, and Votedsk3). a Select OCRCFG, OCRMIRRORCFG, Votedsk1, Votedsk2, and Votedsk3. b Click Options and select Commit. If successful, the OCRCFG, OCRMIRRORCFG, Votedsk1, Votedsk2, and Votedsk3 entries disappear. c Click Options and select Exit to close the Oracle Object Manager. 7 Launch the Computer Management Console. a On the Windows desktop, click Start→Run.
PowerPath Installation • PROBLEM: PowerPath installation fails. – CAUSE: Unknown installation error. – RESOLUTION: Reboot the system on which the PowerPath installation fails. NIC Teaming • PROBLEM: Broadcom NIC teaming fails. – CAUSE: The following steps may result in a NIC teaming failure: • One of the Broadcom NICs that was used in the NIC teaming fails or is disabled. Because the second NIC is still available, the private network remains active on this node.
NOTE: Though the suggested solutions may fix the above mentioned issue, be aware of the implications or issues that may arise from enabling Port Fast Learning or turning off Spanning Tree on your switches. Installing Oracle Clusterware • PROBLEM: During Clusterware installation you get the error message: The specified nodes are not clusterable. – CAUSE: The administrative account or the account used to install Oracle has a blank password associated with it.
a Uninstall Oracle Clusterware using OUI. b Uninstall any remaining Oracle services. c Clean the storage devices. See "Uninstalling Oracle Clusterware" on page 56 for more information. Oracle Clusterware • PROBLEM: The cluster node restarts with a blue screen. – CAUSE: The cluster node cannot communicate with the storage disks. – RESOLUTION: Perform the following steps: a Restart the cluster node. b During POST, press the key that opens your storage adapter's setup utility.
See "Installing the Host-Based Software Needed for Storage" on page 27 and "Verifying Multi-Path Driver Functionality" on page 29. o Repeat step a through step n and reset each Oracle service back to its original setting. System Blue Screen • PROBLEM: The cluster nodes generate a blue screen. – CAUSE: The cluster nodes cannot access the voting disk.
Storage • PROBLEM: Disks appear as unreachable. – CAUSE: On the Windows desktop, when you right-click My Computer, select Computer Management, and then click Disk Management, the disks appear unreachable. Potential causes are that the LUNs are not assigned to the cluster nodes, cabling is incorrectly installed, or the HBA drivers are not installed on the cluster node(s).
– CAUSE: The public network adapter interface name (or, if four network interfaces are used, the name of the interface assigned to the VIP) is not identical on both cluster nodes. – RESOLUTION: Ensure that the public network adapter interface name is identical on both cluster nodes. To verify the public network adapter interface name: a On node 1, click Start and select Settings→Control Panel→Network Connections.
Oracle Support For information about Oracle software and application clusterware training and contacting Oracle, see the Oracle website at www.oracle.com or your Oracle documentation. Technical support, downloads, and other technical information are available at the Oracle MetaLink website at www.metalink.oracle.com. Obtaining and Using Open Source Files The software contained on the Deployment CD is an aggregate of third-party programs as well as Dell programs.
Index
C
cabling
    SAS storage, 20
cluster
    fibre channel, 9, 15
Clusterware
    installing, 33, 43
    preparing disks, 29
    uninstalling, 56
D
disks
    flash recovery, 30
    voting, 29
E
EMC
    Naviagent, 27
    PowerPath, 8
F
fibre channel
    cluster configuration, 9
    Dell|EMC, 17
    SAN-attached, 16
    setting up, 15
flash recovery
    disks, 30
H
hardware
    connections, 16
    requirements, 9
help, 66
    Dell support, 66
    Oracle support, 66
I
IP addresses
    configuring, 25
iSCSI
    hardware requirements, 10
L
listener
    configuring, 38, 48
M
Multi-Path driver, 29
N
Naviagent, 27
network
    configuring, 21
NIC
    port assignments, 22
O
OCR disk, 29
Oracle
    preparing disks for Clusterware, 29
Oracle Database 10g
    configuring, 43
    deploying, 43
OUI
    running, 57
P
partitions
    creating, 30
patchset
    installing, 37, 46
PowerPath
    installing, 29
S
SAS
    cluster configuration, 9
storage
    configuring, 21
T
TOE, 23
V
voting disk, 30
    creating logical drive, 31
W
Windows
    configuring, 10
    installing, 10