Oracle RAC One Node in 11.2.0.1 Solution for ZCM


By kmegha

Table of Contents:

  • Introduction
  • Resource Requirements
  • Installation Plan
  • Oracle Grid Infrastructure 11.2.0.1 Software Installation
  • Post-Installation Checks After Grid Installation
  • ASM Disks Creation
  • Oracle RAC 11.2.0.1 Software Installation
  • Software Patching For RAC One Node
  • Database Creation Through DBCA
  • Post Installation Checks After Oracle RAC Installation
  • Initialize the Database to RAC One Node
  • Troubleshooting
  • ZCM (ZENworks Configuration Management) Installation with Oracle RAC
  • Uninstallation Procedure
  • Links and Downloads

Introduction


Real Application Clusters (RAC) One Node is a RAC-enabled single-instance database that can be moved around within a cluster. It allows the database instance to be relocated to another node of the cluster with a grace period of up to 30 minutes. This grace period allows all the existing transactions in the database to complete. Once the online relocation is initiated, all new client connections are redirected to the node to which the instance is relocating.

Oracle RAC One Node is used for active-passive cluster scenarios, where only one instance is running (active) at any given time and the other nodes in the cluster are passive (standby). In the event of a failure of the first node's server, all connections are terminated and re-established with the new instance on the secondary node within the grace period.

The Oracle Grid Infrastructure software provides the Clusterware files (OCR and CRS voting disks) and Automatic Storage Management (ASM) packaged together. This is essential for RAC interoperability.


Resource Requirements


Hardware resources

    1. A blade server with 48GB RAM, 1TB SCSI HDD, 2 quad-core CPUs

    2. Oracle VM Server 2.2.2

    3. Oracle VM Manager 2.2.0

    4. Oracle Linux (Red Hat 5 compatible), 32-bit – two instances (node1 and node2)

    5. Kernel RPMs

    6. A VM on ESX with Windows/Linux to host ZCM

    7. A VM on ESX with Oracle Linux (Red Hat 5) to host VM Manager



Software resources

    1. Oracle Grid Infrastructure software 11.2.0.1

    2. Oracle RAC database software 11.2.0.1

    3. ZCM 11.2 software



Installation Plan


Pre-installation tasks

    1. Install Oracle VM Server 2.2.2 on the blade server mentioned above.

    2. Install Oracle VM Manager 2.2.0. This is used to access the virtual machines created on the VM server.

    3. Download the Oracle Linux (Red Hat 5) 32-bit ISO (an OS with enhanced performance and a few additional plugins).

    4. Through VM Manager, import the downloaded ISO file to create virtual machines.
      Note: Templates can also be used for this. Refer to the VM Manager documentation.

    5. Create two virtual machines from the Oracle Linux ISO, each with 3 GB RAM, swap space twice the RAM, 60 GB HDD (approximately 10 GB for Grid/RAC, the rest for shared disks), 2 NIC cards, and 1 GB of temp space in /tmp.
      Root> grep MemTotal /proc/meminfo
      Root> grep SwapTotal /proc/meminfo

      These two machines will be the two nodes of the Oracle RAC cluster.

 

    6. Network Configuration. These two machines must have:

        a. A public IP address for each node, with the following characteristics:

            • 2 static IP addresses with hostnames

            • Resolvable by DNS, with an entry in /etc/hosts, on the same subnet as the virtual and SCAN addresses



        b. A virtual IP address for each node, with the following characteristics:

            • 2 static IP addresses with hostnames

            • Resolvable by DNS, with an entry in /etc/hosts, on the same subnet as the public IP and SCAN addresses


          Note: The VIP (virtual IP) is a public IP address that is attached to the public interface of the RAC node. The VIP should be used by all clients to communicate with the database to ensure fast failover during an outage.

        c. A Single Client Access Name (SCAN) for the cluster, with the following characteristics:

            • Three static IP addresses with hostnames configured on the domain name server (DNS) so that the three IP addresses are associated with the name provided as the SCAN, and all three addresses are returned in random order by the DNS to the requestor. (Take your DNS administrator's help.)

            • Same subnet as the public and virtual IP addresses


          Example:
          bash> nslookup blr-srm-cluster
          Server: 196.96.201.1
          Address: 196.96.201.1#53
          Name: blr-srm-cluster.labs.blr.novell.com
          Address: 196.96.94.226
          Name: blr-srm-cluster.labs.blr.novell.com
          Address: 196.96.94.227
          Name: blr-srm-cluster.labs.blr.novell.com
          Address: 196.96.94.228

          Note: The Single Client Access Name (SCAN) is used to connect to databases within the cluster irrespective of which node they are running on. By default, the name used as the SCAN is also the name of the cluster, and the default value is based on the local node name (the node from which the database is installed).

        d. A private IP address for each node, with the following characteristics:

            • 2 static IP addresses with hostnames

            • Need not be resolvable by DNS; uses its own subnet, with an entry in the /etc/hosts file


          Example of a hosts file
          127.0.0.1 localhost.localdomain localhost
          ::1 localhost6.localdomain6 localhost6
          #node2
          196.96.94.229 blr-srm-r11t.labs.blr.novell.com blr-srm-r11t #public
          196.96.94.225 blr-srm-r11p.labs.blr.novell.com blr-srm-r11p #Vip
          #node1-localnode
          196.96.94.221 blr-srm-r11l.labs.blr.novell.com blr-srm-r11l #public
          196.96.94.223 blr-srm-r11n.labs.blr.novell.com blr-srm-r11n #Vip
          #private ip
          10.0.0.2 node2-priv.labs.blr.novell.com node2-priv
          10.0.0.1 node1-priv.labs.blr.novell.com node1-priv


 

    7. Create shared SCSI disks. From Oracle VM Manager, before powering on the machines, create 5 shared SCSI disks from the local node (node1): 3 for Clusterware files, 1 for the fast recovery area, and 1 for the RAC database.

        a. Go to VM Manager -> Resources -> Shared Virtual Disks -> Create -> Create new shared virtual disk -> provide a name (for example CRS1 or RACDB) and disk space (approximately 10 GB each) -> OK.

        b. After creating the 5 shared disks, attach them to the two nodes, first the local node (node1) and then node2. Go to VM Manager -> Virtual Machines -> select the local node (node1) -> Configure -> Attach/Detach Shared Virtual Disks -> add the available disks -> Confirm.

        c. Repeat the same for node2 in the same order. If CRS1 (sda) is added first on node1, the same order should be maintained on node2.
          Note: Refer to the VM Manager documentation for any clarification.


 

    8. Power on the machines.

 

    9. Check that the two nodes (node1 and node2) can ping each other.

 

    10. Check the public, private, virtual, and SCAN IPs with ping/nslookup, as shown in the example below.
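      For example, using the hostnames from the example hosts file above (adjust to your own names):
      root@node1> ping -c 2 blr-srm-r11t
      root@node1> ping -c 2 node2-priv
      root@node1> nslookup blr-srm-r11p
      root@node1> nslookup blr-srm-cluster
      Note: The virtual and SCAN addresses should resolve in DNS at this point, but they will normally not respond to ping until the Clusterware is installed and running.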

 

    11. Disable the firewall on both machines, for example as shown below.
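      On Oracle Linux 5 / Red Hat 5, one way to do this (assuming the default iptables service is in use):
      root> service iptables stop
      root> chkconfig iptables off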

 

    12. Disable SELinux on both machines.
      To check:
      root> sestatus
      To disable:
      edit /etc/selinux/config
      SELINUX=disabled
      reboot

 

    13. Create users, user groups, and directories, and set permissions on both nodes as root.

        a. Create the users grid and oracle with a common password across both nodes.
          Example:
          user: grid, password: grid on both machines
          user: oracle, password: oracle on both machines

        b. Add user groups for the 'grid' user:
          groupadd -g 1000 oinstall
          groupadd -g 1200 asmadmin
          groupadd -g 1201 asmdba
          groupadd -g 1202 asmoper
          useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash -c "Grid Infrastructure Owner" grid

          Example:
          root> id grid
          uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)

        c. Add user groups for the 'oracle' user:
          groupadd -g 1300 dba
          groupadd -g 1301 oper
          useradd -m -u 1101 -g oinstall -G dba,oper,asmdba -d /home/oracle -s /bin/bash -c "Oracle Software Owner" oracle

          Example:
          root> id oracle
          uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)

        d. Create directories and set permissions:
          root> mkdir -p /u01/app/oracle
          mkdir -p /u01/app/11.2.0/grid
          chown -R grid:oinstall /u01
          chown -R oracle:oinstall /u01/app/oracle
          chmod -R 775 /u01

    14. Create partitions on the attached shared disks from node1 (the local node).

        a. As root, list all disks:
          root@node1> fdisk -l

        b. Create a partition on each SCSI disk (see the example fdisk session below):
          root@node1> fdisk /dev/sda

          Note: Every disk should have only one primary partition. After partitioning, we have sda1, sdb1, sdc1, sdd1, sde1.
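          Example fdisk session (a sketch; the exact prompts may vary slightly with the fdisk version):
          root@node1> fdisk /dev/sda
          Command (m for help): n        <-- new partition
          Command action: p              <-- primary partition
          Partition number (1-4): 1
          First cylinder: <Enter>        <-- accept the default
          Last cylinder: <Enter>         <-- accept the default (use the whole disk)
          Command (m for help): w        <-- write the partition table and exit
          Repeat for /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde.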



    15. After partitioning from node1, inform the kernel of the new partition tables on all other nodes. On node2:
          root@node2> partprobe
          root@node2> fdisk -l

          Verify that all the partitioned disks are listed on node2 as well.

    16. Install and configure the ASM library driver. The Oracle ASM library driver (ASMLIB) simplifies the configuration and management of the disk devices by eliminating the need to rebind the disk devices used with Oracle ASM each time the system is restarted.

        a. Download the drivers on both nodes from:
          http://www.oracle.com/technetwork/server-storage/linux/downloads/index-088143.html

        b. Install the packages as root on both nodes:
          root> rpm -ivh oracleasm-support-2.1.7-1.el5 oracleasmlib-2.0.4-1.el5 oracleasm-2.6.18-274.el5xen-2.0.5-1.el5

        c. Configure the driver on both nodes: /usr/sbin/oracleasm configure -i
          Default user to own the driver interface []: grid
          Default group to own the driver interface []: asmadmin
          Start Oracle ASM library driver on boot (y/n) [n]: y
          Scan for Oracle ASM disks on boot (y/n) [y]: y
          Writing Oracle ASM library driver configuration: done

        d. Load the kernel module on both nodes: /usr/sbin/oracleasm init



    17. Mark the shared disks as ASM disks.

        a. From node1 as root, create/mark ASM disks for all 5 shared disks.
          Example:
          root@node1> /usr/sbin/oracleasm createdisk CRS1 /dev/sda1
          root@node1> /usr/sbin/oracleasm createdisk CRS2 /dev/sdb1
          root@node1> /usr/sbin/oracleasm createdisk CRS3 /dev/sdc1
          root@node1> /usr/sbin/oracleasm createdisk RACDB /dev/sdd1
          root@node1> /usr/sbin/oracleasm createdisk FRA /dev/sde1
          root@node1> /usr/sbin/oracleasm listdisks

          Note: Disk names should be in capitals; keep them simple and relevant.

        b. For node2 to pick up the change, run the following as root on node2:
          root@node2> /usr/sbin/oracleasm scandisks
          root@node2> /usr/sbin/oracleasm listdisks
          All the marked disks should be visible.



    18. Create a .bash_profile (or .profile) on both nodes.

            a. Create a .bash_profile for the 'grid' user, then save and source it:
              grid> vi .bash_profile
              PATH=$HOME/bin:/u01/app/11.2.0/grid/bin:$PATH
              export PATH
              umask 022
              ORACLE_SID=+ASM1
              export ORACLE_SID
              ORACLE_BASE=/u01/app/grid
              export ORACLE_BASE
              ORACLE_HOME=/u01/app/11.2.0/grid
              export ORACLE_HOME
              export TEMP=/tmp
              export TMPDIR=/tmp
              TNS_ADMIN=/u01/app/11.2.0/grid/network/admin
              export TNS_ADMIN
              LD_LIBRARY_PATH=/u01/app/11.2.0/grid/lib
              export LD_LIBRARY_PATH
              CLASSPATH=/u01/app/11.2.0/grid/jlib:/u01/app/11.2.0/grid/JRE:$CLASSPATH
              export CLASSPATH

              Note: ORACLE_SID=+ASM1 for node1, +ASM2 for node2, and so on.

            b. Create a .bash_profile for the 'oracle' user, then save and source it:
              oracle> vi .bash_profile
              PATH=$HOME/bin:/u01/app/oracle/product/11.2.0/dbhome_1/bin:$PATH
              export PATH
              ORACLE_BASE=/u01/app/oracle
              export ORACLE_BASE
              ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
              export ORACLE_HOME
              ORACLE_SID=orcl1
              export ORACLE_SID
              LD_LIBRARY_PATH=/u01/app/oracle/product/11.2.0/dbhome_1/lib:$LD_LIBRARY_PATH
              export LD_LIBRARY_PATH
              TNS_ADMIN=/u01/app/oracle/product/11.2.0/dbhome_1/network/admin
              export TNS_ADMIN
              CLASSPATH=/u01/app/oracle/product/11.2.0/dbhome_1/jlib:$CLASSPATH
              export CLASSPATH
              ORACLE_UNQNAME=orcl
              export ORACLE_UNQNAME
              export TEMP=/tmp
              export TMPDIR=/tmp

              Note: ORACLE_SID=orcl1 for node1, orcl2 for node2, and so on.



    19. Kernel parameters: if there are missing kernel RPMs or packages on Linux, they will be identified when the Grid installer is run.

        a. If Oracle Unbreakable Linux (licensed support) is configured, the missing packages can be installed with:
          up2date --whatprovides libstdc++.so.5
          or
          ./runInstaller downloadUpdate

        b. Without that support, a yum server can be used instead. Refer to the link below and install all the missing packages indicated by the installer:
          http://public-yum.oracle.com



    20. Configure the NTP service for time synchronization on both nodes.
      Edit the ntpd file to add the -x flag:
      root> vi /etc/sysconfig/ntpd
      # Drop root to id 'ntp:ntp' by default.
      OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
      # Set to 'yes' to sync hw clock after successful ntpdate
      SYNC_HWCLOCK=no
      # Additional options for ntpdate
      NTPDATE_OPTIONS=""
      root> service ntpd restart

      Note: For SUSE systems, use NTPD_OPTIONS="-x -u ntp".
      Check that the time on both nodes is synchronized (see the check below); this is required for the cluster services to operate properly.
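      For example, to compare the time and NTP peer status on each node:
      root> date
      root> ntpq -p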

    21. X terminal display:

        a. For a GUI session over PuTTY, enable SSH X11 forwarding, or

        b. set the DISPLAY variable manually:
          grid/oracle@node1> DISPLAY=<your local workstation>:0.0
          grid/oracle@node1> export DISPLAY
          Test the X configuration by running xterm:
          grid@node1> xterm &

        c. Note that an X server (for example Xming) must be installed and running on the workstation from which PuTTY is used.



    22. Configure passwordless SSH connectivity (user equivalence) on both nodes. To configure passwordless SSH, you must first create RSA or DSA keys on each cluster node, and then copy all the keys generated on all cluster node members into an authorized keys file that is identical on each node. Note that the SSH files must be readable only by root and by the software installation user (grid, oracle), as SSH ignores a private key file if it is accessible by others. In the examples that follow, the DSA key is used.


            a. grid> mkdir ~/.ssh
              grid> chmod 700 ~/.ssh
              grid> /usr/bin/ssh-keygen -t dsa
              Generating public/private dsa key pair.
              Enter file in which to save the key (/home/grid/.ssh/id_dsa): [Enter]
              Enter passphrase (empty for no passphrase): [Enter]
              Enter same passphrase again: [Enter]
              Your identification has been saved in /home/grid/.ssh/id_dsa.
              Your public key has been saved in /home/grid/.ssh/id_dsa.pub.
              The key fingerprint is: 57:21:d7:d5:54:29:4c:12:40:23:36:e9:6e:2f:e6:40

              Note: Repeat the above steps for all nodes.


            b. grid@node1> touch ~/.ssh/authorized_keys
              grid@node1> ls -l ~/.ssh
              grid@node1> ssh node1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
              The authenticity of host 'node1 (196.96.94.221)' can't be established.
              RSA key fingerprint is 66:65:a6:99:5f:cb:6e:60:6a:06:18:b7:fc:c2:cc:3e
              Are you sure you want to continue connecting (yes/no)? yes
              Warning: Permanently added 'node1, 196.96.94.221' (RSA) to the list of known hosts.
              grid@node1's password: xxxxx (grid)
              grid@node1> ssh node2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
              The authenticity of host 'node2 (196.96.94.229)' can't be established.
              RSA key fingerprint is 30:cd:90:ad:18:00:24:c5:42:49:21:b0:1d:59:2d:7b.
              Are you sure you want to continue connecting (yes/no)? yes
              Warning: Permanently added 'node2, 196.96.94.229' (RSA) to the list of known hosts.
              grid@node2's password: xxxxx (grid)
              grid@node1> ls -l ~/.ssh
              grid@node1> scp ~/.ssh/authorized_keys node2:.ssh/authorized_keys
              grid@node2's password: xxxxx
              authorized_keys
              grid@node1> chmod 600 ~/.ssh/authorized_keys
              grid@node2> chmod 600 ~/.ssh/authorized_keys

              Note: Perform the above steps on all the nodes for both the 'grid' and 'oracle' users.

              Test:
              grid@node1> ssh node2 "date;hostname"
              grid@node2> ssh node1 "date;hostname"



    23. Download and copy the software.

        a. Download the Oracle Grid Infrastructure 11.2.0.1 software to node1 only from:
          http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html
          Log in as the grid user, copy it to /home/grid, set permissions, and extract it.

        b. Download the Oracle RAC 11.2.0.1 software to node1 only from:
          http://www.oracle.com/technetwork/database/enterprise-edition/downloads/112010-linuxsoft-085393.html
          Log in as the oracle user, copy it to /home/oracle, set permissions, and extract it.
          Note: The Grid Infrastructure version must be >= the Oracle RAC version.



    24. Lastly, before starting the Oracle Grid installation, run the cluster requirements check with the cvuqdisk utility that ships with the software.

        a. For the cvuqdisk verification utility, an RPM package must be installed on both nodes.

        b. root> export CVUQDISK_GRP=oinstall

        c. Copy /home/grid/grid from node1 to /home/grid on node2. The 'grid' folder, obtained when the software was extracted, contains the cvuqdisk utility.

        d. root> rpm -iv cvuqdisk-1.0.7-1.rpm
          Preparing packages for installation...
          cvuqdisk-1.0.7-1
          Verify the cvuqdisk install on both nodes:
          root> ls -l /usr/sbin/cvuqdisk
          Execute the runcluvfy.sh script for both nodes:
          grid> /home/grid/grid/runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose





Oracle Grid Infrastructure 11.2.0.1 Software Installation


We are ready to start the Oracle Grid Infrastructure 11.2.0.1 software installation.

    1. From node1, as the grid user, start the OUI (Oracle Universal Installer). Navigate to the location where the software was extracted:

      grid@node1> cd /home/grid/grid

      ./runInstaller

    2. Select Installation Option: select "Install and Configure Grid Infrastructure for a Cluster".

    3. Select Installation Type: select "Advanced Installation".

    4. Select Product Languages: make the appropriate selection(s) for your environment.

    5. Grid Plug and Play Information.




      By default, the SCAN name and the cluster name are the same. After clicking Next, the OUI validates the SCAN and cluster information.



 

    6. Add all the nodes that are to be part of the cluster along with their chosen VIPs.

    7. Provide the grid username/password.

    8. Set up SSH connectivity, test SSH connectivity, and click Next.




 

    9. Storage Option Information: select "Automatic Storage Management".

    10. Create ASM Disk Group: all the previously created and ASM-marked disks will be listed here.

        a. Specify a disk group name

        b. Select Normal redundancy

        c. Select the disks, for example CRS1, CRS2, CRS3


      Note: These disks are for storing the Clusterware files.
      Normal redundancy requires a minimum of 3 disks.

 

    11. Specify an ASM Password: select "Use same password for all the accounts".

    12. Failure Isolation Support: select "Do not use Intelligent Platform Management Interface (IPMI)".

    13. Privileged Operating System Groups: if the groups were created correctly, these fields auto-populate.

      OSDBA for ASM: asmdba
      OSOPER for ASM: asmoper
      OSASM: asmadmin
      Refer to Pre-installation tasks, step 13.

 

    14. Specify Installation Location: set the "Oracle Base" ($GRID_BASE) and "Software Location" ($GRID_HOME) for the Oracle Grid Infrastructure installation:

      Oracle Base: /u01/app/grid
      Software Location: /u01/app/11.2.0/grid

    15. Create Inventory: since this is the first install on the host, you will need to create the Oracle Inventory. Use the default values provided by the OUI:

      Inventory Directory: /u01/app/oraInventory
      oraInventory Group Name: oinstall

    16. Prerequisite Checks: the installer runs through a series of checks to determine whether both Oracle RAC nodes meet the minimum requirements for installing and configuring the Oracle Clusterware and Automatic Storage Management software.

      Starting with Oracle Clusterware 11g Release 2 (11.2), if any check fails, the installer (OUI) creates shell scripts called fixup scripts to resolve many incomplete system configuration requirements. If OUI detects an incomplete task that is marked "fixable", you can easily fix the issue by generating the fixup script by clicking the [Fix & Check Again] button.

      The fixup script is generated during installation. You will be prompted to run the script as root in a separate terminal session. When you run the script, it raises kernel values to required minimums, if necessary, and completes other operating system configuration tasks.

      If all prerequisite checks pass (as was the case for my install), the OUI continues to the Summary screen.
      Note: If there are any missing kernel parameters or packages, install them and re-run the installer. Refer to Pre-installation tasks, step 19, for details.

 

    17. Summary: click Finish to start the installation. The installer performs the configuration on all nodes, first on the primary (local) node and then on the other secondary nodes.

    18. Execute Configuration Scripts: after the installation completes, you will be prompted to run the /u01/app/oraInventory/orainstRoot.sh and /u01/app/11.2.0/grid/root.sh scripts. Open a new console window on each Oracle RAC node in the cluster (starting with the node you are performing the install from) as the root user account and run the scripts. After the scripts run successfully, click OK.

      Go back to the OUI and acknowledge the "Execute Configuration scripts" dialog window.

    19. The final step performed by the OUI is to run the Cluster Verification Utility (CVU).

    20. Finish: at the end of the installation, click the [Close] button to exit the OUI.



Post-Installation Checks After Grid Installation



    1. Verify the cluster services on all nodes.
      grid>cd /u01/app/11.2.0/grid/bin
      ./crsctl check cluster -all
      CRS-4638: Oracle High Availability Services is online
      CRS-4537: Cluster Ready Services is online
      CRS-4529: Cluster Synchronization Services is online
      CRS-4533: Event Manager is online

 

    2. Check the Oracle TNS listener processes on all nodes.
      grid@node1> ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
      LISTENER_SCAN2
      LISTENER_SCAN3
      LISTENER
      grid@node2> ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
      LISTENER_SCAN1
      LISTENER

 

    3. Confirm the Oracle ASM status on both nodes.
      grid> cd /u01/app/11.2.0/grid/bin
      ./srvctl status asm -a

 

    4. Check the Oracle Cluster Registry (OCR).
      grid@node1>/u01/app/11.2.0/grid/bin/ocrcheck

 

    5. Check the voting disks.
      grid@node1>cd /u01/app/11.2.0/grid/bin
      ./crsctl query css votedisk

 

    6. Back up the root.sh script from the Grid home, for example as shown below.
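      A minimal example (root.sh normally sits directly under the Grid home; adjust the path if your layout differs):
      root@node1> cp /u01/app/11.2.0/grid/root.sh /u01/app/11.2.0/grid/root.sh.node1.bak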



ASM Disks Creation


We have to create ASM disk groups for the RAC database and the fast recovery area, since the Grid installation only creates the ASM disk group for the CRS files.

    1. grid@node1> cd /u01/app/11.2.0/grid/bin
      ./asmca

 

    1. Select "Create", choose "external redundancy", specify an appropriate disk name(FRA or RACDB), choose the listed disks and ok. Need to create it twice separately.Exit
      Note: Please note that the disks will be listed in this console only if they are asm marked.
      Refer to Section1: step 17

 

    3. Verify the ASM disk groups (see the example output below):
      grid> cd /u01/app/11.2.0/grid/bin
      ./asmcmd
      ls (lists all the created ASM disk groups)
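      Expected output, assuming the disk group names used in this example (CRS from the Grid installation, plus the newly created RACDB and FRA):
      ASMCMD> ls
      CRS/
      FRA/
      RACDB/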



Oracle RAC 11.2.0.1 Software Installation


We are ready to perform Oracle RAC database installation on all the nodes.

    1. From node1, as the 'oracle' user, start the OUI. Go to the location where the software was copied:
      oracle@node1> cd /home/oracle

      ./runInstaller

 

    2. Configure Security Updates: provide an email ID if required.

    3. Installation Option: select "Install database software only".

    4. Grid Options:

        a. Select all the listed nodes for the RAC database

        b. Provide the oracle user/password

        c. Set up SSH connectivity, test SSH connectivity, and click Next.


 

    5. Product Languages: make the appropriate selection(s) for your environment.

    6. Database Edition: select "Enterprise Edition".

    7. Installation Location: specify the Oracle base and software location (Oracle home) as follows:

      Oracle Base: /u01/app/oracle
      Software Location: /u01/app/oracle/product/11.2.0/dbhome_1

    8. Operating System Groups: select the OS groups to be used for the SYSDBA and SYSOPER privileges:

      Database Administrator (OSDBA) Group: dba
      Database Operator (OSOPER) Group: oper

 

    9. Prerequisite Checks: the installer will run through a series of checks to determine whether both Oracle RAC nodes meet the minimum requirements for installing and configuring the Oracle Database software.

      Starting with 11g Release 2 (11.2), if any checks fail, the installer (OUI) will create shell script programs called fixup scripts to resolve many incomplete system configuration requirements. If OUI detects an incomplete task that is marked "fixable", then you can easily fix the issue by generating the fixup script by clicking the [Fix & Check Again] button.

      The fixup script is generated during installation. You will be prompted to run the script as root in a separate terminal session. When you run the script, it raises kernel values to required minimums, if necessary, and completes other operating system configuration tasks.

      If all prerequisite checks pass (as was the case for my install), the OUI continues to the Summary screen.

 

    10. Summary: click Finish to start the installation. The installer performs the Oracle Database software installation process on both Oracle RAC nodes.

 

    11. Execute Configuration Scripts: after the installation completes, you will be prompted to run the /u01/app/oracle/product/11.2.0/dbhome_1/root.sh script on both Oracle RAC nodes. Open a new console window on each Oracle RAC node in the cluster (starting with the node you are performing the install from) as the root user account and run the script.

      Go back to OUI and acknowledge the "Execute Configuration scripts" dialog window.

 

    12. Finish: at the end of the installation, click the [Close] button to exit the OUI.



Software Patching For RAC One Node


Download and install patch 9004119 on all the nodes. This patch installs the following scripts, which are required for RAC One Node:

raconefix - Fixes metadata after an Omotion failure or failover
raconeinit - Initialize the database to RAC One Node
raconestatus - Check the status of RAC One Node database
racone2rac - Upgrade RAC One Node database to RAC
Omotion - Migrate database online from one node to another

Verify whether patch is installed: oracle> opatch lsinventory

Database Creation Through DBCA


For RAC One Node functionality, the database should be created on only one node, the local node in this case.

    1. oracle@node1> dbca &
      or
      oracle@node1> cd /u01/app/oracle/product/11.2.0/dbhome_1/bin
      ./dbca

 

    2. Welcome Screen: select "Oracle Real Application Clusters database".

    3. Operations: select "Create a Database".

    4. Database Templates: Custom Database.

 

    5. Database Identification:

        a. Select "Admin-Managed" database

        b. Specify the Global Database Name, which will be the service name. It is recommended to specify the full domain name.

        c. Select only one node: node1

        d. Specify the SID prefix. This is the name given to the DB instance, and it gets appended with the instance number. We have already set this as an environment variable in .bash_profile. This is a must for database connectivity, and it can be cross-checked as shown below.
          Example:
          SID of node1 = orcl1
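          Once the database is created, the instance name and SID can be cross-checked, for example:
          oracle@node1> echo $ORACLE_SID
          orcl1
          oracle@node1> sqlplus / as sysdba
          SQL> select instance_name, host_name from v$instance;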


 

    6. Management Options: leave the default options here, which is to Configure Enterprise Manager / Configure Database Control for local management.

    7. Database Credentials: select "Use the Same Administrative Password for All Accounts" and enter the password.

    8. Database File Options: specify the storage type and locations for the database files.

    9. Specify ASMSNMP Password: specify the ASMSNMP password for the ASM instance.

    10. Recovery Configuration: check the option "Specify Fast Recovery Area". For the Fast Recovery Area, click the [Browse] button and select the disk group name FRA.

    11. Database Content: default options.

    12. Initialization Parameters: default options.

    13. Database Storage: default options.

    14. Creation Options: select "Create database" and "Generate database creation scripts". Click Finish to start the database creation process. After acknowledging the database creation report and script generation dialog, the database creation will start. Click OK on the "Summary" screen.

    15. At the end of the database creation, exit from the DBCA.



Post Installation Checks After Oracle RAC Installation



    1. Log in to Oracle Enterprise Manager (Database Control) using the DBSNMP user and explore:
      https://node1.labs.blr.novell.com:1158/em

      We can see the database and cluster status, create users, check performance, and so on.

 

    2. Recompile invalid objects:
      oracle@node1> sqlplus / as sysdba
      sql>@?/rdbms/admin/utlrp.sql

 

    3. Enable archive log mode in the RAC environment. If the database is in archive log mode, Oracle makes a copy of each online redo log before it is reused. A thread must contain at least two online redo logs (or online redo log groups).

        a. Disable the cluster database parameter:
          oracle@node1> sqlplus / as sysdba
          SQL> alter system set cluster_database=false scope=spfile sid='orcl1';
          System altered.

        b. Shut down all instances accessing the cluster database as the oracle user:
          oracle@node1> srvctl stop database -d orcl

        c. Using the local instance, mount the database:
          oracle@node1>sqlplus / as sysdba
          SQL*Plus: Release 11.2.0.1.0 Production on Sat Nov 21 19:26:47 2009
          Copyright (c) 1982, 2009, Oracle. All rights reserved.
          Connected to an idle instance.
          SQL> startup mount
          ORACLE instance started.
          Total System Global Area 1653518336 bytes
          Fixed Size 2213896 bytes
          Variable Size 1073743864 bytes
          Database Buffers 570425344 bytes
          Redo Buffers 7135232 bytes

        d. Enable archiving:
          SQL> alter database archivelog;
          Database altered.

        e. Re-enable support for clustering by setting the instance parameter cluster_database back to TRUE from the current instance:
          SQL> alter system set cluster_database=true scope=spfile sid='orcl1';
          System altered.

        f. Shut down the local instance:
          SQL> shutdown immediate
          ORA-01109: database not open
          Database dismounted.
          ORACLE instance shut down.

        g. Bring all instances back up as the oracle user using srvctl:
          oracle@node1> srvctl start database -d orcl

        h. Log in to the local instance and verify that archive log mode is enabled:
          [oracle@node1 ~]$ sqlplus / as sysdba
          SQL*Plus: Release 11.2.0.1.0 Production on Mon Nov 8 20:07:48 2010
          Copyright (c) 1982, 2009, Oracle. All rights reserved.
          Connected to:
          Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
          With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP, Data Mining and Real Application Testing options
          SQL> archive log list
          Database log mode Archive Mode
          Automatic archival Enabled
          Archive destination USE_DB_RECOVERY_FILE_DEST
          Oldest online log sequence 68
          Next log sequence to archive 69
          Current log sequence 6


 

    4. Verify the database status:
      oracle@node1> srvctl status database -d orcl
      Instance orcl1 is running on node node1
      Instance orcl2 is running on node node2

    5. To see the configuration of the database:
      oracle@node1> srvctl config database -d orcl -a
      Database unique name: orcl
      Database name: orcl
      Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
      Oracle user: oracle
      Spfile: RACDB/orcl/spfileorcl.ora
      Domain: labs.blr.novell.com
      Start options: open
      Stop options: immediate
      Database role: PRIMARY
      Management policy: AUTOMATIC
      Server pools: orcl
      Database instances: orcl1,orcl2
      Disk Groups: RACDB,FRA,CRS
      Mount point paths:
      Services: orcl.labs.blr.novell.com
      Type: RAC
      Database is enabled
      Database is administrator managed

 

    6. To see the ASM status:
      oracle@node1> srvctl status asm

    7. TNS listener status:
      srvctl status listener

    8. SCAN status:
      oracle@node1> srvctl status scan

    9. To start/stop the cluster:
      root> cd /u01/app/11.2.0/grid/bin
      ./crsctl stop cluster
      ./crsctl start cluster

    10. To start/stop the database:
      oracle> srvctl stop database -d databasename
      oracle> srvctl start database -d databasename

    11. To start or stop only the local instance (a single instance):
      oracle> sqlplus / as sysdba
      sql> startup mount;
      sql> shutdown immediate;



Initialize the Database to RAC One Node


We will use the scripts installed by patch 9004119.

    1. The raconeinit utility initializes the database, renames the DB instance, and creates the files and directories supporting the renamed instance. Run the raconeinit script from node1:
      oracle@node1> $ORACLE_HOME/bin/raconeinit
      Candidate Databases on this cluster:
      #    Database   RAC One Node   Fix Required
      ===  ========   ============   ============
      [1]  orcl       NO             N/A
      Enter the database to initialize [1]: 1
      Database orcl is now running on server: node1
      Candidate servers that may be used for this DB: node2
      Enter the names of additional candidate servers where this DB may run (space delimited): node2
      Please wait, this may take a few minutes to finish...
      Database configuration modified.
      We can check the status using raconestatus.

 

    2. oracle@node1> $ORACLE_HOME/bin/raconestatus
      Database   UP   Fix Required   Current Server   Candidate Server Names
      --------   --   ------------   --------------   ----------------------
      orcl       Y    N              node1            node1 node2

 

    3. Once the database is initialized, we can run Omotion to start the database relocation to the second node.
      oracle@node1> $ORACLE_HOME/bin/Omotion
      Enter number of the database to migrate [1]: 1
      Specify maximum time in minutes for migration to complete (max 30) [30]: 30
      Available Target Server(s):
      #    Server   Available
      ==   ======   =========
      [1]  node2    Y

      Enter number of the target node [1]: 1

 

    4. Check the status after relocation.
      oracle@node1> $ORACLE_HOME/bin/raconestatus
      RAC One Node databases on this cluster:
      Database   UP   Fix Required   Current Server   Candidate Server Names
      --------   --   ------------   --------------   ----------------------
      orcl       Y    N              node2            node1 node2

 

    5. Check the database status.
      srvctl status database -d orcl
      Instance orcl_2 is running on node2

 

    6. Since raconeinit renames the instance, we should manually reset ORACLE_SID for database connectivity:
      export ORACLE_SID=orcl_1   (on node1)
      export ORACLE_SID=orcl_2   (on node2)

 

    7. We can use the 'racone2rac' script to convert the RAC One Node database back to a RAC database. It converts it back to a single-instance RAC database; later we can use the dbca utility to add as many instances as we want (a sketch is shown below).
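      A rough sketch of the conversion (assuming racone2rac prompts for the database in the same way as raconeinit):
      oracle@node1> $ORACLE_HOME/bin/racone2rac
      Enter the database to convert [1]: 1
      oracle@node1> dbca &
      (use "Instance Management" -> "Add an instance" to add further instances)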



Troubleshooting



    1. Log files can be found in:

        a. Event Manager log files: $GRID_HOME/log/<hostname>/evmd

        b. Database log files: $GRID_HOME/log/<hostname>/<dbname> and $ORACLE_HOME/log/<hostname>/<dbname>

        c. Cluster Ready Services log files: $GRID_HOME/log/<hostname>/crsd

        d. Oracle Cluster Registry (OCR) client logs: $GRID_HOME/log/<hostname>/client

        e. CRS alert log: $GRID_HOME/log/<hostname>/alert<hostname>.log


 

    2. Error: ORA-01219: database not open: queries allowed on fixed tables/views only.
      This error is sometimes seen after shutting down and starting up the local instance and then trying to query tables.
      Solution: from the local node
      sqlplus / as sysdba
      SQL> select status from v$instance;
      STATUS
      --------
      MOUNTED
      We still need to open the database manually:
      SQL> alter database open;

 

    3. Error: ORA-00257: archiver error. Connect internal only, until freed.
      This is caused by insufficient space due to archived redo logs.
      Solution: from the local node
      SQL> SELECT * FROM V$RECOVERY_FILE_DEST;
      If the space used is equal to the space limit, delete archive logs that are no longer needed to free up space:
      rman target /
      RMAN> delete archivelog until time 'SYSDATE-1';
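      Alternatively, if disk space allows, the recovery area can be enlarged instead of deleting archive logs (the size below is only an example):
      SQL> select name, space_limit, space_used from v$recovery_file_dest;
      SQL> alter system set db_recovery_file_dest_size=20G scope=both sid='*';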

 

    4. ORA-00603: ORACLE server session terminated by fatal error, or ORA-29702: error occurred in Cluster Group Service operation.
      If the RAC node name is listed against the loopback address in /etc/hosts, for example "127.0.0.1 node1 localhost.localdomain localhost", you will receive this error during the RAC installation.

      Solution: remove the node name from the loopback address entry.

 

    5. ORA-01078: failure in processing system parameters; LRM-00109: could not open parameter file '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/initorcl_1.ora'.

      Although the database is up and running, trying to connect to it lands on an idle instance, and starting the local instance gives the above error.

      Solution: this error is seen when ORACLE_SID is not set properly.

      For the local node: export ORACLE_SID=dbname1 (orcl1)

 

    6. RAC One Node, ORA-12505: renaming the instance causes OEM Database Control to return the error "ORA-12505: TNS: listener does not currently know of SID given in connect descriptor".

      Solution: recreate the DB configuration by using the dbca utility's reconfigure option. Refer to the admin guide.

 

    7. RAC One Node, error num: 2. ERROR: Unable to start the new instance of orcl on node2.

      This error is seen after running Omotion. Since the instance gets renamed with an underscore (orcl_1), it is treated as a policy-managed database, where the undo tablespace assignment has to be done manually, unlike an admin-managed database.

      Solution: run the 'raconefix' script to fix this problem and run Omotion again for a successful relocation.

 

    8. Error: server candidate is not found in the cluster.
      This is seen when Omotion is run; it simply exits without relocating.

      Solution: check the cluster, CRS services, and node reachability. If any services are down, restart them:
      root> $GRID_HOME/bin/crsctl check cluster -all
      root> $GRID_HOME/bin/crsctl check crs    (run on each node)
      root> $GRID_HOME/bin/crsctl check ctss
      root> $GRID_HOME/bin/srvctl status listener -n node1,node2
      root> $GRID_HOME/bin/crsctl start cluster -all



ZCM (ZENworks Configuration Management) Installation with Oracle RAC



    1. Create a VM with a Windows or Linux OS.

    2. Download the ZCM 11.2 build and start the installation.

    3. Complete the initial steps.

    4. On the database selection page, choose 'Oracle' as the database for ZCM.

    5. We can use a new schema or an existing schema. For an existing schema, the schema should be created beforehand (a sketch is shown below).
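      A minimal sketch of creating such a schema user; the user name 'zenuser', the tablespace names, and the privilege list are placeholders only - refer to the ZCM documentation for the exact privileges required:
      SQL> create user zenuser identified by password default tablespace users temporary tablespace temp;
      SQL> grant create session, create table, create view, create sequence, create procedure, create trigger to zenuser;
      SQL> alter user zenuser quota unlimited on users;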

 

    6. Specify the service name, hostname, and port of the database.

      Check the tnsnames.ora file under $ORACLE_HOME/network/admin and verify the hostname and service name.
      Example:
      ORCL =
      (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = blr-srm-cluster)(PORT = 1521))
      (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl.labs.blr.novell.com)
      )
      )

      Note: The hostname should be the SCAN name and not an individual node's hostname, as all the nodes are registered under the SCAN as a cluster. During a node failure, the SCAN immediately redirects all client connections and transactions to another available node (registered under the cluster), minimizing downtime. Connectivity through the SCAN can be tested as shown below.
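      For example, connectivity through the SCAN can be verified before the ZCM installation with EZConnect (enabled by default), using the SCAN name and service name shown above:
      oracle@node1> sqlplus system@//blr-srm-cluster:1521/orcl.labs.blr.novell.com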

 

    7. Provide the administrative Oracle user and the ZENworks user, and proceed with the ZENworks installation.

 

    8. Test the relocation:

        a. Shut down the database on the local node:
          SQL> shutdown immediate

        b. Run the raconeinit and Omotion scripts to relocate to node2. Refer to the section "Initialize the Database to RAC One Node" for more details.


 

    9. ZENworks continues to function properly. During relocation and migration, the maximum expected downtime is 0-30 minutes.

 

    10. We can relocate back to node1 if required.



Uninstallation Procedure



    1. Uninstall ZENworks.

    2. Convert RAC One Node back to RAC using the 'racone2rac' script.

    3. Delete the database through dbca:
      oracle@node1> dbca &

    4. De-install the Oracle RAC database software:
      oracle@node1> cd $ORACLE_HOME/deinstall/
      ./deinstall
      Specify all the required information.

    5. De-install the Oracle Grid Infrastructure software:
      grid@node1> cd /u01/app/11.2.0/grid/deinstall
      ./deinstall

    6. Delete the ASM-marked shared disks:
      root> /usr/sbin/oracleasm deletedisk CRS1  (repeat for all the disks)

    7. Check for remaining Oracle processes by running ps -ef. Stop any processes that are still running, and the machine is as good as a fresh one.



Links and Downloads



    1. Oracle VM Server 2.2.2
      http://www.oracle.com/virtualization

    2. Oracle VM Manager 2.2.0
      http://www.oracle.com/virtualization

    3. Oracle Linux (Red Hat 5)
      http://www.oracle.com/linux

    4. Oracle Grid Infrastructure software 11.2.0.1.0
      http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html

    5. Oracle RAC Database 11.2.0.1.0
      http://www.oracle.com/technetwork/database/enterprise-edition/downloads/112010-linuxsoft-085393.html

    6. Kernel RPMs:
      http://public-yum.oracle.com

    7. Patch 9004119 for RAC One Node
      http://www.support.oracle.com



IMPORTANT NOTE:
Please note that if the ZENworks Reporting Server (ZRS) has been configured in the setup, Oracle RAC One Node will not work by default (see the comment below for a workaround).

Comment (from the author):

    As mentioned in the document, the ZENworks Reporting Server (ZRS) doesn't work with Oracle RAC One Node by default.

    Please follow the steps below to make ZRS work:
    1. Install ZRS on top of ZCM.
    2. Create an empty 'boe_jdbc_url.txt' under
       /etc/opt/novell/zenworks/datamodel/ if ZRS is running on a Linux OS, or
       ZENWORKS_HOME/conf/datamodel/ if ZRS is running on a Windows OS.
    3. Add the JDBC URL for the RAC server in the file. An example URL:
       jdbc:oracle:thin:@(DESCRIPTION= (ADDRESS=(PROTOCOL=tcp)(HOST=Ipaddress1)(PORT=1521)) (ADDRESS=(PROTOCOL=tcp)(HOST=IpAddress2)(PORT=1521)) (CONNECT_DATA=(SERVICE_NAME=OracleServiceName)))
       Note: The DB server IP, DB name, and DB instance should be changed if the database is moved.
    4. Run the command: novell-zenworks-configure -c UpdateBOE
    5. Launch ZRS.
    6. Run or create reports successfully in an Oracle RAC One Node environment.

    Also refer to www.novell.com/.../doc.php for more information.

    Best Regards,
    Megha