By kmegha
Real Application Clusters - RAC One Node is a RAC-enabled single-instance database that can be moved around within a cluster. It allows the database instance to be relocated to another node of the cluster with a grace period of at most 30 minutes. This grace period allows all existing transactions in the database to complete. Once the online relocation is initiated, all new client connections are redirected to the node to which the instance is relocating.
Oracle RAC One Node is used for active-passive cluster scenarios, where only one instance is running (active) at any given time and the other nodes in the cluster are passive (standby). In the event of a failure of the first node, all connections are terminated and re-established with the new instance on the secondary node within the grace period.
Oracle Grid Infrastructure provides the Clusterware files (CRS voting disks) and Automatic Storage Management (ASM) packaged together. This is essential for RAC interoperability.
Hardware resources
Software resources
Pre-installation tasks
Example:
bash> nslookup blr-srm-cluster
Server: 196.96.201.1
Address: 196.96.201.1#53
Name: blr-srm-cluster.labs.blr.novell.com
Address: 196.96.94.226
Name: blr-srm-cluster.labs.blr.novell.com
Address: 196.96.94.227
Name: blr-srm-cluster.labs.blr.novell.com
Address: 196.96.94.228
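On the DNS side, the three SCAN addresses above are simply round-robin A records for the cluster name. A minimal sketch of what the zone entries might look like, assuming a BIND-style zone file for labs.blr.novell.com (the record layout is illustrative only):
; round-robin A records behind the SCAN name
blr-srm-cluster    IN    A    196.96.94.226
blr-srm-cluster    IN    A    196.96.94.227
blr-srm-cluster    IN    A    196.96.94.228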
Example of a hosts file
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
#node2
196.96.94.229 blr-srm-r11t.labs.blr.novell.com blr-srm-r11t #public
196.96.94.225 blr-srm-r11p.labs.blr.novell.com blr-srm-r11p #Vip
#node1-localnode
196.96.94.221 blr-srm-r11l.labs.blr.novell.com blr-srm-r11l #public
196.96.94.223 blr-srm-r11n.labs.blr.novell.com blr-srm-r11n #Vip
#private ip
10.0.0.2 node2-priv.labs.blr.novell.com node2-priv
10.0.0.1 node1-priv.labs.blr.novell.com node1-priv
Example:
user: grid, password: grid (on both machines)
user: oracle, password: oracle (on both machines)
groupadd -g 1000 oinstall
groupadd -g 1200 asmadmin
groupadd -g 1201 asmdba
groupadd -g 1202 asmoper
useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash -c "Grid Infrastructure Owner" grid
Example:
root> id grid
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)
groupadd -g 1300 dba
groupadd -g 1301 oper
useradd -m -u 1101 -g oinstall -G dba,oper,asmdba -d /home/oracle -s /bin/bash -c "Oracle Software Owner" oracle
Example:
root> id oracle
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)
root> mkdir -p /u01/app/oracle
mkdir -p /u01/app/11.2.0/grid
chown -R grid:oinstall /u01
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01
root@node1> fdisk -l
root@node1 > fdisk /dev/sda
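A minimal sketch of the interactive fdisk dialogue for creating a single primary partition spanning the whole disk (repeat for /dev/sdb through /dev/sde; the exact prompt text varies with the fdisk version):
Command (m for help): n                  # new partition
Command action: p                        # primary partition
Partition number (1-4): 1
First cylinder (...): <Enter>            # accept default
Last cylinder (...): <Enter>             # accept default, use the whole disk
Command (m for help): w                  # write the partition table and exit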
root> rpm -iv oracleasm-support-2.1.7-1.el5 \
oracleasmlib-2.0.4-1.el5 \
oracleasm-2.6.18-274.el5xen-2.0.5-1.el5
root@node1> /usr/sbin/oracleasm configure -i
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Example:
root@node1> /usr/sbin/oracleasm createdisk CRS1 /dev/sda1
root@node1> /usr/sbin/oracleasm createdisk CRS2 /dev/sdb1
root@node1> /usr/sbin/oracleasm createdisk CRS3 /dev/sdc1
root@node1> /usr/sbin/oracleasm createdisk RACDB /dev/sdd1
root@node1> /usr/sbin/oracleasm createdisk FRA /dev/sde1
root@node1> /usr/sbin/oracleasm listdisks
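The createdisk commands need to be run from one node only; the second node just has to pick up the new disk labels. Assuming the same ASMLib packages are installed and configured on node2:
root@node2> /usr/sbin/oracleasm scandisks
root@node2> /usr/sbin/oracleasm listdisks   # should list CRS1, CRS2, CRS3, RACDB and FRA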
grid>vi .bash_profile
PATH=$HOME/bin:/u01/app/11.2.0/grid/bin:$PATH
export PATH
umask 022
ORACLE_SID=+ASM1
export ORACLE_SID
ORACLE_BASE=/u01/app/grid
export ORACLE_BASE
ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_HOME
export TEMP=/tmp
export TMPDIR=/tmp
TNS_ADMIN=/u01/app/11.2.0/grid/network/admin
export TNS_ADMIN
LD_LIBRARY_PATH=/u01/app/11.2.0/grid/lib
export LD_LIBRARY_PATH
CLASSPATH=/u01/app/11.2.0/grid/jlib:/u01/app/11.2.0/grid/JRE:$CLASSPATH
export CLASSPATH
oracle>vi .bash_profile
PATH=$HOME/bin:/u01/app/oracle/product/11.2.0/dbhome_1/bin:$PATH
export PATH
ORACLE_BASE=/u01/app/oracle
export ORACLE_BASE
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_HOME
ORACLE_SID=orcl1
export ORACLE_SID
LD_LIBRARY_PATH=/u01/app/oracle/product/11.2.0/dbhome_1/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
TNS_ADMIN=/u01/app/oracle/product/11.2.0/dbhome_1/network/admin
export TNS_ADMIN
CLASSPATH=/u01/app/oracle/product/11.2.0/dbhome_1/jlib:$CLASSPATH
export CLASSPATH
ORACLE_UNQNAME=orcl
export ORACLE_UNQNAME
export TEMP=/tmp
export TMPDIR=/tmp
up2date --whatprovides libstdc++.so.5, or
./runInstaller downloadUpdate
Edit the ntpd configuration file to add the -x flag:
root>vi /etc/sysconfig/ntpd
#Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
#Set to 'yes' to sync hw clock after successful ntpdate
SYNC_HWCLOCK=no
# Additional options for ntpdate
NTPDATE_OPTIONS=""
root> service ntpd restart
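To confirm that ntpd came back up with the -x (slew) option, checks along these lines can be used (output differs per environment):
root> ps -ef | grep ntpd | grep -v grep    # the -x flag should appear in the command line
root> ntpq -p                              # lists the configured time servers and sync status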
export DISPLAY
Test the X configuration by running xterm:
grid@node1>xterm &
grid> mkdir ~/.ssh
grid> chmod 700 ~/.ssh
grid> /usr/bin/ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_dsa): [Enter]
Enter passphrase (empty for no passphrase): [Enter]
Enter same passphrase again: [Enter]
Your identification has been saved in /home/grid/.ssh/id_dsa.
Your public key has been saved in /home/grid/.ssh/id_dsa.pub.
The key fingerprint is: 57:21:d7:d5:54:29:4c:12:40:23:36:e9:6e:2f:e6:40
grid@node1>touch ~/.ssh/authorized_keys
grid@node1>ls -l ~/.ssh
grid@node1>ssh node1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'node1 (196.96.94.221)' can't be established.
RSA key fingerprint is 66:65:a6:99:5f:cb:6e:60:6a:06:18:b7:fc:c2:cc:3e
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1, 196.96.94.221' (RSA) to the list of known hosts.
grid@node1's password: xxxxx (grid)
grid@node1>ssh node2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'node2 (196.96.94.229)' can't be established.
RSA key fingerprint is 30:cd:90:ad:18:00:24:c5:42:49:21:b0:1d:59:2d:7b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,196.96.94.229' (RSA) to the list of known hosts.
grid@node2's password: xxxxx (grid)
grid@node1>ls -l ~/.ssh
grid@node1>scp ~/.ssh/authorized_keys node2:.ssh/authorized_keys
grid@node2's password: xxxxx
authorized_keys
grid@node1>chmod 600 ~/.ssh/authorized_keys
grid@node2>chmod 600 ~/.ssh/authorized_keys
Test:
grid@node1>ssh "date:hostname" node2
grid@node2>ssh "date:hostname" node1
root>rpm -iv cvuqdisk-1.0.7-1.rpm
Preparing packages for installation...
cvuqdisk-1.0.7-1
Verify cvuqdisk install on both nodes
root> ls -l /usr/sbin/cvuqdisk
Execute the runcluvfy.sh script to verify both nodes:
grid>/home/grid/grid/runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose
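If cluvfy reports fixable failures, the -fixup option generates a fixup script that must be run as root on each node before re-running the check. The exact path is printed in the cluvfy output; the path below is only an example:
root@node1> /tmp/CVU_11.2.0.1.0_grid/runfixup.sh
root@node2> /tmp/CVU_11.2.0.1.0_grid/runfixup.sh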
The SCAN name and the cluster name will be the same by default. After clicking Next, the OUI validates the SCAN and cluster information.
grid>cd /u01/app/11.2.0/grid/bin
./crsctl check cluster -all
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
grid@node1> ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
LISTENER_SCAN2
LISTENER_SCAN3
LISTENER
grid@node2> ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
LISTENER_SCAN1
LISTENER
grid> cd /u01/app/11.2.0/grid/bin
./srvctl status asm -a
grid@node1>/u01/app/11.2.0/grid/bin/ocrcheck
grid@node1>cd /u01/app/11.2.0/grid/bin
./crsctl query css votedisk
We have to create ASM disk groups for the RAC database and the Fast Recovery Area, since the grid installation only creates the ASM disk group used for the CRS files; a sketch with asmca follows.
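A minimal sketch of that step, run as the grid user. The disk group names simply match the ASMLib disk labels created earlier, and external redundancy is just one possible choice:
grid@node1> asmca &
# In the Disk Groups tab: Create -> name RACDB -> select disk ORCL:RACDB -> External redundancy -> OK
# In the Disk Groups tab: Create -> name FRA   -> select disk ORCL:FRA   -> External redundancy -> OK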
We are ready to perform Oracle RAC database installation on all the nodes.
Download and install Patch 9004119 on all the nodes. This patch installs the following scripts, which are required for RAC One Node:
raconefix - Fixes metadata after an Omotion failure or failover
raconeinit - Initialize the database to RAC One Node
raconestatus - Check the status of RAC One Node database
racone2rac - Upgrade RAC One Node database to RAC
Omotion - Migrate database online from one node to another
Verify whether the patch is installed: oracle> opatch lsinventory
For RAC One Node functionality, the database should be created on only one node, the local node (node1) in this case; a dbca sketch follows.
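A hedged sketch of that creation step using dbca from node1; the key point is to select only the local node on the node-selection screen, while the remaining choices follow the usual RAC database creation flow (the names below match this setup):
oracle@node1> dbca &
# Oracle Real Application Clusters database -> Create a Database
# Node selection: select only node1
# Global database name: orcl.labs.blr.novell.com, SID prefix: orcl
# Storage: ASM, database area +RACDB, Fast Recovery Area +FRA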
Example:
SID of node1 = orcl1
oracle@node1> sqlplus / as sysdba
SQL> @?/rdbms/admin/utlrp.sql
oracle@node1>sqlplus / as sysdba
SQL> alter system set cluster_database=false scope=spfile sid='orcl1';
System altered.
oracle@node1> srvctl stop database -d orcl
oracle@node1>sqlplus / as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Sat Nov 21 19:26:47 2009
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup mount
ORACLE instance started.
Total System Global Area 1653518336 bytes
Fixed Size 2213896 bytes
Variable Size 1073743864 bytes
Database Buffers 570425344 bytes
Redo Buffers 7135232 bytes
SQL> alter database archivelog;
Database altered.
SQL> alter system set cluster_database=true scope=spfile sid='orcl1';
System altered.
SQL> shutdown immediate
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
oracle@node1> srvctl start database -d orcl
[oracle@node1 ~]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Mon Nov 8 20:07:48 2010
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP, Data Mining and Real Application Testing options
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 68
Next log sequence to archive 69
Current log sequence 69
oracle@node1>srvctl status database -d orcl
Instance orcl1 is running on node node1
Instance orcl2 is running on node node2
oracle@node1>srvctl config database -d orcl -a
Database unique name: orcl
Database name: orcl
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +RACDB/orcl/spfileorcl.ora
Domain: labs.blr.novell.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcl
Database instances: orcl1,orcl2
Disk Groups: RACDB,FRA,CRS
Mount point paths:
Services: orcl.labs.blr.novell.com
Type: RAC
Database is enabled
Database is administrator managed
oracle@node1> srvctl status asm
oracle@node1> srvctl status listener
oracle@node1> srvctl status scan
root>cd /u01/app/11.2.0/grid/bin
./crsctl stop cluster
./crsctl start cluster
oracle> srvctl stop database -d databasename
oracle> srvctl start database -d databasename
oracle> sqlplus / as sysdba
SQL> startup mount;
SQL> shutdown immediate;
We will use the scripts installed by Patch 9004119.
(a) Run the "raconeinit" script from node1.
oracle@node1>$ORACLE_HOME/bin/raconeinit
Candidate Databases on this cluster:
# Database RAC One Node Fix Required
=== ======== ============ ============
[1] orcl NO N/A
Enter the database to initialize [1]: 1
Database orcl is now running on server: node1
Candidate servers that may be used for this DB: node2
Enter the names of additional candidate servers where this DB may run (space delimited): node2
Please wait, this may take a few minutes to finish….
Database configuration modified.
We can check the status using raconestatus
Database   UP   Fix Required   Current Server   Candidate Server Names
--------   --   ------------   --------------   ----------------------
orcl       Y    N              node1            node1 node2
oracle@node1>$ORACLE_HOME/bin/Omotion
Enter number of the database to migrate [1]: 1
Specify maximum time in minutes for migration to complete (max 30) [30]: 30
Available Target Server(s):
#    Server   Available
==   ======   =========
[1]  node2    Y
Enter number of the target node [1]: 1
oracle@node1>$ORACLE_HOME/bin/raconestatus
RAC One Node databases on this cluster:
Database   UP   Fix Required   Current Server   Candidate Server Names
--------   --   ------------   --------------   ----------------------
orcl       Y    N              node2            node1 node2
srvctl status database -d orcl
Instance orcl_2 is running on node2
After raconeinit, the instance name carries an _1/_2 suffix, so set ORACLE_SID accordingly on each node:
export ORACLE_SID=orcl_1    # on node1
export ORACLE_SID=orcl_2    # on node2
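A quick way to confirm which instance is active after a relocation, assuming the environment above (run on the node that currently hosts the database):
oracle@node2> export ORACLE_SID=orcl_2
oracle@node2> sqlplus / as sysdba
SQL> select instance_name, host_name, status from v$instance;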
Solution: from the local node
sqlplus / as sysdba
SQL> select status from v$instance;
STATUS
--------
MOUNTED

If the status is MOUNTED rather than OPEN, we still need to manually open the database:
SQL> alter database open;
Solution: from the local node
SQL> SELECT * FROM V$RECOVERY_FILE_DEST;
If SPACE_USED has reached SPACE_LIMIT, delete archive logs that are no longer needed to free up space:
rman target /
RMAN>delete archivelog until time 'SYSDATE-1';
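If the archive logs are still needed, an alternative (not part of the original steps) is to enlarge the recovery area instead, assuming space is available in the FRA disk group; the 20G value is only illustrative:
SQL> alter system set db_recovery_file_dest_size=20G scope=both sid='*';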
root> $GRID_HOME/crsctl check cluster -all
root> $GRID_HOME/crsctl check crs -n node1,node2
root> $GRID_HOME/crsctl check ctss -n node1,node2
root> $GRID_HOME/srvctl status listener -n node1,node2
root> $GRID_HOME/crsctl start cluster -all
Example:
ORCL =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = blr-srm-cluster)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = orcl.labs.blr.novell.com)
)
)
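With this alias in the client's tnsnames.ora, a quick connectivity check through the SCAN might look like the following (the client prompt and user name are illustrative):
oracle@client> tnsping ORCL
oracle@client> sqlplus system@ORCL
SQL> select instance_name, host_name from v$instance;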
oracle@node1> dbca &
oracle@node1> cd $ORACLE_HOME/deinstall/
./deinstall
Specify all the required information
grid@node1> cd /u01/app/11.2.0/grid/deinstall
./deinstall
IMPORTANT NOTE:
Please note that if ZENworks Reporting Server (ZRS) has been configured in the setup, Oracle RAC One Node will not work.