This is a placeholder for miscellaneous notes on 11gR2 Grid and RAC.
09-Nov-11 : Location of Voting disk
If the voting disk exists on ASM, you cannot add CFS voting disks alongside it; you must replace the ASM voting disk with the CFS voting disks. Apparently, the two location types cannot be used concurrently?
[root@node1 crs]# crsctl query css votedisk
##  STATE    File Universal Id                File Name                 Disk group
--  -----    -----------------                ---------                 ---------
 1. ONLINE   5e25b31afc8e4fcabf3af0462e71ada9 (/crs/ocr.configuration)  [DATA]
Located 1 voting disk(s).
[root@node1 crs]#
[root@node1 crs]# crsctl add css votedisk /data1/votedisk.1 /data2/votedisk.2
CRS-4258: Addition and deletion of voting files are not allowed because the voting files are on ASM
[root@node1 crs]# crsctl replace votedisk /data1/votedisk.1 /data2/votedisk.2
Now formatting voting disk: /data1/votedisk.1.
Now formatting voting disk: /data2/votedisk.2.
CRS-4256: Updating the profile
Successful addition of voting disk 0ba633a673684fc0bf95cfbf188c399b.
Successful addition of voting disk 5784bae373ba4fcfbfb5c89b7136a7ea.
Successful deletion of voting disk 5e25b31afc8e4fcabf3af0462e71ada9.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
[root@node1 crs]# crsctl query css votedisk
##  STATE    File Universal Id                File Name            Disk group
--  -----    -----------------                ---------            ---------
 1. ONLINE   0ba633a673684fc0bf95cfbf188c399b (/data1/votedisk.1)  []
 2. ONLINE   5784bae373ba4fcfbfb5c89b7136a7ea (/data2/votedisk.2)  []
Located 2 voting disk(s).
[root@node1 crs]# ls -l /data*/*otedi*
-rw-r----- 1 grid oinstall 21004800 Nov  9 22:52 /data1/votedisk.1
-rw-r----- 1 grid oinstall 21004800 Nov  9 22:52 /data2/votedisk.2
[root@node1 crs]#
So, I have now "moved" the voting disk from ASM (+DATA) to CFS (two separate files on two separate mount points). (Note : /crs/ocr.configuration is actually an ASM disk.)
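Moving back should just be the same replace syntax pointed at a diskgroup; a minimal sketch, assuming the +DATA diskgroup is still mounted and has enough free space:

# run as root or the Grid owner, on any node
crsctl replace votedisk +DATA
crsctl query css votedisk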
09-Nov-11 : OCR Backups are going to node1 only?
I have a two-node RAC. "Automatic" *and* manual OCR backups are being created on node1 only. I expected the backups to be spread across the different nodes.
[root@node2 ~]# uname -a
Linux node2.mydomain.com 2.6.18-238.el5 #1 SMP Tue Jan 4 15:24:05 EST 2011 i686 i686 i386 GNU/Linux
[root@node2 ~]# ocrconfig -showbackup

node1     2011/10/22 03:09:03     /u01/app/grid/11.2.0/cdata/rac/backup00.ocr
node1     2011/10/21 23:06:39     /u01/app/grid/11.2.0/cdata/rac/backup01.ocr
node1     2011/10/21 23:06:39     /u01/app/grid/11.2.0/cdata/rac/day.ocr
node1     2011/10/21 23:06:39     /u01/app/grid/11.2.0/cdata/rac/week.ocr
node1     2011/11/09 23:09:16     /u01/app/grid/11.2.0/cdata/rac/backup_20111109_230916.ocr
node1     2011/11/09 22:47:25     /u01/app/grid/11.2.0/cdata/rac/backup_20111109_224725.ocr
node1     2011/11/09 22:29:41     /u01/app/grid/11.2.0/cdata/rac/backup_20111109_222941.ocr
[root@node2 ~]# ocrconfig -manualbackup

node1     2011/11/09 23:09:40     /u01/app/grid/11.2.0/cdata/rac/backup_20111109_230940.ocr
node1     2011/11/09 23:09:16     /u01/app/grid/11.2.0/cdata/rac/backup_20111109_230916.ocr
node1     2011/11/09 22:47:25     /u01/app/grid/11.2.0/cdata/rac/backup_20111109_224725.ocr
node1     2011/11/09 22:29:41     /u01/app/grid/11.2.0/cdata/rac/backup_20111109_222941.ocr
[root@node2 ~]#

Commands fired from node2 show that all backups, even manual backups, are on node1.
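A related mitigation: ocrconfig -backuploc can point the automatic backups at shared storage, so they remain available even if the node holding them is lost. A sketch, assuming a hypothetical CFS mount /shared/ocrbackup visible to both nodes:

# /shared/ocrbackup is a hypothetical cluster-filesystem path -- substitute your own
ocrconfig -backuploc /shared/ocrbackup
ocrconfig -showbackup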
09-Nov-11 : Shutdown of services from a node
crsctl can be used to shut down all services. Here I shut down the local node :
[root@node2 log]# crsctl stop cluster
CRS-2673: Attempting to stop 'ora.crsd' on 'node2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'node2'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'node2'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'node2'
CRS-2673: Attempting to stop 'ora.DATA1.dg' on 'node2'
CRS-2673: Attempting to stop 'ora.DATA2.dg' on 'node2'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'node2'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'node2'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'node2'
CRS-2677: Stop of 'ora.scan1.vip' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'node1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.node2.vip' on 'node2'
CRS-2677: Stop of 'ora.node2.vip' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.node2.vip' on 'node1'
CRS-2677: Stop of 'ora.registry.acfs' on 'node2' succeeded
CRS-2676: Start of 'ora.node2.vip' on 'node1' succeeded
CRS-2676: Start of 'ora.scan1.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'node1'
CRS-2677: Stop of 'ora.FRA.dg' on 'node2' succeeded
CRS-2677: Stop of 'ora.DATA1.dg' on 'node2' succeeded
CRS-2677: Stop of 'ora.DATA2.dg' on 'node2' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'node1' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node2'
CRS-2677: Stop of 'ora.asm' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'node2'
CRS-2677: Stop of 'ora.ons' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'node2'
CRS-2677: Stop of 'ora.net1.network' on 'node2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node2' has completed
CRS-2677: Stop of 'ora.crsd' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'node2'
CRS-2673: Attempting to stop 'ora.evmd' on 'node2'
CRS-2673: Attempting to stop 'ora.asm' on 'node2'
CRS-2677: Stop of 'ora.evmd' on 'node2' succeeded
CRS-2677: Stop of 'ora.asm' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node2'
CRS-2677: Stop of 'ora.cssd' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'node2'
CRS-2677: Stop of 'ora.diskmon' on 'node2' succeeded
[root@node2 log]#
[root@node2 log]# ps -fugrid
UID        PID  PPID  C STIME TTY          TIME CMD
grid      3169     1  0 22:24 ?        00:00:32 /u01/app/grid/11.2.0/bin/oraagent.bin
grid      3183     1  0 22:24 ?        00:00:00 /u01/app/grid/11.2.0/bin/mdnsd.bin
grid      3194     1  0 22:24 ?        00:00:06 /u01/app/grid/11.2.0/bin/gpnpd.bin
grid      3207     1  2 22:24 ?        00:01:46 /u01/app/grid/11.2.0/bin/gipcd.bin
[root@node2 log]#

Only the basic services are running now.
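Note that crsctl stop cluster without arguments acts on the local node only. The related forms, per the standard 11gR2 crsctl syntax, are :

# stop the CRS-managed stack on every node in the cluster
crsctl stop cluster -all
# stop it on a named node, executed from any node
crsctl stop cluster -n node2
# stop the complete stack on the local node, including the lower-level
# daemons (ohasd, gpnpd, gipcd, mdnsd) that 'stop cluster' leaves running
crsctl stop crs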
26-Nov-11 : IP Addresses
On node 1 :
SQL> select instance_number, instance_name from v$instance;

INSTANCE_NUMBER INSTANCE_NAME
--------------- ----------------
              1 RACDB_1

SQL> show parameter local_listener

NAME            TYPE        VALUE
--------------- ----------- ------------------------------------------------------
local_listener  string      (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)
                            (HOST=192.168.56.149)(PORT=1521))))
SQL>

On node 2 :
SQL> select instance_number, instance_name from v$instance;

INSTANCE_NUMBER INSTANCE_NAME
--------------- ----------------
              2 RACDB_2

SQL> show parameter local_listener

NAME            TYPE        VALUE
--------------- ----------- ------------------------------------------------------
local_listener  string      (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)
                            (HOST=192.168.56.144)(PORT=1521))))
SQL>

The /var/log/messages file on node1 has :
Nov 26 22:12:22 node1 avahi-daemon[2572]: Registering new address record for 192.168.56.127 on eth0.
Nov 26 22:12:27 node1 avahi-daemon[2572]: Registering new address record for 192.168.56.148 on eth0.
Nov 26 22:12:28 node1 avahi-daemon[2572]: Registering new address record for 192.168.56.146 on eth0.
Nov 26 22:12:28 node1 avahi-daemon[2572]: Registering new address record for 192.168.56.147 on eth0.
Nov 26 22:12:28 node1 avahi-daemon[2572]: Registering new address record for 192.168.56.144 on eth0.
Nov 26 22:12:28 node1 avahi-daemon[2572]: Registering new address record for 192.168.56.149 on eth0.
Nov 26 22:18:46 node1 avahi-daemon[2572]: Withdrawing address record for 192.168.56.144 on eth0.
Nov 26 22:18:48 node1 avahi-daemon[2572]: Withdrawing address record for 192.168.56.148 on eth0.

while node2 has :
Nov 26 22:18:48 node2 avahi-daemon[2573]: Registering new address record for 192.168.56.144 on eth0.
Nov 26 22:18:50 node2 avahi-daemon[2573]: Registering new address record for 192.168.56.148 on eth0.

These are the assigned IP addresses. 192.168.56.144 and 192.168.56.148 switched from node1 to node2 when node2 came up.
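The same failover can be confirmed through srvctl instead of the syslog; these are standard 11gR2 commands, run as the Grid owner :

# on which node is each node VIP currently running?
srvctl status vip -n node1
srvctl status vip -n node2
# which addresses are assigned to the SCAN VIPs?
srvctl config scan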
8 comments:
Hi Hemant,
good blog. Regarding "OCR backups are only on node 1" ... this is because only the so-called master node (the node which first joined the cluster, i.e. the node with the lowest node id) is the OCR master, which means only this node writes the OCR and makes the backups.
Regards,
Ralf
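One way to check which node currently holds that master role is to grep the crsd log on each node. This is a heuristic rather than an official interface, and the exact message text varies across 11g versions; the log path below assumes the Grid home used elsewhere in this post:

# run on each node
grep -i "OCR MASTER" /u01/app/grid/11.2.0/log/$(hostname -s)/crsd/crsd.log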
Hi
With regard to Oracle Clusterware processes on the Linux platform:
Are they supposed to run only under the root account?
or
some under the database software owner account and some under the root account?
or
some under the Grid Infrastructure owner account and some under the root account?
Ralf,
Quoting from the book "Oracle Database 11g Release 2 High Availability" by Scott Jesse et al., page 54: "In previous releases, the backup may only be on one node, but in Oracle Database 11g Release 2 the backup is spread out on different nodes". I had hoped to see the backups "automatically" spread across both nodes, irrespective of the fact that I always start node1 first.
Anonymous,
The services start with init.ohasd being spawned via /etc/inittab, so they start off as root.
When installing GI, the post-install root script also resets ownership as required.
It is easiest to use root (with the GRID_ORACLE_HOME and PATH configured) to run crsctl commands.
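A quick way to see that split in practice is to list the daemons with their owning accounts. On a typical 11gR2 install, ohasd.bin, crsd.bin and octssd.bin run as root, while ocssd.bin, evmd.bin, gipcd.bin, gpnpd.bin and mdnsd.bin run as the Grid owner:

# the [b]racket trick keeps grep from matching its own command line
ps -eo user,pid,cmd | grep -E '[o]hasd|[c]rsd|[c]ssd|[e]vmd|[g]pnpd|[g]ipcd|[m]dnsd'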
Hi,
Suppose I want to create a 500MB volume called TESTVOL in the ACFSDG ACFS disk group, storing only one copy of the volume file extents in the disk group.
Should I create this by:
SQL> create volume TESTVOL diskgroup ACFSDG size 500M unprotected
and
ASMCMD> volcreate -G ACFSDG -s 500M --redundancy unprotected TESTVOL
or
SQL> alter diskgroup ACFSDG add volume TESTVOL size 500M unprotected
and
ASMCMD> volcreate -G ACFSDG -s 500M TESTVOL
or
SQL> alter diskgroup ACFSDG add volume TESTVOL size 500M;
ASMCMD> volcreate -G ACFSDG -s 500M --redundancy unprotected TESTVOL
Very nice and helpful.
Thanks & Regards
Muhammad Abdul Halim
http://halimdba.blogspot.com/
Rico,
I believe that your question has been answered by Levi Pereira on forums.oracle.com
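For readers who land here: the two documented 11gR2 interfaces are asmcmd's volcreate and the ADD VOLUME clause of ALTER DISKGROUP (there is no CREATE VOLUME statement), and only one of them is needed. A sketch, assuming a mounted diskgroup really named ACFSDG whose redundancy permits an unprotected volume:

-- SQL form, run in the ASM instance:
ALTER DISKGROUP ACFSDG ADD VOLUME TESTVOL SIZE 500M UNPROTECTED;

-- or, equivalently, from ASMCMD (one or the other, not both):
-- ASMCMD> volcreate -G ACFSDG -s 500M --redundancy unprotected TESTVOL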