In 11gR2
Listing the Status of Resources
[root@node1 ~]# su - grid
-sh-3.2$ crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.DATA1.dg
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.DATA2.dg
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.FRA.dg
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.LISTENER.lsnr
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.asm
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.gsd
               OFFLINE OFFLINE      node1
               OFFLINE OFFLINE      node2
ora.net1.network
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.ons
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.registry.acfs
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       node2
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       node1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       node1
ora.cvu
      1        ONLINE  ONLINE       node1
ora.gns
      1        ONLINE  ONLINE       node1
ora.gns.vip
      1        ONLINE  ONLINE       node1
ora.node1.vip
      1        ONLINE  ONLINE       node1
ora.node2.vip
      1        ONLINE  ONLINE       node2
ora.oc4j
      1        ONLINE  ONLINE       node1
ora.racdb.db
      1        ONLINE  ONLINE       node1                    Open
      2        ONLINE  ONLINE       node2                    Open
ora.racdb.new_svc.svc
      1        ONLINE  ONLINE       node1
      2        ONLINE  ONLINE       node2
ora.scan1.vip
      1        ONLINE  ONLINE       node2
ora.scan2.vip
      1        ONLINE  ONLINE       node1
ora.scan3.vip
      1        ONLINE  ONLINE       node1
-sh-3.2$
So we see that:
a) The Cluster consists of two nodes, node1 and node2
b) There are 4 ASM DiskGroups: DATA, DATA1, DATA2 and FRA
c) GSD is offline as expected -- it is required only for 9i Databases
d) There is a database racdb and a service new_svc (see my previous post); a quicker way to query these two directly is sketched after this list
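Rather than scanning the full crsctl listing, the database and its service can also be queried individually with srvctl. A minimal sketch, assuming the racdb database and new_svc service from the listing above (standard 11gR2 srvctl commands; not run in this session):

# Report each instance of racdb and whether it is running
srvctl status database -d racdb
# Report the nodes/instances on which the new_svc service is running
srvctl status service -d racdb -s new_svc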
Listing the Status of SCAN Listeners
-sh-3.2$ id
uid=500(grid) gid=1001(oinstall) groups=1001(oinstall),1011(asmdba)
-sh-3.2$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node node2
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node node1
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node node1
-sh-3.2$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node node2
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node node1
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node node1
-sh-3.2$
So we see that:
a) There are 3 SCAN Listeners
b) Since this is a 2-node cluster, 2 of the SCAN Listeners are on one node, node1 -- a relocation sketch follows this list
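The SCAN configuration itself (the SCAN name and its VIPs) can be listed, and a SCAN listener can be relocated if you want to rebalance the two that landed on node1. A minimal sketch using standard 11gR2 srvctl commands; the ordinal number 3 and the target node2 are illustrative choices, not something done in this session:

# Show the SCAN name, its VIPs and the SCAN listeners
srvctl config scan
srvctl config scan_listener
# Relocate SCAN listener number 3 to node2 (illustrative)
srvctl relocate scan_listener -i 3 -n node2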
Listing the Status of the OCR
-sh-3.2$ id
uid=500(grid) gid=1001(oinstall) groups=1001(oinstall),1011(asmdba)
-sh-3.2$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3668
         Available space (kbytes) :     258452
         ID                       :  605940771
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
         Device/File Name         : /fra/ocrfile
                                    Device/File integrity check succeeded
         Device/File Name         :       +FRA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user

-sh-3.2$ su root
Password:
[root@node1 grid]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3668
         Available space (kbytes) :     258452
         ID                       :  605940771
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
         Device/File Name         : /fra/ocrfile
                                    Device/File integrity check succeeded
         Device/File Name         :       +FRA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

[root@node1 grid]#
So we see that:
a) The OCR is in 3 locations: +DATA, +FRA and the NFS filesystem /fra/ocrfile (location management with ocrconfig is sketched after this list)
b) A Logical corruption check of the OCR can only be done by root, not by grid
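OCR locations themselves are managed as root with ocrconfig. A minimal sketch of the standard 11gR2 options; the +DATA2 diskgroup is hypothetical, and none of these commands were run in this session:

# As root: add a new OCR location or drop an existing one
ocrconfig -add +DATA2            # +DATA2 is a hypothetical diskgroup
ocrconfig -delete /fra/ocrfile
# List the automatic OCR backups that the clusterware takes
ocrconfig -showbackup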
Listing the Status of the Vote Disk
-sh-3.2$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   0e94545ce44f4fb1bf6906dc6889aaff (/fra/votedisk.3) []
 2. ONLINE   0d13305520f84f3fbf6c2008a6f79829 (/data1/votedisk.1) []
Located 2 voting disk(s).
-sh-3.2$
So we see that:
a) There are 2 votedisk copies (yes, two -- not the recommended three! An odd number is recommended so that a surviving node can always hold a majority of the voting disks)
b) Both are on filesystem, not in an ASM DiskGroup -- the Disk group column is empty
How do I happen to have 2 votedisk copies? I actually had 3 but removed one (the delete command is sketched at the end of this post). DON'T TRY THIS ON YOUR PRODUCTION CLUSTER. I am adding the third one back now:
-sh-3.2$ crsctl add css votedisk /data2/votedisk.2
Now formatting voting disk: /data2/votedisk.2.
CRS-4603: Successful addition of voting disk /data2/votedisk.2.
-sh-3.2$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   0e94545ce44f4fb1bf6906dc6889aaff (/fra/votedisk.3) []
 2. ONLINE   0d13305520f84f3fbf6c2008a6f79829 (/data1/votedisk.1) []
 3. ONLINE   41c24037b51c4f97bf4cb7002649aee4 (/data2/votedisk.2) []
Located 3 voting disk(s).
-sh-3.2$
There, I am now back to 3 votedisk copies.
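For completeness, this is how a votedisk copy is removed in the first place -- again, not something to try on a production cluster. A minimal sketch of the standard 11gR2 crsctl syntax; the <FUID> placeholder is hypothetical and would be taken from the query output:

# Find the File Universal Id (FUID) of the copy to drop
crsctl query css votedisk
# Remove that copy by FUID (or by path, for a filesystem copy)
crsctl delete css votedisk <FUID>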