27 July, 2014

GI Commands : 1 -- Monitoring Status of Resources

In 11gR2

Listing the Status of Resources

[root@node1 ~]# su - grid
-sh-3.2$ crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       node1                                        
               ONLINE  ONLINE       node2                                        
ora.DATA1.dg
               ONLINE  ONLINE       node1                                        
               ONLINE  ONLINE       node2                                        
ora.DATA2.dg
               ONLINE  ONLINE       node1                                        
               ONLINE  ONLINE       node2                                        
ora.FRA.dg
               ONLINE  ONLINE       node1                                        
               ONLINE  ONLINE       node2                                        
ora.LISTENER.lsnr
               ONLINE  ONLINE       node1                                        
               ONLINE  ONLINE       node2                                        
ora.asm
               ONLINE  ONLINE       node1                                        
               ONLINE  ONLINE       node2                                        
ora.gsd
               OFFLINE OFFLINE      node1                                        
               OFFLINE OFFLINE      node2                                        
ora.net1.network
               ONLINE  ONLINE       node1                                        
               ONLINE  ONLINE       node2                                        
ora.ons
               ONLINE  ONLINE       node1                                        
               ONLINE  ONLINE       node2                                        
ora.registry.acfs
               ONLINE  ONLINE       node1                                        
               ONLINE  ONLINE       node2                                        
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       node2                                        
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       node1                                        
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       node1                                        
ora.cvu
      1        ONLINE  ONLINE       node1                                        
ora.gns
      1        ONLINE  ONLINE       node1                                        
ora.gns.vip
      1        ONLINE  ONLINE       node1                                        
ora.node1.vip
      1        ONLINE  ONLINE       node1                                        
ora.node2.vip
      1        ONLINE  ONLINE       node2                                        
ora.oc4j
      1        ONLINE  ONLINE       node1                                        
ora.racdb.db
      1        ONLINE  ONLINE       node1                    Open                
      2        ONLINE  ONLINE       node2                    Open                
ora.racdb.new_svc.svc
      1        ONLINE  ONLINE       node1                                        
      2        ONLINE  ONLINE       node2                                        
ora.scan1.vip
      1        ONLINE  ONLINE       node2                                        
ora.scan2.vip
      1        ONLINE  ONLINE       node1                                        
ora.scan3.vip
      1        ONLINE  ONLINE       node1                                        
-sh-3.2$ 

So we see that :
a) The Cluster consists of two nodes, node1 and node2
b) There are 4 ASM DiskGroups : DATA, DATA1, DATA2 and FRA
c) GSD is OFFLINE as expected -- it is required only for 9i Databases
d) There is a database racdb with a service new_svc (see my previous post)
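
To drill down to a single resource rather than the full stack, crsctl also accepts a resource name or a filter. A minimal sketch, reusing the resource names above (output omitted here) :

-sh-3.2$ crsctl status resource ora.racdb.db
-sh-3.2$ crsctl status resource -t -w "TYPE = ora.database.type"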


Listing the Status of SCAN Listeners

-sh-3.2$ id
uid=500(grid) gid=1001(oinstall) groups=1001(oinstall),1011(asmdba)
-sh-3.2$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node node2
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node node1
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node node1
-sh-3.2$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node node2
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node node1
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node node1
-sh-3.2$ 

So we see that :
a) There are 3 SCAN Listeners
b) Since this is a 2-node cluster, two of the three SCAN Listeners must run on the same node -- here, node1
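
The SCAN configuration itself (the SCAN name and its VIPs) can be confirmed with srvctl config scan, and a SCAN Listener can be relocated by its ordinal number if the placement bothers you. A sketch, assuming we want the third one moved to node2 (output omitted) :

-sh-3.2$ srvctl config scan
-sh-3.2$ srvctl relocate scan_listener -i 3 -n node2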


Listing the Status of the OCR

-sh-3.2$ id
uid=500(grid) gid=1001(oinstall) groups=1001(oinstall),1011(asmdba)
-sh-3.2$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3668
         Available space (kbytes) :     258452
         ID                       :  605940771
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
         Device/File Name         : /fra/ocrfile
                                    Device/File integrity check succeeded
         Device/File Name         :       +FRA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user

-sh-3.2$ su root
Password: 
[root@node1 grid]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3668
         Available space (kbytes) :     258452
         ID                       :  605940771
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
         Device/File Name         : /fra/ocrfile
                                    Device/File integrity check succeeded
         Device/File Name         :       +FRA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

[root@node1 grid]# 

So we see that :
a) The OCR is in 3 locations : the +DATA and +FRA DiskGroups and the NFS filesystem file /fra/ocrfile
b) The Logical corruption check of the OCR can be done only by root, not by the grid user
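
OCR locations can also be added or dropped online with ocrconfig, run as root. A hedged sketch -- the +DATA2 DiskGroup does exist in this cluster, but I have not actually run these commands here (output omitted) :

[root@node1 grid]# ocrconfig -add +DATA2
[root@node1 grid]# ocrconfig -delete /fra/ocrfile
[root@node1 grid]# ocrconfig -showbackup

The last command lists the automatic OCR backups that the cluster maintains.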


Listing the Status of the Vote Disk

-sh-3.2$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   0e94545ce44f4fb1bf6906dc6889aaff (/fra/votedisk.3) []
 2. ONLINE   0d13305520f84f3fbf6c2008a6f79829 (/data1/votedisk.1) []
Located 2 voting disk(s).
-sh-3.2$ 

So we see that :
a) There are 2 votedisk copies (yes, two -- not the recommended three !)
b) Both are on filesystem, not in ASM
How do I happen to have only 2 votedisk copies ? I actually had 3 but removed one -- DON'T TRY THIS ON YOUR PRODUCTION CLUSTER. I am now adding the third one back :
-sh-3.2$ crsctl add css votedisk /data2/votedisk.2
Now formatting voting disk: /data2/votedisk.2.
CRS-4603: Successful addition of voting disk /data2/votedisk.2.
-sh-3.2$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   0e94545ce44f4fb1bf6906dc6889aaff (/fra/votedisk.3) []
 2. ONLINE   0d13305520f84f3fbf6c2008a6f79829 (/data1/votedisk.1) []
 3. ONLINE   41c24037b51c4f97bf4cb7002649aee4 (/data2/votedisk.2) []
Located 3 voting disk(s).
-sh-3.2$ 

There, I am now back to 3 votedisk copies.
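
For the record, a votedisk copy is removed by its File Universal Id, not by its file name. A sketch, using the FUID of the copy just added purely as an illustration (again : not something to try on a production cluster) :

-sh-3.2$ crsctl delete css votedisk 41c24037b51c4f97bf4cb7002649aee4

Had the votedisks been in ASM, the entire set could instead be replaced in one step with a command of the form crsctl replace votedisk +DATA.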

2 comments:

gopi krishna said...

Hi Hemanth, nice to see ur blog as always, here is a question, when my IT support guy accidentally pulled the cables of SAN storage of live database(non RAC), i had to restart the server to realize that many of datafiles are in recovery mode. I ran 'alter database recover from RMAN and started the server. all is OK now, is it OK to do so, or is there any better way. I should have run recover datafile instead of recover database (or) should have tried command just to bring the datafile online without recovery. pls suggest me.

Hemant K Chitale said...

Most likely, an ALTER DATABASE DATAFILE ... ONLINE command would itself have required a RECOVER DATAFILE ... command first.
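
For illustration, that sequence would look something like this -- the datafile number 5 is purely hypothetical :

RMAN> RECOVER DATAFILE 5;

SQL> ALTER DATABASE DATAFILE 5 ONLINE;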