17 August, 2014

ASM Commands : 1 -- Adding and Using a new DiskGroup for RAC

In 11gR2 Grid Infrastructure and RAC

On node1, I discover a new disk and add it to ASM.  NFS "devices" asmdisk.1 to asmdisk.6 are already present as ASM Disks; asmdisk.7 has just been created on NFS mount point /data1.  (Disks asmdisk.3 to asmdisk.6 are on /data2.)
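Once connected to the ASM instance (as in the session below), a quick way to verify that ASM can discover the new device is to check its header status in V$ASM_DISK; a disk that has been discovered but not yet claimed by any DiskGroup should show HEADER_STATUS of CANDIDATE (or PROVISIONED).  A sketch, using the path from this setup:

SQL> select path, header_status, os_mb
  2  from v$asm_disk
  3  where path = '/data1/asmdisk.7';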

I start on node1 in my Cluster

[root@node1 ~]# su - grid
-sh-3.2$ sqlplus 

SQL*Plus: Release 11.2.0.2.0 Production on Sat Aug 16 23:42:02 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> show parameter asm_diskstring

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring                       string      /crs/*, /data1/*, /data2/*, /f
                                                 ra/*
SQL> !ls -l /data1/asm*
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 16 23:42 /data1/asmdisk.1
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 16 23:42 /data1/asmdisk.2
-rw-r--r-- 1 grid oinstall 2048000000 Aug 16 23:33 /data1/asmdisk.7

SQL> create diskgroup DATA3 disk '/data1/asmdisk.7';
create diskgroup DATA3 disk '/data1/asmdisk.7'
*
ERROR at line 1:
ORA-15018: diskgroup cannot be created
ORA-15072: command requires at least 2 regular failure groups, discovered only
1


CREATE DISKGROUP defaults to NORMAL redundancy, which requires at least two failure groups (here, that means two disks).  With a single disk, I must explicitly specify EXTERNAL REDUNDANCY.

SQL> create diskgroup DATA3 external redundancy disk '/data1/asmdisk.7';

Diskgroup created.

SQL> 
SQL> select group_number, name, total_mb
  2  from v$asm_diskgroup
  3  where name = 'DATA3'
  4  /

GROUP_NUMBER NAME                             TOTAL_MB
------------ ------------------------------ ----------
           5 DATA3                                1953

SQL> 

I now have a new DiskGroup using External Redundancy with a single disk.  Is it visible on node2?

[root@node2 ~]# su - grid
-sh-3.2$ sqlplus 

SQL*Plus: Release 11.2.0.2.0 Production on Sat Aug 16 23:47:45 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select group_number, name, total_mb 
  2  from v$asm_diskgroup
  3  where name = 'DATA3'
  4  /

GROUP_NUMBER NAME                             TOTAL_MB
------------ ------------------------------ ----------
           0 DATA3                                   0

SQL> 

Why is the size not visible yet?  Because, although the CREATE on node1 had also MOUNTed the DiskGroup there, it hasn't been mounted on node2 yet.

SQL> alter diskgroup DATA3 mount;

Diskgroup altered.

SQL> select group_number, name, total_mb
  2  from v$asm_diskgroup
  3  where name = 'DATA3'
  4  /

GROUP_NUMBER NAME                             TOTAL_MB
------------ ------------------------------ ----------
           5 DATA3                                1953

SQL> 

Can I confirm the underlying disk?

SQL> select group_number, disk_number, header_status, state, total_mb
  2  from v$asm_disk
  3  where group_number = 5;

GROUP_NUMBER DISK_NUMBER HEADER_STATU STATE      TOTAL_MB
------------ ----------- ------------ -------- ----------
           5           0 MEMBER       NORMAL         1953

SQL> 
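Incidentally, creating a DiskGroup in 11gR2 also auto-registers a CRS resource for it (ora.DATA3.dg here), so its state on each node can be checked from the command line as well.  A sketch:

-sh-3.2$ srvctl status diskgroup -g DATA3 -n node1,node2
-sh-3.2$ crsctl status resource ora.DATA3.dg -t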


What happens when I create a tablespace/datafile in this DiskGroup from the instance on node1?

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options
-sh-3.2$ su - oracle
Password: 
-sh-3.2$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 17 00:08:31 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> create tablespace NEW_TBS datafile '+DATA3';
create tablespace NEW_TBS datafile '+DATA3'
*
ERROR at line 1:
ORA-01119: error in creating database file '+DATA3'
ORA-15045: ASM file name '+DATA3' is not in reference form
ORA-17502: ksfdcre:5 Failed to create file +DATA3
ORA-15081: failed to submit an I/O operation to a disk


SQL> 

Why do I get this error?  I could create a DiskGroup on the ASM Disk, so why can't I add a datafile?  Let me check the permissions.

SQL> !sh
sh-3.2$ cd /data1
sh-3.2$ ls -l asmd*
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:11 asmdisk.1
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:11 asmdisk.2
-rw-r--r-- 1 grid oinstall 2048000000 Aug 17 00:11 asmdisk.7
sh-3.2$ su grid
Password: 
sh-3.2$ pwd
/data1
sh-3.2$ ls -l asmd*
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:12 asmdisk.1
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:12 asmdisk.2
-rw-r--r-- 1 grid oinstall 2048000000 Aug 17 00:12 asmdisk.7
sh-3.2$ chmod 775 asmdisk.7
sh-3.2$ ls -l asmdisk.7
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:12 asmdisk.7
sh-3.2$ 

The oinstall group, which the "oracle" account belongs to, did not have write permission on asmdisk.7; chmod 775 grants group write.
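As an aside, this is how such an NFS-backed ASM "disk" can be provisioned with the right ownership and permissions up front (a sketch run as root; asmdisk.8 is a hypothetical new disk of roughly the same size as the others):

[root@node1 ~]# dd if=/dev/zero of=/data1/asmdisk.8 bs=1M count=1953
[root@node1 ~]# chown grid:oinstall /data1/asmdisk.8
[root@node1 ~]# chmod 775 /data1/asmdisk.8

Let me go back to the Oracle session now that the permissions have been granted.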

sh-3.2$ exit
exit
sh-3.2$ exit
exit

SQL> l
  1* create tablespace NEW_TBS datafile '+DATA3'
SQL> /

Tablespace created.

SQL> 

The CREATE TABLESPACE has succeeded.  I can verify the datafile and the ASM file from node2 now.

-sh-3.2$ id
uid=500(grid) gid=1001(oinstall) groups=1001(oinstall),1011(asmdba)
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 17 00:17:19 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select group_number, file_number, bytes/1048576, type, redundancy
  2  from v$asm_file
  3  where group_number=5;

GROUP_NUMBER FILE_NUMBER BYTES/1048576
------------ ----------- -------------
TYPE                                                             REDUND
---------------------------------------------------------------- ------
           5         256    100.007813
DATAFILE                                                         UNPROT


SQL> 
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options
-sh-3.2$ 
-sh-3.2$ su - oracle
Password: 
-sh-3.2$ sqlplus 
 
SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 17 00:19:34 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select file_name, bytes/1048576 from dba_data_files
  2  where tablespace_name = 'NEW_TBS';

FILE_NAME
--------------------------------------------------------------------------------
BYTES/1048576
-------------
+DATA3/racdb/datafile/new_tbs.256.855792859
          100


SQL> 

Now I have the new datafile visible both in ASM and in the database, on the new DiskGroup.
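The same datafile can also be seen from the grid account with asmcmd, following the OMF path shown above (a sketch):

-sh-3.2$ asmcmd ls -l +DATA3/racdb/datafile/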
.
.
.

03 August, 2014

GI Commands : 2 -- Managing the Local and Cluster Registry

In 11gR2

There are essentially two registries: the Oracle Local Registry (OLR) and the Oracle Cluster Registry (OCR).

Let's check the Local Registry:

[root@node1 ~]# cat /etc/oracle/olr.loc
olrconfig_loc=/u01/app/grid/11.2.0/cdata/node1.olr
crs_home=/u01/app/grid/11.2.0
[root@node1 ~]# 
[root@node1 ~]# file /u01/app/grid/11.2.0/cdata/node1.olr
/u01/app/grid/11.2.0/cdata/node1.olr: data
[root@node1 ~]# 

So, like the Cluster Registry, the Local Registry is a binary file.  It is on a local filesystem on the node, not on ASM/NFS/CFS.  Each node in the cluster has its own Local Registry.

The Local Registry can be checked for consistency (corruption) using ocrcheck with the "-local" flag.  Note: as demonstrated in my previous post, the check must be run as root.

[root@node1 ~]# su - grid
-sh-3.2$ su 
Password: 
[root@node1 grid]# ocrcheck -local
Status of Oracle Local Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2696
         Available space (kbytes) :     259424
         ID                       : 1388021147
         Device/File Name         : /u01/app/grid/11.2.0/cdata/node1.olr
                                    Device/File integrity check succeeded

         Local registry integrity check succeeded

         Logical corruption check succeeded

[root@node1 grid]# 
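Unlike the OCR (which, as we shall see below, Oracle backs up automatically), the OLR gets no automatic backups.  A manual backup can be taken and listed with the same "-local" flag (a sketch):

[root@node1 grid]# ocrconfig -local -manualbackup
[root@node1 grid]# ocrconfig -local -showbackup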

Now let's look at the Cluster Registry:

[root@node1 grid]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3668
         Available space (kbytes) :     258452
         ID                       :  605940771
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
         Device/File Name         : /fra/ocrfile
                                    Device/File integrity check succeeded
         Device/File Name         :       +FRA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

[root@node1 grid]# 

The Cluster Registry is distributed across two ASM DiskGroups (+DATA and +FRA) and one filesystem (/fra/ocrfile). Yes, this is a special case that I've created to distribute the OCR in this manner.

I cannot add an OCR location on an ASM diskgroup that has a lower compatible.asm attribute.

[root@node1 grid]# ocrconfig -add +DATA2
PROT-30: The Oracle Cluster Registry location to be added is not accessible
PROC-8: Cannot perform cluster registry operation because one of the parameters is invalid.
ORA-15056: additional error message
ORA-17502: ksfdcre:4 Failed to create file +DATA2.255.1
ORA-15221: ASM operation requires compatible.asm of 11.1.0.0.0 or higher
ORA-06512: at line 4

[root@node1 grid]# 
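To actually use +DATA2 as an OCR location, its compatible.asm attribute would first have to be raised, for example (a sketch, not run here; note that compatible.asm can be raised but never lowered):

SQL> alter diskgroup DATA2 set attribute 'compatible.asm' = '11.2';

The current setting can be checked in V$ASM_ATTRIBUTE, or in the COMPATIBILITY column of V$ASM_DISKGROUP.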

I now remove the filesystem copy of the OCR.

[root@node1 grid]# ocrconfig -delete /fra/ocrfile
[root@node1 grid]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3668
         Available space (kbytes) :     258452
         ID                       :  605940771
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
         Device/File Name         :       +FRA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

[root@node1 grid]# 

Note, however, that ocrconfig -delete doesn't actually remove the filesystem file that I had created.

[root@node1 grid]# ls -l /fra/ocrfile
-rw-r--r-- 1 root root 272756736 Aug  3 21:27 /fra/ocrfile
[root@node1 grid]# rm /fra/ocrfile
rm: remove regular file `/fra/ocrfile'? yes
[root@node1 grid]# 

I will now add a filesystem location for the OCR.

[root@node1 grid]# touch /fra/new_ocrfile
[root@node1 grid]# ocrconfig -add /fra/new_ocrfile
[root@node1 grid]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3668
         Available space (kbytes) :     258452
         ID                       :  605940771
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
         Device/File Name         :       +FRA
                                    Device/File integrity check succeeded
         Device/File Name         : /fra/new_ocrfile
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

[root@node1 grid]# ls -l /fra/new_ocrfile
-rw-r--r-- 1 root root 272756736 Aug  3 21:30 /fra/new_ocrfile
[root@node1 grid]# 

What about OCR Backups?  (Note: Oracle takes frequent automatic backups of the OCR, but *not* of the OLR.)
N.B.: This listing doesn't show all the OCR backups you'd expect, because my cluster doesn't run continuously every day.

[root@node1 grid]# ocrconfig -showbackup

node1     2014/07/06 21:53:25     /u01/app/grid/11.2.0/cdata/rac/backup00.ocr

node1     2011/10/22 03:09:03     /u01/app/grid/11.2.0/cdata/rac/backup01.ocr

node1     2011/10/21 23:06:39     /u01/app/grid/11.2.0/cdata/rac/backup02.ocr

node1     2014/07/06 21:53:25     /u01/app/grid/11.2.0/cdata/rac/day.ocr

node1     2014/07/06 21:53:25     /u01/app/grid/11.2.0/cdata/rac/week.ocr

node1     2014/07/06 22:39:55     /u01/app/grid/11.2.0/cdata/rac/backup_20140706_223955.ocr

node1     2014/07/05 17:30:25     /u01/app/grid/11.2.0/cdata/rac/backup_20140705_173025.ocr

node1     2014/06/16 22:15:07     /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221507.ocr

node1     2014/06/16 22:14:05     /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221405.ocr

node1     2011/11/09 23:20:25     /u01/app/grid/11.2.0/cdata/rac/backup_20111109_232025.ocr
[root@node1 grid]# 

Let me run an additional backup from node2.

[root@node2 grid]# ocrconfig -manualbackup

node1     2014/08/03 21:37:17     /u01/app/grid/11.2.0/cdata/rac/backup_20140803_213717.ocr

node1     2014/07/06 22:39:55     /u01/app/grid/11.2.0/cdata/rac/backup_20140706_223955.ocr

node1     2014/07/05 17:30:25     /u01/app/grid/11.2.0/cdata/rac/backup_20140705_173025.ocr

node1     2014/06/16 22:15:07     /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221507.ocr

node1     2014/06/16 22:14:05     /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221405.ocr
[root@node2 grid]# 

We can see that the backup taken today (03-Aug) is listed at the top.  Let's check a listing from node1.

[root@node1 grid]# ocrconfig -showbackup

node1     2014/07/06 21:53:25     /u01/app/grid/11.2.0/cdata/rac/backup00.ocr

node1     2011/10/22 03:09:03     /u01/app/grid/11.2.0/cdata/rac/backup01.ocr

node1     2011/10/21 23:06:39     /u01/app/grid/11.2.0/cdata/rac/backup02.ocr

node1     2014/07/06 21:53:25     /u01/app/grid/11.2.0/cdata/rac/day.ocr

node1     2014/07/06 21:53:25     /u01/app/grid/11.2.0/cdata/rac/week.ocr

node1     2014/08/03 21:37:17     /u01/app/grid/11.2.0/cdata/rac/backup_20140803_213717.ocr

node1     2014/07/06 22:39:55     /u01/app/grid/11.2.0/cdata/rac/backup_20140706_223955.ocr

node1     2014/07/05 17:30:25     /u01/app/grid/11.2.0/cdata/rac/backup_20140705_173025.ocr

node1     2014/06/16 22:15:07     /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221507.ocr

node1     2014/06/16 22:14:05     /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221405.ocr
[root@node1 grid]# 

Yes, the backup of 03-Aug is also listed.  But wait!  Why is it on node1?  Let's go back to node2 and do a filesystem listing.

[root@node2 grid]# ls -l /u01/app/grid/11.2.0/cdata/rac/backup*
ls: /u01/app/grid/11.2.0/cdata/rac/backup*: No such file or directory
[root@node2 grid]#

As we suspected, the backup doesn't actually exist on node2.  A listing of the backup directory on node1 shows where everything lives:

[root@node1 grid]# ls -lt /u01/app/grid/11.2.0/cdata/rac/
total 114316
-rw------- 1 root root 8024064 Aug  3 21:37 backup_20140803_213717.ocr
-rw------- 1 root root 8003584 Jul  6 22:39 backup_20140706_223955.ocr
-rw------- 1 root root 8003584 Jul  6 21:53 day.ocr
-rw------- 1 root root 8003584 Jul  6 21:53 week.ocr
-rw------- 1 root root 8003584 Jul  6 21:53 backup00.ocr
-rw------- 1 root root 8003584 Jul  5 17:30 backup_20140705_173025.ocr
-rw------- 1 root root 7708672 Jun 16 22:15 backup_20140616_221507.ocr
-rw------- 1 root root 7708672 Jun 16 22:14 backup_20140616_221405.ocr
-rw------- 1 root root 7688192 Nov  9  2011 backup_20111109_232025.ocr
-rw------- 1 root root 7667712 Nov  9  2011 backup_20111109_230940.ocr
-rw------- 1 root root 7647232 Nov  9  2011 backup_20111109_230916.ocr
-rw------- 1 root root 7626752 Nov  9  2011 backup_20111109_224725.ocr
-rw------- 1 root root 7598080 Nov  9  2011 backup_20111109_222941.ocr
-rw------- 1 root root 7593984 Oct 22  2011 backup01.ocr
-rw------- 1 root root 7593984 Oct 21  2011 backup02.ocr
[root@node1 grid]# 

Yes, *ALL* the OCR backups to date have been created on node1 -- even those initiated from node2.  node1 remains the "master" node for OCR backups as long as it is up and running.  I shut down Grid Infrastructure on node1 to test this.
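A side note before the shutdown: one way to confirm which node currently acts as the OCR master (an assumption on my part, worth verifying against your own logs) is to search the crsd log, which records master changes; the path follows the Grid home seen in olr.loc earlier:

[root@node1 grid]# grep -i "OCR MASTER" /u01/app/grid/11.2.0/log/node1/crsd/crsd.log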

[root@node1 grid]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.crsd' on 'node1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.racdb.new_svc.svc' on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.cvu' on 'node1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'node1'
CRS-2673: Attempting to stop 'ora.gns' on 'node1'
CRS-2677: Stop of 'ora.cvu' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.cvu' on 'node2'
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'node1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'node1' succeeded
CRS-2677: Stop of 'ora.scan3.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.scan3.vip' on 'node2'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'node1'
CRS-2677: Stop of 'ora.scan2.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.scan2.vip' on 'node2'
CRS-2676: Start of 'ora.cvu' on 'node2' succeeded
CRS-2677: Stop of 'ora.racdb.new_svc.svc' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.node1.vip' on 'node1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'node1'
CRS-2673: Attempting to stop 'ora.racdb.db' on 'node1'
CRS-2677: Stop of 'ora.node1.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.node1.vip' on 'node2'
CRS-2676: Start of 'ora.scan3.vip' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'node2'
CRS-2676: Start of 'ora.scan2.vip' on 'node2' succeeded
CRS-2677: Stop of 'ora.registry.acfs' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'node2'
CRS-2677: Stop of 'ora.gns' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gns.vip' on 'node1'
CRS-2677: Stop of 'ora.gns.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.gns.vip' on 'node2'
CRS-2676: Start of 'ora.node1.vip' on 'node2' succeeded
CRS-2676: Start of 'ora.gns.vip' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.gns' on 'node2'
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'node2' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'node2' succeeded
CRS-2677: Stop of 'ora.racdb.db' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.DATA1.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.DATA2.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'node1'
CRS-2676: Start of 'ora.gns' on 'node2' succeeded
CRS-2677: Stop of 'ora.DATA1.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.DATA2.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'node2'
CRS-2676: Start of 'ora.oc4j' on 'node2' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.FRA.dg' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node1'
CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'node1'
CRS-2677: Stop of 'ora.ons' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'node1'
CRS-2677: Stop of 'ora.net1.network' on 'node1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node1' has completed
CRS-2677: Stop of 'ora.crsd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'node1'
CRS-2673: Attempting to stop 'ora.crf' on 'node1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'node1'
CRS-2673: Attempting to stop 'ora.evmd' on 'node1'
CRS-2673: Attempting to stop 'ora.asm' on 'node1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'node1'
CRS-2677: Stop of 'ora.crf' on 'node1' succeeded
CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node1'
CRS-2677: Stop of 'ora.evmd' on 'node1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'node1' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'node1' succeeded
CRS-2677: Stop of 'ora.cssd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'node1'
CRS-2673: Attempting to stop 'ora.diskmon' on 'node1'
CRS-2677: Stop of 'ora.gipcd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'node1'
CRS-2677: Stop of 'ora.gpnpd' on 'node1' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'node1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@node1 grid]# 

So, all the Grid Infrastructure services are down on node1.  I will now run an OCR Backup from node2 and verify its location.

[root@node2 grid]# ocrconfig -manualbackup

node2     2014/08/03 21:49:02     /u01/app/grid/11.2.0/cdata/rac/backup_20140803_214902.ocr

node1     2014/08/03 21:37:17     /u01/app/grid/11.2.0/cdata/rac/backup_20140803_213717.ocr

node1     2014/07/06 22:39:55     /u01/app/grid/11.2.0/cdata/rac/backup_20140706_223955.ocr

node1     2014/07/05 17:30:25     /u01/app/grid/11.2.0/cdata/rac/backup_20140705_173025.ocr

node1     2014/06/16 22:15:07     /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221507.ocr
[root@node2 grid]# ls -l /u01/app/grid/11.2.0/cdata/rac/backup*
-rw------- 1 root root 8024064 Aug  3 21:49 /u01/app/grid/11.2.0/cdata/rac/backup_20140803_214902.ocr
[root@node2 grid]# 

Yes, this time the backup got created on node2.

Question: Would there have been a way to create a backup on node2 without shutting down node1?
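One avenue worth exploring (an assumption on my part, not demonstrated here) is a logical export, which ocrconfig writes to the path you give, on the node where you run it:

[root@node2 grid]# ocrconfig -export /tmp/ocr_export.dmp

Note that an export is a logical copy meant for ocrconfig -import, not a physical backup like the ones above.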

.
.
.

27 July, 2014

GI Commands : 1 -- Monitoring Status of Resources

In 11gR2

Listing the Status of Resources

[root@node1 ~]# su - grid
-sh-3.2$ crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       node1                                        
               ONLINE  ONLINE       node2                                        
ora.DATA1.dg
               ONLINE  ONLINE       node1                                        
               ONLINE  ONLINE       node2                                        
ora.DATA2.dg
               ONLINE  ONLINE       node1                                        
               ONLINE  ONLINE       node2                                        
ora.FRA.dg
               ONLINE  ONLINE       node1                                        
               ONLINE  ONLINE       node2                                        
ora.LISTENER.lsnr
               ONLINE  ONLINE       node1                                        
               ONLINE  ONLINE       node2                                        
ora.asm
               ONLINE  ONLINE       node1                                        
               ONLINE  ONLINE       node2                                        
ora.gsd
               OFFLINE OFFLINE      node1                                        
               OFFLINE OFFLINE      node2                                        
ora.net1.network
               ONLINE  ONLINE       node1                                        
               ONLINE  ONLINE       node2                                        
ora.ons
               ONLINE  ONLINE       node1                                        
               ONLINE  ONLINE       node2                                        
ora.registry.acfs
               ONLINE  ONLINE       node1                                        
               ONLINE  ONLINE       node2                                        
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       node2                                        
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       node1                                        
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       node1                                        
ora.cvu
      1        ONLINE  ONLINE       node1                                        
ora.gns
      1        ONLINE  ONLINE       node1                                        
ora.gns.vip
      1        ONLINE  ONLINE       node1                                        
ora.node1.vip
      1        ONLINE  ONLINE       node1                                        
ora.node2.vip
      1        ONLINE  ONLINE       node2                                        
ora.oc4j
      1        ONLINE  ONLINE       node1                                        
ora.racdb.db
      1        ONLINE  ONLINE       node1                    Open                
      2        ONLINE  ONLINE       node2                    Open                
ora.racdb.new_svc.svc
      1        ONLINE  ONLINE       node1                                        
      2        ONLINE  ONLINE       node2                                        
ora.scan1.vip
      1        ONLINE  ONLINE       node2                                        
ora.scan2.vip
      1        ONLINE  ONLINE       node1                                        
ora.scan3.vip
      1        ONLINE  ONLINE       node1                                        
-sh-3.2$ 

So we see that:
a) The Cluster consists of two nodes, node1 and node2
b) There are 4 ASM DiskGroups: DATA, DATA1, DATA2 and FRA
c) GSD is OFFLINE, as expected (it is required only for 9i Databases)
d) There is a database racdb with a service new_svc  (see my previous post)
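To drill into an individual resource instead of reading the full table, srvctl gives a focused view (a sketch, using the database and service names above):

-sh-3.2$ srvctl status database -d racdb
-sh-3.2$ srvctl status service -d racdb -s new_svc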


Listing the status of SCAN Listeners

-sh-3.2$ id
uid=500(grid) gid=1001(oinstall) groups=1001(oinstall),1011(asmdba)
-sh-3.2$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node node2
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node node1
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node node1
-sh-3.2$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node node2
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node node1
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node node1
-sh-3.2$ 

So we see that:
a) There are 3 SCAN Listeners
b) Since this is a 2-node cluster, two of the SCAN Listeners run on the same node, node1
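The SCAN name and the VIP addresses behind these listeners can be listed as well (a sketch):

-sh-3.2$ srvctl config scan
-sh-3.2$ srvctl config scan_listener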


Listing the status of the OCR

-sh-3.2$ id
uid=500(grid) gid=1001(oinstall) groups=1001(oinstall),1011(asmdba)
-sh-3.2$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3668
         Available space (kbytes) :     258452
         ID                       :  605940771
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
         Device/File Name         : /fra/ocrfile
                                    Device/File integrity check succeeded
         Device/File Name         :       +FRA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user

-sh-3.2$ su root
Password: 
[root@node1 grid]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3668
         Available space (kbytes) :     258452
         ID                       :  605940771
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
         Device/File Name         : /fra/ocrfile
                                    Device/File integrity check succeeded
         Device/File Name         :       +FRA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

[root@node1 grid]# 

So we see that:
a) The OCR is in 3 locations: +DATA, +FRA and the NFS filesystem /fra/ocrfile
b) A Logical corruption check of the OCR can only be done by root, not by grid


Listing the status of the Vote Disk

-sh-3.2$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   0e94545ce44f4fb1bf6906dc6889aaff (/fra/votedisk.3) []
 2. ONLINE   0d13305520f84f3fbf6c2008a6f79829 (/data1/votedisk.1) []
Located 2 voting disk(s).
-sh-3.2$ 

So we see that:
a) There are 2 votedisk copies  (yes, two, not the recommended three!)
b) Both are on filesystem

How do I happen to have only 2 votedisk copies?  I actually had 3 but removed one.  DON'T TRY THIS ON YOUR PRODUCTION CLUSTER.  I am adding the third one back now:

-sh-3.2$ crsctl add css votedisk /data2/votedisk.2
Now formatting voting disk: /data2/votedisk.2.
CRS-4603: Successful addition of voting disk /data2/votedisk.2.
-sh-3.2$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   0e94545ce44f4fb1bf6906dc6889aaff (/fra/votedisk.3) []
 2. ONLINE   0d13305520f84f3fbf6c2008a6f79829 (/data1/votedisk.1) []
 3. ONLINE   41c24037b51c4f97bf4cb7002649aee4 (/data2/votedisk.2) []
Located 3 voting disk(s).
-sh-3.2$ 

There, I am now back to 3 votedisk copies.
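For reference, the earlier removal takes either the votedisk path or the File Universal Id shown in the query above (a sketch; again, not something to try on a production cluster):

-sh-3.2$ crsctl delete css votedisk 41c24037b51c4f97bf4cb7002649aee4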
.
.
.
