24 August, 2014

ASM Commands : 2 -- Migrating a DiskGroup to New Disk(s)

In 11gR2 Grid Infrastructure and RAC

After the previous demonstration of adding a new DiskGroup, I now demonstrate migrating the DiskGroup to a new pair of disks.

First, I create a table in the Tablespace on that DiskGroup.

[root@node1 ~]# su - oracle
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 24 22:17:28 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name: hemant/hemant

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> create table new_tbs_tbl                        
  2  tablespace new_tbs
  3  as select * from dba_objects 
  4  /

Table created.

SQL> select segment_name, bytes/1048576
  2  from user_segments
  3  where tablespace_name = 'NEW_TBS'
  4  /

SEGMENT_NAME
--------------------------------------------------------------------------------
BYTES/1048576
-------------
NEW_TBS_TBL
            9


SQL> select file_name, bytes/1048576
  2  from dba_data_files
  3  where tablespace_name = 'NEW_TBS'
  4  /

FILE_NAME
--------------------------------------------------------------------------------
BYTES/1048576
-------------
+DATA3/racdb/datafile/new_tbs.256.855792859
          100


SQL> 


Next, I verify that the DiskGroup is currently on disk asmdisk.7, and that the two new disks that I plan to migrate the DiskGroup to are available as asmdisk.8 and asmdisk.9 (unfortunately, they are on /fra rather than /data1 or /data2, because I have run out of space in /data1 and /data2).
I do this from node1 :

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
-sh-3.2$ exit
logout

[root@node1 ~]# 
[root@node1 ~]# su - grid
-sh-3.2$ sqlplus 

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 24 22:22:32 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> l
  1  select d.name, d.path
  2  from v$asm_disk d, v$asm_diskgroup g
  3  where d.group_number=g.group_number
  4* and g.name = 'DATA3'
SQL> /

NAME
------------------------------
PATH
--------------------------------------------------------------------------------
DATA3_0000
/data1/asmdisk.7


SQL> 
SQL> !sh
sh-3.2$ ls -l /fra/asmdisk*
-rwxrwxr-x 1 grid oinstall 1024000000 Aug 24 22:06 /fra/asmdisk.8
-rwxrwxr-x 1 grid oinstall 1024000000 Aug 24 22:07 /fra/asmdisk.9
sh-3.2$


Note how the ownership and permissions are set for the two new disks (see my previous post).
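
For reference, this is roughly how such file-based NFS "disks" can be prepared. This is only a sketch; the exact sizes and the use of dd are my assumption, based on the approach in the previous post, but the ownership and permissions match the listing above :

dd if=/dev/zero of=/fra/asmdisk.8 bs=1000000 count=1024
dd if=/dev/zero of=/fra/asmdisk.9 bs=1000000 count=1024
chown grid:oinstall /fra/asmdisk.8 /fra/asmdisk.9
chmod 775 /fra/asmdisk.8 /fra/asmdisk.9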

I now add the two new disks.

sh-3.2$ exit
exit

SQL> show parameter power

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_power_limit                      integer     1
SQL> alter diskgroup data3  add disk '/fra/asmdisk.8', '/fra/asmdisk.9';

Diskgroup altered.

SQL> 
SQL> select * from v$asm_operation;

GROUP_NUMBER OPERA STAT      POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE
------------ ----- ---- ---------- ---------- ---------- ---------- ----------
EST_MINUTES ERROR_CODE
----------- --------------------------------------------
           3 REBAL RUN           1          1          1        101         60
          1


SQL> 


With ASM_POWER_LIMIT set to 1, Oracle ASM automatically starts a REBALANCE operation.  However, since I did *not* drop the existing asmdisk.7, Oracle still continues to use it.
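
As an aside, the rebalance for this DiskGroup could be run faster without changing the instance-wide ASM_POWER_LIMIT, by issuing a rebalance with an explicit power. A sketch (the value 4 is just an illustrative choice) :

alter diskgroup data3 rebalance power 4;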

After a while, I confirm that the REBALANCE has completed.  I can now drop asmdisk.7.  Unfortunately, this will trigger a new REBALANCE !

SQL> l
  1* select * from v$asm_operation
SQL> /

no rows selected

SQL> 
SQL> l
  1  select d.name, d.path
  2  from v$asm_disk d, v$asm_diskgroup g
  3  where d.group_number=g.group_number
  4* and g.name = 'DATA3'
SQL> /

NAME
------------------------------
PATH
--------------------------------------------------------------------------------
DATA3_0000
/data1/asmdisk.7

DATA3_0002
/fra/asmdisk.9

DATA3_0001
/fra/asmdisk.8


SQL> 
SQL> alter diskgroup data3 drop disk '/data1/asmdisk.7';
alter diskgroup data3 drop disk '/data1/asmdisk.7'
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15054: disk "/DATA1/ASMDISK.7" does not exist in diskgroup "DATA3"


SQL> alter diskgroup data3 drop disk 'DATA3_0000';

Diskgroup altered.

SQL> 
SQL> select * from v$asm_operation;

GROUP_NUMBER OPERA STAT      POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE
------------ ----- ---- ---------- ---------- ---------- ---------- ----------
EST_MINUTES ERROR_CODE
----------- --------------------------------------------
           3 REBAL RUN           1          1          2        102        120
          0


SQL> 
SQL> l
  1* select * from v$asm_operation
SQL> 
SQL> /

GROUP_NUMBER OPERA STAT      POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE
------------ ----- ---- ---------- ---------- ---------- ---------- ----------
EST_MINUTES ERROR_CODE
----------- --------------------------------------------
           3 REBAL RUN           1          1         47        101         95
          0


SQL> /

no rows selected

SQL> 


NOTE : I must specify the Disk NAME (not the PATH) for the DROP. When I added disks asmdisk.8 and asmdisk.9, I could have given them meaningful names as well; since I did not, Oracle named them automatically (DATA3_0001 and DATA3_0002).
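
For example, the ADD could have specified names explicitly. A sketch (the names data3_disk_8 and data3_disk_9 are hypothetical; any valid identifiers would do) :

alter diskgroup data3
  add disk '/fra/asmdisk.8' name data3_disk_8,
           '/fra/asmdisk.9' name data3_disk_9;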

Ideally, what I should have done is issue the ADD and DROP as a single command.  That way, only a single REBALANCE pass would have been required.
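
Something along these lines (again only a sketch; the REBALANCE POWER clause is optional) :

alter diskgroup data3
  add disk '/fra/asmdisk.8', '/fra/asmdisk.9'
  drop disk DATA3_0000
  rebalance power 2;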

After a while, I run my validation queries on node2.


[root@node2 ~]# su - grid
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 24 22:42:39 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select d.name, d.path
    from v$asm_disk d, v$asm_diskgroup g
    where d.group_number=g.group_number
   and g.name = 'DATA3'  2    3    4  
  5  
SQL> l
  1  select d.name, d.path
  2      from v$asm_disk d, v$asm_diskgroup g
  3      where d.group_number=g.group_number
  4*    and g.name = 'DATA3'
SQL> /

NAME
------------------------------
PATH
--------------------------------------------------------------------------------
DATA3_0002
/fra/asmdisk.9

DATA3_0001
/fra/asmdisk.8


SQL> 
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options
-sh-3.2$ exit
logout

[root@node2 ~]# su - oracle
-sh-3.2$ 
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 24 22:44:10 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name: hemant/hemant

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select count(*) from new_tbs_tbl;

  COUNT(*)
----------
     72460

SQL> 


I have now accessed the table, tablespace, diskgroup and disks from node2 successfully. Disk asmdisk.7 is no longer part of the DiskGroup.
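
Before touching the physical storage, one more optional check is the header status of the dropped disk; once the DROP and REBALANCE are complete, it should report HEADER_STATUS = 'FORMER' with GROUP_NUMBER = 0. A sketch (this query is my addition, not part of the original session) :

select path, group_number, header_status, mode_status
from   v$asm_disk
where  path = '/data1/asmdisk.7';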

I can now physically remove disk asmdisk.7 from the storage.


SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options
-sh-3.2$ exit
logout

[root@node1 ~]# cd /data1
[root@node1 data1]# ls -l asmdisk.7
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 24 22:39 asmdisk.7
[root@node1 data1]# rm asmdisk.7
rm: remove regular file `asmdisk.7'? y
[root@node1 data1]# 
[root@node1 data1]# su - grid
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 24 22:50:18 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> set pages 60
SQL> col name format a15
SQL> col path format a20
SQL> select group_number, name, path
  2  from v$asm_disk
  3  order by 1,2;

GROUP_NUMBER NAME            PATH
------------ --------------- --------------------
           0                 /crs/voting.disk
           0                 /data1/votedisk.1
           0                 /data2/votedisk.2
           0                 /fra/votedisk.3
           1 DATA1_0000      /data1/asmdisk.1
           1 DATA1_0001      /data2/asmdisk.4
           2 DATA2_0000      /data1/asmdisk.2
           2 DATA2_0001      /data2/asmdisk.5
           2 DATA2_0002      /data2/asmdisk.6
           3 DATA3_0001      /fra/asmdisk.8
           3 DATA3_0002      /fra/asmdisk.9
           4 DATA_0000       /crs/ocr.configurati
                             on

           5 FRA_0000        /fra/fradisk.3
           5 FRA_0001        /fra/fradisk.2
           5 FRA_0002        /fra/fradisk.1
           5 FRA_0003        /fra/fradisk.4

16 rows selected.

SQL> 

The disk asmdisk.7 is no longer part of the storage. (Remember : All my disks here are on NFS).
.
.
.

17 August, 2014

ASM Commands : 1 -- Adding and Using a new DiskGroup for RAC

In 11gR2 Grid Infrastructure and RAC

On node1, I discover and add a disk to ASM.  NFS "devices" asmdisk.1 to asmdisk.6 are present as ASM Disks. asmdisk.7 has been added on NFS mount point /data1. (Disks asmdisk.3 to asmdisk.6 are on /data2)

I start on node1 in my Cluster

[root@node1 ~]# su - grid
-sh-3.2$ sqlplus 

SQL*Plus: Release 11.2.0.2.0 Production on Sat Aug 16 23:42:02 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> show parameter asm_diskstring

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring                       string      /crs/*, /data1/*, /data2/*, /f
                                                 ra/*
SQL> !ls -l /data1/asm*
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 16 23:42 /data1/asmdisk.1
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 16 23:42 /data1/asmdisk.2
-rw-r--r-- 1 grid oinstall 2048000000 Aug 16 23:33 /data1/asmdisk.7

SQL> create diskgroup DATA3 disk '/data1/asmdisk.7';
create diskgroup DATA3 disk '/data1/asmdisk.7'
*
ERROR at line 1:
ORA-15018: diskgroup cannot be created
ORA-15072: command requires at least 2 regular failure groups, discovered only
1


SQL> create diskgroup DATA3 external redundancy disk '/data1/asmdisk.7';

Diskgroup created.

SQL> 
SQL> select group_number, name, total_mb
  2  from v$asm_diskgroup
  3  where name = 'DATA3'
  4  /

GROUP_NUMBER NAME                             TOTAL_MB
------------ ------------------------------ ----------
           5 DATA3                                1953

SQL> 

I now have a new DiskGroup using External Redundancy with a single disk.  Is it visible at node2 ?

[root@node2 ~]# su - grid
-sh-3.2$ sqlplus 

SQL*Plus: Release 11.2.0.2.0 Production on Sat Aug 16 23:47:45 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select group_number, name, total_mb 
  2  from v$asm_diskgroup
  3  where name = 'DATA3'
  4  /

GROUP_NUMBER NAME                             TOTAL_MB
------------ ------------------------------ ----------
           0 DATA3                                   0

SQL> 

Why is the size not visible yet ?  Because, although the CREATE on node1 also MOUNTed the DiskGroup there, it has not yet been mounted on node2.

SQL> alter diskgroup DATA3 mount;

Diskgroup altered.

SQL> select group_number, name, total_mb
  2  from v$asm_diskgroup
  3  where name = 'DATA3'
  4  /

GROUP_NUMBER NAME                             TOTAL_MB
------------ ------------------------------ ----------
           5 DATA3                                1953

SQL> 
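
As an aside, 11gR2 automatically registers a Clusterware resource for the DiskGroup (presumably ora.DATA3.dg here), so the same mount could also have been done from the grid account using srvctl instead of SQL. A sketch :

srvctl start diskgroup -g DATA3 -n node2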

Can I confirm the underlying disk ?

SQL> select group_number, disk_number, header_status, state, total_mb
  2  from v$asm_disk
  3  where group_number = 5;

GROUP_NUMBER DISK_NUMBER HEADER_STATU STATE      TOTAL_MB
------------ ----------- ------------ -------- ----------
           5           0 MEMBER       NORMAL         1953

SQL> 


What happens when I create a tablespace/datafile in this DiskGroup, from the instance on node1 ?

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options
-sh-3.2$ su - oracle
Password: 
-sh-3.2$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 17 00:08:31 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> create tablespace NEW_TBS datafile '+DATA3';
create tablespace NEW_TBS datafile '+DATA3'
*
ERROR at line 1:
ORA-01119: error in creating database file '+DATA3'
ORA-15045: ASM file name '+DATA3' is not in reference form
ORA-17502: ksfdcre:5 Failed to create file +DATA3
ORA-15081: failed to submit an I/O operation to a disk


SQL> 

Why do I get this error ?  I could create a DiskGroup on the ASM disk, yet I cannot create a datafile in it ?  Let me check the permissions on the underlying file.

SQL> !sh
sh-3.2$ cd /data1
sh-3.2$ ls -l asmd*
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:11 asmdisk.1
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:11 asmdisk.2
-rw-r--r-- 1 grid oinstall 2048000000 Aug 17 00:11 asmdisk.7
sh-3.2$ su grid
Password: 
sh-3.2$ pwd
/data1
sh-3.2$ ls -l asmd*
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:12 asmdisk.1
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:12 asmdisk.2
-rw-r--r-- 1 grid oinstall 2048000000 Aug 17 00:12 asmdisk.7
sh-3.2$ chmod 775 asmdisk.7
sh-3.2$ ls -l asmdisk.7
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:12 asmdisk.7
sh-3.2$ 

The oinstall group, which the "oracle" account uses, did not have write permission on asmdisk.7. Let me go back to the oracle session now that the permissions have been corrected.

sh-3.2$ exit
exit
sh-3.2$ exit
exit

SQL> l
  1* create tablespace NEW_TBS datafile '+DATA3'
SQL> /

Tablespace created.

SQL> 

The CREATE TABLESPACE has succeeded.  I can verify the datafile and the ASM file from node2 now.

-sh-3.2$ id
uid=500(grid) gid=1001(oinstall) groups=1001(oinstall),1011(asmdba)
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 17 00:17:19 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select group_number, file_number, bytes/1048576, type, redundancy
  2  from v$asm_file
  3  where group_number=5;

GROUP_NUMBER FILE_NUMBER BYTES/1048576
------------ ----------- -------------
TYPE                                                             REDUND
---------------------------------------------------------------- ------
           5         256    100.007813
DATAFILE                                                         UNPROT


SQL> 
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options
-sh-3.2$ 
-sh-3.2$ su - oracle
Password: 
-sh-3.2$ sqlplus 
 
SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 17 00:19:34 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select file_name, bytes/1048576 from dba_data_files
  2  where tablespace_name = 'NEW_TBS';

FILE_NAME
--------------------------------------------------------------------------------
BYTES/1048576
-------------
+DATA3/racdb/datafile/new_tbs.256.855792859
          100


SQL> 

Now, I have the new DataFile on the new DiskGroup visible both in ASM and in the Database.
.
.
.

03 August, 2014

GI Commands : 2 -- Managing the Local and Cluster Registry

In 11gR2

There are essentially two registries in Grid Infrastructure : the Local Registry (OLR) and the Cluster Registry (OCR).

Let's check the Local Registry :

[root@node1 ~]# cat /etc/oracle/olr.loc
olrconfig_loc=/u01/app/grid/11.2.0/cdata/node1.olr
crs_home=/u01/app/grid/11.2.0
[root@node1 ~]# 
[root@node1 ~]# file /u01/app/grid/11.2.0/cdata/node1.olr
/u01/app/grid/11.2.0/cdata/node1.olr: data
[root@node1 ~]# 

So, like the Cluster Registry, the Local Registry is a binary file.  It is on a local filesystem on the node, not on ASM/NFS/CFS.  Each node in the cluster has its own Local Registry.

The Local Registry can be checked for consistency (corruption) using ocrcheck with the "-local" flag.  Note : As demonstrated in my previous post, the root account must be used for the check.

[root@node1 ~]# su - grid
-sh-3.2$ su 
Password: 
[root@node1 grid]# ocrcheck -local
Status of Oracle Local Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2696
         Available space (kbytes) :     259424
         ID                       : 1388021147
         Device/File Name         : /u01/app/grid/11.2.0/cdata/node1.olr
                                    Device/File integrity check succeeded

         Local registry integrity check succeeded

         Logical corruption check succeeded

[root@node1 grid]# 

Now let's look at the Cluster Registry :

[root@node1 grid]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3668
         Available space (kbytes) :     258452
         ID                       :  605940771
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
         Device/File Name         : /fra/ocrfile
                                    Device/File integrity check succeeded
         Device/File Name         :       +FRA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

[root@node1 grid]# 

The Cluster Registry is distributed across two ASM DiskGroups (+DATA and +FRA) and one filesystem (/fra/ocrfile). Yes, this is a special case that I've created to distribute the OCR in this manner.
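
As a cross-check, and in parallel with the olr.loc file shown earlier, the configured OCR locations on Linux are also recorded in /etc/oracle/ocr.loc; its ocrconfig_loc entries should match the ocrcheck listing above (output not shown here) :

cat /etc/oracle/ocr.loc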

I cannot add an OCR location in an ASM DiskGroup that has a lower compatible.asm attribute (the OCR requires compatible.asm of 11.1.0.0.0 or higher, as the error below shows).

[root@node1 grid]# ocrconfig -add +DATA2
PROT-30: The Oracle Cluster Registry location to be added is not accessible
PROC-8: Cannot perform cluster registry operation because one of the parameters is invalid.
ORA-15056: additional error message
ORA-17502: ksfdcre:4 Failed to create file +DATA2.255.1
ORA-15221: ASM operation requires compatible.asm of 11.1.0.0.0 or higher
ORA-06512: at line 4

[root@node1 grid]# 
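
If +DATA2 genuinely had to hold an OCR copy, its compatible.asm attribute would have to be raised first. A sketch, run as SYSASM (note that raising a compatibility attribute cannot be reversed) :

select name, compatibility, database_compatibility
from   v$asm_diskgroup;

alter diskgroup DATA2 set attribute 'compatible.asm' = '11.2';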

I now remove the filesystem copy of the OCR.

[root@node1 grid]# ocrconfig -delete /fra/ocrfile
[root@node1 grid]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3668
         Available space (kbytes) :     258452
         ID                       :  605940771
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
         Device/File Name         :       +FRA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

[root@node1 grid]# 

Note, however, that ocrconfig -delete does not actually remove the filesystem file that I had created; I have to delete it myself.

[root@node1 grid]# ls -l /fra/ocrfile
-rw-r--r-- 1 root root 272756736 Aug  3 21:27 /fra/ocrfile
[root@node1 grid]# rm /fra/ocrfile
rm: remove regular file `/fra/ocrfile'? yes
[root@node1 grid]# 

I will now add a filesystem location for the OCR.

[root@node1 grid]# touch /fra/new_ocrfile
[root@node1 grid]# ocrconfig -add /fra/new_ocrfile
[root@node1 grid]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3668
         Available space (kbytes) :     258452
         ID                       :  605940771
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
         Device/File Name         :       +FRA
                                    Device/File integrity check succeeded
         Device/File Name         : /fra/new_ocrfile
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

[root@node1 grid]# ls -l /fra/new_ocrfile
-rw-r--r-- 1 root root 272756736 Aug  3 21:30 /fra/new_ocrfile
[root@node1 grid]# 

What about OCR Backups ?  (Note : Oracle does frequent automatic backups of the OCR, but *not* of the OLR).
N.B. : This listing doesn't show all the OCR backups you'd expect because my cluster does not run continuously every day.

[root@node1 grid]# ocrconfig -showbackup

node1     2014/07/06 21:53:25     /u01/app/grid/11.2.0/cdata/rac/backup00.ocr

node1     2011/10/22 03:09:03     /u01/app/grid/11.2.0/cdata/rac/backup01.ocr

node1     2011/10/21 23:06:39     /u01/app/grid/11.2.0/cdata/rac/backup02.ocr

node1     2014/07/06 21:53:25     /u01/app/grid/11.2.0/cdata/rac/day.ocr

node1     2014/07/06 21:53:25     /u01/app/grid/11.2.0/cdata/rac/week.ocr

node1     2014/07/06 22:39:55     /u01/app/grid/11.2.0/cdata/rac/backup_20140706_223955.ocr

node1     2014/07/05 17:30:25     /u01/app/grid/11.2.0/cdata/rac/backup_20140705_173025.ocr

node1     2014/06/16 22:15:07     /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221507.ocr

node1     2014/06/16 22:14:05     /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221405.ocr

node1     2011/11/09 23:20:25     /u01/app/grid/11.2.0/cdata/rac/backup_20111109_232025.ocr
[root@node1 grid]# 
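
Incidentally, although the OLR is not backed up on this automatic schedule, it can be backed up (and the backups listed) manually with the -local flag. A sketch, run as root (output not shown) :

ocrconfig -local -manualbackup
ocrconfig -local -showbackup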

Let me run an additional backup from node2.

[root@node2 grid]# ocrconfig -manualbackup

node1     2014/08/03 21:37:17     /u01/app/grid/11.2.0/cdata/rac/backup_20140803_213717.ocr

node1     2014/07/06 22:39:55     /u01/app/grid/11.2.0/cdata/rac/backup_20140706_223955.ocr

node1     2014/07/05 17:30:25     /u01/app/grid/11.2.0/cdata/rac/backup_20140705_173025.ocr

node1     2014/06/16 22:15:07     /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221507.ocr

node1     2014/06/16 22:14:05     /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221405.ocr
[root@node2 grid]# 

We can see that the backup done today (03-Aug) is listed at the top.  Let's check the listing from node1 :

[root@node1 grid]# ocrconfig -showbackup

node1     2014/07/06 21:53:25     /u01/app/grid/11.2.0/cdata/rac/backup00.ocr

node1     2011/10/22 03:09:03     /u01/app/grid/11.2.0/cdata/rac/backup01.ocr

node1     2011/10/21 23:06:39     /u01/app/grid/11.2.0/cdata/rac/backup02.ocr

node1     2014/07/06 21:53:25     /u01/app/grid/11.2.0/cdata/rac/day.ocr

node1     2014/07/06 21:53:25     /u01/app/grid/11.2.0/cdata/rac/week.ocr

node1     2014/08/03 21:37:17     /u01/app/grid/11.2.0/cdata/rac/backup_20140803_213717.ocr

node1     2014/07/06 22:39:55     /u01/app/grid/11.2.0/cdata/rac/backup_20140706_223955.ocr

node1     2014/07/05 17:30:25     /u01/app/grid/11.2.0/cdata/rac/backup_20140705_173025.ocr

node1     2014/06/16 22:15:07     /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221507.ocr

node1     2014/06/16 22:14:05     /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221405.ocr
[root@node1 grid]# 

Yes, the backup of 03-Aug is also listed.  But, wait ! Why is it on node1 ?  Let's go back to node2 and do a filesystem listing.

[root@node2 grid]# ls -l /u01/app/grid/11.2.0/cdata/rac/backup*
ls: /u01/app/grid/11.2.0/cdata/rac/backup*: No such file or directory
[root@node2 grid]#

Yes, just as we noticed : the backup doesn't actually exist on node2.

[root@node1 grid]# ls -lt /u01/app/grid/11.2.0/cdata/rac/
total 114316
-rw------- 1 root root 8024064 Aug  3 21:37 backup_20140803_213717.ocr
-rw------- 1 root root 8003584 Jul  6 22:39 backup_20140706_223955.ocr
-rw------- 1 root root 8003584 Jul  6 21:53 day.ocr
-rw------- 1 root root 8003584 Jul  6 21:53 week.ocr
-rw------- 1 root root 8003584 Jul  6 21:53 backup00.ocr
-rw------- 1 root root 8003584 Jul  5 17:30 backup_20140705_173025.ocr
-rw------- 1 root root 7708672 Jun 16 22:15 backup_20140616_221507.ocr
-rw------- 1 root root 7708672 Jun 16 22:14 backup_20140616_221405.ocr
-rw------- 1 root root 7688192 Nov  9  2011 backup_20111109_232025.ocr
-rw------- 1 root root 7667712 Nov  9  2011 backup_20111109_230940.ocr
-rw------- 1 root root 7647232 Nov  9  2011 backup_20111109_230916.ocr
-rw------- 1 root root 7626752 Nov  9  2011 backup_20111109_224725.ocr
-rw------- 1 root root 7598080 Nov  9  2011 backup_20111109_222941.ocr
-rw------- 1 root root 7593984 Oct 22  2011 backup01.ocr
-rw------- 1 root root 7593984 Oct 21  2011 backup02.ocr
[root@node1 grid]# 

Yes, *ALL* the OCR backups to date have been created on node1 -- even when executed from node2.  node1 is still the "master" node for OCR backups as long as it is up and running.  I shut down Grid Infrastructure on node1.

[root@node1 grid]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.crsd' on 'node1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.racdb.new_svc.svc' on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.cvu' on 'node1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'node1'
CRS-2673: Attempting to stop 'ora.gns' on 'node1'
CRS-2677: Stop of 'ora.cvu' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.cvu' on 'node2'
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'node1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'node1' succeeded
CRS-2677: Stop of 'ora.scan3.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.scan3.vip' on 'node2'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'node1'
CRS-2677: Stop of 'ora.scan2.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.scan2.vip' on 'node2'
CRS-2676: Start of 'ora.cvu' on 'node2' succeeded
CRS-2677: Stop of 'ora.racdb.new_svc.svc' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.node1.vip' on 'node1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'node1'
CRS-2673: Attempting to stop 'ora.racdb.db' on 'node1'
CRS-2677: Stop of 'ora.node1.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.node1.vip' on 'node2'
CRS-2676: Start of 'ora.scan3.vip' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'node2'
CRS-2676: Start of 'ora.scan2.vip' on 'node2' succeeded
CRS-2677: Stop of 'ora.registry.acfs' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'node2'
CRS-2677: Stop of 'ora.gns' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gns.vip' on 'node1'
CRS-2677: Stop of 'ora.gns.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.gns.vip' on 'node2'
CRS-2676: Start of 'ora.node1.vip' on 'node2' succeeded
CRS-2676: Start of 'ora.gns.vip' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.gns' on 'node2'
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'node2' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'node2' succeeded
CRS-2677: Stop of 'ora.racdb.db' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.DATA1.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.DATA2.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'node1'
CRS-2676: Start of 'ora.gns' on 'node2' succeeded
CRS-2677: Stop of 'ora.DATA1.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.DATA2.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'node2'
CRS-2676: Start of 'ora.oc4j' on 'node2' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.FRA.dg' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node1'
CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'node1'
CRS-2677: Stop of 'ora.ons' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'node1'
CRS-2677: Stop of 'ora.net1.network' on 'node1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node1' has completed
CRS-2677: Stop of 'ora.crsd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'node1'
CRS-2673: Attempting to stop 'ora.crf' on 'node1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'node1'
CRS-2673: Attempting to stop 'ora.evmd' on 'node1'
CRS-2673: Attempting to stop 'ora.asm' on 'node1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'node1'
CRS-2677: Stop of 'ora.crf' on 'node1' succeeded
CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node1'
CRS-2677: Stop of 'ora.evmd' on 'node1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'node1' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'node1' succeeded
CRS-2677: Stop of 'ora.cssd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'node1'
CRS-2673: Attempting to stop 'ora.diskmon' on 'node1'
CRS-2677: Stop of 'ora.gipcd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'node1'
CRS-2677: Stop of 'ora.gpnpd' on 'node1' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'node1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@node1 grid]# 

So, all the Grid Infrastructure services are down on node1. I will run an OCR Backup from node2 and verify its location.

[root@node2 grid]# ocrconfig -manualbackup

node2     2014/08/03 21:49:02     /u01/app/grid/11.2.0/cdata/rac/backup_20140803_214902.ocr

node1     2014/08/03 21:37:17     /u01/app/grid/11.2.0/cdata/rac/backup_20140803_213717.ocr

node1     2014/07/06 22:39:55     /u01/app/grid/11.2.0/cdata/rac/backup_20140706_223955.ocr

node1     2014/07/05 17:30:25     /u01/app/grid/11.2.0/cdata/rac/backup_20140705_173025.ocr

node1     2014/06/16 22:15:07     /u01/app/grid/11.2.0/cdata/rac/backup_20140616_221507.ocr
[root@node2 grid]# ls -l /u01/app/grid/11.2.0/cdata/rac/backup*
-rw------- 1 root root 8024064 Aug  3 21:49 /u01/app/grid/11.2.0/cdata/rac/backup_20140803_214902.ocr
[root@node2 grid]# 

Yes, the backup got created on node2 now.
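
As an aside, the backup location defaults to $GRID_HOME/cdata/<cluster_name> on the master node (here, /u01/app/grid/11.2.0/cdata/rac); it can be redirected, for example to a shared filesystem, with ocrconfig -backuploc. A sketch, run as root (the target directory is hypothetical) :

ocrconfig -backuploc /fra/ocr_backups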

Question : Would there have been a way to create a backup on node2 without shutting down node1 ?

.
.
.
