10 September, 2014

Index Growing Larger Than The Table

Here is a very simple demonstration of a case where an Index can grow larger than its table.  This happens because the pattern of deletes and inserts doesn't allow deleted index entries to be reused.  For every 10 rows that are inserted, 7 rows are subsequently deleted after their status is changed to "Processed", but the space released by those deleted entries in the index cannot be reused.

SQL>
SQL>REM Demo Index growth larger than table !
SQL>
SQL>drop table hkc_process_list purge;

Table dropped.

SQL>
SQL>create table hkc_process_list
  2  (transaction_id number,
  3  status_flag varchar2(1),
  4  last_update_date date,
  5  transaction_type number,
  6  details varchar2(25))
  7  /

Table created.

SQL>
SQL>create index hkc_process_list_ndx
  2  on hkc_process_list
  3  (transaction_id, status_flag)
  4  /

Index created.

SQL>
SQL>
SQL>REM Cycle 1 -------------------------------------
> -- create first 1000 transactions
SQL>insert into hkc_process_list
  2  select rownum, 'N', sysdate, mod(rownum,4)+1, dbms_random.string('X',10)
  3  from dual
  4  connect by level < 1001
  5  /

1000 rows created.

SQL>commit;

Commit complete.

SQL>
SQL>-- get sizes of table and index
SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>select 'Table' Obj_Type, table_name, blocks Blocks
  2  from user_tables
  3  where table_name like 'HKC_PROCE%'
  4  union
  5  select 'Index', index_name, leaf_blocks
  6  from user_indexes
  7  where index_name like 'HKC_PROCE%'
  8  order by 1
  9  /

OBJ_T TABLE_NAME                         BLOCKS                                 
----- ------------------------------ ----------                                 
Index HKC_PROCESS_LIST_NDX                    3                                 
Table HKC_PROCESS_LIST                        5                                 

2 rows selected.

SQL>
SQL>-- change status flag for 70% of the transactions to 'P'
SQL>update hkc_process_list
  2  set status_flag='P'
  3  where mod(transaction_id,10) < 7
  4  /

700 rows updated.

SQL>commit;

Commit complete.

SQL>
SQL>-- delete processed rows
SQL>delete hkc_process_list
  2  where status_flag='P'
  3  /

700 rows deleted.

SQL>commit;

Commit complete.

SQL>
SQL>REM Cycle 2 -------------------------------------
> -- insert another 1000 rows
SQL>insert into hkc_process_list
  2  select rownum+1000, 'N', sysdate, mod(rownum,4)+1, dbms_random.string('X',10)
  3  from dual
  4  connect by level < 1001
  5  /

1000 rows created.

SQL>commit;

Commit complete.

SQL>
SQL>-- get sizes of table and index
SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>select 'Table' Obj_Type, table_name, blocks Blocks
  2  from user_tables
  3  where table_name like 'HKC_PROCE%'
  4  union
  5  select 'Index', index_name, leaf_blocks
  6  from user_indexes
  7  where index_name like 'HKC_PROCE%'
  8  order by 1
  9  /

OBJ_T TABLE_NAME                         BLOCKS                                 
----- ------------------------------ ----------                                 
Index HKC_PROCESS_LIST_NDX                    7                                 
Table HKC_PROCESS_LIST                       13                                 

2 rows selected.

SQL>
SQL>-- change status flag for 70% of the transactions to 'P'
SQL>update hkc_process_list
  2  set status_flag='P'
  3  where mod(transaction_id,10) < 7
  4  /

700 rows updated.

SQL>commit;

Commit complete.

SQL>
SQL>-- delete processed rows
SQL>delete hkc_process_list
  2  where status_flag='P'
  3  /

700 rows deleted.

SQL>commit;

Commit complete.

SQL>
SQL>
SQL>REM Cycle 3 -------------------------------------
> -- insert another 1000 rows
SQL>insert into hkc_process_list
  2  select rownum+2000, 'N', sysdate, mod(rownum,4)+1, dbms_random.string('X',10)
  3  from dual
  4  connect by level < 1001
  5  /

1000 rows created.

SQL>commit;

Commit complete.

SQL>
SQL>-- get sizes of table and index
SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>select 'Table' Obj_Type, table_name, blocks Blocks
  2  from user_tables
  3  where table_name like 'HKC_PROCE%'
  4  union
  5  select 'Index', index_name, leaf_blocks
  6  from user_indexes
  7  where index_name like 'HKC_PROCE%'
  8  order by 1
  9  /

OBJ_T TABLE_NAME                         BLOCKS                                 
----- ------------------------------ ----------                                 
Index HKC_PROCESS_LIST_NDX                   11                                 
Table HKC_PROCESS_LIST                       13                                 

2 rows selected.

SQL>
SQL>-- change status flag for 70% of the transactions to 'P'
SQL>update hkc_process_list
  2  set status_flag='P'
  3  where mod(transaction_id,10) < 7
  4  /

700 rows updated.

SQL>commit;

Commit complete.

SQL>
SQL>-- delete processed rows
SQL>delete hkc_process_list
  2  where status_flag='P'
  3  /

700 rows deleted.

SQL>commit;

Commit complete.

SQL>
SQL>
SQL>REM Cycle 4 -------------------------------------
> -- insert another 1000 rows
SQL>insert into hkc_process_list
  2  select rownum+3000, 'N', sysdate, mod(rownum,4)+1, dbms_random.string('X',10)
  3  from dual
  4  connect by level < 1001
  5  /

1000 rows created.

SQL>commit;

Commit complete.

SQL>
SQL>-- get sizes of table and index
SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>select 'Table' Obj_Type, table_name, blocks Blocks
  2  from user_tables
  3  where table_name like 'HKC_PROCE%'
  4  union
  5  select 'Index', index_name, leaf_blocks
  6  from user_indexes
  7  where index_name like 'HKC_PROCE%'
  8  order by 1
  9  /

OBJ_T TABLE_NAME                         BLOCKS                                 
----- ------------------------------ ----------                                 
Index HKC_PROCESS_LIST_NDX                   15                                 
Table HKC_PROCESS_LIST                       13                                 

2 rows selected.

SQL>
SQL>-- change status flag for 70% of the transactions to 'P'
SQL>update hkc_process_list
  2  set status_flag='P'
  3  where mod(transaction_id,10) < 7
  4  /

700 rows updated.

SQL>commit;

Commit complete.

SQL>
SQL>-- delete processed rows
SQL>delete hkc_process_list
  2  where status_flag='P'
  3  /

700 rows deleted.

SQL>commit;

Commit complete.

SQL>
SQL>
SQL>REM  Latest State size -------------------------
> -- get sizes of table and index
SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>select 'Table' Obj_Type, table_name, blocks Blocks
  2  from user_tables
  3  where table_name like 'HKC_PROCE%'
  4  union
  5  select 'Index', index_name, leaf_blocks
  6  from user_indexes
  7  where index_name like 'HKC_PROCE%'
  8  order by 1
  9  /

OBJ_T TABLE_NAME                         BLOCKS                                 
----- ------------------------------ ----------                                 
Index HKC_PROCESS_LIST_NDX                   17                                 
Table HKC_PROCESS_LIST                       13                                 

2 rows selected.

SQL>
SQL>

Note how the Index grew from 3 blocks to 17 blocks, larger than the table, which grew to 13 blocks and appeared to have reached a "steady state" there.

The Index is built on only 2 of the 5 columns of the table, and these two columns are also "narrow" in that they are a number and a single character.  Yet the Index grows faster than the table through the INSERT - DELETE - INSERT cycles.

Note the difference between the Index definition (built on TRANSACTION_ID as the leading column) and the pattern of DELETEs (which is on STATUS_FLAG).

Deleted rows leave "holes" in the Index, but these entries cannot be reused by subsequent inserts.  The Index is ordered on TRANSACTION_ID, so if an Index entry for TRANSACTION_ID = n is deleted, that slot can be reused only by the same (or a very close) TRANSACTION_ID.

Assume that an Index Leaf Block contains entries for TRANSACTION_IDs 1, 2, 3, 4 and so on up to 10.  If the rows for TRANSACTION_IDs 2, 3, 5, 6, 8 and 9 are deleted but 1, 4, 7 and 10 are not, then the Leaf Block has "free" space only for new rows with TRANSACTION_IDs 2, 3, 5, 6, 8 and 9.  New rows with TRANSACTION_IDs 11 and above will go into a new Index Leaf Block and will not reuse the "free" space in the first Leaf Block.  The first Leaf Block remains with deleted entries that are never reused.

On the other hand, when rows are deleted from a Table Block, new rows can be inserted into the same Table Block.  The Table is Heap Organised, not ordered like the Index, so new rows may be inserted into any Block(s) that have space for them -- e.g. blocks from which rows have been deleted.  Therefore, after deleting TRANSACTION_IDs 2, 3, 5, 6 from a Table Block, new TRANSACTION_IDs 11, 12, 13, 14 can be inserted into the *same* Block.
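If you want to see the deleted entries and reclaim the space, a quick check and fix along these lines can help.  This is only a minimal sketch against the demo objects above; note that VALIDATE STRUCTURE (without the ONLINE option) blocks DML on the table while it runs.

ANALYZE INDEX hkc_process_list_ndx VALIDATE STRUCTURE;

-- INDEX_STATS is populated only for the current session, immediately after the ANALYZE
SELECT lf_blks, lf_rows, del_lf_rows
FROM   index_stats
WHERE  name = 'HKC_PROCESS_LIST_NDX';

-- COALESCE merges adjacent, sparsely populated leaf blocks so the freed blocks
-- become reusable by the index; it is cheaper than a full REBUILD
ALTER INDEX hkc_process_list_ndx COALESCE;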

.
.
.

07 September, 2014

RAC Database Backups

In 11gR2 Grid Infrastructure and RAC


UPDATE : 13-Sep-14 : How to run the RMAN Backup using server sessions concurrently on each node.  Please scroll down to the update.


In a RAC environment, the database backups can be executed from any one node or distributed across multiple nodes of the cluster.

In my two-node environment, backups are configured to go to the FRA.  This is defined by the instance parameter "db_recovery_file_dest" (and "db_recovery_file_dest_size") and can be a shared location -- e.g. an ASM DiskGroup or a Cluster File System.  The parameter should ideally be the same across all instances so that backups can be executed from any node (or multiple nodes) without changing the backup location.
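For reference, the FRA parameters can be set once for all instances from any node.  This is a minimal sketch, assuming an SPFILE and an existing +FRA DiskGroup; the size is only an example.

-- SID='*' applies the setting to every instance sharing the SPFILE
ALTER SYSTEM SET db_recovery_file_dest_size = 4000M SCOPE=BOTH SID='*';
ALTER SYSTEM SET db_recovery_file_dest      = '+FRA' SCOPE=BOTH SID='*';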

Running the RMAN commands from node1 :
[root@node1 ~]# su - oracle
-sh-3.2$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Sun Sep 7 21:56:46 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> show parameter db_recovery_file

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      +FRA
db_recovery_file_dest_size           big integer 4000M
SQL>
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
-sh-3.2$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Sep 7 21:57:49 2014

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: RACDB (DBID=762767011)

RMAN> list backup summary;

using target database control file instead of recovery catalog

List of Backups
===============
Key     TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
12      B  F  A DISK        26-NOV-11       1       1       YES        TAG20111126T224849
13      B  A  A DISK        26-NOV-11       1       1       YES        TAG20111126T230108
16      B  A  A DISK        16-JUN-14       1       1       YES        TAG20140616T222340
18      B  A  A DISK        16-JUN-14       1       1       YES        TAG20140616T222738
19      B  F  A DISK        16-JUN-14       1       1       NO         TAG20140616T222742
20      B  F  A DISK        05-JUL-14       1       1       NO         TAG20140705T173046
21      B  F  A DISK        16-AUG-14       1       1       NO         TAG20140816T231412
22      B  F  A DISK        17-AUG-14       1       1       NO         TAG20140817T002340

RMAN> 
RMAN> backup as compressed backupset database plus archivelog delete input;


Starting backup at 07-SEP-14
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=111 RECID=77 STAMP=857685630
input archived log thread=2 sequence=37 RECID=76 STAMP=857685626
input archived log thread=2 sequence=38 RECID=79 STAMP=857685684
input archived log thread=1 sequence=112 RECID=78 STAMP=857685681
channel ORA_DISK_1: starting piece 1 at 07-SEP-14
channel ORA_DISK_1: finished piece 1 at 07-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_07/annnf0_tag20140907t220131_0.288.857685699 tag=TAG20140907T220131 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:09
channel ORA_DISK_1: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_1_seq_111.307.857685623 RECID=77 STAMP=857685630
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_37.309.857685623 RECID=76 STAMP=857685626
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_38.277.857685685 RECID=79 STAMP=857685684
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_1_seq_112.270.857685681 RECID=78 STAMP=857685681
Finished backup at 07-SEP-14

Starting backup at 07-SEP-14
using channel ORA_DISK_1
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA1/racdb/datafile/system.257.765499365
input datafile file number=00002 name=+DATA2/racdb/datafile/sysaux.256.765502307
input datafile file number=00003 name=+DATA1/racdb/datafile/undotbs1.259.765500033
input datafile file number=00004 name=+DATA2/racdb/datafile/undotbs2.257.765503281
input datafile file number=00006 name=+DATA1/racdb/datafile/partition_test.265.809628399
input datafile file number=00007 name=+DATA1/racdb/datafile/hemant_tbs.266.852139375
input datafile file number=00008 name=+DATA3/racdb/datafile/new_tbs.256.855792859
input datafile file number=00005 name=+DATA1/racdb/datafile/users.261.765500215
channel ORA_DISK_1: starting piece 1 at 07-SEP-14
channel ORA_DISK_1: finished piece 1 at 07-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_07/nnndf0_tag20140907t220145_0.270.857685709 tag=TAG20140907T220145 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:06:15
Finished backup at 07-SEP-14

Starting backup at 07-SEP-14
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=113 RECID=81 STAMP=857686085
input archived log thread=2 sequence=39 RECID=80 STAMP=857686083
channel ORA_DISK_1: starting piece 1 at 07-SEP-14
channel ORA_DISK_1: finished piece 1 at 07-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_07/annnf0_tag20140907t220807_0.307.857686087 tag=TAG20140907T220807 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_1_seq_113.309.857686085 RECID=81 STAMP=857686085
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_39.277.857686083 RECID=80 STAMP=857686083
Finished backup at 07-SEP-14

Starting Control File and SPFILE Autobackup at 07-SEP-14
piece handle=+FRA/racdb/autobackup/2014_09_07/s_857686089.277.857686097 comment=NONE
Finished Control File and SPFILE Autobackup at 07-SEP-14

RMAN> 

Note how the "PLUS ARCHIVELOG" specification also included archivelogs from both threads (instances) of the database.

Let's verify these details from the instance on node2 :

[root@node2 ~]# su - oracle
-sh-3.2$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Sep 7 22:11:00 2014

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: RACDB (DBID=762767011)

RMAN> 

RMAN> list backup of database completed after 'trunc(sysdate)-1';

using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
24      Full    258.21M    DISK        00:06:12     07-SEP-14      
        BP Key: 24   Status: AVAILABLE  Compressed: YES  Tag: TAG20140907T220145
        Piece Name: +FRA/racdb/backupset/2014_09_07/nnndf0_tag20140907t220145_0.270.857685709
  List of Datafiles in backup set 24
  File LV Type Ckp SCN    Ckp Time  Name
  ---- -- ---- ---------- --------- ----
  1       Full 1160228    07-SEP-14 +DATA1/racdb/datafile/system.257.765499365
  2       Full 1160228    07-SEP-14 +DATA2/racdb/datafile/sysaux.256.765502307
  3       Full 1160228    07-SEP-14 +DATA1/racdb/datafile/undotbs1.259.765500033
  4       Full 1160228    07-SEP-14 +DATA2/racdb/datafile/undotbs2.257.765503281
  5       Full 1160228    07-SEP-14 +DATA1/racdb/datafile/users.261.765500215
  6       Full 1160228    07-SEP-14 +DATA1/racdb/datafile/partition_test.265.809628399
  7       Full 1160228    07-SEP-14 +DATA1/racdb/datafile/hemant_tbs.266.852139375
  8       Full 1160228    07-SEP-14 +DATA3/racdb/datafile/new_tbs.256.855792859

RMAN> 

Yes, today's backup is visible from node2 because the information is retrieved from the controlfile, which is common to all the instances of the database.

How are the archivelogs configured?

RMAN> exit


Recovery Manager complete.
-sh-3.2$
-sh-3.2$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Sun Sep 7 22:15:51 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     39
Next log sequence to archive   40
Current log sequence           40
SQL> 
SQL> show parameter db_recovery_file_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      +FRA
db_recovery_file_dest_size           big integer 4000M
SQL> 

Both instances have the same destination configured for archivelogs and backups.
.
.
.
=======================================================
UPDATE : 13-Sep-14 :  Running the backup concurrently from both nodes 

There are two ways to have the RMAN Backup run from both nodes.
A.   Issue a separate RMAN BACKUP DATAFILE or BACKUP TABLESPACE command from each node, such that the two nodes work on independent lists of Datafiles / Tablespaces

B.  Issue a BACKUP DATABASE command from one node but with two channels open, one against each node.

Here, method A is easy to do but difficult to maintain as you add Tablespaces and Datafiles.  So, I will demonstrate method B; a rough sketch of method A follows below.
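For completeness, method A might look something like this.  It is only a hedged sketch: the tablespace names are inferred from the datafile names listed earlier, and the split between the two nodes is arbitrary.

# run from node1, connected to instance RACDB_1
BACKUP AS COMPRESSED BACKUPSET TABLESPACE system, sysaux, undotbs1, users;

# run from node2, connected to instance RACDB_2
BACKUP AS COMPRESSED BACKUPSET TABLESPACE undotbs2, partition_test, hemant_tbs, new_tbs;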

For method B, I begin by ensuring that
a.  REMOTE_LOGIN_PASSWORDFILE is configured so that I can make a SQL*Net connection from node1 to node2  (RMAN requires connecting AS SYSDBA in 11g)
b.  I have a TNSNAMES.ORA entry for the instance on node2 (note that the service name is common across all [both] instances in the Cluster)

-sh-3.2$ hostname
node1.mydomain.com
-sh-3.2$ id
uid=800(oracle) gid=1001(oinstall) groups=1001(oinstall),1011(asmdba),1021(dba)
-sh-3.2$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Sat Sep 13 23:22:09 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> show parameter remote_login_passwordfile;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
remote_login_passwordfile            string      EXCLUSIVE
SQL> quit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
-sh-3.2$ cat $ORACLE_HOME/network/admin/tnsnames.ora
# tnsnames.ora.node1 Network Configuration File: /u01/app/oracle/rdbms/11.2.0/network/admin/tnsnames.ora.node1
# Generated by Oracle configuration tools.

RACDB_1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = RACDB)
    )
  )

RACDB_2 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = RACDB)
    )
  )

-sh-3.2$ 

Next, I start RMAN, allocate two Channels (one against the instance on each Node in the Cluster) and issue a BACKUP DATABASE that is automatically distributed across both Channels.

-sh-3.2$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Sat Sep 13 23:23:24 2014

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: RACDB (DBID=762767011)

RMAN> run
2> {allocate channel  ch1 device type disk connect 'sys/manager@RACDB_1';
3> allocate channel ch2 device type disk connect 'sys/manager@RACDB_2';
4> backup as compressed backupset database plus archivelog delete input;
5> }

using target database control file instead of recovery catalog
allocated channel: ch1
channel ch1: SID=61 instance=RACDB_1 device type=DISK

allocated channel: ch2
channel ch2: SID=61 instance=RACDB_2 device type=DISK


Starting backup at 13-SEP-14
current log archived
channel ch1: starting compressed archived log backup set
channel ch1: specifying archived log(s) in backup set
input archived log thread=2 sequence=40 RECID=82 STAMP=857687640
input archived log thread=1 sequence=114 RECID=84 STAMP=858204801
input archived log thread=2 sequence=41 RECID=83 STAMP=857687641
input archived log thread=1 sequence=115 RECID=86 STAMP=858208025
channel ch1: starting piece 1 at 13-SEP-14
channel ch2: starting compressed archived log backup set
channel ch2: specifying archived log(s) in backup set
input archived log thread=2 sequence=42 RECID=85 STAMP=858208000
input archived log thread=1 sequence=116 RECID=87 STAMP=858209078
input archived log thread=2 sequence=43 RECID=88 STAMP=858209079
channel ch2: starting piece 1 at 13-SEP-14
channel ch2: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/annnf0_tag20140913t232445_0.279.858209109 tag=TAG20140913T232445 comment=NONE
channel ch2: backup set complete, elapsed time: 00:00:26
channel ch2: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_2_seq_42.296.858207997 RECID=85 STAMP=858208000
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_1_seq_116.263.858209079 RECID=87 STAMP=858209078
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_2_seq_43.265.858209079 RECID=88 STAMP=858209079
channel ch1: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/annnf0_tag20140913t232445_0.275.858209099 tag=TAG20140913T232445 comment=NONE
channel ch1: backup set complete, elapsed time: 00:00:56
channel ch1: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_40.309.857687641 RECID=82 STAMP=857687640
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_1_seq_114.295.858204777 RECID=84 STAMP=858204801
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_41.293.857687641 RECID=83 STAMP=857687641
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_1_seq_115.305.858208001 RECID=86 STAMP=858208025
Finished backup at 13-SEP-14

Starting backup at 13-SEP-14
channel ch1: starting compressed full datafile backup set
channel ch1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA1/racdb/datafile/system.257.765499365
input datafile file number=00004 name=+DATA2/racdb/datafile/undotbs2.257.765503281
input datafile file number=00007 name=+DATA1/racdb/datafile/hemant_tbs.266.852139375
input datafile file number=00008 name=+DATA3/racdb/datafile/new_tbs.256.855792859
channel ch1: starting piece 1 at 13-SEP-14
channel ch2: starting compressed full datafile backup set
channel ch2: specifying datafile(s) in backup set
input datafile file number=00002 name=+DATA2/racdb/datafile/sysaux.256.765502307
input datafile file number=00003 name=+DATA1/racdb/datafile/undotbs1.259.765500033
input datafile file number=00006 name=+DATA1/racdb/datafile/partition_test.265.809628399
input datafile file number=00005 name=+DATA1/racdb/datafile/users.261.765500215
channel ch2: starting piece 1 at 13-SEP-14
channel ch2: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/nnndf0_tag20140913t232557_0.293.858209175 tag=TAG20140913T232557 comment=NONE
channel ch2: backup set complete, elapsed time: 00:12:02
channel ch1: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/nnndf0_tag20140913t232557_0.305.858209163 tag=TAG20140913T232557 comment=NONE
channel ch1: backup set complete, elapsed time: 00:13:06
Finished backup at 13-SEP-14

Starting backup at 13-SEP-14
current log archived
channel ch1: starting compressed archived log backup set
channel ch1: specifying archived log(s) in backup set
input archived log thread=1 sequence=117 RECID=90 STAMP=858209954
channel ch1: starting piece 1 at 13-SEP-14
channel ch2: starting compressed archived log backup set
channel ch2: specifying archived log(s) in backup set
input archived log thread=2 sequence=44 RECID=89 STAMP=858209952
channel ch2: starting piece 1 at 13-SEP-14
channel ch1: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/annnf0_tag20140913t233915_0.265.858209957 tag=TAG20140913T233915 comment=NONE
channel ch1: backup set complete, elapsed time: 00:00:03
channel ch1: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_1_seq_117.309.858209953 RECID=90 STAMP=858209954
channel ch2: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/annnf0_tag20140913t233915_0.263.858209957 tag=TAG20140913T233915 comment=NONE
channel ch2: backup set complete, elapsed time: 00:00:03
channel ch2: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_2_seq_44.295.858209951 RECID=89 STAMP=858209952
Finished backup at 13-SEP-14

Starting Control File and SPFILE Autobackup at 13-SEP-14
piece handle=+FRA/racdb/autobackup/2014_09_13/s_858209961.295.858209967 comment=NONE
Finished Control File and SPFILE Autobackup at 13-SEP-14
released channel: ch1
released channel: ch2

RMAN> 

We can see that Channel ch1 was connected to Instance RACDB_1 and ch2 to Instance RACDB_2, and the messages indicate that both channels were running concurrently.
I also verified that the Channels did connect to each instance :

[root@node1 ~]# ps -ef |grep RACDB_1 |grep LOCAL=NO
oracle   11205     1  1 23:24 ?        00:00:00 oracleRACDB_1 (LOCAL=NO)
You have new mail in /var/spool/mail/root
[root@node1 ~]# ps -ef  |grep RACDB_1 |grep LOCAL=NO
oracle   11205     1  3 23:24 ?        00:00:04 oracleRACDB_1 (LOCAL=NO)
[root@node1 ~]# ps -ef |grep RACDB_1 |grep LOCAL=NO
oracle   11205     1  4 23:24 ?        00:00:49 oracleRACDB_1 (LOCAL=NO)
[root@node1 ~]# 
[root@node2 ~]# ps -ef |grep RACDB_2 | grep LOCAL=NO
oracle    6233     1  0 23:24 ?        00:00:00 oracleRACDB_2 (LOCAL=NO)
You have new mail in /var/spool/mail/root
[root@node2 ~]# ps -ef |grep RACDB_2 |grep LOCAL=NO
oracle    6233     1  0 23:24 ?        00:00:00 oracleRACDB_2 (LOCAL=NO)
[root@node2 ~]# ps -ef |grep RACDB_2 |grep LOCAL=NO
oracle    6233     1  2 23:24 ?        00:00:24 oracleRACDB_2 (LOCAL=NO)
[root@node2 ~]# 

As soon as I closed the RMAN (client) session, the two server processes also terminated.

This method (Method B) allows me to run an RMAN client session from any node in the Cluster and have RMAN server sessions running concurrently across all (or some) nodes of the Cluster, even though I have not designated a single, specific node as my RMAN Backups node.

Edit : I have demonstrated using ALLOCATE CHANNEL to run an ad hoc, interactive backup.  If you want to create a persistent script, you might prefer to use CONFIGURE CHANNEL and have the SYS password persisted in the configuration (saved in the controlfile) so that it does not appear in "plain text" in the script.
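Such a persistent configuration might look something like this (a hedged sketch; the password is a placeholder and PARALLELISM 2 simply matches the two instances):

CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
CONFIGURE CHANNEL 1 DEVICE TYPE DISK CONNECT 'sys/your_password@RACDB_1';
CONFIGURE CHANNEL 2 DEVICE TYPE DISK CONNECT 'sys/your_password@RACDB_2';

# thereafter, a plain BACKUP command uses both channels automatically, e.g.
# BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG DELETE INPUT;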

.
.
.

24 August, 2014

ASM Commands : 2 -- Migrating a DiskGroup to New Disk(s)

In 11gR2 Grid Infrastructure and RAC

After the previous demonstration of adding a new DiskGroup, I now demonstrate migrating the DiskGroup to a new pair of disks.

First, I create a table in the Tablespace on that DiskGroup.

[root@node1 ~]# su - oracle
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 24 22:17:28 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name: hemant/hemant

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> create table new_tbs_tbl                        
  2  tablespace new_tbs
  3  as select * from dba_objects 
  4  /

Table created.

SQL> select segment_name, bytes/1048576
  2  from user_segments
  3  where tablespace_name = 'NEW_TBS'
  4  /

SEGMENT_NAME
--------------------------------------------------------------------------------
BYTES/1048576
-------------
NEW_TBS_TBL
            9


SQL> select file_name, bytes/1048576
  2  from dba_data_files
  3  where tablespace_name = 'NEW_TBS'
  4  /

FILE_NAME
--------------------------------------------------------------------------------
BYTES/1048576
-------------
+DATA3/racdb/datafile/new_tbs.256.855792859
          100


SQL> 


Next, I verify that the DiskGroup is currently on disk asmdisk.7 and that the two new disks to which I plan to migrate the DiskGroup are available as asmdisk.8 and asmdisk.9 (yes, unfortunately, they are on /fra instead of /data1 or /data2, because I have run out of disk space in /data1 and /data2).
This I do from node1 :

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
-sh-3.2$ exit
logout

[root@node1 ~]# 
[root@node1 ~]# su - grid
-sh-3.2$ sqlplus 

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 24 22:22:32 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> l
  1  select d.name, d.path
  2  from v$asm_disk d, v$asm_diskgroup g
  3  where d.group_number=g.group_number
  4* and g.name = 'DATA3'
SQL> /

NAME
------------------------------
PATH
--------------------------------------------------------------------------------
DATA3_0000
/data1/asmdisk.7


SQL> 
SQL> !sh
sh-3.2$ ls -l /fra/asmdisk*
-rwxrwxr-x 1 grid oinstall 1024000000 Aug 24 22:06 /fra/asmdisk.8
-rwxrwxr-x 1 grid oinstall 1024000000 Aug 24 22:07 /fra/asmdisk.9
sh-3.2$


Note how the ownership and permissions are set for the two new disks (see my previous post).

I now add the two new disks.

sh-3.2$ exit
exit

SQL> show parameter power

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_power_limit                      integer     1
SQL> alter diskgroup data3  add disk '/fra/asmdisk.8', '/fra/asmdisk.9';

Diskgroup altered.

SQL> 
SQL> select * from v$asm_operation;

GROUP_NUMBER OPERA STAT      POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE
------------ ----- ---- ---------- ---------- ---------- ---------- ----------
EST_MINUTES ERROR_CODE
----------- --------------------------------------------
           3 REBAL RUN           1          1          1        101         60
          1


SQL> 


With ASM_POWER_LIMIT set to 1, Oracle ASM automatically starts a REBALANCE operation at that power.  However, since I did *not* drop the existing asmdisk.7, Oracle still continues to use it.
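If the default power is too slow, the rebalance can be nudged along explicitly.  This is a minimal sketch run as SYSASM; POWER 4 is only an illustrative value, and a higher power means more I/O on the system.

-- raise the default for future operations on this ASM instance
ALTER SYSTEM SET asm_power_limit = 4;

-- or drive the current rebalance of this diskgroup at a higher power
ALTER DISKGROUP data3 REBALANCE POWER 4;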

After a while, I confirm that the REBALANCE has completed.  I can now drop asmdisk.7.  Unfortunately, this will trigger a new REBALANCE!

SQL> l
  1* select * from v$asm_operation
SQL> /

no rows selected

SQL> 
SQL> l
  1  select d.name, d.path
  2  from v$asm_disk d, v$asm_diskgroup g
  3  where d.group_number=g.group_number
  4* and g.name = 'DATA3'
SQL> /

NAME
------------------------------
PATH
--------------------------------------------------------------------------------
DATA3_0000
/data1/asmdisk.7

DATA3_0002
/fra/asmdisk.9

DATA3_0001
/fra/asmdisk.8


SQL> 
SQL> alter diskgroup data3 drop disk '/data1/asmdisk.7';
alter diskgroup data3 drop disk '/data1/asmdisk.7'
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15054: disk "/DATA1/ASMDISK.7" does not exist in diskgroup "DATA3"


SQL> alter diskgroup data3 drop disk 'DATA3_0000';

Diskgroup altered.

SQL> 
SQL> select * from v$asm_operation;

GROUP_NUMBER OPERA STAT      POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE
------------ ----- ---- ---------- ---------- ---------- ---------- ----------
EST_MINUTES ERROR_CODE
----------- --------------------------------------------
           3 REBAL RUN           1          1          2        102        120
          0


SQL> 
SQL> l
  1* select * from v$asm_operation
SQL> 
SQL> /

GROUP_NUMBER OPERA STAT      POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE
------------ ----- ---- ---------- ---------- ---------- ---------- ----------
EST_MINUTES ERROR_CODE
----------- --------------------------------------------
           3 REBAL RUN           1          1         47        101         95
          0


SQL> /

no rows selected

SQL> 


NOTE : Note how I must specify the Disk NAME (not the PATH) for the DROP.  When I added disks asmdisk.8 and asmdisk.9, I could have given them meaningful names as well; since I did not, Oracle named them automatically.

Ideally, what I should have done is to use the ADD and DROP clauses in a single command.  That way, only a single-pass REBALANCE would have been required.
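Something along these lines would have migrated the DiskGroup in one rebalance (a hedged sketch using the disk names shown above; POWER 4 is only an illustrative value):

-- add the new disks and drop the old one in a single operation,
-- so ASM relocates the data only once
ALTER DISKGROUP data3
  ADD  DISK '/fra/asmdisk.8', '/fra/asmdisk.9'
  DROP DISK DATA3_0000
  REBALANCE POWER 4;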

After a while, I run my validation queries on node2.


[root@node2 ~]# su - grid
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 24 22:42:39 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select d.name, d.path
    from v$asm_disk d, v$asm_diskgroup g
    where d.group_number=g.group_number
   and g.name = 'DATA3'  2    3    4  
  5  
SQL> l
  1  select d.name, d.path
  2      from v$asm_disk d, v$asm_diskgroup g
  3      where d.group_number=g.group_number
  4*    and g.name = 'DATA3'
SQL> /

NAME
------------------------------
PATH
--------------------------------------------------------------------------------
DATA3_0002
/fra/asmdisk.9

DATA3_0001
/fra/asmdisk.8


SQL> 
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options
-sh-3.2$ exit
logout

[root@node2 ~]# su - oracle
-sh-3.2$ 
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 24 22:44:10 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name: hemant/hemant

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select count(*) from new_tbs_tbl;

  COUNT(*)
----------
     72460

SQL> 


I have now accessed the table, tablespace, diskgroup and disks from node2 successfully. Disk asmdisk.7 is no longer part of the DiskGroup.

I can physically remove disk asmdisk.7 from the storage.


SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options
-sh-3.2$ exit
logout

[root@node1 ~]# cd /data1
[root@node1 data1]# ls -l asmdisk.7
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 24 22:39 asmdisk.7
[root@node1 data1]# rm asmdisk.7
rm: remove regular file `asmdisk.7'? y
[root@node1 data1]# 
[root@node1 data1]# su - grid
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 24 22:50:18 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> set pages 60
SQL> col name format a15
SQL> col path format a20
SQL> select group_number, name, path
  2  from v$asm_disk
  3  order by 1,2;

GROUP_NUMBER NAME            PATH
------------ --------------- --------------------
           0                 /crs/voting.disk
           0                 /data1/votedisk.1
           0                 /data2/votedisk.2
           0                 /fra/votedisk.3
           1 DATA1_0000      /data1/asmdisk.1
           1 DATA1_0001      /data2/asmdisk.4
           2 DATA2_0000      /data1/asmdisk.2
           2 DATA2_0001      /data2/asmdisk.5
           2 DATA2_0002      /data2/asmdisk.6
           3 DATA3_0001      /fra/asmdisk.8
           3 DATA3_0002      /fra/asmdisk.9
           4 DATA_0000       /crs/ocr.configurati
                             on

           5 FRA_0000        /fra/fradisk.3
           5 FRA_0001        /fra/fradisk.2
           5 FRA_0002        /fra/fradisk.1
           5 FRA_0003        /fra/fradisk.4

16 rows selected.

SQL> 

The disk asmdisk.7 is no longer part of the storage. (Remember : All my disks here are on NFS).
.
.
.
