28 May, 2009

Backup Online Redo Logs ? (AGAIN ?!)

The question about whether Online Redo Logs should be backed up keeps coming up.
Those of us who've worked with Oracle since before ARCHIVELOG was introduced do know about Cold Backups -- backup *everything* that is part of the database !

This question came up again on a forums thread, and I replied here.

My previous post on this issue, with more arguments, is here.

.
.
.

26 May, 2009

Index Block Splits and REBUILD

In the course of my previous post on 50-50 Index Block Splits, I added the lines "You can see that, with 50-50 Block Splits, the Index Leaf Blocks may not be as tightly packed as expected. (Rebuilding the Index would help !)"

In the "modern" world of Oracle Experts where it has become fashionable to say "Indexes don't need to be rebuilt" or "Indexes rarely need to be rebuilt", I tend to disagree. I find that it makes sense to rebuild indexes when there is sufficient reason to do so ("circular reasoning" ?).

I take exactly the same case as in my previous example on 50-50 Index Block Splits, but add an ALTER INDEX REBUILD at the end.

Here goes .... (my reputation with the "Experts" ?)


SQL> drop table demo_ibs_50_50 purge;

Table dropped.

SQL>
SQL> -- REM create the demo table
SQL> create table demo_ibs_50_50 (
2 employee_id number not null,
3 country_name varchar2(10) not null,
4 dept_name varchar2(18) not null,
5 employee_name varchar2(128) not null,
6 join_date date)
7 /

Table created.

SQL>
SQL> -- create a non unique index
SQL> create index demo_ibs_50_50_n1 on demo_ibs_50_50 (country_name,dept_name,employee_name) pctfree 1;

Index created.

SQL>
SQL> delete source_table;

50659 rows deleted.

SQL> insert into source_table select * from dba_objects where object_id is not null;

50659 rows created.

SQL> select max(object_id) from source_table;

MAX(OBJECT_ID)
--------------
53270

SQL>
SQL>
SQL> -- REM create a new session and run an insert
SQL> -- then check the statistics for the insert
SQL> connect hemant/hemant
Connected.
SQL> insert into demo_ibs_50_50
2 select object_id, substr(owner,1,10),substr(object_type,1,18),rpad(object_name,20,dbms_random.string('X',3)),created
3 from source_table
4 where object_id is not null
5 order by object_id
6 /

50659 rows created.

SQL> commit;

Commit complete.

SQL>
SQL>
SQL>
SQL> -- REM run another set of inserts
SQL> -- REM we now simulate a Row-By-Row Insert that would happen for creating new Employees
SQL> connect hemant/hemant
Connected.
SQL>
SQL> declare
2 i number;
3
4 begin
5 for i in 1..1000
6 loop
7 insert into demo_ibs_50_50
8 select object_id+100000+i,substr(owner,1,10),substr(object_type,1,18),rpad(object_name,20,dbms_random.string('X',3)),created+vsize(object_name)
9 from source_table
10 where object_id is not null
11 and object_id = 1000+i;
12 commit;
13 end loop;
14 end;
15 /

PL/SQL procedure successfully completed.

SQL>
SQL>
SQL>
SQL> -- REM run another set of inserts
SQL> -- REM we now run a bulk insert again !
SQL> connect hemant/hemant
Connected.
SQL>
SQL> insert into demo_ibs_50_50
2 select object_id+200000,substr(owner,1,10),substr(object_type,1,18),rpad(object_name,20,dbms_random.string('X',3)),created+vsize(object_name)
3 from source_table
4 where object_id is not null
5 and object_id between 1000 and 2000
6 /

1001 rows created.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats('','DEMO_IBS_50_50',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL> select num_rows, distinct_keys, blevel, leaf_blocks, distinct_keys/leaf_blocks
2 from user_indexes where index_name = 'DEMO_IBS_50_50_N1'
3 /

NUM_ROWS DISTINCT_KEYS BLEVEL LEAF_BLOCKS DISTINCT_KEYS/LEAF_BLOCKS
---------- ------------- ---------- ----------- -------------------------
52660 46285 2 446 103.778027

SQL> analyze index demo_ibs_50_50_n1 validate structure;

Index analyzed.

SQL> select lf_rows, lf_blks, br_blks, del_lf_rows, pct_used from index_stats;

LF_ROWS LF_BLKS BR_BLKS DEL_LF_ROWS PCT_USED
---------- ---------- ---------- ----------- ----------
52660 446 3 0 69

SQL>
SQL>
SQL> REM REM REM ###############################
SQL> REM Here's the additional step from the previous DEMO_IBS_50_50
SQL> REM What happens to the index size and pct_used if I rebuild it now ?
SQL>
SQL> alter index demo_ibs_50_50_n1 rebuild;

Index altered.

SQL>
SQL> exec dbms_stats.gather_table_stats('','DEMO_IBS_50_50',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL> select num_rows, distinct_keys, blevel, leaf_blocks, distinct_keys/leaf_blocks
2 from user_indexes where index_name = 'DEMO_IBS_50_50_N1'
3 /

NUM_ROWS DISTINCT_KEYS BLEVEL LEAF_BLOCKS DISTINCT_KEYS/LEAF_BLOCKS
---------- ------------- ---------- ----------- -------------------------
52660 46285 2 309 149.789644

SQL> analyze index demo_ibs_50_50_n1 validate structure;

Index analyzed.

SQL> select lf_rows, lf_blks, br_blks, del_lf_rows, pct_used from index_stats;

LF_ROWS LF_BLKS BR_BLKS DEL_LF_ROWS PCT_USED
---------- ---------- ---------- ----------- ----------
52660 309 3 0 98

SQL>



The Index "shrunk" from 446 Leaf Blocks (69 % used) to 309 Leaf Blocks (98% used) ! Surely, I will "suffer" Index Block Splits for subsequent inserts.

But stop to think. What if this had shrunk from 4 million Leaf Blocks to 3 million Leaf Blocks ? What if I had a frequently used / popular / expensive query that did Index Fast Full Scans on this Index ? Remember that I haven't shrunk the table, only the Index. Wouldn't an Index Fast Full Scan of 3 million Blocks be a "wee bit" faster than a Scan of 4 million Blocks ?

What if the index had shrunk from 40,000 Leaf Blocks to 30,000 Leaf Blocks ? What is the likelihood of 100% of the Index Blocks now fitting into my SGA ? Isn't it more likely now ?
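The 446 → 309 shrinkage is consistent with simple fill-factor arithmetic : the same key data packed at 98% instead of 69% needs roughly 69/98 of the blocks. Here is a rough check in Python (a simplified model of my own that ignores per-block overhead, so it will not match INDEX_STATS exactly) :

```python
def estimate_rebuilt_blocks(leaf_blocks, pct_used_before, pct_used_after):
    """Scale the leaf block count by the change in fill percentage."""
    return round(leaf_blocks * pct_used_before / pct_used_after)

# Figures from the demo: 446 leaf blocks at 69% used, rebuilt to 98% used.
print(estimate_rebuilt_blocks(446, 69, 98))  # ~314, close to the observed 309
```

The small difference from the observed 309 comes from block-level overhead and rounding that this sketch ignores.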

Also see "Rebuilding Indexes" and "Rebuilding Indexes - When and Why ?", my previous posts in March 2008.

.
.
.

23 May, 2009

Index Block Splits : 50-50

UPDATE 20-Feb-11 : I have also covered the case of a REVERSE KEY Index for a unique index on a monotonically increasing sequence.

To follow up on my previous posting on "90-10" Index Block Splits, which happen for indexes on monotonically increasing values, here is a demonstration of "50-50" Index Block Splits.
In this case, new Key Values are to be inserted "within" the existing Key Values (as the COUNTRY_NAME, DEPARTMENT_NAME etc. are within the same "universe" of values).
(For an explanation of how Indexes are "logically ordered" structures and why the Key Value for a new row can only go into one specific block and, furthermore, how and why this can cause Leaf Block Splits, please see my previous posting).

Rather than walking through this demo, I'll just post the SQLs and results. You can see that, with 50-50 Block Splits, the Index Leaf Blocks may not be as tightly packed as expected. (Rebuilding the Index would help !)
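Why aren't the Leaf Blocks tightly packed after 50-50 splits ? A toy simulation (my own simplified sketch -- sorted lists standing in for leaf blocks, no branch blocks, not Oracle's actual code) shows the effect : with keys arriving in random order, leaf blocks settle at roughly two-thirds full, much like the PCT_USED of 68-69 reported by INDEX_STATS below.

```python
import bisect
import random

# Toy model of index leaf blocks: each block is a sorted list holding at most
# CAPACITY keys; inserting into a full block performs a 50-50 split. This is
# a simplified sketch meant only to illustrate the packing behaviour.

CAPACITY = 50

def insert(blocks, key):
    """Insert key into the one leaf block that covers its range."""
    i = 0
    while i < len(blocks) - 1 and key > blocks[i + 1][0]:
        i += 1
    bisect.insort(blocks[i], key)
    if len(blocks[i]) > CAPACITY:            # overflow: 50-50 split
        half = len(blocks[i]) // 2
        blocks[i:i + 1] = [blocks[i][:half], blocks[i][half:]]

random.seed(42)
blocks = [[]]
for _ in range(20000):
    insert(blocks, random.random())          # keys arrive in "random" order

avg_fill = sum(len(b) for b in blocks) / (len(blocks) * CAPACITY)
print(f"{len(blocks)} leaf blocks, on average {avg_fill:.0%} full")
```

The roughly 69% average fill is the classic steady state for random-order inserts into a B-Tree (ln 2 ≈ 0.69), which is why the demo's 50-50-split index sits at PCT_USED 68-69 rather than near 100.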


SQL> drop table demo_ibs_50_50 purge;

Table dropped.

SQL>
SQL> -- REM create the demo table
SQL> create table demo_ibs_50_50 (
2 employee_id number not null,
3 country_name varchar2(10) not null,
4 dept_name varchar2(18) not null,
5 employee_name varchar2(128) not null,
6 join_date date)
7 /

Table created.

SQL>
SQL> -- create a non unique index
SQL> create index demo_ibs_50_50_n1 on demo_ibs_50_50 (country_name,dept_name,employee_name) pctfree 1;

Index created.

SQL>
SQL> delete source_table;

50656 rows deleted.

SQL> insert into source_table select * from dba_objects where object_id is not null;

50656 rows created.

SQL> select max(object_id) from source_table;

MAX(OBJECT_ID)
--------------
53253

SQL>
SQL>
SQL> -- REM create a new session and run an insert
SQL> -- then check the statistics for the insert
SQL> connect hemant/hemant
Connected.
SQL> insert into demo_ibs_50_50
2 select object_id, substr(owner,1,10),substr(object_type,1,18),rpad(object_name,20,dbms_random.string('X',3)),created
3 from source_table
4 where object_id is not null
5 order by object_id
6 /

50656 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> select sn.name, ms.value
2 from v$statname sn, v$mystat ms
3 where sn.statistic#=ms.statistic#
4 and sn.name like '%leaf node %'
5 order by 1
6 /

NAME VALUE
---------------------------------------------------------------- ----------
leaf node 90-10 splits 0
leaf node splits 432

SQL>
SQL> exec dbms_stats.gather_table_stats('','DEMO_IBS_50_50',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL> select num_rows, distinct_keys, blevel, leaf_blocks, distinct_keys/leaf_blocks
2 from user_indexes where index_name = 'DEMO_IBS_50_50_N1'
3 /

NUM_ROWS DISTINCT_KEYS BLEVEL LEAF_BLOCKS DISTINCT_KEYS/LEAF_BLOCKS
---------- ------------- ---------- ----------- -------------------------
50656 44873 2 433 103.632794

SQL> analyze index demo_ibs_50_50_n1 validate structure;

Index analyzed.

SQL> select lf_rows, lf_blks, br_blks, del_lf_rows, pct_used from index_stats;

LF_ROWS LF_BLKS BR_BLKS DEL_LF_ROWS PCT_USED
---------- ---------- ---------- ----------- ----------
50656 433 3 0 68

SQL>
SQL> REM #################
SQL> REM OBSERVATION !
SQL> REM The first set of block creations are all from Block Splits.
SQL> REM #################
SQL>
SQL>
SQL>
SQL> -- REM run another set of inserts
SQL> -- REM we now simulate a Row-By-Row Insert that would happen for creating new Employees
SQL> connect hemant/hemant
Connected.
SQL>
SQL> declare
2 i number;
3
4 begin
5 for i in 1..1000
6 loop
7 insert into demo_ibs_50_50
8 select object_id+100000+i,substr(owner,1,10),substr(object_type,1,18),rpad(object_name,20,dbms_random.string('X',3)),created+vsize(object_name)
9 from source_table
10 where object_id is not null
11 and object_id = 1000+i;
12 commit;
13 end loop;
14 end;
15 /

PL/SQL procedure successfully completed.

SQL>
SQL> select sn.name, ms.value
2 from v$statname sn, v$mystat ms
3 where sn.statistic#=ms.statistic#
4 and sn.name like '%leaf node %'
5 order by 1
6 /

NAME VALUE
---------------------------------------------------------------- ----------
leaf node 90-10 splits 0
leaf node splits 5

SQL>
SQL> exec dbms_stats.gather_table_stats('','DEMO_IBS_50_50',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL> select num_rows, distinct_keys, blevel, leaf_blocks, distinct_keys/leaf_blocks
2 from user_indexes where index_name = 'DEMO_IBS_50_50_N1'
3 /

NUM_ROWS DISTINCT_KEYS BLEVEL LEAF_BLOCKS DISTINCT_KEYS/LEAF_BLOCKS
---------- ------------- ---------- ----------- -------------------------
51656 45578 2 438 104.059361

SQL> analyze index demo_ibs_50_50_n1 validate structure;

Index analyzed.

SQL> select lf_rows, lf_blks, br_blks, del_lf_rows, pct_used from index_stats;

LF_ROWS LF_BLKS BR_BLKS DEL_LF_ROWS PCT_USED
---------- ---------- ---------- ----------- ----------
51656 438 3 0 69

SQL>
SQL>
SQL>
SQL> -- REM run another set of inserts
SQL> -- REM we now run a bulk insert again !
SQL> connect hemant/hemant
Connected.
SQL>
SQL> insert into demo_ibs_50_50
2 select object_id+200000,substr(owner,1,10),substr(object_type,1,18),rpad(object_name,20,dbms_random.string('X',3)),created+vsize(object_name)
3 from source_table
4 where object_id is not null
5 and object_id between 1000 and 2000
6 /

1001 rows created.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> select sn.name, ms.value
2 from v$statname sn, v$mystat ms
3 where sn.statistic#=ms.statistic#
4 and sn.name like '%leaf node %'
5 order by 1
6 /

NAME VALUE
---------------------------------------------------------------- ----------
leaf node 90-10 splits 0
leaf node splits 9

SQL>
SQL> exec dbms_stats.gather_table_stats('','DEMO_IBS_50_50',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL> select num_rows, distinct_keys, blevel, leaf_blocks, distinct_keys/leaf_blocks
2 from user_indexes where index_name = 'DEMO_IBS_50_50_N1'
3 /

NUM_ROWS DISTINCT_KEYS BLEVEL LEAF_BLOCKS DISTINCT_KEYS/LEAF_BLOCKS
---------- ------------- ---------- ----------- -------------------------
52657 46282 2 447 103.53915

SQL> analyze index demo_ibs_50_50_n1 validate structure;

Index analyzed.

SQL> select lf_rows, lf_blks, br_blks, del_lf_rows, pct_used from index_stats;

LF_ROWS LF_BLKS BR_BLKS DEL_LF_ROWS PCT_USED
---------- ---------- ---------- ----------- ----------
52657 447 3 0 68

SQL>


Thus, new Index Leaf Blocks are allocated through Block Splits from existing blocks.

.
.
.

18 May, 2009

Database Recovery with new datafile not present in the controlfile

A recent forums.oracle.com posting was about Database Recovery when a datafile had been added to the database but wasn't present in the controlfile. I put together a demo of the same scenario, showing how the ALTER DATABASE CREATE DATAFILE command is to be used.
(Another complication in that scenario was that the CURRENT Redo Log, while still available as an Online Redo Log, hadn't yet been archived. So I queried V$LOG and V$LOGFILE -- in the MOUNT state -- to identify the Redo Log and applied it in the RECOVERY).

Thus, the steps, errors and resolutions were :

1. Cold Backup of database.
2. Startup and CREATE TABLESPACE.
3. Shutdown.
4. Restore only controlfile.
5. Startup OPEN fails on finding that datafiles are newer than the controlfile. (Oracle always expects the controlfile to be "current")
6. RECOVER fails because the database files are "newer" than the last log file -- Oracle expects Log Sequence#44 to match the datafile headers.
7. I identify that #44 is an Online Redo Log (ie was the CURRENT one, not yet Archived)
8. Applying #44 also fails because Oracle finds that there is a mismatch in the physical structure as recorded by the controlfile and the database -- one new datafile is present.
9. I use ALTER DATABASE CREATE DATAFILE to "add" the new datafile to the controlfile
10. RECOVER succeeds this time.
11. I must OPEN RESETLOGS as I've used a 'backup controlfile'


ora10204>sqlplus '/ as sysdba'

SQL*Plus: Release 10.2.0.4.0 - Production on Mon May 18 22:31:24 2009

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> shutdown
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ora10204>ls
DUPDB ORT21FS ORT24FS ORT24FS_coldbackup
ora10204>pwd
/oracle_fs/Databases
ora10204>cp -rp ORT24FS/* ORT24FS_coldbackup/
ora10204>
ora10204>sqlplus '/ as sysdba'

SQL*Plus: Release 10.2.0.4.0 - Production on Mon May 18 22:35:27 2009

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 880803840 bytes
Fixed Size 2087992 bytes
Variable Size 192938952 bytes
Database Buffers 679477248 bytes
Redo Buffers 6299648 bytes
Database mounted.
Database opened.
SQL> create tablespace test_a_tbs datafile '/oracle_fs/Databases/ORT24FS/test_a_tbs.dbf' size 100M;

Tablespace created.

SQL> shutdown
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ora10204>
ora10204>pwd
/oracle_fs/Databases
ora10204>cp ORT24FS_coldbackup/*ctl ORT24FS/
ora10204>sqlplus '/ as sysdba'

SQL*Plus: Release 10.2.0.4.0 - Production on Mon May 18 22:38:20 2009

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 880803840 bytes
Fixed Size 2087992 bytes
Variable Size 192938952 bytes
Database Buffers 679477248 bytes
Redo Buffers 6299648 bytes
Database mounted.
ORA-01122: database file 1 failed verification check
ORA-01110: data file 1: '/oracle_fs/Databases/ORT24FS/system01.dbf'
ORA-01207: file is more recent than control file - old control file


SQL> recover database using backup controlfile;
ORA-00279: change 936870 generated at 05/18/2009 22:28:28 needed for thread 1
ORA-00289: suggestion : /oracle_fs/ArchiveLogs/ORT24FS/1_44_685060711.dbf
ORA-00280: change 936870 for thread 1 is in sequence #44


Specify log: {=suggested | filename | AUTO | CANCEL}
CANCEL
Media recovery cancelled.
SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01113: file 1 needs media recovery
ORA-01110: data file 1: '/oracle_fs/Databases/ORT24FS/system01.dbf'

SQL> col member format a45
SQL> select l.sequence#, f.group#, f.member, l.status
2 from v$logfile f, v$log l
3 where f.group#=l.group#
4 order by f.group#, f.member
5 /

Seq# Grp MEMBER Status
------- ---- --------------------------------------------- ---------
43 1 /oracle_fs/Databases/ORT24FS/redo01.dbf INACTIVE
44 2 /oracle_fs/Databases/ORT24FS/redo02.dbf CURRENT
42 3 /oracle_fs/Databases/ORT24FS/redo03.dbf INACTIVE

SQL>
SQL> recover database using backup controlfile;
ORA-00279: change 936870 generated at 05/18/2009 22:28:28 needed for thread 1
ORA-00289: suggestion : /oracle_fs/ArchiveLogs/ORT24FS/1_44_685060711.dbf
ORA-00280: change 936870 for thread 1 is in sequence #44


Specify log: {=suggested | filename | AUTO | CANCEL}
/oracle_fs/Databases/ORT24FS/redo02.dbf
ORA-00283: recovery session canceled due to errors
ORA-01244: unnamed datafile(s) added to control file by media recovery
ORA-01110: data file 6: '/oracle_fs/Databases/ORT24FS/test_a_tbs.dbf'


ORA-01112: media recovery not started


SQL> alter database create datafile 6 as '/oracle_fs/Databases/ORT24FS/test_a_tbs.dbf';

Database altered.

SQL>
SQL> recover database using backup controlfile;
ORA-00279: change 937136 generated at 05/18/2009 22:36:23 needed for thread 1
ORA-00289: suggestion : /oracle_fs/ArchiveLogs/ORT24FS/1_44_685060711.dbf
ORA-00280: change 937136 for thread 1 is in sequence #44


Specify log: {=suggested | filename | AUTO | CANCEL}
/oracle_fs/Databases/ORT24FS/redo02.dbf
Log applied.
Media recovery complete.
SQL> alter database open resetlogs;

Database altered.

SQL>


The ALTER DATABASE CREATE DATAFILE command is useful. It can be used to bring the controlfile back in sync with the physical structure of the database -- which is typically needed in Standby Database scenarios where the Primary has added a new Datafile but it isn't available on the Standby (the standby controlfile isn't aware of it !).

.
.
.

17 May, 2009

Index Block Splits : 90-10

UPDATE 20-Feb-11 : I have also covered the case of a REVERSE KEY Index for the same data set.

An Index is a logically ordered structure. Unlike a Heap Organised Table (where a new row can, in theory, be inserted into any block that has free space), an Index can handle Inserts only by allowing each new Key Value to be inserted into the correct (leaf) block (and the correct position in that block).
{Note : Updates to Key Values are handled by "deleting" the old values and inserting the new values into the correct index leaf blocks appropriate for the new values}.

Let's say that we have an Index Key that is so long that only 4 values can fit into an Index Block. Let's further assume that the current Index (Leaf) Blocks contain these values :


Block 1 :
Abacus
Carrot

Block 2 :
Dandelion
Elephant
Heat
Iridium

Block 3 :
Lamb
Militant
Tight


If we need to insert the new Key Value "Fear", then Oracle has to :
a. identify the correct Leaf Block (which is Block 2)
b. verify if the new value can be inserted into the block (between "Elephant" and "Heat")
c. insert the value if it can OR split the block into two blocks.

Since Oracle will have to split the block, it splits Block 2 into two Leaf Blocks (2 and 3), and the erstwhile Block 3 becomes Block 4 :


Block 1 :
Abacus
Carrot

Block 2 :
Dandelion
Elephant

Block 3 :
Fear
Heat
Iridium

Block 4 :
Lamb
Militant
Tight


(Oracle actually maintains pointers between the Leaf Blocks, so when it split Block 2, it had to update pointers to and from Blocks 2, 3 and 4).

This is a "normal" Leaf Block split, also known as a "50-50 split".
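The walk-through above can be expressed directly in code. Below is a small sketch of my own (a plain list of sorted lists standing in for Leaf Blocks of capacity 4 -- no branch blocks or inter-block pointers) that reproduces the 50-50 split of Block 2 :

```python
CAPACITY = 4  # as in the example: only 4 Key Values fit into a block

blocks = [
    ["Abacus", "Carrot"],
    ["Dandelion", "Elephant", "Heat", "Iridium"],
    ["Lamb", "Militant", "Tight"],
]

def insert_key(blocks, key):
    # a. identify the correct Leaf Block for this key
    i = 0
    while i < len(blocks) - 1 and key > blocks[i + 1][0]:
        i += 1
    target = blocks[i]
    # b. verify whether the new value fits into that block
    if len(target) < CAPACITY:
        target.append(key)
        target.sort()
    else:
        # c. it doesn't: 50-50 split the block into two,
        #    distributing the keys (including the new one) across them
        combined = sorted(target + [key])
        mid = len(combined) // 2
        blocks[i:i + 1] = [combined[:mid], combined[mid:]]

insert_key(blocks, "Fear")
for n, blk in enumerate(blocks, 1):
    print(f"Block {n} : {', '.join(blk)}")
# Prints the four blocks exactly as listed above:
# "Fear" and "Heat"/"Iridium" end up in the new Block 3.
```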

However, a 90-10 split occurs when the new value is in the "right most" (ie the highest) Leaf Block. This is what occurs in an index on a "monotonically increasing" Key -- eg a Sequence (TRANSACTION_ID, JOURNAL_ID, ENTRY_ID etc) or a Date (TRANSACTION_DATE, JOURNAL_DATE, POSTING_DATE) where every new value inserted is larger than the previous value.
In such a case, when Oracle splits the Leaf Block, it only copies the "right-most" (ie highest) value to the new Leaf Block.

The number of Index Leaf Block Splits has been available as "leaf node splits" in V$SYSSTAT / V$SESSTAT. The statistic "leaf node 90-10 splits" was introduced only recently -- in 10g, but not in 9i. [UPDATE : 01-Dec-09 : Liu GaoYuan, in his comment, mentions that he does see this statistic in his 9.2.0.4 environment] However, even the 10.2 Reference Manual section on Statistics doesn't explain "leaf node 90-10 splits", only "leaf node splits".

When you see both statistics, remember that "leaf node 90-10 splits" is a subset of "leaf node splits". That is, "leaf node splits" accounts for both "50-50" and "90-10" splits.
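The relationship between the two statistics can be illustrated by extending a toy leaf-block model (again my own sketch, not Oracle's algorithm) with the 90-10 rule : when the new key is higher than every existing key and the right-most block is full, only the new key goes into a fresh block. Counting both kinds of split shows why one statistic is a subset of the other :

```python
import random

CAPACITY = 100  # keys per leaf block in this toy model

def insert(blocks, key, stats):
    """Insert key; on overflow do a 90-10 or 50-50 split and count it."""
    i = 0
    while i < len(blocks) - 1 and key > blocks[i + 1][0]:
        i += 1
    target = blocks[i]
    if len(target) < CAPACITY:
        target.append(key)
        target.sort()
        return
    if i == len(blocks) - 1 and key > target[-1]:
        # new highest key into the full right-most block: 90-10 split --
        # only the new key is placed in the fresh block
        blocks.append([key])
        stats["leaf node 90-10 splits"] += 1
    else:
        # ordinary 50-50 split
        combined = sorted(target + [key])
        mid = len(combined) // 2
        blocks[i:i + 1] = [combined[:mid], combined[mid:]]
    stats["leaf node splits"] += 1   # counts BOTH kinds of split

# Monotonically increasing keys: every split is a 90-10 split
stats = {"leaf node splits": 0, "leaf node 90-10 splits": 0}
blocks = [[]]
for key in range(10000):
    insert(blocks, key, stats)
print(stats)    # both counters equal; the blocks stay fully packed

# Random-order keys: mostly 50-50 splits
random.seed(7)
stats2 = {"leaf node splits": 0, "leaf node 90-10 splits": 0}
blocks2 = [[]]
for key in random.sample(range(10000), 10000):
    insert(blocks2, key, stats2)
print(stats2)   # 90-10 splits are only a small subset here
```

This mirrors what the demos below show : the 90-10 index reports equal values for both statistics and PCT_USED of 100, while the 50-50 index reports zero 90-10 splits and much looser packing.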


In the example below, I create a demo table "demo_ibs_90_10" which could be an EMPLOYEE table. There's a Unique Index on employee_id + country_name + employee_name. I've deliberately inserted fairly long employee names to enlarge the size of the Key so that not too many Key Values fit into one Leaf Block.

SQL> drop table demo_ibs_90_10 purge;

Table dropped.

SQL>
SQL> -- REM create the demo table
SQL> create table demo_ibs_90_10 (
2 employee_id number not null,
3 country_name varchar2(10) not null,
4 dept_name varchar2(18) not null,
5 employee_name varchar2(128) not null,
6 join_date date)
7 /

Table created.

SQL>
SQL> -- create the index with a PCTFREE of 1 to pack it tightly
SQL> create unique index demo_ibs_90_10_u1 on demo_ibs_90_10 (employee_id,country_name,employee_name) pctfree 1;

Index created.

SQL>



In the first Batched Insert of 50,653 rows, the Index "grows" to 263 Blocks, incurring 262 "90-10" splits along the way. Approximately 193 Key Values fit into each Leaf Block.


SQL> delete source_table;

50653 rows deleted.

SQL> insert into source_table select * from dba_objects where object_id is not null;

50653 rows created.

SQL> select max(object_id) from source_table;

MAX(OBJECT_ID)
--------------
53214

SQL>
SQL>
SQL> -- REM create a new session and run an insert
SQL> -- then check the statistics for the insert
SQL> connect hemant/hemant
Connected.
SQL> insert into demo_ibs_90_10
2 select object_id, substr(owner,1,10),substr(object_type,1,18),rpad(object_name,20,dbms_random.string('X',3)),created
3 from source_table
4 where object_id is not null
5 order by object_id
6 /

50653 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> select sn.name, ms.value
2 from v$statname sn, v$mystat ms
3 where sn.statistic#=ms.statistic#
4 and sn.name like '%leaf node %'
5 order by 1
6 /

NAME VALUE
---------------------------------------------------------------- ----------
leaf node 90-10 splits 262
leaf node splits 262

SQL>
SQL> exec dbms_stats.gather_table_stats('','DEMO_IBS_90_10',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL> select num_rows, distinct_keys, blevel, leaf_blocks, distinct_keys/leaf_blocks
2 from user_indexes where index_name = 'DEMO_IBS_90_10_U1'
3 /

NUM_ROWS DISTINCT_KEYS BLEVEL LEAF_BLOCKS DISTINCT_KEYS/LEAF_BLOCKS
---------- ------------- ---------- ----------- -------------------------
50653 50653 1 263 192.596958

SQL> analyze index demo_ibs_90_10_u1 validate structure;

Index analyzed.

SQL> select lf_rows, lf_blks, br_blks, del_lf_rows, pct_used from index_stats;

LF_ROWS LF_BLKS BR_BLKS DEL_LF_ROWS PCT_USED
---------- ---------- ---------- ----------- ----------
50653 263 1 0 100

SQL>
SQL> REM #################
SQL> REM OBSERVATION !
SQL> REM The first set of block creations are all from Block Splits.
SQL> REM And all of these are 90-10 Splits !
SQL> REM #################
SQL>



In the second round, I insert 1 row in each of 1,000 transactions. This creates 5 new Leaf Blocks, all from 90-10 splits. (Remember that approximately 193 Values fit into 1 Leaf Block).
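The five new blocks follow directly from the packing arithmetic of the first load. A quick back-of-the-envelope check using the figures printed above :

```python
# From the first bulk load: 50,653 keys packed into 263 leaf blocks.
keys_per_block = 50653 / 263
print(round(keys_per_block))         # ~193 keys per fully packed block

# 1,000 new ascending keys therefore need about 5 additional blocks,
# matching the 5 "leaf node 90-10 splits" reported (263 -> 268 leaf blocks).
print(round(1000 / keys_per_block))  # 5
```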


SQL> -- REM run another set of inserts
SQL> -- REM we now simulate a Row-By-Row Insert that would happen for creating new Employees
SQL> connect hemant/hemant
Connected.
SQL>
SQL> declare
2 i number;
3
4 begin
5 for i in 1..1000
6 loop
7 insert into demo_ibs_90_10
8 select object_id+100000+i,substr(owner,1,10),substr(object_type,1,18),rpad(object_name,20,dbms_random.string('X',3)),created+vsize(object_name)
9 from source_table
10 where object_id is not null
11 and object_id = 1000+i;
12 commit;
13 end loop;
14 end;
15 /

PL/SQL procedure successfully completed.

SQL>
SQL> select sn.name, ms.value
2 from v$statname sn, v$mystat ms
3 where sn.statistic#=ms.statistic#
4 and sn.name like '%leaf node %'
5 order by 1
6 /

NAME VALUE
---------------------------------------------------------------- ----------
leaf node 90-10 splits 5
leaf node splits 5

SQL>
SQL> exec dbms_stats.gather_table_stats('','DEMO_IBS_90_10',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL> select num_rows, distinct_keys, blevel, leaf_blocks, distinct_keys/leaf_blocks
2 from user_indexes where index_name = 'DEMO_IBS_90_10_U1'
3 /

NUM_ROWS DISTINCT_KEYS BLEVEL LEAF_BLOCKS DISTINCT_KEYS/LEAF_BLOCKS
---------- ------------- ---------- ----------- -------------------------
51653 51653 1 268 192.735075

SQL> analyze index demo_ibs_90_10_u1 validate structure;

Index analyzed.

SQL> select lf_rows, lf_blks, br_blks, del_lf_rows, pct_used from index_stats;

LF_ROWS LF_BLKS BR_BLKS DEL_LF_ROWS PCT_USED
---------- ---------- ---------- ----------- ----------
51653 268 1 0 100

SQL>
SQL>


In the final round, I insert another 1,000 rows, in a single Transaction. Again we see 90-10 splits to add 6 Leaf Blocks.


SQL> -- REM run another set of inserts
SQL> -- REM we now run a bulk insert again !
SQL> connect hemant/hemant
Connected.
SQL>
SQL> insert into demo_ibs_90_10
2 select object_id+200000,substr(owner,1,10),substr(object_type,1,18),rpad(object_name,20,dbms_random.string('X',3)),created+vsize(object_name)
3 from source_table
4 where object_id is not null
5 and object_id between 1000 and 2000
6 /

1001 rows created.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> select sn.name, ms.value
2 from v$statname sn, v$mystat ms
3 where sn.statistic#=ms.statistic#
4 and sn.name like '%leaf node %'
5 order by 1
6 /

NAME VALUE
---------------------------------------------------------------- ----------
leaf node 90-10 splits 6
leaf node splits 6

SQL>
SQL> exec dbms_stats.gather_table_stats('','DEMO_IBS_90_10',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL> select num_rows, distinct_keys, blevel, leaf_blocks, distinct_keys/leaf_blocks
2 from user_indexes where index_name = 'DEMO_IBS_90_10_U1'
3 /

NUM_ROWS DISTINCT_KEYS BLEVEL LEAF_BLOCKS DISTINCT_KEYS/LEAF_BLOCKS
---------- ------------- ---------- ----------- -------------------------
52654 52654 1 274 192.167883

SQL> analyze index demo_ibs_90_10_u1 validate structure;

Index analyzed.

SQL> select lf_rows, lf_blks, br_blks, del_lf_rows, pct_used from index_stats;

LF_ROWS LF_BLKS BR_BLKS DEL_LF_ROWS PCT_USED
---------- ---------- ---------- ----------- ----------
52654 274 1 0 100

SQL>


So, we find that adding new values to an index on a monotonically increasing sequence always causes 90-10 Leaf Block splits.
What happens if there have been DELETEs in between ? Remember that Oracle can't insert new Key Values into "just any Leaf Block", even if it is 90% free. Therefore, new values for such a (monotonically increasing) Key will always go to the "right-end" of the B-Tree structure -- ie to the "last" Leaf Block, although "last" isn't accurate in the sense of physical location. However, if all Key Values from an "older" Leaf Block have been deleted, then that Block is a candidate for re-use, and it can become the "next" block after the "right-most" block (ie the one with the highest Key Value). It should be noted that this means that Index Leaf Blocks may well *not* be contiguous, even for a Key that is only increasing in value.



Next, I will take up 50-50 Block Splits and demonstrate how they can cause an Index to grow faster than is the case with 90-10 splits, only because of the nature of the data and the pattern of inserts.

.
.
.

14 May, 2009

Rename Database while Cloning it.

As I recently pointed out in a thread on "Partial Database Cloning", it is perfectly possible to rename a database while cloning it.

Here, I clone a database "ORT24FS" to another name "DUPDB" on the same server, without shutting down ORT24FS. Also, I rename the copied database to DUPDB even *before* I issue the RECOVER DATABASE command. Furthermore, DUPDB is able to use ORT24FS's ArchiveLogs "as is".

First I copy my database files from the ORT24FS database directory to the DUPDB database directory :

ora10204>env |grep SID
ORACLE_SID=ORT24FS
ora10204>sqlplus '/ as sysdba'

SQL*Plus: Release 10.2.0.4.0 - Production on Thu May 14 23:04:08 2009

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select file_name from dba_data_files;

FILE_NAME
--------------------------------------------------------------------------------
/oracle_fs/Databases/ORT24FS/users01.dbf
/oracle_fs/Databases/ORT24FS/sysaux01.dbf
/oracle_fs/Databases/ORT24FS/undotbs01.dbf
/oracle_fs/Databases/ORT24FS/system01.dbf
/oracle_fs/Databases/ORT24FS/example01.dbf

SQL> alter database begin backup;

Database altered.

SQL> !cp -rp /oracle_fs/Databases/ORT24FS/* /oracle_fs/Databases/DUPDB/


SQL>

SQL> alter database end backup;

Database altered.

SQL> alter system switch logfile;

System altered.

SQL>

SQL> exit

Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options


I then generate a controlfile trace that I will use for the CREATE CONTROLFILE command to rename the database consisting of the copied files in the "DUPDB" directory.

ora10204>sqlplus

SQL*Plus: Release 10.2.0.4.0 - Production on Thu May 14 23:09:52 2009

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> alter database backup controlfile to trace;

Database altered.

SQL> exit


Next, I edit initDUPDB.ora. Note how I set db_name to DUPDB but I use the same log_archive_dest_1 and log_archive_format as I have in ORT24FS. This allows me to use the ORT24FS archivelogs "as is" without renaming or relocating them :

ora10204>vi $ORACLE_HOME/dbs/initDUPDB.ora
DUPDB.__db_cache_size=616562688
DUPDB.__java_pool_size=4194304
DUPDB.__large_pool_size=4194304
DUPDB.__shared_pool_size=205520896
DUPDB.__streams_pool_size=0
*.audit_file_dest='/oracle_fs/ora10204/admin/DUPDB/adump'
*.background_dump_dest='/oracle_fs/ora10204/admin/DUPDB/bdump'
*.compatible='10.2.0.4.0'
*.control_files='/oracle_fs/Databases/DUPDB/control01.ctl','/oracle_fs/Databases/DUPDB/control02.ctl','/oracle_fs/Databases/DUPDB/control03.ctl'
*.core_dump_dest='/oracle_fs/ora10204/admin/DUPDB/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_name='DUPDB'
*.db_recovery_file_dest='/oracle_fs/FRAs/DUPDB'
*.db_recovery_file_dest_size=17179869184
*.dbwr_io_slaves=0
*.dispatchers='(PROTOCOL=TCP) (SERVICE=DUPDBXDB)'
*.filesystemio_options='NONE'
*.job_queue_processes=10
*.log_archive_dest_1='LOCATION=/oracle_fs/ArchiveLogs/ORT24FS'
*.log_archive_format='%t_%s_%r.dbf'
*.open_cursors=300
*.pga_aggregate_target=179306496
*.processes=150
*.recovery_parallelism=4
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_max_size=838860800
*.sga_target=838860800
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/oracle_fs/ora10204/admin/DUPDB/udump'
*.db_create_file_dest='/oracle_fs/Databases/DUPDB'
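
Before editing the pfile, the values to carry over can be confirmed on the source database. A quick check on ORT24FS (a sketch -- the exact output depends on the environment) :

SQL> show parameter log_archive_dest_1
SQL> show parameter log_archive_format
SQL> archive log list

The LOCATION from log_archive_dest_1 and the log_archive_format string are the two values I copy unchanged into initDUPDB.ora.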



I now copy the controlfile trace to DUPDB and edit it so that I can SET the database name to DUPDB for all the database files in the DUPDB directory. I must use RESETLOGS when using SET. This also means that Oracle will ignore any online redo logs present in the directory and that I will have to OPEN RESETLOGS later.

ora10204>cd $ORACLE_BASE/admin/DUPDB
ora10204>cp ../ORT24FS/udump/ort24fs_ora_4432.trc create_ctrl_file.sql
ora10204>vi create_ctrl_file.sql


STARTUP NOMOUNT
CREATE CONTROLFILE SET DATABASE "DUPDB" RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 '/oracle_fs/Databases/DUPDB/redo01.dbf' SIZE 50M,
GROUP 2 '/oracle_fs/Databases/DUPDB/redo02.dbf' SIZE 50M,
GROUP 3 '/oracle_fs/Databases/DUPDB/redo03.dbf' SIZE 50M
-- STANDBY LOGFILE
DATAFILE
'/oracle_fs/Databases/DUPDB/system01.dbf',
'/oracle_fs/Databases/DUPDB/undotbs01.dbf',
'/oracle_fs/Databases/DUPDB/sysaux01.dbf',
'/oracle_fs/Databases/DUPDB/users01.dbf',
'/oracle_fs/Databases/DUPDB/example01.dbf'
CHARACTER SET WE8ISO8859P1
;


I now set my ORACLE_SID to DUPDB and run the create script. Since my "cp /oracle_fs/Databases/ORT24FS/*" command had also copied the controlfiles, the first CREATE CONTROLFILE attempt fails and I have to remove them and run the command again. Note that, at this stage, the database files in DUPDB are from a Hot Backup and are "not consistent" yet.

ora10204>ORACLE_SID=DUPDB;export ORACLE_SID
ora10204>sqlplus '/ as sysdba'

SQL*Plus: Release 10.2.0.4.0 - Production on Thu May 14 23:13:59 2009

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

Connected to an idle instance.

SQL> spool create_DUPDB_ctrl_file
SQL> @create_ctrl_file
ORACLE instance started.

Total System Global Area 838860800 bytes
Fixed Size 2087672 bytes
Variable Size 213910792 bytes
Database Buffers 616562688 bytes
Redo Buffers 6299648 bytes
CREATE CONTROLFILE SET DATABASE "DUPDB" RESETLOGS ARCHIVELOG
*
ERROR at line 1:
ORA-01503: CREATE CONTROLFILE failed
ORA-00200: control file could not be created
ORA-00202: control file: '/oracle_fs/Databases/DUPDB/control01.ctl'
ORA-27038: created file already exists
Additional information: 1


SQL> !rm /oracle_fs/Databases/DUPDB/*.ctl

SQL> shutdown
ORA-01507: database not mounted


ORACLE instance shut down.
SQL> set echo on
SQL> @create_ctrl_file
SQL> STARTUP NOMOUNT
ORACLE instance started.

Total System Global Area 838860800 bytes
Fixed Size 2087672 bytes
Variable Size 213910792 bytes
Database Buffers 616562688 bytes
Redo Buffers 6299648 bytes
SQL> CREATE CONTROLFILE SET DATABASE "DUPDB" RESETLOGS ARCHIVELOG
2 MAXLOGFILES 16
3 MAXLOGMEMBERS 3
4 MAXDATAFILES 100
5 MAXINSTANCES 8
6 MAXLOGHISTORY 292
7 LOGFILE
8 GROUP 1 '/oracle_fs/Databases/DUPDB/redo01.dbf' SIZE 50M,
9 GROUP 2 '/oracle_fs/Databases/DUPDB/redo02.dbf' SIZE 50M,
10 GROUP 3 '/oracle_fs/Databases/DUPDB/redo03.dbf' SIZE 50M
11 -- STANDBY LOGFILE
12 DATAFILE
13 '/oracle_fs/Databases/DUPDB/system01.dbf',
14 '/oracle_fs/Databases/DUPDB/undotbs01.dbf',
15 '/oracle_fs/Databases/DUPDB/sysaux01.dbf',
16 '/oracle_fs/Databases/DUPDB/users01.dbf',
17 '/oracle_fs/Databases/DUPDB/example01.dbf'
18 CHARACTER SET WE8ISO8859P1
19 ;

Control file created.

SQL>


I am now ready to issue a RECOVER DATABASE command for database DUPDB, whose datafiles are in the DUPDB directory. I use the ORT24FS ArchiveLogs to recover the database.


SQL> recover database using backup controlfile until cancel;
ORA-00279: change 711843 generated at 05/14/2009 23:04:19 needed for thread 1
ORA-00289: suggestion : /oracle_fs/ArchiveLogs/ORT24FS/1_8_685060711.dbf
ORA-00280: change 711843 for thread 1 is in sequence #8


Specify log: {<RET>=suggested | filename | AUTO | CANCEL}

ORA-00279: change 711924 generated at 05/14/2009 23:06:57 needed for thread 1
ORA-00289: suggestion : /oracle_fs/ArchiveLogs/ORT24FS/1_9_685060711.dbf
ORA-00280: change 711924 for thread 1 is in sequence #9
ORA-00278: log file '/oracle_fs/ArchiveLogs/ORT24FS/1_8_685060711.dbf' no
longer needed for this recovery


Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
CANCEL
Media recovery cancelled.
SQL> alter database open resetlogs;

Database altered.

SQL>

I have RECOVERed DUPDB to a consistent state. Let me verify it :


SQL> select * from v$instance;

INSTANCE_NUMBER INSTANCE_NAME
--------------- ----------------
HOST_NAME
----------------------------------------------------------------
VERSION STARTUP_T STATUS PAR THREAD# ARCHIVE LOG_SWITCH_WAIT
----------------- --------- ------------ --- ---------- ------- ---------------
LOGINS SHU DATABASE_STATUS INSTANCE_ROLE ACTIVE_ST BLO
---------- --- ----------------- ------------------ --------- ---
1 DUPDB
linux64
10.2.0.4.0 14-MAY-09 OPEN NO 1 STARTED
ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO


SQL> select * from v$database;

DBID NAME CREATED RESETLOGS_CHANGE# RESETLOGS
---------- --------- --------- ----------------- ---------
PRIOR_RESETLOGS_CHANGE# PRIOR_RES LOG_MODE CHECKPOINT_CHANGE#
----------------------- --------- ------------ ------------------
ARCHIVE_CHANGE# CONTROL CONTROLFI CONTROLFILE_SEQUENCE# CONTROLFILE_CHANGE#
--------------- ------- --------- --------------------- -------------------
CONTROLFI OPEN_RESETL VERSION_T OPEN_MODE PROTECTION_MODE
--------- ----------- --------- ---------- --------------------
PROTECTION_LEVEL REMOTE_A ACTIVATION# SWITCHOVER# DATABASE_ROLE
-------------------- -------- ----------- ----------- ----------------
ARCHIVELOG_CHANGE# ARCHIVEL SWITCHOVER_STATUS DATAGUAR GUARD_S SUPPLEME SUP
------------------ -------- -------------------- -------- ------- -------- ---
SUP FOR PLATFORM_ID
--- --- -----------
PLATFORM_NAME
--------------------------------------------------------------------------------
RECOVERY_TARGET_INCARNATION# LAST_OPEN_INCARNATION# CURRENT_SCN
---------------------------- ---------------------- -----------
FLASHBACK_ON SUP SUP DB_UNIQUE_NAME
------------------ --- --- ------------------------------
STANDBY_BECAME_PRIMARY_SCN FS_FAILOVER_STATUS FS_FAILOVER_CURRENT_TARGET
-------------------------- --------------------- ------------------------------
FS_FAILOVER_THRESHOLD FS_FAIL
--------------------- -------
FS_FAILOVER_OBSERVER_HOST
--------------------------------------------------------------------------------
4163910544 DUPDB 14-MAY-09 711925 14-MAY-09
659189 24-APR-09 ARCHIVELOG 711926
0 CURRENT 14-MAY-09 1796 712047
14-MAY-09 NOT ALLOWED 14-MAY-09 READ WRITE MAXIMUM PERFORMANCE
MAXIMUM PERFORMANCE ENABLED 412086823 412086823 PRIMARY
711925 DISABLED SESSIONS ACTIVE DISABLED NONE NO NO
NO NO 13
Linux x86 64-bit
2 2 712118
NO NO NO DUPDB
0 DISABLED
0



SQL>

CREATE CONTROLFILE does not carry forward tempfile entries, so I must add a tempfile to the TEMP tablespace :

SQL> select * from v$tempfile;

no rows selected

SQL> alter tablespace temp add tempfile '/oracle_fs/Databases/DUPDB/temp.dbf' size 100M;

Tablespace altered.

SQL>


At this stage, both Databases have the same DBID.

ora10204>ORACLE_SID=DUPDB;export ORACLE_SID
ora10204>sqlplus '/ as sysdba'

SQL*Plus: Release 10.2.0.4.0 - Production on Thu May 14 23:46:25 2009

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select dbid, name from v$database;

DBID NAME
---------- ---------
4163910544 DUPDB

SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ora10204>ORACLE_SID=ORT24FS;export ORACLE_SID
ora10204>sqlplus

SQL*Plus: Release 10.2.0.4.0 - Production on Thu May 14 23:46:40 2009

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select dbid, name from v$database;

DBID NAME
---------- ---------
4163910544 ORT24FS

SQL>


I can use the DBNEWID "nid" utility to change the DBID for "DUPDB" anytime now.
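
For reference, the general sequence for DBNEWID is sketched below. This is an outline, not a transcript from my run -- nid prompts for confirmation, updates the DBID in the datafile headers and controlfile, and leaves the database shut down, after which it must be opened with RESETLOGS :

ora10204>sqlplus '/ as sysdba'
SQL> shutdown immediate
SQL> startup mount
SQL> exit
ora10204>nid TARGET=/
ora10204>sqlplus '/ as sysdba'
SQL> startup mount
SQL> alter database open resetlogs;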

Note : My thanks to Maggie Wu who showed me, many years ago, that I could RENAME a cloned database even before I RECOVERed it.

.
.
.

11 May, 2009

Incorrectly using AUTOTRACE and EXPLAIN PLAN

Occasionally I see questions asking for help in tuning where the originator has used both AUTOTRACE and EXPLAIN PLAN together.
This can cause confusion.

Here, I pretend to be a novice using AUTOTRACE and EXPLAIN PLAN together :


SQL> set autotrace on
SQL> explain plan for select owner, created from test_query where owner = 'HEMANT';

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1390056125

------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2109 | 29526 | 61 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| TEST_QUERY | 2109 | 29526 | 61 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | TEST_QUERY_NDX_1 | 2109 | | 5 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("OWNER"='HEMANT')

14 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 2137789089

---------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 8168 | 16336 | 29 (0)| 00:00:01 |
| 1 | COLLECTION ITERATOR PICKLER FETCH| DISPLAY | | | | |
---------------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
14 recursive calls
12 db block gets
35 consistent gets
0 physical reads
0 redo size
1482 bytes sent via SQL*Net to client
492 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
14 rows processed

SQL>


I got TWO Explain Plan listings and one set of Statistics. Which is which ?
The first Explain Plan listing is, correctly, for the SQL that I am testing ("select ... from test_query ...."). The second Explain Plan (with the Plan Hash Value of '2137789089') is for the execution of dbms_xplan.display itself, and this listing appears only because I had set AUTOTRACE ON ! It shows a CARD (expected number of rows) of 8,168 rows. That looks very high and difficult to explain, until you know that 8,168 is simply the optimizer's default cardinality estimate for a table (collection) function in a database with an 8KB block size -- it has nothing to do with the CARD of 2,109 for the test query !
Can you guess which operation the statistics are for ?

What I should have done is run this in TWO Parts.

The first part is for the Explain Plan itself :


SQL> set autotrace off
SQL> explain plan for select owner, created from test_query where owner = 'HEMANT';

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1390056125

------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2109 | 29526 | 61 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| TEST_QUERY | 2109 | 29526 | 61 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | TEST_QUERY_NDX_1 | 2109 | | 5 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("OWNER"='HEMANT')

14 rows selected.

SQL>


This listing is quite clear now. I have the Explain Plan for my test query.


The second part is to get an AUTOTRACE. Here I use AUTOTRACE TRACEONLY STATISTICS. Thus, I get *only* the statistics !


SQL> set autotrace traceonly statistics
SQL> select owner, created from test_query where owner = 'HEMANT';


Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
5 consistent gets
0 physical reads
0 redo size
667 bytes sent via SQL*Net to client
492 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
4 rows processed

SQL>


These statistics show me that the query actually fetched 4 rows, without my having to view the output.

If the output had been 2,109 rows, I'd have had to allow that many rows to scroll across my screen before I could see the statistics. Now, with AUTOTRACE TRACEONLY, I can allow the query to execute and fetch the rows, but SQL*Plus will "discard" them and not show them on the screen. Database server side processing remains the same as if the rows were displayed on-screen.
Similarly, with the STATISTICS keyword appended, I ask Oracle to only present the Statistics and NOT present the Explain Plan. I have already seen the Explain Plan and do not need to see it here again. (In my test above, the Plan has only 3 steps, but the query could have been a complex one with more than a few dozen steps !).
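
(As an aside : if I want only the Explain Plan and no execution at all, SQL*Plus also offers the TRACEONLY EXPLAIN option -- for a SELECT, to my understanding, the statement is then only parsed and explained, not executed :)

SQL> set autotrace traceonly explain
SQL> select owner, created from test_query where owner = 'HEMANT';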
.
.
.

02 May, 2009

Ever wonder "what if this database contains my data ?"

My forums posting 'Ever wonder "what if this database contains my data ?"' has also been repeated by APC on his oraclecommunity.net blog.

I do hope that there are more responses and a better discussion on this.
.
.
.

Aggregated by orafaq.com
