
28 June, 2011

DDL Triggers

A few weeks ago, there was a question about disabling TRUNCATEs. That can be easily done via a Trigger.
But you have to be careful with such triggers. A trigger can block the TRUNCATE and raise an error. It will *not*, however, prevent the implicit COMMIT that the TRUNCATE issues.

Here's an example of such a trigger :


SQL> -- create a trigger that raises an error on truncates
SQL> create or replace trigger prevent_truncates
2 before truncate on schema
3 begin
4 raise_application_error(-20001,'TRUNCATE not permitted');
5 end;
6 /

Trigger created.

SQL>
SQL> -- test the trigger
SQL> create table TEST_TRIG_TBL_1 (primkey number primary key, data_col varchar2(500));

Table created.

SQL>
SQL> truncate table TEST_TRIG_TBL_1;

truncate table TEST_TRIG_TBL_1
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20001: TRUNCATE not permitted
ORA-06512: at line 2


SQL>


So the trigger works as designed. It prevents a TRUNCATE in the schema.

But what if there IS data ?
SQL> -- what if there is data ?
SQL> create table TEST_TRIG_TBL_2 (primkey number primary key, data_col varchar2(500));

Table created.

SQL> insert into TEST_TRIG_TBL_2 values (1,'ABCDEFGH');

1 row created.

SQL> insert into TEST_TRIG_TBL_2 values (2,'ZXCV');

1 row created.

SQL>
SQL> truncate table TEST_TRIG_TBL_2;
truncate table TEST_TRIG_TBL_2
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20001: TRUNCATE not permitted
ORA-06512: at line 2


SQL>
SQL> -- is the data present ?
SQL> select * from TEST_TRIG_TBL_2 ;

PRIMKEY
----------
DATA_COL
------------------------------------------------------------------------------------------------------------------------------------
1
ABCDEFGH

2
ZXCV


SQL>


The data is present.

So, is there a COMMIT ?
SQL> -- do you realise what this means ?
SQL> -- suppose your transaction consists of more than 2 DML statements
SQL> -- one does an insert into table X and the other attempts to truncate table Y
SQL> -- if the truncate of Y fails, does the insert into X get rolled back ?
SQL>
SQL> create table X (primkey number primary key, data_col varchar2(5));

Table created.

SQL> create table Y (primkey number primary key, data_col varchar2(5));

Table created.

SQL>
SQL> -- insert into X
SQL> insert into X values (1,'QWERT');

1 row created.

SQL> insert into X values (2,'ASDF');

1 row created.

SQL>
SQL> -- truncate Y, the developer doesn't know that the TRUNCATE will fail
SQL> truncate table Y;
truncate table Y
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20001: TRUNCATE not permitted
ORA-06512: at line 2


SQL>
SQL> -- oh ! there's an error. Let's ROLLBACK
SQL> rollback;

Rollback complete.

SQL>
SQL> -- is the data present in X ?
SQL> select * from X ;

PRIMKEY DATA_
---------- -----
1 QWERT
2 ASDF

SQL>
SQL> -- HUH ?! The TRUNCATE failed but the INSERT was committed !
SQL>


Data that was INSERTed into X did not get rolled back although the TRUNCATE of Y failed ! (Remember that the INSERTs had not been committed, so the transaction was still open when the TRUNCATE was issued).

Why does this happen ? Let's see a trace :
SQL> exec dbms_session.session_trace_enable;

PL/SQL procedure successfully completed.

SQL> truncate table Y;
truncate table Y
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20001: TRUNCATE not permitted
ORA-06512: at line 2


SQL>

The trace file actually shows these events in sequence :
XCTEND rlbk=0, rd_only=1, tim=1309191962248722
select count(FA#) from SYS_FBA_TRACKEDTABLES where OBJ# = 85567 and DROPSCN = 0
LOCK TABLE "Y" IN EXCLUSIVE MODE NOWAIT
select count(FA#) from SYS_FBA_TRACKEDTABLES where OBJ# = 85567 and DROPSCN = 0
truncate table Y
begin
raise_application_error(-20001,'TRUNCATE not permitted');
end;
declare
m_stmt varchar2(512);
begin
m_stmt:='delete from sdo_geor_ddl__table$$';
EXECUTE IMMEDIATE m_stmt;
EXCEPTION
WHEN OTHERS THEN
NULL;
end;
.... and so on ...


The first entry is the "XCTEND rlbk=0", which is a COMMIT. An implicit COMMIT is issued by the TRUNCATE irrespective of the actions in the trigger.

For example, even if the trigger has a compilation error, the TRUNCATE still causes a commit :
SQL> create or replace trigger prevent_truncates
2 before truncate on schema
3 begin
4 raiseXXX_application_error(-20001,'TRUNCATE not permitted');
5 end;
6 /

Warning: Trigger created with compilation errors.

SQL> show errors
Errors for TRIGGER PREVENT_TRUNCATES:

LINE/COL ERROR
-------- -----------------------------------------------------------------
2/3 PL/SQL: Statement ignored
2/3 PLS-00201: identifier 'RAISEXXX_APPLICATION_ERROR' must be
declared

SQL> select status from user_triggers where trigger_name = 'PREVENT_TRUNCATES';

STATUS
--------
ENABLED

SQL>
SQL> create table Z (primkey number primary key, data_col varchar2(5));

Table created.

SQL> insert into Z values (1,'abc');

1 row created.

SQL> insert into Z values (2,'def');

1 row created.

SQL> insert into Z values (3,'ghi');

1 row created.

SQL> delete Z where primkey = 2 ;

1 row deleted.

SQL> truncate table Y;
truncate table Y
*
ERROR at line 1:
ORA-04098: trigger 'HEMANT.PREVENT_TRUNCATES' is invalid and failed re-validation


SQL>
SQL> rollback;

Rollback complete.

SQL> select * from Z;

PRIMKEY DATA_
---------- -----
1 abc
3 ghi

SQL>

The single-row delete (and the inserts) in table Z have been committed even though the TRUNCATE trigger failed to execute properly.


Remember : you as the DBA might put in a DDL trigger, but a new developer (or even an existing developer !) may not know about the presence of the trigger or its implications !

.
.
.

23 June, 2011

How Are Students Learning Programming ?

A recent ComputerWorld article "How are students learning programming in a post-Basic world ?" (It should be BASIC, not Basic) reminds me of the time I first learnt programming.
Yes, with BASIC. When there were very few PCs in Ahmedabad.




I learnt programming with BASIC. The credit goes to my elder brother, Santosh, who convinced me (against my self-doubts) that I could learn computer programming. And once I started, it was SO EASY (I topped the class).

Thank you , Santosh.





Here's a YABASIC program I wrote in 2007 to help my son solve a puzzle (and I don't remember the details of the puzzle) :



print "The problem of 3 daughters"
print "Find 3 numbers whose product equals the desired number"
input "What is the product of the 3 integers ?" productof3
DIM AM(productof3,4)
RN=1
FOR I = 1 to productof3
  IF INT(productof3/I) = (productof3/I) THEN
    FOR J = 1 to productof3
      IF INT(productof3/J) = (productof3/J) THEN
        FOR K = 1 to productof3
          IF INT(productof3/K) = (productof3/K) THEN
            # print "Testing ", I , J , K
            DOESEXIST=0
            IF (I * J * K) = productof3 THEN
              # print "Possible Answer : ", I , J, K, " : With the Sum as : ", S
              FOR ARRCHK = 1 to RN
                S = I + J + K
                IF ( (AM(ARRCHK,1)=S AND AM(ARRCHK,2)=I AND AM(ARRCHK,3)=J AND AM(ARRCHK,4)=K) OR (AM(ARRCHK,1)=S AND AM(ARRCHK,2)=I AND AM(ARRCHK,3)=K AND AM(ARRCHK,4)=J) OR (AM(ARRCHK,1)=S AND AM(ARRCHK,2)=J AND AM(ARRCHK,3)=I AND AM(ARRCHK,4)=K) OR (AM(ARRCHK,1)=S AND AM(ARRCHK,2)=J AND AM(ARRCHK,3)=K AND AM(ARRCHK,4)=I) OR (AM(ARRCHK,1)=S AND AM(ARRCHK,2)=K AND AM(ARRCHK,3)=I AND AM(ARRCHK,4)=J) OR (AM(ARRCHK,1)=S AND AM(ARRCHK,2)=K AND AM(ARRCHK,3)=J AND AM(ARRCHK,4)=I) ) THEN
                  # print ".............. Numbers ", I, J, K, " have already been checked"
                  DOESEXIST=1
                  BREAK
                ENDIF
              NEXT ARRCHK
              IF DOESEXIST=0 THEN
                AM(RN,1)=S
                AM(RN,2)=I
                AM(RN,3)=J
                AM(RN,4)=K
                print "Possible Answer : " , AM(RN,2), AM(RN,3), AM(RN,4) , " : With the sum : ", AM(RN,1)
              ENDIF
              RN = RN + 1
            ENDIF
          ENDIF
        NEXT K
      ENDIF
    NEXT J
  ENDIF
NEXT I
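For comparison, here's how the same search might be sketched in Python today (my rendering, not the original program) : by generating only ordered triples a <= b <= c, the duplicate-checking array in the YABASIC version isn't needed.

```python
# Sketch: find every unordered triple of positive integers whose product is n,
# reporting each triple together with its sum (the quantity the puzzle hinges on).
from itertools import combinations_with_replacement

def factor_triples(n):
    """Return a list of (sum, (a, b, c)) for all a <= b <= c with a*b*c == n."""
    results = []
    for a, b, c in combinations_with_replacement(range(1, n + 1), 3):
        if a * b * c == n:
            results.append((a + b + c, (a, b, c)))
    return results

# The classic "3 daughters" puzzle uses a product of 36 : two triples
# share the sum 13, which is what makes the puzzle interesting.
for s, triple in factor_triples(36):
    print(triple, "sum:", s)
```
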


Or an example of a program to test a number for whether it is a Prime (very simplistic, not using any of the algorithms -- like the Sieve of Eratosthenes that I used when I wrote a program in the BASIC class in 1985 -- or was it in 1984) :
print "hello world"
# Testing for Prime Numbers
# assign
presult=0
input "Which number ?" numtotest
print "You entered : " , numtotest
for I=2 to numtotest/2
  IF (numtotest/I) = int(numtotest/I) THEN
    # print "Testing division of ", numtotest, " by ", I , " : Is Not a Prime"
    presult=1
    BREAK
  ELSE
    # print "possibly a prime when testing division by ", I
  ENDIF
NEXT I
clear screen
if presult=0 then
  print colour("red","blue") numtotest, " IS A PRIME NUMBER !"
ELSE
  print colour("green") numtotest, " is not a prime number !"
ENDIF
input "Enter to exit " mentr
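The same trial-division test, sketched in Python (my rendering of the logic above) :

```python
# Trial division: try every divisor from 2 up to n/2 and stop at the
# first one that divides evenly, exactly as the YABASIC loop does.
def is_prime(n):
    if n < 2:
        return False
    for i in range(2, n // 2 + 1):
        if n % i == 0:
            return False
    return True

print([x for x in range(2, 30) if is_prime(x)])
# → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```
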


I have also studied xBase (dBASEIII+), Pascal, C, C++ and Java (besides SQL and PL/SQL). Guess which language I dislike ? It is a 4 letter word.

UPDATE 01-Jul-11 : Here's an article by Joel Spolsky (albeit dated 29-Dec-05) : The Perils of JavaSchools.

.
.
.

21 June, 2011

Oracle APAC Developer Program

Oracle ACEs in this part of the world had a conference call with Oracle's Program Executive for the APAC Developer Program.
We were provided information about forthcoming DBA Round Table and Developer Day sessions. We were also given assurances that Oracle would continue supporting the ACE program and members.

Thank you Shaheen.

.
.
.

20 June, 2011

OOW 2011 Content Catalog

The Oracle Open World 2011 Content Catalog is now available for viewing.

.
.
.

(OT) : Dead Media Never Really Die

See the ComputerWorld article : 'Dead Media' never really die.

Dead Media never really die, it says. But I know of hardware that is dead for all practical purposes, and surely there are formats that are irretrievable now. What formats would you use for 20 year / 50 year / 100 year / 200 year time capsules ? What about space exploration that may return to Earth after centuries ? How do we read the data that the explorer has collected ?
.
.
.

13 June, 2011

Inequality and NULL

A response to a recent forums question about handling an inequality condition reminded me, once again, about NULL handling in Oracle.

You can have :
WHERE column = 100 
in a query.

In another query, you could have
WHERE column != 100 
However, if the column contains a NULL in any one (or more) rows, the union of the two queries is not the entire table. That is, rows with NULLs are excluded by both predicates.
You have to specifically handle NULLs with either
IS NULL 
or
IS NOT NULL 
depending on whether you want to explicitly include or exclude NULLs.


Here's a quick demo of the behaviour of NULLs :
SQL> create table TEST_ROWS as
2 select rownum as COLUMN_1,
3 'Row Number : ' || to_char(rownum) as COLUMN_2
4 from dual connect by level < 11 ;

Table created.

SQL> insert into TEST_ROWS values (NULL,'Row NULL');

1 row created.

SQL> commit;

Commit complete.

SQL>
SQL> select COLUMN_1, COLUMN_2 from TEST_ROWS order by 1 ;

COLUMN_1 COLUMN_2
---------- ------------------------------------------------------
1 Row Number : 1
2 Row Number : 2
3 Row Number : 3
4 Row Number : 4
5 Row Number : 5
6 Row Number : 6
7 Row Number : 7
8 Row Number : 8
9 Row Number : 9
10 Row Number : 10
Row NULL

11 rows selected.

SQL>
SQL> select * from TEST_ROWS where COLUMN_1 != 6 order by 1;

COLUMN_1 COLUMN_2
---------- ------------------------------------------------------
1 Row Number : 1
2 Row Number : 2
3 Row Number : 3
4 Row Number : 4
5 Row Number : 5
7 Row Number : 7
8 Row Number : 8
9 Row Number : 9
10 Row Number : 10

9 rows selected.

SQL>


Notice how the query for != 6 does *not* show the row with a NULL.
Oracle does not say that a NULL is != 6 ; a NULL is simply not comparable with 6 !
The query actually has to be :
SQL> select * from TEST_ROWS
2 where (COLUMN_1 != 6 OR COLUMN_1 IS NULL)
3 order by 1;

COLUMN_1 COLUMN_2
---------- ------------------------------------------------------
1 Row Number : 1
2 Row Number : 2
3 Row Number : 3
4 Row Number : 4
5 Row Number : 5
7 Row Number : 7
8 Row Number : 8
9 Row Number : 9
10 Row Number : 10
Row NULL

10 rows selected.

SQL>

Only now does the row with the NULL appear !

That is why such a count :
SQL> select count(*) from TEST_ROWS;

COUNT(*)
----------
11

SQL>
SQL> select count(*) from
2 (select * from TEST_ROWS where COLUMN_1 = 6
3 union
4 select * from TEST_ROWS where COLUMN_1 != 6)
5 /

COUNT(*)
----------
10

SQL>
SQL> select count(*) from
2 (select * from TEST_ROWS where COLUMN_1 = 6
3 union all
4 select * from TEST_ROWS where COLUMN_1 != 6)
5 /

COUNT(*)
----------
10

SQL>

comes up one row short : the union of the two result sets (equal to 6 and not-equal to 6) does not comprise the whole set !
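This three-valued logic is not Oracle-specific ; any SQL database behaves the same way. Here's the counting demonstration sketched in Python against SQLite (table and column names are my own) :

```python
# Demonstrate that a NULL satisfies neither "= 6" nor "!= 6":
# ten numbered rows plus one NULL row, then count each predicate.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table test_rows (column_1 integer)")
conn.executemany("insert into test_rows values (?)",
                 [(i,) for i in range(1, 11)] + [(None,)])

total   = conn.execute("select count(*) from test_rows").fetchone()[0]
eq_6    = conn.execute("select count(*) from test_rows where column_1 = 6").fetchone()[0]
ne_6    = conn.execute("select count(*) from test_rows where column_1 != 6").fetchone()[0]
ne_null = conn.execute(
    "select count(*) from test_rows "
    "where column_1 != 6 or column_1 is null").fetchone()[0]

# The NULL row is in neither predicate unless handled explicitly.
print(total, eq_6 + ne_6, eq_6 + ne_null)  # 11 10 11
```
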
.
.
.

SQL Injection

Here's a very simple example of SQL Injection by Mike Smithers.



.

05 June, 2011

New Presentation : On Nested Loop and Hash Join

I have uploaded a new presentation on Nested Loop and Hash Joins. This is based on the "Nested Loops and Consistent Gets" case study.

.
.
.

Getting the right statistics

Continuing with the post titled "Nested Loops and Consistent Gets" earlier today, what is the Execution Plan with statistics on more columns ?
SQL> exec dbms_stats.gather_table_stats('','TRANSACTIONS',estimate_percent=>100,granularity=>'ALL',-
> method_opt=>'FOR COLUMNS COUNTRY_CD SIZE 250, PRODUCT_CD SIZE 250, TXN_ID SIZE 250 ',cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>

SQL> select column_name, num_distinct, histogram from user_tab_col_statistics
2 where table_name = 'TRANSACTIONS'
3 order by 1;

COLUMN_NAME NUM_DISTINCT HISTOGRAM
------------------------------ ------------ ---------------
COUNTRY_CD 5 FREQUENCY
PRODUCT_CD 10 FREQUENCY
TXN_ID 1000000 HEIGHT BALANCED


SQL>

SQL> explain plan for
2 select product_desc, txn_id, transaction_amt
3 from transactions t, product_table p
4 where txn_date between to_date('01-FEB-2011','DD-MON-YYYY') and to_date('28-FEB-2011','DD-MON-YYYY')
5 and txn_id between 155000 and 156000
6 and country_cd = 'IN'
7 and t.product_cd=p.product_cd
8 /

Explained.

SQL> select * from table(dbms_xplan.display);


PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1299935411

-------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 45 | 36 (0)| 00:00:01 | | |
| 1 | NESTED LOOPS | | | | | | | |
| 2 | NESTED LOOPS | | 1 | 45 | 36 (0)| 00:00:01 | | |
| 3 | PARTITION RANGE SINGLE | | 1 | 33 | 35 (0)| 00:00:01 | 2 | 2 |
|* 4 | TABLE ACCESS BY LOCAL INDEX ROWID| TRANSACTIONS | 1 | 33 | 35 (0)| 00:00:01 | 2 | 2 |
|* 5 | INDEX RANGE SCAN | TRANSACTIONS_NDX | 400 | | 4 (0)| 00:00:01 | 2 | 2 |
|* 6 | INDEX UNIQUE SCAN | SYS_C0016611 | 1 | | 0 (0)| 00:00:01 | | |
| 7 | TABLE ACCESS BY INDEX ROWID | PRODUCT_TABLE | 1 | 12 | 1 (0)| 00:00:01 | | |
-------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

4 - filter("TXN_DATE"<=TO_DATE(' 2011-02-28 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
5 - access("COUNTRY_CD"='IN' AND "TXN_ID">=155000 AND "TXN_ID"<=156000)
6 - access("T"."PRODUCT_CD"="P"."PRODUCT_CD")

21 rows selected.

SQL>

Ah. Now the optimizer has a better feel for the rows from Operation 5 : the access by COUNTRY_CD and TXN_ID. Instead of 252 rows (the estimate when column statistics had not been gathered on TXN_ID), the expected row count is 400.
Yet, the filter on TXN_DATE is still expected to return only 1 row ! We know that we are reading 400-plus rows from that one partition alone (partition 2 being P_2011_FEB). We also know that all the rows in this partition *will* satisfy this filter predicate. Yet, the optimizer expects only 1 row.

Here's the actual execution with this plan :
SQL> select /*+ gather_plan_statistics */ product_desc, txn_id, transaction_amt
2 from transactions t, product_table p
3 where txn_date between to_date('01-FEB-2011','DD-MON-YYYY') and to_date('28-FEB-2011','DD-MON-YYYY')
4 and txn_id between 155000 and 156000
5 and country_cd = 'IN'
6 and t.product_cd=p.product_cd
7 /

PRODUCT_DESC TXN_ID TRANSACTION_AMT
-------------------------------------------------- ---------- ---------------
Prod: _1 155000 155
Prod: _2 155001 155.001
Prod: _5 155004 155.004
........
Prod: _5 155994 155.994
Prod: _9 155998 155.998
Prod: _1 156000 156

401 rows selected.

SQL>
SQL> select * from table(dbms_xplan.display_cursor('','','ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 5yzvvdg1y2aub, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ product_desc, txn_id,
transaction_amt from transactions t, product_table p where txn_date
between to_date('01-FEB-2011','DD-MON-YYYY') and
to_date('28-FEB-2011','DD-MON-YYYY') and txn_id between 155000 and
156000 and country_cd = 'IN' and t.product_cd=p.product_cd

Plan hash value: 1299935411

----------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads |
----------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 401 |00:00:00.03 | 428 | 2 |
| 1 | NESTED LOOPS | | 1 | | 401 |00:00:00.03 | 428 | 2 |
| 2 | NESTED LOOPS | | 1 | 1 | 401 |00:00:00.04 | 27 | 1 |
| 3 | PARTITION RANGE SINGLE | | 1 | 1 | 401 |00:00:00.02 | 23 | 0 |
|* 4 | TABLE ACCESS BY LOCAL INDEX ROWID| TRANSACTIONS | 1 | 1 | 401 |00:00:00.02 | 23 | 0 |
|* 5 | INDEX RANGE SCAN | TRANSACTIONS_NDX | 1 | 400 | 401 |00:00:00.01 | 6 | 0 |
|* 6 | INDEX UNIQUE SCAN | SYS_C0016611 | 401 | 1 | 401 |00:00:00.01 | 4 | 1 |
| 7 | TABLE ACCESS BY INDEX ROWID | PRODUCT_TABLE | 401 | 1 | 401 |00:00:00.01 | 401 | 1 |
----------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

4 - filter("TXN_DATE"<=TO_DATE(' 2011-02-28 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
5 - access("COUNTRY_CD"='IN' AND "TXN_ID">=155000 AND "TXN_ID"<=156000)
6 - access("T"."PRODUCT_CD"="P"."PRODUCT_CD")

30 rows selected.

SQL>

So ! We are still reading PRODUCT_TABLE 401 times, only because the Optimizer expected to have to read it just once !

Therefore, we can narrow down the cause of the "poor" behaviour of this Nested Loop, with its high Consistent Gets count, to the row-count estimate for the filter predicate on TXN_DATE.
The current set of statistics does not cover TXN_DATE, even though our query predicate is actually "txn_date between to_date('01-FEB-2011','DD-MON-YYYY') and to_date('28-FEB-2011','DD-MON-YYYY')", and the result is that the plan is sub-optimal.


What if we include TXN_DATE in our statistics as well :
.......... we'll now see the Join change to a Sort-Merge join !
SQL> exec dbms_stats.gather_table_stats('','TRANSACTIONS',estimate_percent=>100,granularity=>'ALL',-
> method_opt=>'FOR COLUMNS COUNTRY_CD SIZE 250, PRODUCT_CD SIZE 250, TXN_ID SIZE 250, TXN_DATE SIZE 250 ',cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>
SQL> select column_name, num_distinct, histogram from user_tab_col_statistics
2 where table_name = 'TRANSACTIONS'
3 order by 1;

COLUMN_NAME NUM_DISTINCT HISTOGRAM
------------------------------ ------------ ---------------
COUNTRY_CD 5 FREQUENCY
PRODUCT_CD 10 FREQUENCY
TXN_DATE 1000000 HEIGHT BALANCED
TXN_ID 1000000 HEIGHT BALANCED

SQL>
SQL> explain plan for
2 select product_desc, txn_id, transaction_amt
3 from transactions t, product_table p
4 where txn_date between to_date('01-FEB-2011','DD-MON-YYYY') and to_date('28-FEB-2011','DD-MON-YYYY')
5 and txn_id between 155000 and 156000
6 and country_cd = 'IN'
7 and t.product_cd=p.product_cd
8 /

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3745290717

-------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 386 | 16984 | 38 (3)| 00:00:01 | | |
| 1 | MERGE JOIN | | 386 | 16984 | 38 (3)| 00:00:01 | | |
| 2 | TABLE ACCESS BY INDEX ROWID | PRODUCT_TABLE | 14 | 168 | 2 (0)| 00:00:01 | | |
| 3 | INDEX FULL SCAN | SYS_C0016611 | 14 | | 1 (0)| 00:00:01 | | |
|* 4 | SORT JOIN | | 386 | 12352 | 36 (3)| 00:00:01 | | |
| 5 | PARTITION RANGE SINGLE | | 386 | 12352 | 35 (0)| 00:00:01 | 2 | 2 |
|* 6 | TABLE ACCESS BY LOCAL INDEX ROWID| TRANSACTIONS | 386 | 12352 | 35 (0)| 00:00:01 | 2 | 2 |
|* 7 | INDEX RANGE SCAN | TRANSACTIONS_NDX | 400 | | 4 (0)| 00:00:01 | 2 | 2 |
-------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

4 - access("T"."PRODUCT_CD"="P"."PRODUCT_CD")
filter("T"."PRODUCT_CD"="P"."PRODUCT_CD")
6 - filter("TXN_DATE"<=TO_DATE(' 2011-02-28 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
7 - access("COUNTRY_CD"='IN' AND "TXN_ID">=155000 AND "TXN_ID"<=156000)

22 rows selected.

SQL>

It's a very different execution plan now. The Optimizer estimates 386 rows from TRANSACTIONS after filtering on TXN_DATE. It now chooses to do a Sort-Merge join (instead of the Nested Loop), starting with a read of PRODUCT_TABLE.
The optimizer will now avoid the "mistake" of reading PRODUCT_TABLE 401 times.
Here's the actual execution :
SQL> select /*+ gather_plan_statistics */ product_desc, txn_id, transaction_amt
2 from transactions t, product_table p
3 where txn_date between to_date('01-FEB-2011','DD-MON-YYYY') and to_date('28-FEB-2011','DD-MON-YYYY')
4 and txn_id between 155000 and 156000
5 and country_cd = 'IN'
6 and t.product_cd=p.product_cd
7 /

PRODUCT_DESC TXN_ID TRANSACTION_AMT
-------------------------------------------------- ---------- ---------------
Prod: _1 155000 155
Prod: _1 155010 155.01
Prod: _1 155020 155.02
........
Prod: _9 155618 155.618
Prod: _9 155628 155.628
Prod: _9 155638 155.638

401 rows selected.

SQL>
SQL> select * from table(dbms_xplan.display_cursor('','','ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 5yzvvdg1y2aub, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ product_desc, txn_id,
transaction_amt from transactions t, product_table p where txn_date
between to_date('01-FEB-2011','DD-MON-YYYY') and
to_date('28-FEB-2011','DD-MON-YYYY') and txn_id between 155000 and
156000 and country_cd = 'IN' and t.product_cd=p.product_cd

Plan hash value: 3745290717

------------------------------------------------------------------------------------------------------------------------------------
-------------------

| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem |
1Mem | Used-Mem |

------------------------------------------------------------------------------------------------------------------------------------
-------------------

| 0 | SELECT STATEMENT | | 1 | | 401 |00:00:00.01 | 25 | 2 | |
| |

| 1 | MERGE JOIN | | 1 | 386 | 401 |00:00:00.01 | 25 | 2 | |
| |

| 2 | TABLE ACCESS BY INDEX ROWID | PRODUCT_TABLE | 1 | 14 | 10 |00:00:00.01 | 4 | 2 | |
| |

| 3 | INDEX FULL SCAN | SYS_C0016611 | 1 | 14 | 10 |00:00:00.01 | 2 | 1 | |
| |

|* 4 | SORT JOIN | | 10 | 386 | 401 |00:00:00.01 | 21 | 0 | 18432 |
18432 |16384 (0)|

| 5 | PARTITION RANGE SINGLE | | 1 | 386 | 401 |00:00:00.01 | 21 | 0 | |
| |

|* 6 | TABLE ACCESS BY LOCAL INDEX ROWID| TRANSACTIONS | 1 | 386 | 401 |00:00:00.01 | 21 | 0 | |
| |

|* 7 | INDEX RANGE SCAN | TRANSACTIONS_NDX | 1 | 400 | 401 |00:00:00.01 | 5 | 0 | |
| |

------------------------------------------------------------------------------------------------------------------------------------
-------------------


Predicate Information (identified by operation id):
---------------------------------------------------

4 - access("T"."PRODUCT_CD"="P"."PRODUCT_CD")
filter("T"."PRODUCT_CD"="P"."PRODUCT_CD")
6 - filter("TXN_DATE"<=TO_DATE(' 2011-02-28 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
7 - access("COUNTRY_CD"='IN' AND "TXN_ID">=155000 AND "TXN_ID"<=156000)

31 rows selected.

SQL>

Oracle now notes the usage of about 16KB of memory to execute a sort on the rows returned from the TRANSACTIONS table, so as to be able to "merge" them with the rows from PRODUCT_TABLE (the join being on PRODUCT_CD).
The total number of Buffer Gets is now only 25 ! A very significant reduction from 428 in the earlier executions !
Also, note that a consequence of the Sort-Merge is that the final result set appears ordered by PRODUCT_DESC ; actually, it is ordered on the basis of the sort on PRODUCT_CD used to execute the join.

Therefore, a change in statistics can not only :
a. change the execution plan, and
b. change the total logical I/O,
it can also change the order in which the result set appears ! This can be a very significant difference to users who see such listings on screens / webforms.
Hint : always explicitly specify an ORDER BY when the result set exceeds a few rows and the appearance of the output is "sensitive".
Because I didn't include an ORDER BY, the rows appear in a different order, merely because of a slight change in the Gather_Stats.
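The caveat holds for any database, not just Oracle. A minimal sketch in Python against SQLite (my own toy table) : without an ORDER BY, the row order is whatever the access path happens to produce and is never guaranteed ; only an explicit ORDER BY pins it down.

```python
# Insert rows out of order, then compare a bare SELECT against an ORDER BY.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table t (c integer)")
conn.executemany("insert into t values (?)", [(3,), (1,), (2,)])

unordered = [row[0] for row in conn.execute("select c from t")]
ordered   = [row[0] for row in conn.execute("select c from t order by c")]

# The unordered result merely contains the right rows; the SQL standard
# leaves its sequence undefined, so only the ordered query is dependable.
print(sorted(unordered) == ordered == [1, 2, 3])
```
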

.
.
.

Nested Loop and Consistent Gets

DBAs with a few years of experience and/or having read some material on performance tuning know that eliminating physical reads and achieving a high buffer cache hit ratio isn't necessarily a tuning goal.
Some DBAs have accepted the idea that, rather than physical reads, they should concentrate on logical reads -- consistent gets in case of queries.
But ...... there really is no "absolutely correct optimization target". Everything is relative.

Sometimes you may sacrifice one goal because you have to work with a constraint. Typically, concurrency is a constraint that, unfortunately, gets ignored.
Concurrency issues often do not manifest in test environments. Yet, they bite in production environments.


Here's a case where simple rules like :
a. use a Nested Loop when fetching few rows
b. aim for the lowest "cost"
may not necessarily be optimal.

I start with this specific data :
drop table transactions purge;
drop table product_table purge;
drop table country_table purge;

create table country_table
(country_cd varchar2(5) primary key,
country_name varchar2(50));

create table product_table
(product_cd number primary key,
product_Desc varchar2(50));

create table transactions
(txn_date date not null,
txn_id number,
country_cd varchar2(5),
product_cd number,
transaction_amt number,
other_columns varchar2(100))
partition by range (txn_date)
(
partition P_2011_JAN values less than (to_date('01-FEB-2011','DD-MON-YYYY')),
partition P_2011_FEB values less than (to_date('01-MAR-2011','DD-MON-YYYY')),
partition P_2011_MAR values less than (to_date('01-APR-2011','DD-MON-YYYY')),
partition P_2011_APR values less than (to_date('01-MAY-2011','DD-MON-YYYY')),
partition P_2011_MAY values less than (to_date('01-JUN-2011','DD-MON-YYYY')),
partition P_2011_JUN values less than (to_date('01-JUL-2011','DD-MON-YYYY')),
partition P_2011_JUL values less than (to_date('01-AUG-2011','DD-MON-YYYY')),
partition P_2011_AUG values less than (to_date('01-SEP-2011','DD-MON-YYYY')),
partition P_2011_SEP values less than (to_date('01-OCT-2011','DD-MON-YYYY'))
)
/
alter table transactions add constraint country_fk foreign key (country_cd) references country_table;
alter table transactions add constraint product_fk foreign key (product_cd) references product_table;

create index transactions_ndx on transactions (country_cd, txn_id) local;

-- create the countries
insert into country_table values ('IN','India');
insert into country_table values ('ID','Indonesia');
insert into country_table values ('SG','Singapore');
insert into country_table values ('MY','Malaysia');
insert into country_table values ('KR','Korea');
commit;
-- create the products
insert into product_table select rownum, 'Prod: _' || to_char(rownum) from dual connect by level < 15;
-- populate transactions
insert /*+ APPEND */ into transactions
select to_date('01-JAN-2011','DD-MON-YYYY')+rownum/5000, rownum, decode(mod(rownum,10),0,'IN',1,'IN',2,'ID',3,'SG',4,'IN',5,'SG',6,'KR',7,'MY',8,'IN',9,'KR'),
mod(rownum,10)+1,
rownum/1000,
lpad('ABCDEFGHIJ',80)
from dual connect by level < 1000001;
commit;
exec dbms_stats.gather_table_stats('','COUNTRY_TABLE',estimate_percent=>100,cascade=>TRUE);
exec dbms_stats.gather_table_stats('','PRODUCT_TABLE',estimate_percent=>100,cascade=>TRUE);
exec dbms_stats.gather_table_stats('','TRANSACTIONS',estimate_percent=>100,granularity=>'ALL',-
method_opt=>'FOR COLUMNS COUNTRY_CD',cascade=>TRUE);


commit;


I then count the rows in the February partition and also "cache" the partition into memory (so as to eliminate the need for physical I/O in my tests) :
SQL> select /*+ FULL (t) */ count(*) from transactions  partition (P_2011_FEB) t;

COUNT(*)
----------
140000

SQL>


Given that there are 14 products and that country_cd 'IN' accounts for 40% of all the transactions, how many rows should this query fetch and what should it "cost" ?
select  product_desc, txn_id, transaction_amt
from transactions t, product_table p
where txn_date between to_date('01-FEB-2011','DD-MON-YYYY') and to_date('28-FEB-2011','DD-MON-YYYY')
and txn_id between 155000 and 156000
and country_cd = 'IN'
and t.product_cd=p.product_cd
/


The query will fetch 401 rows.
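That figure can be sanity-checked outside the database. A quick Python sketch, mirroring the setup script : the DECODE maps the mod-10 residues 0, 1, 4 and 8 to country_cd 'IN', the range 155000 to 156000 inclusive covers 1001 txn_ids, and (given the 1/5000-of-a-day spacing of TXN_DATE) every one of those rows falls inside the February partition.

```python
# Count the txn_ids in [155000, 156000] whose mod-10 residue maps to 'IN'
# under the DECODE used to populate the TRANSACTIONS table.
matching = sum(1 for txn_id in range(155000, 156001)
               if txn_id % 10 in (0, 1, 4, 8))
print(matching)  # 401
```

100 full cycles of 10 contribute 400 rows, and txn_id 156000 (residue 0) adds the 401st.
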

Here's the "normal" execution plan :
Nested Loop :
SQL> explain plan for
2 select product_desc, txn_id, transaction_amt
3 from transactions t, product_table p
4 where txn_date between to_date('01-FEB-2011','DD-MON-YYYY') and to_date('28-FEB-2011','DD-MON-YYYY')
5 and txn_id between 155000 and 156000
6 and country_cd = 'IN'
7 and t.product_cd=p.product_cd
8 /

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1299935411

-------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 63 | 25 (0)| 00:00:01 | | |
| 1 | NESTED LOOPS | | | | | | | |
| 2 | NESTED LOOPS | | 1 | 63 | 25 (0)| 00:00:01 | | |
| 3 | PARTITION RANGE SINGLE | | 1 | 51 | 24 (0)| 00:00:01 | 2 | 2 |
|* 4 | TABLE ACCESS BY LOCAL INDEX ROWID| TRANSACTIONS | 1 | 51 | 24 (0)| 00:00:01 | 2 | 2 |
|* 5 | INDEX RANGE SCAN | TRANSACTIONS_NDX | 252 | | 4 (0)| 00:00:01 | 2 | 2 |
|* 6 | INDEX UNIQUE SCAN | SYS_C0016611 | 1 | | 0 (0)| 00:00:01 | | |
| 7 | TABLE ACCESS BY INDEX ROWID | PRODUCT_TABLE | 1 | 12 | 1 (0)| 00:00:01 | | |
-------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

4 - filter("TXN_DATE"<=TO_DATE(' 2011-02-28 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
5 - access("COUNTRY_CD"='IN' AND "TXN_ID">=155000 AND "TXN_ID"<=156000)
6 - access("T"."PRODUCT_CD"="P"."PRODUCT_CD")

21 rows selected.

SQL>


So, we have a "COST" of 25 and a "nice" Nested Loop join. The query will correctly hit only the February partition and will use the corresponding index partition.
But I suggest that you look at the Cardinality figures again... and think about what they are telling you.

Here's the actual execution with this plan :
SQL> select /*+ gather_plan_statistics */ product_desc, txn_id, transaction_amt
2 from transactions t, product_table p
3 where txn_date between to_date('01-FEB-2011','DD-MON-YYYY') and to_date('28-FEB-2011','DD-MON-YYYY')
4 and txn_id between 155000 and 156000
5 and country_cd = 'IN'
6 and t.product_cd=p.product_cd
7 /

PRODUCT_DESC TXN_ID TRANSACTION_AMT
-------------------------------------------------- ---------- ---------------
Prod: _1 155000 155
Prod: _2 155001 155.001
Prod: _5 155004 155.004
........
Prod: _5 155994 155.994
Prod: _9 155998 155.998
Prod: _1 156000 156

401 rows selected.

SQL>
SQL> select * from table(dbms_xplan.display_cursor('','','ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 5yzvvdg1y2aub, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ product_desc, txn_id,
transaction_amt from transactions t, product_table p where txn_date
between to_date('01-FEB-2011','DD-MON-YYYY') and
to_date('28-FEB-2011','DD-MON-YYYY') and txn_id between 155000 and
156000 and country_cd = 'IN' and t.product_cd=p.product_cd

Plan hash value: 1299935411

-------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 401 |00:00:00.06 | 428 |
| 1 | NESTED LOOPS | | 1 | | 401 |00:00:00.06 | 428 |
| 2 | NESTED LOOPS | | 1 | 1 | 401 |00:00:00.09 | 27 |
| 3 | PARTITION RANGE SINGLE | | 1 | 1 | 401 |00:00:00.04 | 23 |
|* 4 | TABLE ACCESS BY LOCAL INDEX ROWID| TRANSACTIONS | 1 | 1 | 401 |00:00:00.04 | 23 |
|* 5 | INDEX RANGE SCAN | TRANSACTIONS_NDX | 1 | 252 | 401 |00:00:00.01 | 6 |
|* 6 | INDEX UNIQUE SCAN | SYS_C0016611 | 401 | 1 | 401 |00:00:00.01 | 4 |
| 7 | TABLE ACCESS BY INDEX ROWID | PRODUCT_TABLE | 401 | 1 | 401 |00:00:00.01 | 401 |
-------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

4 - filter("TXN_DATE"<=TO_DATE(' 2011-02-28 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
5 - access("COUNTRY_CD"='IN' AND "TXN_ID">=155000 AND "TXN_ID"<=156000)
6 - access("T"."PRODUCT_CD"="P"."PRODUCT_CD")

30 rows selected.

SQL>


Have you noticed where most of the "consistent gets" occur ? Of the 428 buffer gets, 401 were against PRODUCT_TABLE ! (Index SYS_C0016611 is the index enforcing the Primary Key constraint on this table.) That is 93.7% of the logical I/O, all against this "very small" table (it has only 14 rows).
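The arithmetic behind that 93.7% figure can be sketched quickly; the buffer counts below are simply the ones from the ALLSTATS LAST output above.

```python
# Buffer-get breakdown from the ALLSTATS LAST output above.
total_gets = 428           # Buffers at plan line 0 (the whole query)
outer_gets = 23            # TRANSACTIONS access: index + table (plan lines 3-5)
pk_index_gets = 4          # SYS_C0016611 unique-scan gets (plan line 6)
product_table_gets = 401   # one buffer get per probe into PRODUCT_TABLE (plan line 7)

# A Nested Loop probes the inner table once per outer row: 401 rows -> 401 gets.
assert outer_gets + pk_index_gets + product_table_gets == total_gets

share = product_table_gets / total_gets
print(f"{share:.1%} of the logical I/O hit PRODUCT_TABLE")  # -> 93.7%
```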
Here's how the trace records the execution :
select /*+ gather_plan_statistics */ product_desc, txn_id, transaction_amt
from transactions t, product_table p
where txn_date between to_date('01-FEB-2011','DD-MON-YYYY') and to_date('28-FEB-2011','DD-MON-YYYY')
and txn_id between 155000 and 156000
and country_cd = 'IN'
and t.product_cd=p.product_cd

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.08 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.05 0.05 0 428 0 401
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.06 0.13 0 428 0 401

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 184

Rows Row Source Operation
------- ---------------------------------------------------
401 NESTED LOOPS (cr=428 pr=0 pw=0 time=51333 us)
401 NESTED LOOPS (cr=27 pr=0 pw=0 time=81800 us cost=25 size=63 card=1)
401 PARTITION RANGE SINGLE PARTITION: 2 2 (cr=23 pr=0 pw=0 time=39600 us cost=24 size=51 card=1)
401 TABLE ACCESS BY LOCAL INDEX ROWID TRANSACTIONS PARTITION: 2 2 (cr=23 pr=0 pw=0 time=36000 us cost=24 size=51 card=1)
401 INDEX RANGE SCAN TRANSACTIONS_NDX PARTITION: 2 2 (cr=6 pr=0 pw=0 time=1733 us cost=4 size=0 card=252)(object id 85438)
401 INDEX UNIQUE SCAN SYS_C0016611 (cr=4 pr=0 pw=0 time=0 us cost=0 size=0 card=1)(object id 85427)
401 TABLE ACCESS BY INDEX ROWID PRODUCT_TABLE (cr=401 pr=0 pw=0 time=0 us cost=1 size=12 card=1)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 2.08 2.09
SQL*Net more data to client 1 0.00 0.00
********************************************************************************

What does this mean ? Each of the 401 Nested Loop probes had to re-visit PRODUCT_TABLE, and in a concurrent workload that repeated access translates into many more latch gets and buffer pins. PRODUCT_TABLE could well develop "hot" blocks !
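A toy model (illustrative Python, not Oracle internals) shows why the two access patterns diverge so sharply: a Nested Loop visits an inner-table block once per probe, while a full scan for a Hash Join reads each block below the High Water Mark exactly once.

```python
def nested_loop_inner_gets(outer_rows, gets_per_probe=1):
    # One (or more) buffer visits to the inner table for every outer row.
    return outer_rows * gets_per_probe

def hash_join_inner_gets(blocks_below_hwm):
    # The inner table is scanned once, block by block, up to the HWM.
    return blocks_below_hwm

nl = nested_loop_inner_gets(401)  # 401 visits to PRODUCT_TABLE, as in the trace
hj = hash_join_inner_gets(8)      # 8 blocks, matching the full-scan trace below
print(nl, hj)                     # roughly 50x more inner-table visits for the NL
```

The model ignores buffer-cache effects and index gets, of course; the point is only the per-row versus per-block scaling.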

Here's an "alternate" execution plan :
Hash Join :
SQL> explain plan for
2 select /*+ USE_HASH (t p) */ product_desc, txn_id, transaction_amt
3 from transactions t, product_table p
4 where txn_date between to_date('01-FEB-2011','DD-MON-YYYY') and to_date('28-FEB-2011','DD-MON-YYYY')
5 and txn_id between 155000 and 156000
6 and country_cd = 'IN'
7 and t.product_cd=p.product_cd
8 /

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 971735053

------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 63 | 28 (4)| 00:00:01 | | |
|* 1 | HASH JOIN | | 1 | 63 | 28 (4)| 00:00:01 | | |
| 2 | PARTITION RANGE SINGLE | | 1 | 51 | 24 (0)| 00:00:01 | 2 | 2 |
|* 3 | TABLE ACCESS BY LOCAL INDEX ROWID| TRANSACTIONS | 1 | 51 | 24 (0)| 00:00:01 | 2 | 2 |
|* 4 | INDEX RANGE SCAN | TRANSACTIONS_NDX | 252 | | 4 (0)| 00:00:01 | 2 | 2 |
| 5 | TABLE ACCESS FULL | PRODUCT_TABLE | 14 | 168 | 3 (0)| 00:00:01 | | |
------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - access("T"."PRODUCT_CD"="P"."PRODUCT_CD")
3 - filter("TXN_DATE"<=TO_DATE(' 2011-02-28 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
4 - access("COUNTRY_CD"='IN' AND "TXN_ID">=155000 AND "TXN_ID"<=156000)

19 rows selected.

SQL>

This plan has a slightly higher cost. How does it perform ?
SQL> select /*+ gather_plan_statistics USE_HASH (t p) */ product_desc, txn_id, transaction_amt
2 from transactions t, product_table p
3 where txn_date between to_date('01-FEB-2011','DD-MON-YYYY') and to_date('28-FEB-2011','DD-MON-YYYY')
4 and txn_id between 155000 and 156000
5 and country_cd = 'IN'
6 and t.product_cd=p.product_cd
7 /

PRODUCT_DESC TXN_ID TRANSACTION_AMT
-------------------------------------------------- ---------- ---------------
Prod: _1 156000 156
Prod: _1 155990 155.99
Prod: _1 155980 155.98
........
Prod: _9 155038 155.038
Prod: _9 155028 155.028
Prod: _9 155018 155.018
Prod: _9 155008 155.008

401 rows selected.

SQL>
SQL> select * from table(dbms_xplan.display_cursor('','','ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 6xtazb8h07f8b, child number 0
-------------------------------------
select /*+ gather_plan_statistics USE_HASH (t p) */ product_desc,
txn_id, transaction_amt from transactions t, product_table p where
txn_date between to_date('01-FEB-2011','DD-MON-YYYY') and
to_date('28-FEB-2011','DD-MON-YYYY') and txn_id between 155000 and
156000 and country_cd = 'IN' and t.product_cd=p.product_cd

Plan hash value: 971735053

---------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name             | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                  |      1 |        |    401 |00:00:00.05 |      29 |       |       |          |
|*  1 |  HASH JOIN                          |                  |      1 |      1 |    401 |00:00:00.05 |      29 |   870K|   870K|  714K (0)|
|   2 |   PARTITION RANGE SINGLE            |                  |      1 |      1 |    401 |00:00:00.67 |      21 |       |       |          |
|*  3 |    TABLE ACCESS BY LOCAL INDEX ROWID| TRANSACTIONS     |      1 |      1 |    401 |00:00:00.59 |      21 |       |       |          |
|*  4 |     INDEX RANGE SCAN                | TRANSACTIONS_NDX |      1 |    252 |    401 |00:00:00.28 |       5 |       |       |          |
|   5 |   TABLE ACCESS FULL                 | PRODUCT_TABLE    |      1 |     14 |     14 |00:00:00.01 |       8 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------

1 - access("T"."PRODUCT_CD"="P"."PRODUCT_CD")
3 - filter("TXN_DATE"<=TO_DATE(' 2011-02-28 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
4 - access("COUNTRY_CD"='IN' AND "TXN_ID">=155000 AND "TXN_ID"<=156000)

28 rows selected.

SQL>

This execution, although it does a "Full Table Scan" on PRODUCT_TABLE, needs far fewer consistent gets -- 29 versus 428 -- in spite of the slightly higher cost of 28 versus 25.
Here's the trace :
select /*+ gather_plan_statistics USE_HASH (t p) */ product_desc, txn_id, transaction_amt
from transactions t, product_table p
where txn_date between to_date('01-FEB-2011','DD-MON-YYYY') and to_date('28-FEB-2011','DD-MON-YYYY')
and txn_id between 155000 and 156000
and country_cd = 'IN'
and t.product_cd=p.product_cd

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.06 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.03 0.03 0 29 0 401
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.05 0.10 0 29 0 401

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 184

Rows Row Source Operation
------- ---------------------------------------------------
401 HASH JOIN (cr=29 pr=0 pw=0 time=16666 us cost=28 size=63 card=1)
401 PARTITION RANGE SINGLE PARTITION: 2 2 (cr=21 pr=0 pw=0 time=665300 us cost=24 size=51 card=1)
401 TABLE ACCESS BY LOCAL INDEX ROWID TRANSACTIONS PARTITION: 2 2 (cr=21 pr=0 pw=0 time=585100 us cost=24 size=51 card=1)
401 INDEX RANGE SCAN TRANSACTIONS_NDX PARTITION: 2 2 (cr=5 pr=0 pw=0 time=278133 us cost=4 size=0 card=252)(object id 85438)
14 TABLE ACCESS FULL PRODUCT_TABLE (cr=8 pr=0 pw=0 time=0 us cost=3 size=168 card=14)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 0.36 0.36
********************************************************************************

PRODUCT_TABLE was scanned up to its High Water Mark -- only 8 blocks.

Given the circumstances -- the size and the nature of the data -- the second execution, with the Hash Join, is the one I prefer.

Unfortunately, with the available statistics, Oracle chooses the Nested Loop. A novice DBA might also choose the Nested Loop because it has a "lower" cost.

So, what gives ?

Here's a hint : I told you to look (carefully) at the cardinality (number of rows) in each execution plan. There's something very wrong in the first execution plan.
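To make the hint concrete : the optimizer estimated 1 row (E-Rows) at nearly every step where 401 rows were actually returned (A-Rows). One plausible cause -- my assumption here, not something this listing proves -- is that the optimizer multiplies the selectivities of the TXN_DATE, TXN_ID and COUNTRY_CD predicates as though they were independent, so several "selective" filters compound into a near-zero estimate even when the predicates are, in fact, correlated. The numbers below are made up purely to illustrate the compounding, not taken from the actual table statistics :

```python
# Hypothetical illustration of independence-based selectivity compounding.
# All selectivity values are invented for illustration only.
table_rows  = 1_000_000
sel_date    = 0.08    # fraction of rows in the February range
sel_txn_id  = 0.001   # fraction with txn_id in 155000..156000
sel_country = 0.05    # fraction with COUNTRY_CD = 'IN'

# Assuming independence, the estimates multiply :
estimate = table_rows * sel_date * sel_txn_id * sel_country
print(estimate)  # -> 4.0 : a tiny estimate, easily "1" with other statistics
```

If the predicates are correlated (say, every txn_id in that range falls in February), the true row count can be orders of magnitude higher than the product suggests -- which is exactly the E-Rows=1 versus A-Rows=401 gap above.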


In any case, remember : a lower cost isn't necessarily a better plan.

I may well re-visit this data and this query in another blog post.

.
.
.


Aggregated by orafaq.com
