Some DBAs have accepted the idea that, rather than physical reads, they should concentrate on logical reads -- consistent gets, in the case of queries.
But ... there really is no "absolutely correct" optimization target. Everything is relative.
Sometimes you may sacrifice one goal because you have to work within a constraint. Concurrency, unfortunately, is a constraint that often gets ignored.
Concurrency issues rarely manifest in test environments. Yet they bite in production environments.
Here's a case where simple rules like :
a. Use a Nested Loop when fetching few rows
b. Aim for the lowest "cost"
may not necessarily be optimal.
I start with this specific data :
drop table transactions purge;
drop table product_table purge;
drop table country_table purge;
create table country_table
(country_cd varchar2(5) primary key,
country_name varchar2(50));
create table product_table
(product_cd number primary key,
product_Desc varchar2(50));
create table transactions
(txn_date date not null,
txn_id number,
country_cd varchar2(5),
product_cd number,
transaction_amt number,
other_columns varchar2(100))
partition by range (txn_date)
(
partition P_2011_JAN values less than (to_date('01-FEB-2011','DD-MON-YYYY')),
partition P_2011_FEB values less than (to_date('01-MAR-2011','DD-MON-YYYY')),
partition P_2011_MAR values less than (to_date('01-APR-2011','DD-MON-YYYY')),
partition P_2011_APR values less than (to_date('01-MAY-2011','DD-MON-YYYY')),
partition P_2011_MAY values less than (to_date('01-JUN-2011','DD-MON-YYYY')),
partition P_2011_JUN values less than (to_date('01-JUL-2011','DD-MON-YYYY')),
partition P_2011_JUL values less than (to_date('01-AUG-2011','DD-MON-YYYY')),
partition P_2011_AUG values less than (to_date('01-SEP-2011','DD-MON-YYYY')),
partition P_2011_SEP values less than (to_date('01-OCT-2011','DD-MON-YYYY'))
)
/
alter table transactions add constraint country_fk foreign key (country_cd) references country_table;
alter table transactions add constraint product_fk foreign key (product_cd) references product_table;
create index transactions_ndx on transactions (country_cd, txn_id) local;
-- create the countries
insert into country_table values ('IN','India');
insert into country_table values ('ID','Indonesia');
insert into country_table values ('SG','Singapore');
insert into country_table values ('MY','Malaysia');
insert into country_table values ('KR','Korea');
commit;
-- create the products
insert into product_table select rownum, 'Prod: _' || to_char(rownum) from dual connect by level < 15;
-- populate transactions
insert /*+ APPEND */ into transactions
select to_date('01-JAN-2011','DD-MON-YYYY')+rownum/5000, rownum, decode(mod(rownum,10),0,'IN',1,'IN',2,'ID',3,'SG',4,'IN',5,'SG',6,'KR',7,'MY',8,'IN',9,'KR'),
mod(rownum,10)+1,
rownum/1000,
lpad('ABCDEFGHIJ',80)
from dual connect by level < 1000001;
commit;
exec dbms_stats.gather_table_stats('','COUNTRY_TABLE',estimate_percent=>100,cascade=>TRUE);
exec dbms_stats.gather_table_stats('','PRODUCT_TABLE',estimate_percent=>100,cascade=>TRUE);
exec dbms_stats.gather_table_stats('','TRANSACTIONS',estimate_percent=>100,granularity=>'ALL',-
method_opt=>'FOR COLUMNS COUNTRY_CD',cascade=>TRUE);
commit;
I then count the rows in the February partition and also "cache" the partition into memory (so as to eliminate the need for physical I/O in my tests) :
SQL> select /*+ FULL (t) */ count(*) from transactions partition (P_2011_FEB) t;
COUNT(*)
----------
140000
SQL>
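The 140,000-row count follows directly from the data-generation formula. Here's a quick sanity check of that arithmetic (a sketch I've added, not part of the original test) :

```python
# txn_date = DATE '2011-01-01' + rownum/5000 for rownum = 1 .. 1,000,000,
# so a row falls in partition P_2011_FEB exactly when 31 <= rownum/5000 < 59
# (days 31 .. 58.9998 after 01-JAN-2011 are all in February).
first_feb = 31 * 5000        # rownum 155000 -> 01-FEB-2011 00:00
last_feb = 59 * 5000 - 1     # rownum 294999 -> just before 01-MAR-2011
feb_rows = last_feb - first_feb + 1
print(feb_rows)              # 140000
```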
Given that there are 14 products and that country_cd 'IN' accounts for 40% of all the transactions, how many rows should this query fetch and what should it "cost" ?
select product_desc, txn_id, transaction_amt
from transactions t, product_table p
where txn_date between to_date('01-FEB-2011','DD-MON-YYYY') and to_date('28-FEB-2011','DD-MON-YYYY')
and txn_id between 155000 and 156000
and country_cd = 'IN'
and t.product_cd=p.product_cd
/
The query will fetch 401 rows.
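The 401 figure is no accident -- it falls out of the DECODE used to populate the table. A small Python model (my addition) reproduces it :

```python
# The DECODE in the INSERT assigns country_cd from mod(rownum, 10):
# residues 0, 1, 4, 8 -> 'IN' (4 of 10 residues, i.e. the 40% skew).
IN_RESIDUES = {0, 1, 4, 8}

# txn_id = rownum, so "txn_id between 155000 and 156000 and
# country_cd = 'IN'" keeps the txn_ids in that range whose residue mod 10
# is in IN_RESIDUES.  (All of them fall in February: rownum/5000 is 31.0
# to 31.2 days past 01-JAN-2011.)
matches = sum(1 for txn_id in range(155000, 156001)
              if txn_id % 10 in IN_RESIDUES)
print(matches)   # 401
```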
Here's the "normal" execution plan :
Nested Loop :
SQL> explain plan for
2 select product_desc, txn_id, transaction_amt
3 from transactions t, product_table p
4 where txn_date between to_date('01-FEB-2011','DD-MON-YYYY') and to_date('28-FEB-2011','DD-MON-YYYY')
5 and txn_id between 155000 and 156000
6 and country_cd = 'IN'
7 and t.product_cd=p.product_cd
8 /
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1299935411
-------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 63 | 25 (0)| 00:00:01 | | |
| 1 | NESTED LOOPS | | | | | | | |
| 2 | NESTED LOOPS | | 1 | 63 | 25 (0)| 00:00:01 | | |
| 3 | PARTITION RANGE SINGLE | | 1 | 51 | 24 (0)| 00:00:01 | 2 | 2 |
|* 4 | TABLE ACCESS BY LOCAL INDEX ROWID| TRANSACTIONS | 1 | 51 | 24 (0)| 00:00:01 | 2 | 2 |
|* 5 | INDEX RANGE SCAN | TRANSACTIONS_NDX | 252 | | 4 (0)| 00:00:01 | 2 | 2 |
|* 6 | INDEX UNIQUE SCAN | SYS_C0016611 | 1 | | 0 (0)| 00:00:01 | | |
| 7 | TABLE ACCESS BY INDEX ROWID | PRODUCT_TABLE | 1 | 12 | 1 (0)| 00:00:01 | | |
-------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter("TXN_DATE"<=TO_DATE(' 2011-02-28 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
   5 - access("COUNTRY_CD"='IN' AND "TXN_ID">=155000 AND "TXN_ID"<=156000)
   6 - access("T"."PRODUCT_CD"="P"."PRODUCT_CD")

21 rows selected.

SQL>
So, we have a "COST" of 25 and a "nice" Nested Loop join. The query will correctly hit only the February partition and will use the corresponding index partition.
...... but I suggest that you look at the Cardinality figures again.... think about them.
Here's the actual execution with this plan :
SQL> select /*+ gather_plan_statistics */ product_desc, txn_id, transaction_amt
2 from transactions t, product_table p
3 where txn_date between to_date('01-FEB-2011','DD-MON-YYYY') and to_date('28-FEB-2011','DD-MON-YYYY')
4 and txn_id between 155000 and 156000
5 and country_cd = 'IN'
6 and t.product_cd=p.product_cd
7 /
PRODUCT_DESC TXN_ID TRANSACTION_AMT
-------------------------------------------------- ---------- ---------------
Prod: _1 155000 155
Prod: _2 155001 155.001
Prod: _5 155004 155.004
........
Prod: _5 155994 155.994
Prod: _9 155998 155.998
Prod: _1 156000 156
401 rows selected.
SQL>
SQL> select * from table(dbms_xplan.display_cursor('','','ALLSTATS LAST'));
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 5yzvvdg1y2aub, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ product_desc, txn_id,
transaction_amt from transactions t, product_table p where txn_date
between to_date('01-FEB-2011','DD-MON-YYYY') and
to_date('28-FEB-2011','DD-MON-YYYY') and txn_id between 155000 and
156000 and country_cd = 'IN' and t.product_cd=p.product_cd
Plan hash value: 1299935411
-------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 401 |00:00:00.06 | 428 |
| 1 | NESTED LOOPS | | 1 | | 401 |00:00:00.06 | 428 |
| 2 | NESTED LOOPS | | 1 | 1 | 401 |00:00:00.09 | 27 |
| 3 | PARTITION RANGE SINGLE | | 1 | 1 | 401 |00:00:00.04 | 23 |
|* 4 | TABLE ACCESS BY LOCAL INDEX ROWID| TRANSACTIONS | 1 | 1 | 401 |00:00:00.04 | 23 |
|* 5 | INDEX RANGE SCAN | TRANSACTIONS_NDX | 1 | 252 | 401 |00:00:00.01 | 6 |
|* 6 | INDEX UNIQUE SCAN | SYS_C0016611 | 401 | 1 | 401 |00:00:00.01 | 4 |
| 7 | TABLE ACCESS BY INDEX ROWID | PRODUCT_TABLE | 401 | 1 | 401 |00:00:00.01 | 401 |
-------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter("TXN_DATE"<=TO_DATE(' 2011-02-28 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
   5 - access("COUNTRY_CD"='IN' AND "TXN_ID">=155000 AND "TXN_ID"<=156000)
   6 - access("T"."PRODUCT_CD"="P"."PRODUCT_CD")

30 rows selected.

SQL>
Have you noticed where most of the "consistent gets" occur ? Of the 428 buffer gets, 401 were against PRODUCT_TABLE ! (Index SYS_C0016611 is the index for the Primary Key constraint on this table). 93.7% of the logical I/O was against this "very small" table (it has only 14 rows).
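The 428 gets decompose cleanly when read off the Buffers column of the plan above (a breakdown I've written out, using only the figures shown) :

```python
# Accounting for the 428 consistent gets in the Nested Loop execution:
transactions_gets = 23   # TABLE ACCESS BY LOCAL INDEX ROWID (includes the 6 index gets)
product_pk_gets = 4      # INDEX UNIQUE SCAN SYS_C0016611 across 401 probes
product_tab_gets = 401   # TABLE ACCESS BY INDEX ROWID: one get per joined row

total = transactions_gets + product_pk_gets + product_tab_gets
share = product_tab_gets / total
print(total, f"{share:.1%}")   # 428 93.7%
```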
Here's how the trace records the execution :
select /*+ gather_plan_statistics */ product_desc, txn_id, transaction_amt
from transactions t, product_table p
where txn_date between to_date('01-FEB-2011','DD-MON-YYYY') and to_date('28-FEB-2011','DD-MON-YYYY')
and txn_id between 155000 and 156000
and country_cd = 'IN'
and t.product_cd=p.product_cd
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.08 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.05 0.05 0 428 0 401
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.06 0.13 0 428 0 401
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 184
Rows Row Source Operation
------- ---------------------------------------------------
401 NESTED LOOPS (cr=428 pr=0 pw=0 time=51333 us)
401 NESTED LOOPS (cr=27 pr=0 pw=0 time=81800 us cost=25 size=63 card=1)
401 PARTITION RANGE SINGLE PARTITION: 2 2 (cr=23 pr=0 pw=0 time=39600 us cost=24 size=51 card=1)
401 TABLE ACCESS BY LOCAL INDEX ROWID TRANSACTIONS PARTITION: 2 2 (cr=23 pr=0 pw=0 time=36000 us cost=24 size=51 card=1)
401 INDEX RANGE SCAN TRANSACTIONS_NDX PARTITION: 2 2 (cr=6 pr=0 pw=0 time=1733 us cost=4 size=0 card=252)(object id 85438)
401 INDEX UNIQUE SCAN SYS_C0016611 (cr=4 pr=0 pw=0 time=0 us cost=0 size=0 card=1)(object id 85427)
401 TABLE ACCESS BY INDEX ROWID PRODUCT_TABLE (cr=401 pr=0 pw=0 time=0 us cost=1 size=12 card=1)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 2.08 2.09
SQL*Net more data to client 1 0.00 0.00
********************************************************************************
What does this mean ? A much higher number of joined rows would mean many more buffer gets against PRODUCT_TABLE -- and every buffer get requires latch gets and buffer pins. Under concurrent executions, PRODUCT_TABLE could have "hot" blocks !
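To see why that matters under concurrency, here is a back-of-the-envelope model (my assumption, not something measured in this post) : every consistent get takes the cache buffers chains latch protecting the block, and 14 rows likely sit in very few blocks, so the Nested Loop's 401 gets per execution all land on the same block(s).

```python
# Hypothetical concurrent workload: compare latch/get pressure on the
# PRODUCT_TABLE block(s) for the two plans.
sessions_per_sec = 50     # assumed concurrent execution rate (illustrative)
nl_product_gets = 401     # PRODUCT_TABLE gets per Nested Loop execution
hj_product_gets = 8       # PRODUCT_TABLE gets per Hash Join execution (full scan to HWM)

print(sessions_per_sec * nl_product_gets)   # 20050 gets/sec on the hot block(s)
print(sessions_per_sec * hj_product_gets)   # 400 gets/sec
```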
Here's an "alternate" execution plan :
Hash Join :
SQL> explain plan for
2 select /*+ USE_HASH (t p) */ product_desc, txn_id, transaction_amt
3 from transactions t, product_table p
4 where txn_date between to_date('01-FEB-2011','DD-MON-YYYY') and to_date('28-FEB-2011','DD-MON-YYYY')
5 and txn_id between 155000 and 156000
6 and country_cd = 'IN'
7 and t.product_cd=p.product_cd
8 /
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 971735053
------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 63 | 28 (4)| 00:00:01 | | |
|* 1 | HASH JOIN | | 1 | 63 | 28 (4)| 00:00:01 | | |
| 2 | PARTITION RANGE SINGLE | | 1 | 51 | 24 (0)| 00:00:01 | 2 | 2 |
|* 3 | TABLE ACCESS BY LOCAL INDEX ROWID| TRANSACTIONS | 1 | 51 | 24 (0)| 00:00:01 | 2 | 2 |
|* 4 | INDEX RANGE SCAN | TRANSACTIONS_NDX | 252 | | 4 (0)| 00:00:01 | 2 | 2 |
| 5 | TABLE ACCESS FULL | PRODUCT_TABLE | 14 | 168 | 3 (0)| 00:00:01 | | |
------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("T"."PRODUCT_CD"="P"."PRODUCT_CD")
   3 - filter("TXN_DATE"<=TO_DATE(' 2011-02-28 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
   4 - access("COUNTRY_CD"='IN' AND "TXN_ID">=155000 AND "TXN_ID"<=156000)

19 rows selected.

SQL>
This plan has a slightly higher cost. How does it perform ?
SQL> select /*+ gather_plan_statistics USE_HASH (t p) */ product_desc, txn_id, transaction_amt
2 from transactions t, product_table p
3 where txn_date between to_date('01-FEB-2011','DD-MON-YYYY') and to_date('28-FEB-2011','DD-MON-YYYY')
4 and txn_id between 155000 and 156000
5 and country_cd = 'IN'
6 and t.product_cd=p.product_cd
7 /
PRODUCT_DESC TXN_ID TRANSACTION_AMT
-------------------------------------------------- ---------- ---------------
Prod: _1 156000 156
Prod: _1 155990 155.99
Prod: _1 155980 155.98
........
Prod: _9 155038 155.038
Prod: _9 155028 155.028
Prod: _9 155018 155.018
Prod: _9 155008 155.008
401 rows selected.
SQL>
SQL> select * from table(dbms_xplan.display_cursor('','','ALLSTATS LAST'));
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 6xtazb8h07f8b, child number 0
-------------------------------------
select /*+ gather_plan_statistics USE_HASH (t p) */ product_desc,
txn_id, transaction_amt from transactions t, product_table p where
txn_date between to_date('01-FEB-2011','DD-MON-YYYY') and
to_date('28-FEB-2011','DD-MON-YYYY') and txn_id between 155000 and
156000 and country_cd = 'IN' and t.product_cd=p.product_cd
Plan hash value: 971735053
---------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name             | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                  |      1 |        |    401 |00:00:00.05 |      29 |       |       |          |
|*  1 |  HASH JOIN                          |                  |      1 |      1 |    401 |00:00:00.05 |      29 |   870K|   870K|  714K (0)|
|   2 |   PARTITION RANGE SINGLE            |                  |      1 |      1 |    401 |00:00:00.67 |      21 |       |       |          |
|*  3 |    TABLE ACCESS BY LOCAL INDEX ROWID| TRANSACTIONS     |      1 |      1 |    401 |00:00:00.59 |      21 |       |       |          |
|*  4 |     INDEX RANGE SCAN                | TRANSACTIONS_NDX |      1 |    252 |    401 |00:00:00.28 |       5 |       |       |          |
|   5 |   TABLE ACCESS FULL                 | PRODUCT_TABLE    |      1 |     14 |     14 |00:00:00.01 |       8 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("T"."PRODUCT_CD"="P"."PRODUCT_CD")
   3 - filter("TXN_DATE"<=TO_DATE(' 2011-02-28 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
   4 - access("COUNTRY_CD"='IN' AND "TXN_ID">=155000 AND "TXN_ID"<=156000)

28 rows selected.

SQL>
This execution, although doing a "Full Table Scan" on PRODUCT_TABLE, had far fewer consistent gets -- 29 versus 428 -- in spite of the slightly higher cost of 28 versus 25.
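The two plans scale very differently with the number of joined rows. A rough sketch (mine, using only the per-plan figures observed above) :

```python
# Logical I/O as a function of the number of joined rows n, for this query:
def nested_loop_gets(n):
    # ~27 gets on the TRANSACTIONS side here, plus one PRODUCT_TABLE get
    # per probe -- grows linearly with n
    return 27 + n

def hash_join_gets(n):
    # ~21 gets on the TRANSACTIONS side here, plus one 8-block full scan
    # of PRODUCT_TABLE -- independent of n
    return 21 + 8

print(nested_loop_gets(401), hash_join_gets(401))   # 428 29
```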
Here's the trace :
select /*+ gather_plan_statistics USE_HASH (t p) */ product_desc, txn_id, transaction_amt
from transactions t, product_table p
where txn_date between to_date('01-FEB-2011','DD-MON-YYYY') and to_date('28-FEB-2011','DD-MON-YYYY')
and txn_id between 155000 and 156000
and country_cd = 'IN'
and t.product_cd=p.product_cd
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.06 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.03 0.03 0 29 0 401
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.05 0.10 0 29 0 401
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 184
Rows Row Source Operation
------- ---------------------------------------------------
401 HASH JOIN (cr=29 pr=0 pw=0 time=16666 us cost=28 size=63 card=1)
401 PARTITION RANGE SINGLE PARTITION: 2 2 (cr=21 pr=0 pw=0 time=665300 us cost=24 size=51 card=1)
401 TABLE ACCESS BY LOCAL INDEX ROWID TRANSACTIONS PARTITION: 2 2 (cr=21 pr=0 pw=0 time=585100 us cost=24 size=51 card=1)
401 INDEX RANGE SCAN TRANSACTIONS_NDX PARTITION: 2 2 (cr=5 pr=0 pw=0 time=278133 us cost=4 size=0 card=252)(object id 85438)
14 TABLE ACCESS FULL PRODUCT_TABLE (cr=8 pr=0 pw=0 time=0 us cost=3 size=168 card=14)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 0.36 0.36
********************************************************************************
PRODUCT_TABLE was scanned up to the high water mark -- 8 blocks.
Given the circumstances -- the size of the data and the nature of the data -- the second execution, with a Hash Join, is what I prefer.
Unfortunately, with the available statistics, Oracle chooses the Nested Loop. A novice DBA might also choose the Nested Loop because it has a "lower" cost.
So, what gives ?
Here's a hint : I told you to look (carefully) at the cardinality (number of rows) in each execution plan. Compare E-Rows with A-Rows in the first execution plan : the optimizer estimates 1 row at the TRANSACTIONS table access, where 401 rows are actually returned. Something is very wrong there.
In any case, remember : a lower cost isn't necessarily a better plan.
I may well re-visit this data and this query in another blog post.
2 comments:
Hi Hemant,
is there perhaps a specific reason why you gather column statistics on only a single column of the TRANSACTIONS table?
As you are aware, this is not really recommended -- so I'm looking forward to your upcoming posts where you cover this data set and query again.
Randolf
Randolf,
Good question.
The query in question has three predicates : TXN_DATE (for the whole month of February 2011 -- the whole partition), TXN_ID (querying for a range of 1,000 out of 140,000 rows in that month) and COUNTRY_CD (there are only 5 countries and there is skew).
"Logically" it seems that we need to know column statistics on only COUNTRY_CD ?
How many times have we been asked : "Do we need Column / Histogram statistics on ALL the columns ? Can't we collect them only for those that matter to our queries ?" Till a few years ago, I, too, did not think that Histograms on non-indexed columns mattered.
Note also : there's a join on PRODUCT_CD.
More details soon..
Hemant K Chitale