28 June, 2008

Cardinality Estimates : Dependent Columns

NOTE : I am reposting the text of the case study (ie the SQLs, outputs and REM remarks) as the previous posting had one "less than" sign which resulted in Blogger presenting "challenging reading". 15-Jul-08

The Oracle Cost Based Optimizer uses table and column statistics to estimate the number of rows that will be fetched for given predicates in a query. This estimate is called the "Cardinality" and is presented as "CARD=" in a 9i Explain Plan or under the "Rows" Column in a 10g Explain Plan.

It is fairly obvious that the Optimizer uses the number of rows in a table and column selectivity (ie, the number of distinct values and, in the presence of histograms, the count of rows for each distinct value) to derive the expected Cardinality. Hence the general advice provided to DBAs to "ensure that you have fresh statistics" or "ensure that you have the right statistics".

For multi column predicates on the same table, the Optimizer derives the selectivity of the columns as "independent" columns and then multiplies them. Thus, the selectivity of columns (a,b) is selectivity_of_column_a X selectivity_of_column_b.
There are variants and complexities in this formula. The presence of indexes can also help the Optimizer estimate costs of different steps.
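As a rough illustration (a sketch in Python, not Oracle's actual code), the independence assumption amounts to multiplying per-column selectivities and scaling by the row count:

```python
# Minimal sketch of the "independent columns" cardinality formula:
# estimate = num_rows * selectivity(col_a) * selectivity(col_b) * ...
def expected_cardinality(num_rows, *selectivities):
    est = num_rows
    for s in selectivities:
        est *= s
    return round(est)

# 1536 rows, two columns each with 12 distinct values (selectivity 1/12 each):
print(expected_cardinality(1536, 1/12, 1/12))  # 11
```

With perfectly correlated columns the true answer would be 1536 * (1/12) = 128 rows, so the multiplied estimate of 11 is off by more than a factor of ten.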

However, what the Optimizer, till 10gR2, cannot do is identify inter-column dependencies or correlations. Therefore, if "STATE" is one column and "COUNTRY" is another column in the same table, the Optimizer may well know the number of distinct STATEs and the number of distinct COUNTRYs in the table, but not how many and which STATEs are present for a given COUNTRY.

Here is a simple example with a fairly small table which has correlated columns. The same set of queries is executed twice: first without Histograms, then again with Histograms on each of the columns. I've put in some remarks to explain how the Cardinality estimates that the Optimizer uses in deriving the Execution Plan can be very wrong when there are multiple columns and, particularly, more so when there is high correlation between the columns.


SQL> set pages600
SQL> set linesize132
SQL> set feedback off
SQL> set SQLPrompt ''

col Expected_Cardinality format 99,990 hea 'ExpCrd'
column parent_id_plus_exp format 999
column id_plus_exp format 990
column plan_plus_exp format a90
column object_node_plus_exp format a14
column other_plus_exp format a90
column other_tag_plus_exp format a29


REM To demonstrate how Oracle computes expected cardinality of two columns as being :
REM (Number_of_Rows X Density_of_Column_1 X Density_of_Column_2)
REM Oracle assumes a Uniform Distribution of values

REM The second set of tests are with Histograms and provide equally interesting results.
REM With histograms, Oracle is aware that there isn't a Uniform distribution for single columns but still isn't aware of multi column distributions

REM Only in 11g does Oracle introduce Column Groups for statistics




drop table test_cardinality ;
create table test_cardinality (
2 date_of_birth date,
3 month_of_birth varchar2(12),
4 month_number number,
5 city_of_birth varchar2(18),
6 country_of_birth varchar2(3)
7 )
8 /

alter session set nls_date_format='DD-MON-RR';

insert into test_cardinality values ('01-JAN-60','JANUARY',1,'BOMBAY','IN');
insert into test_cardinality values ('01-FEB-60','FEBRUARY',2,'DELHI','IN');
insert into test_cardinality values ('01-MAR-60','MARCH',3,'CALCUTTA','IN');
insert into test_cardinality values ('01-APR-60','APRIL',4,'MADRAS','IN');
insert into test_cardinality values ('01-MAY-60','MAY',5,'NAGPUR','IN');
insert into test_cardinality values ('01-JUN-60','JUNE',6,'KANPUR','IN');
insert into test_cardinality values ('01-JUL-60','JULY',7,'KOLHAPUR','IN');
insert into test_cardinality values ('01-AUG-60','AUGUST',8,'SOLAPUR','IN');
insert into test_cardinality values ('01-SEP-60','SEPTEMBER',9,'JABALPUR','IN');
insert into test_cardinality values ('01-OCT-60','OCTOBER',10,'BADLAPUR','IN');
insert into test_cardinality values ('01-NOV-60','NOVEMBER',11,'LONDON','UK');
insert into test_cardinality values ('01-DEC-60','DECEMBER',12,'PARIS','FR');

commit;

alter table test_cardinality nologging;
insert /*+ APPEND */ into test_cardinality select * from test_cardinality ;
commit;
insert /*+ APPEND */ into test_cardinality select * from test_cardinality ;
commit;
insert /*+ APPEND */ into test_cardinality select * from test_cardinality ;
commit;
insert /*+ APPEND */ into test_cardinality select * from test_cardinality ;
commit;
insert /*+ APPEND */ into test_cardinality select * from test_cardinality ;
commit;
insert /*+ APPEND */ into test_cardinality select * from test_cardinality ;
commit;
insert /*+ APPEND */ into test_cardinality select * from test_cardinality ;
commit;

REM We now have 128 copies of each row. ie, 128 'JANUARY's, 128 'DELHI's, 128 'UK's. But 1,280 (1536 x 10/12) 'IN's !


exec dbms_stats.gather_table_stats(ownname=>user,tabname=>'TEST_CARDINALITY',method_opt=>'FOR ALL COLUMNS SIZE 1',estimate_percent=>100);

select count(*) from test_cardinality;

COUNT(*)
----------
1536
select num_rows from user_tables where table_name = 'TEST_CARDINALITY';

NUM_ROWS
----------
1536
select column_name,sample_size, num_distinct, density from user_tab_columns where table_name = 'TEST_CARDINALITY' order by column_id;

COLUMN_NAME SAMPLE_SIZE NUM_DISTINCT DENSITY
------------------------------ ----------- ------------ ----------
DATE_OF_BIRTH 1536 12 .083333333
MONTH_OF_BIRTH 1536 12 .083333333
MONTH_NUMBER 1536 12 .083333333
CITY_OF_BIRTH 1536 12 .083333333
COUNTRY_OF_BIRTH 1536 3 .333333333

REM The density for the first 4 columns is 1/12 while that for COUNTRY_OF_BIRTH is 1/3 (ie 4/12)

set autotrace on explain;

select count(*), round(1536*(1/12)*(1/12)) Expected_Cardinality from test_cardinality where month_number = 1 and month_of_birth = 'JANUARY';

COUNT(*) ExpCrd
---------- -------
128 11

Execution Plan
----------------------------------------------------------
Plan hash value: 3936405985

---------------------------------------------------------------------------------------
| Id  | Operation          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                  |     1 |    11 |     6   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |                  |     1 |    11 |            |          |
|*  2 |   TABLE ACCESS FULL| TEST_CARDINALITY |    11 |   121 |     6   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("MONTH_NUMBER"=1 AND "MONTH_OF_BIRTH"='JANUARY')

REM The expected cardinality is 11 rows although the real count is 128

select count(*), round(1536*(1/12)*(1/12)) Expected_Cardinality from test_cardinality where month_number = 1 and month_of_birth = 'DECEMBER';

COUNT(*) ExpCrd
---------- -------
0 11

Execution Plan
----------------------------------------------------------
Plan hash value: 3936405985

---------------------------------------------------------------------------------------
| Id  | Operation          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                  |     1 |    11 |     6   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |                  |     1 |    11 |            |          |
|*  2 |   TABLE ACCESS FULL| TEST_CARDINALITY |    11 |   121 |     6   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("MONTH_NUMBER"=1 AND "MONTH_OF_BIRTH"='DECEMBER')

REM The Optimizer expected 11 rows. It is not aware of the inter-column dependencies (there are no 'DECEMBER's being month 1 !)

select count(*), round(1536*(1/12)*(1/12)) Expected_Cardinality from test_cardinality where date_of_birth = '01-JAN-60' and month_of_birth = 'DECEMBER';

COUNT(*) ExpCrd
---------- -------
0 11

Execution Plan
----------------------------------------------------------
Plan hash value: 3936405985

---------------------------------------------------------------------------------------
| Id  | Operation          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                  |     1 |    16 |     6   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |                  |     1 |    16 |            |          |
|*  2 |   TABLE ACCESS FULL| TEST_CARDINALITY |    11 |   176 |     6   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("MONTH_OF_BIRTH"='DECEMBER' AND "DATE_OF_BIRTH"='01-JAN-60')

REM Once again, the Optimizer expected 11 rows.

select count(*), round(1536*(1/12)*(1/12)) Expected_Cardinality from test_cardinality where to_char(date_of_birth,'MON') = 'JAN' and month_of_birth = 'JANUARY';

COUNT(*) ExpCrd
---------- -------
128 11

Execution Plan
----------------------------------------------------------
Plan hash value: 3936405985

---------------------------------------------------------------------------------------
| Id  | Operation          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                  |     1 |    16 |     6   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |                  |     1 |    16 |            |          |
|*  2 |   TABLE ACCESS FULL| TEST_CARDINALITY |     1 |    16 |     6   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("MONTH_OF_BIRTH"='JANUARY' AND
TO_CHAR(INTERNAL_FUNCTION("DATE_OF_BIRTH"),'MON')='JAN')

REM When a function ("to_char") is applied to a column, Oracle is unable to estimate the cardinality -- it comes up with "1"
rem so, the Density_of_Column_1 X Density_of_Column_2 formula breaks down

select count(*), round(1536*(1/12)*(4/12)) Expected_Cardinality from test_cardinality where city_of_birth = 'DELHI' and country_of_birth='IN';

COUNT(*) ExpCrd
---------- -------
128 43

Execution Plan
----------------------------------------------------------
Plan hash value: 3936405985

---------------------------------------------------------------------------------------
| Id  | Operation          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                  |     1 |    11 |     6   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |                  |     1 |    11 |            |          |
|*  2 |   TABLE ACCESS FULL| TEST_CARDINALITY |    43 |   473 |     6   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("CITY_OF_BIRTH"='DELHI' AND "COUNTRY_OF_BIRTH"='IN')

REM Here, the density for country_of_birth is 4/12 so that is used in the calculation.

select count(*), round(1536*(1/12)*(4/12)) Expected_Cardinality from test_cardinality where city_of_birth = 'DELHI' and country_of_birth='UK';

COUNT(*) ExpCrd
---------- -------
0 43

Execution Plan
----------------------------------------------------------
Plan hash value: 3936405985

---------------------------------------------------------------------------------------
| Id  | Operation          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                  |     1 |    11 |     6   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |                  |     1 |    11 |            |          |
|*  2 |   TABLE ACCESS FULL| TEST_CARDINALITY |    43 |   473 |     6   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("CITY_OF_BIRTH"='DELHI' AND "COUNTRY_OF_BIRTH"='UK')

REM Oracle is unaware that the intersect between the data sets for 'DELHI' and 'UK' is zero ! It just multiplies the two densities

select count(*), round(1536*(2/12)*(4/12)) Expected_Cardinality from test_cardinality where city_of_birth in ('DELHI','LONDON') and country_of_birth='FR';

COUNT(*) ExpCrd
---------- -------
0 85

Execution Plan
----------------------------------------------------------
Plan hash value: 3936405985

---------------------------------------------------------------------------------------
| Id  | Operation          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                  |     1 |    11 |     6   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |                  |     1 |    11 |            |          |
|*  2 |   TABLE ACCESS FULL| TEST_CARDINALITY |    85 |   935 |     6   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("COUNTRY_OF_BIRTH"='FR' AND ("CITY_OF_BIRTH"='DELHI' OR
"CITY_OF_BIRTH"='LONDON'))

REM The "error" (actually, lack of knowledge about relationships between column values) is compounded further in this case !

select count(*), round(1536*(1/12)*(1/12)*(1/12)*(1/12)*(4/12)) Expected_Cardinality from test_cardinality where date_of_birth='01-JAN-60' and month_of_birth='JANUARY' and month_number = 1 and city_of_birth='BOMBAY' and country_of_birth='IN';

COUNT(*) ExpCrd
---------- -------
128 0

Execution Plan
----------------------------------------------------------
Plan hash value: 3936405985

---------------------------------------------------------------------------------------
| Id  | Operation          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                  |     1 |    28 |     6   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |                  |     1 |    28 |            |          |
|*  2 |   TABLE ACCESS FULL| TEST_CARDINALITY |     1 |    28 |     6   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("MONTH_OF_BIRTH"='JANUARY' AND "MONTH_NUMBER"=1 AND
"CITY_OF_BIRTH"='BOMBAY' AND "COUNTRY_OF_BIRTH"='IN' AND
"DATE_OF_BIRTH"='01-JAN-60')

REM This really makes it worse ! We've taken the exact values for 1 in every 12 rows but the Optimizer multiplies the densities and comes up with less than 1
rem the Optimizer never presents a value of 0, but rounds it up to 1. So the expected cardinality is presented as 1 (which is very wrong)
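To see how far below 1 the raw product falls, here is the arithmetic sketched in Python (illustrative only; the "report at least 1" floor mimics the behaviour described above):

```python
# Five "independent" selectivities multiplied together: the raw estimate
# collapses to a fraction of a row, and the reported value is floored at 1.
raw = 1536 * (1/12) * (1/12) * (1/12) * (1/12) * (4/12)
reported = max(1, round(raw))
print(round(raw, 4), reported)  # 0.0247 1  -- though 128 rows really match
```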

select count(*), round(1536*(4/12)) Expected_Cardinality from test_cardinality where country_of_birth='SG';

COUNT(*) ExpCrd
---------- -------
0 512

Execution Plan
----------------------------------------------------------
Plan hash value: 3936405985

---------------------------------------------------------------------------------------
| Id  | Operation          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                  |     1 |     3 |     6   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |                  |     1 |     3 |            |          |
|*  2 |   TABLE ACCESS FULL| TEST_CARDINALITY |   512 |  1536 |     6   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("COUNTRY_OF_BIRTH"='SG')

REM When we present a value that is not even present, Oracle isn't aware of this (Without histograms, it only knows MIN, MAX, Distinct and Density)
REM The Optimizer still applies the same 4/12 density formula for country of birth and expects 512 'SG' rows !
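A quick sketch of that point (illustration only, not Oracle's code): without a histogram, every equality literal on the column gets the same num_rows x density estimate, whether or not the value exists in the data.

```python
# Without a histogram the optimizer only has NUM_DISTINCT / DENSITY,
# so any equality literal -- even a non-existent 'SG' -- is estimated
# identically: num_rows * density.
def est_without_histogram(num_rows, density):
    return round(num_rows * density)

# country_of_birth: 3 distinct values, density 1/3:
print(est_without_histogram(1536, 1/3))  # 512, regardless of the literal
```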

set autotrace off


REM -----------------------------------------------------------------------------------------------------------
REM -----------------------------------------------------------------------------------------------------------
REM What happens when we DO gather Histograms ? Can the Optimizer make use of Histograms on multiple columns ?!
REM -----------------------------------------------------------------------------------------------------------


exec dbms_stats.gather_table_stats(ownname=>user,tabname=>'TEST_CARDINALITY',method_opt=>'FOR ALL COLUMNS SIZE 250',estimate_percent=>100);

select count(*) from test_cardinality;

COUNT(*)
----------
1536
select num_rows from user_tables where table_name = 'TEST_CARDINALITY';

NUM_ROWS
----------
1536
select column_name,sample_size, num_distinct, density from user_tab_columns where table_name = 'TEST_CARDINALITY' order by column_id;

COLUMN_NAME SAMPLE_SIZE NUM_DISTINCT DENSITY
------------------------------ ----------- ------------ ----------
DATE_OF_BIRTH 1536 12 .000325521
MONTH_OF_BIRTH 1536 12 .000325521
MONTH_NUMBER 1536 12 .000325521
CITY_OF_BIRTH 1536 12 .000325521
COUNTRY_OF_BIRTH 1536 3 .000325521

REM With Histograms, the Density is actually presented differently. We should be looking at USER_TAB_HISTOGRAMS for each column's values
REM Now, Oracle does know the exact count of rows for each of the column's range of values.
rem I haven't presented USER_TAB_HISTOGRAMS here

set autotrace on explain;

select count(*), round(1536*(1/12)*(1/12)) Expected_Cardinality from test_cardinality where month_number = 1 and month_of_birth = 'JANUARY';

COUNT(*) ExpCrd
---------- -------
128 11

Execution Plan
----------------------------------------------------------
Plan hash value: 3936405985

---------------------------------------------------------------------------------------
| Id  | Operation          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                  |     1 |    11 |     6   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |                  |     1 |    11 |            |          |
|*  2 |   TABLE ACCESS FULL| TEST_CARDINALITY |    11 |   121 |     6   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("MONTH_NUMBER"=1 AND "MONTH_OF_BIRTH"='JANUARY')

REM Although I seem to be using 'Density', I am actually using the exact count of rows. There are 1/12 'month_number=1's.
rem The Optimizer still comes up with a cardinality estimate of 11, based on Histograms, not Densities !
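This can be modelled in a few lines of Python (a toy model, not Oracle's implementation): the frequency histogram gives an exact count per column value, yet the combined estimate is still the product of the two exact selectivities.

```python
# Toy model: frequency histograms via Counter give exact per-column counts,
# but multiplying the two single-column selectivities still mis-estimates
# the correlated pair.
from collections import Counter

months = ['JANUARY', 'FEBRUARY', 'MARCH', 'APRIL', 'MAY', 'JUNE',
          'JULY', 'AUGUST', 'SEPTEMBER', 'OCTOBER', 'NOVEMBER', 'DECEMBER']
# 128 copies of each (month_number, month_of_birth) pair, as in the test table
rows = [(n + 1, m) for n, m in enumerate(months) for _ in range(128)]

num_hist = Counter(n for n, _ in rows)    # "histogram" on month_number
name_hist = Counter(m for _, m in rows)   # "histogram" on month_of_birth

# Per-column counts are exact (128 each), but the product is not:
est = round(len(rows) * (num_hist[1] / len(rows)) * (name_hist['JANUARY'] / len(rows)))
actual = sum(1 for n, m in rows if n == 1 and m == 'JANUARY')
print(est, actual)  # 11 128
```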

select count(*), round(1536*(1/12)*(1/12)) Expected_Cardinality from test_cardinality where month_number = 1 and month_of_birth = 'DECEMBER';

COUNT(*) ExpCrd
---------- -------
0 11

Execution Plan
----------------------------------------------------------
Plan hash value: 3936405985

---------------------------------------------------------------------------------------
| Id  | Operation          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                  |     1 |    11 |     6   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |                  |     1 |    11 |            |          |
|*  2 |   TABLE ACCESS FULL| TEST_CARDINALITY |    11 |   121 |     6   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("MONTH_NUMBER"=1 AND "MONTH_OF_BIRTH"='DECEMBER')

REM The Histograms say that the frequency for 'month_number=1' is 1/12 and the frequency for 'DECEMBER' is also 1/12

select count(*), round(1536*(1/12)*(1/12)) Expected_Cardinality from test_cardinality where date_of_birth = '01-JAN-60' and month_of_birth = 'DECEMBER';

COUNT(*) ExpCrd
---------- -------
0 11

Execution Plan
----------------------------------------------------------
Plan hash value: 3936405985

---------------------------------------------------------------------------------------
| Id  | Operation          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                  |     1 |    16 |     6   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |                  |     1 |    16 |            |          |
|*  2 |   TABLE ACCESS FULL| TEST_CARDINALITY |    11 |   176 |     6   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("MONTH_OF_BIRTH"='DECEMBER' AND "DATE_OF_BIRTH"='01-JAN-60')

REM The Histograms again present frequencies of 1/12 and 1/12

select count(*), round(1536*(1/12)*(1/12)) Expected_Cardinality from test_cardinality where to_char(date_of_birth,'MON') = 'JAN' and month_of_birth = 'JANUARY';

COUNT(*) ExpCrd
---------- -------
128 11

Execution Plan
----------------------------------------------------------
Plan hash value: 3936405985

---------------------------------------------------------------------------------------
| Id  | Operation          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                  |     1 |    16 |     6   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |                  |     1 |    16 |            |          |
|*  2 |   TABLE ACCESS FULL| TEST_CARDINALITY |     1 |    16 |     6   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("MONTH_OF_BIRTH"='JANUARY' AND
TO_CHAR(INTERNAL_FUNCTION("DATE_OF_BIRTH"),'MON')='JAN')

REM Once again, the usage of a function throws off the Optimizer !

select count(*), round(1536*(1/12)*(10/12)) Expected_Cardinality from test_cardinality where city_of_birth = 'DELHI' and country_of_birth='IN';

COUNT(*) ExpCrd
---------- -------
128 107

Execution Plan
----------------------------------------------------------
Plan hash value: 3936405985

---------------------------------------------------------------------------------------
| Id  | Operation          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                  |     1 |    11 |     6   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |                  |     1 |    11 |            |          |
|*  2 |   TABLE ACCESS FULL| TEST_CARDINALITY |   107 |  1177 |     6   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("CITY_OF_BIRTH"='DELHI' AND "COUNTRY_OF_BIRTH"='IN')

REM The Histogram for 'IN' is a frequency of 10/12 *not* the 4/12 that was assumed with uniform distribution. Yet, this is multiplied with 1/12 for 'DELHI'

select count(*), round(1536*(1/12)*(1/12)) Expected_Cardinality from test_cardinality where city_of_birth = 'DELHI' and country_of_birth='UK';

COUNT(*) ExpCrd
---------- -------
0 11

Execution Plan
----------------------------------------------------------
Plan hash value: 3936405985

---------------------------------------------------------------------------------------
| Id  | Operation          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                  |     1 |    11 |     6   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |                  |     1 |    11 |            |          |
|*  2 |   TABLE ACCESS FULL| TEST_CARDINALITY |    11 |   121 |     6   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("COUNTRY_OF_BIRTH"='UK' AND "CITY_OF_BIRTH"='DELHI')

REM The expected frequencies for 'DELHI' and 'UK' are 1/12 and 1/12 respectively. No knowledge of the impossibility of the two data sets intersecting.

select count(*), round(1536*(2/12)*(1/12)) Expected_Cardinality from test_cardinality where city_of_birth in ('DELHI','LONDON') and country_of_birth='FR';

COUNT(*) ExpCrd
---------- -------
0 21

Execution Plan
----------------------------------------------------------
Plan hash value: 3936405985

---------------------------------------------------------------------------------------
| Id  | Operation          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                  |     1 |    11 |     6   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |                  |     1 |    11 |            |          |
|*  2 |   TABLE ACCESS FULL| TEST_CARDINALITY |    21 |   231 |     6   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("COUNTRY_OF_BIRTH"='FR' AND ("CITY_OF_BIRTH"='DELHI' OR
"CITY_OF_BIRTH"='LONDON'))

REM With two countries, the lack of knowledge makes the cardinality estimate worse !

select count(*), round(1536*(1/12)*(1/12)*(1/12)*(1/12)*(10/12)) Expected_Cardinality from test_cardinality where date_of_birth='01-JAN-60' and month_of_birth='JANUARY' and month_number = 1 and city_of_birth='BOMBAY' and country_of_birth='IN';

COUNT(*) ExpCrd
---------- -------
128 0

Execution Plan
----------------------------------------------------------
Plan hash value: 3936405985

---------------------------------------------------------------------------------------
| Id  | Operation          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                  |     1 |    28 |     6   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |                  |     1 |    28 |            |          |
|*  2 |   TABLE ACCESS FULL| TEST_CARDINALITY |     1 |    28 |     6   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("MONTH_OF_BIRTH"='JANUARY' AND "MONTH_NUMBER"=1 AND
"CITY_OF_BIRTH"='BOMBAY' AND "COUNTRY_OF_BIRTH"='IN' AND
"DATE_OF_BIRTH"='01-JAN-60')

REM Perfect knowledge about individual columns when multiplied together returns absolutely imperfect knowledge of all the columns !

select count(*), round(1536*(0/12)) Expected_Cardinality from test_cardinality where country_of_birth='SG';

COUNT(*) ExpCrd
---------- -------
0 0

Execution Plan
----------------------------------------------------------
Plan hash value: 3936405985

---------------------------------------------------------------------------------------
| Id  | Operation          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                  |     1 |     3 |     6   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |                  |     1 |     3 |            |          |
|*  2 |   TABLE ACCESS FULL| TEST_CARDINALITY |    64 |   192 |     6   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("COUNTRY_OF_BIRTH"='SG')

REM Although the Histogram on country_of_birth shows that there are *NO* rows for 'SG', Oracle cannot assume that there really are no rows.
REM What if I had inserted an 'SG' row after the gather_table_stats ?
REM So the optimizer must compute a Cardinality, how does it come up with 64 ? Any guesses ?
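One hedged guess, consistent with how the 10g optimizer is commonly described for a value missing from a frequency histogram: use half the row count of the least frequent value actually present. That does reproduce the 64 here, though I'm not presenting this as the definitive internal formula.

```python
# Hypothetical explanation of the 64: half the count of the least
# popular value in the frequency histogram on country_of_birth.
hist = {'IN': 1280, 'UK': 128, 'FR': 128}   # counts from this test table
est_for_missing = round(min(hist.values()) / 2)
print(est_for_missing)  # 64
```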

set autotrace off








Thus, you can see, in the simplest of cases, that the Cardinality
estimates can be very wrong. Imagine that this table is scaled up to 1,000 times its size, but the selectivity remains, generally, the same. Then, take this table in a complex multi-table query with join conditions. The incorrect Cardinality estimate could very well result in a poor Execution Plan. Thus, for cases like the "where city_of_birth in ('DELHI','LONDON') and country_of_birth='FR'", although we know that there would be 0 (zero) rows (irrespective of how large the table is), the Optimizer might well expect very many rows, depending on the table size. Where we would prefer that Oracle first filter on this table in a multi-table query, the Optimizer might prefer to execute this filter much later in the query, after needlessly wading through many blocks and rows in other tables in the same query!

20 June, 2008

Delete PARENT checks every CHILD row for Parent Key !

I just came across Eric S. Emric's blog today and one of his many useful posts was this one on Bug#5504961, which he calls "SQL Gone Wild !".

When deleting a row from a Parent table, Oracle must first verify that there exists no Child row. If there exists even a single Child row, the Parent delete must fail (unless ON DELETE CASCADE is specified). However, what he found is that where a Parent has multiple Child rows, Oracle still (stupidly ?) checks every Child row before returning the expected "ORA-02292 : integrity constraint (constraint_name) violated - child record found".
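The waste is easy to picture with a toy model (pure illustration, nothing to do with Oracle internals): rejecting the delete only requires finding the first matching child entry, but the buggy behaviour amounts to walking all of them.

```python
# Toy model: early-exit vs "check every child row" for a parent-key lookup.
children = [999999999999] * 1_000_000   # ~1M child rows with the same key

def child_exists_early_exit(key):
    checked = 0
    for c in children:
        checked += 1
        if c == key:
            return True, checked        # stop at the first match
    return False, checked

def child_exists_full_scan(key):
    checked = 0
    found = False
    for c in children:
        checked += 1
        if c == key:
            found = True                # keeps scanning anyway
    return found, checked

print(child_exists_early_exit(999999999999)[1])  # 1
print(child_exists_full_scan(999999999999)[1])   # 1000000
```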

He presents a simple test case which I took up and tested in 10.2.0.4 and then 10.2.0.1

The Bug (5504961) which you can see described in MetaLink Note#5504961.8 *IS* fixed in 10.2.0.4.

This is the test script, almost exactly the same (in its key statements) as Eric S. Emric's :


set echo on
spool Test_Bug_5504961

REM See http://esemrick.blogspot.com/2006/08/sql-gone-wild.html

DROP TABLE CHILD;
DROP TABLE PARENT;
CREATE TABLE PARENT (COL1 NUMBER);
ALTER TABLE PARENT ADD CONSTRAINT PARENT_PK PRIMARY KEY (COL1);
CREATE TABLE CHILD (COL1 NUMBER);
CREATE INDEX CHILD_IX_01 ON CHILD (COL1);
ALTER TABLE CHILD ADD CONSTRAINT CHILD_FK_01 FOREIGN KEY (COL1) REFERENCES PARENT;
INSERT INTO PARENT VALUES (999999999999);
INSERT INTO CHILD VALUES (999999999999);
COMMIT;

-- Insert approximately 1 million records into CHILD
alter table child nologging;
begin
for i in 1..20 loop
insert /*+ APPEND */ into child select * from child;
commit;
end loop;
end;
/

-- in the 10.2.0.1 test, create a new session to get session statistics as well
connect hemant/hemant

alter session set max_dump_file_size='UNLIMITED';
alter session set tracefile_identifier='Test_Bug_5504961';

alter session set events '10046 trace name context forever, level 8';

set time on
set timing on
DELETE FROM PARENT WHERE COL1 = 999999999999;


spool off




First, my 10.2.0.4 results :


23:17:03 SQL> DELETE FROM PARENT WHERE COL1 = 999999999999;
DELETE FROM PARENT WHERE COL1 = 999999999999
*
ERROR at line 1:
ORA-02292: integrity constraint (HEMANT.CHILD_FK_01) violated - child record
found


Elapsed: 00:00:00.05
23:17:03 SQL>


DELETE FROM PARENT
WHERE
COL1 = 999999999999


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.02 0 0 0 0
Execute 1 0.00 0.00 0 1 11 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.02 0 1 11 0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 64

Rows Row Source Operation
------- ---------------------------------------------------
0 DELETE PARENT (cr=0 pr=0 pw=0 time=12 us)
1 INDEX UNIQUE SCAN PARENT_PK (cr=1 pr=0 pw=0 time=37 us)(object id 52936)


So, there was no significant overhead in 10.2.0.4 (although I'm
not sure where those 11 current gets came from).


Next, my 10.2.0.1 results :


23:36:47 SQL> DELETE FROM PARENT WHERE COL1 = 999999999999;
DELETE FROM PARENT WHERE COL1 = 999999999999
*
ERROR at line 1:
ORA-02292: integrity constraint (HEMANT.CHILD_FK_01) violated - child record
found


Elapsed: 00:00:00.36
23:36:48 SQL>

DELETE FROM PARENT
WHERE
COL1 = 999999999999


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.02 0 0 0 0
Execute 1 0.29 0.31 0 4243 4495 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.30 0.33 0 4243 4495 0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 61

Rows Row Source Operation
------- ---------------------------------------------------
0 DELETE PARENT (cr=0 pr=0 pw=0 time=171 us)
1 INDEX UNIQUE SCAN PARENT_PK (cr=1 pr=0 pw=0 time=285 us)(object id 52945)

10.2.0.1 statistics for the DELETE statement session only :

SQL> @report_session_waits_and_statistics
Enter value for sid: 105

Statistic or Wait Val or Cnt Time MS
---------------------------------------- -------------- -------------
CPU : CPU used by this session 0 340.00
Statistic : consistent changes 0
Statistic : consistent gets 4,300
Statistic : db block changes 4,476
Statistic : db block gets 4,500
Statistic : physical reads 0
Statistic : redo blocks written 0
Statistic : redo entries 4,471
Statistic : redo size 354,344
Statistic : table fetch by rowid 14
Statistic : table scan blocks gotten 3
Statistic : table scan rows gotten 62
Statistic : undo change vector size 256
Statistic : user calls 33


The 10.2.0.1 environment ended up doing 4,500 "current gets" and even 4,476 "db block changes", generating 354KB of redo (delayed block cleanout).

Unfortunately, the "effort" to scan all matching Index entries of the CHILD table does not appear in the Row Source Operations.

19 June, 2008

Monitoring "free memory" on Linux

Linux's memory management allows the kernel to use as much RAM as possible for the filesystem cache. This is reflected in the 'cached' column of "top" or "free" listings.
If you or your manager tries to look at "memory utilisation", then the 'free' memory is NOT what you should be looking at. More so if you have a database running on filesystems on the server.
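As a rough sketch of why 'free' alone misleads: memory that is effectively available is closer to free + buffers + cached, because the page cache can be reclaimed on demand. The KB figures below are taken from the first "free -o" listing in this post:

```shell
# Minimal sketch: "effectively available" memory = free + buffers + cached.
# Figures (in KB) are from the initial "free -o" listing on this server.
mem_free=1359180
mem_buffers=25684
mem_cached=310152
available_kb=$(( mem_free + mem_buffers + mem_cached ))
echo "Effectively available: ${available_kb} KB"   # ~1.6GB, not just 1.36GB
```

The same arithmetic applies at any point in the demo below: 'cached' is not "used up" memory in the sense a manager usually means.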

Here is a simple demo on 64bit OELinux 5.1 (using "default" Kernel parameters, no changes to memory/paging/vm etc setups) :


Beginning with 1.8GB RAM on the server. I see 1.36GB free
in both "top" and "free" listings.
"top" :

Mem: 1863144k total, 503048k used, 1360096k free, 25652k buffers
Swap: 2031608k total, 0k used, 2031608k free, 309992k cached


"free -o" :

total used free shared buffers cached
Mem: 1863144 503964 1359180 0 25684 310152
Swap: 2031608 0 2031608


Startup of 1 database instance :

SQL> ORACLE instance started.

Total System Global Area 1325400064 bytes
Fixed Size 2020576 bytes
Variable Size 469764896 bytes
Database Buffers 838860800 bytes
Redo Buffers 14753792 bytes
Database mounted.
Database opened.
SQL>


Although the Database has an SGA of 1.3GB, it hasn't yet significantly reduced
the "free" memory (which is now 919MB).

"top" :

Mem: 1863144k total, 943656k used, 919488k free, 27008k buffers
Swap: 2031608k total, 0k used, 2031608k free, 619968k cached

"free -o" :

total used free shared buffers cached
Mem: 1863144 944424 918720 0 27096 620076
Swap: 2031608 0 2031608


Startup of second database instance :

SQL> startup
ORACLE instance started.

Total System Global Area 541065216 bytes
Fixed Size 2085320 bytes
Variable Size 150998584 bytes
Database Buffers 381681664 bytes
Redo Buffers 6299648 bytes
Database mounted.
Database opened.
SQL>

I still have 608MB of free memory after the second database instance startup.

"top" :

Mem: 1863144k total, 1255084k used, 608060k free, 28380k buffers
Swap: 2031608k total, 0k used, 2031608k free, 851996k cached



How do I "bump up" the "free memory" without shutting down the databases ?
1. Create a large file
2. Delete the file

Here's the first step :
Create a 1GB file :

$>dd if=/dev/zero of=DUMMY.ZERO bs=1024K count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 38.5035 seconds, 27.2 MB/s
$>


The process of creating a 1GB file suddenly brought "free" memory down
to 12MB. Notice how "cached" has increased. Neither of the two actually
changed by a full 1GB : "free" memory reduced by 596MB and "cached" increased
by 611MB !

"top" :

Mem: 1863144k total, 1851188k used, 11956k free, 14804k buffers
Swap: 2031608k total, 0k used, 2031608k free, 1462696k cached

Here's the second step :
Remove the 1GB file :

$>ls -l DUMMY.ZERO
-rw-r--r-- 1 oracle2 dba 1048576000 Jun 19 22:14 DUMMY.ZERO
$>rm DUMMY.ZERO


Now, "free" memory has suddenly jumped to 1GB. Remember that I still
have the two database instances with a sum of 1.8GB SGA size in a
1.8GB RAM machine. And "swap" isn't even used yet.

"top" :

Mem: 1863144k total, 823668k used, 1039476k free, 15140k buffers
Swap: 2031608k total, 0k used, 2031608k free, 465152k cached

Let's verify those two database instances :
shared memory segments :

"ipcs -m" :

------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0x00000000 65536 root 600 393216 2 dest
0x00000000 98305 root 600 393216 2 dest
0x00000000 131074 root 600 393216 2 dest
0x00000000 163843 root 600 393216 2 dest
0x00000000 196612 root 600 393216 2 dest
0x00000000 229381 root 600 393216 2 dest
0x00000000 262150 root 600 393216 2 dest
0x00000000 294919 root 600 393216 2 dest
0x00000000 327688 root 600 393216 2 dest
0x00000000 360457 root 600 393216 2 dest
0x0133492c 393226 oracle1 640 1327497216 17
0xc4afc244 425995 oracle2 640 543162368 18


Now I run a query on the first database to see if "loading" the SYSTEM
datafile has an impact on "free" memory.
Query the "oracle1" database :

SQL> select sum(bytes)/1048576 from dba_data_files;

SUM(BYTES)/1048576
------------------
3561.25

SQL> select sum(value)/1048576 from v$sga;

SUM(VALUE)/1048576
------------------
1264

SQL>


My "free" memory has shrunk from 1,039MB to 973MB and "cached" usage increased
from 465MB to 521MB.. Evidently, not the entire SYSTEM datafile (which is
actually 492MB in size) did get loaded into memory.
"top" :

Mem: 1863144k total, 889992k used, 973152k free, 15848k buffers
Swap: 2031608k total, 0k used, 2031608k free, 521256k cached

"free -o" :

total used free shared buffers cached
Mem: 1863144 889612 973532 0 15864 521256
Swap: 2031608 0 2031608


After running a few large queries, DML and sort operations etc (ie, accessing multiple datafiles),
I check available memory again :
My "free" memory has dropped from 973MB to 23MB and "cached" increased
from 521MB to 1,467MB.
"top" :

Mem: 1863144k total, 1839452k used, 23692k free, 8280k buffers
Swap: 2031608k total, 3576k used, 2028032k free, 1467392k cached

"free -o" :

total used free shared buffers cached
Mem: 1863144 1839404 23740 0 8248 1467436
Swap: 2031608 3576 2028032


Can I try that "trick" again to release memory ?
Create and remove a 1GB dummy file :

sh-3.1$ dd if=/dev/zero of=DUMMY.ZERO bs=1024K count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 44.776 seconds, 23.4 MB/s
sh-3.1$ rm DUMMY.ZERO

I see "free" memory up to 795MB and "cached" down to 680MB.
"top" :

Mem: 1863144k total, 1067748k used, 795396k free, 3368k buffers
Swap: 2031608k total, 96580k used, 1935028k free, 680900k cached


"free -o" :

total used free shared buffers cached
Mem: 1863144 1068244 794900 0 3392 680896
Swap: 2031608 96580 1935028





Evidently, Linux dynamically allocates filesystem buffer space to/from
"cached", and "cached" can grow and shrink as required. The values in the
"free" column, taken alone, are quite misleading at all times.
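If you want to watch this behaviour as it happens, /proc/meminfo exposes the same counters that "top" and "free" read. A small sketch (assuming a Linux /proc filesystem; the helper name is my own):

```shell
# Print the KB value of a given /proc/meminfo field (e.g. MemFree, Cached).
meminfo_kb() {
  awk -v key="$1:" '$1 == key {print $2}' /proc/meminfo
}
echo "free=$(meminfo_kb MemFree) KB  cached=$(meminfo_kb Cached) KB"
```

Run it before and after the dd/rm "trick" above to see "free" and "cached" trade places.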

15 June, 2008

A long forums discussion on Multiple or Different Block Sizes

An alternately boring and interesting thread on Multiple or Different Block Sizes in Oracle. The thread meanders through topics like Multiple Block Sizes co-existing in a database, Databases with Different Block Sizes, the impact of setting db_file_multiblock_read_count, setting it to 0 (zero), or not setting it at all, what a scientific method is, a lot of tests and no conclusion !

04 June, 2008

Tuning Very Large SELECTs in SQLPlus

A forums question about the performance of a SELECT query in 10g led from database server performance to network bottlenecks to client performance.
We finally diagnosed it as an issue with SQLPlus *formatting* the output. The output had a rather large row size and the default values of PAGESIZE and LINESIZE being what they are, SQLPlus was spending rather too much time putting in Column Headers, splitting rows across lines etc.
So much so that while the server process took less than 2 seconds (from the 10046 trace file), the SQLPlus client showed the query as taking 15 minutes.
To the user, this would be a "Database Server Performance Issue". However, to the Performance Analyst, the Database was perfectly fine, returning the output in under 2 seconds ! Where were the 15 minutes being spent ?

Oracle's SQLPlus client provides many formatting options. Of these, PAGESIZE 'n' determines the number of lines of output after which Oracle would create new "Page Header", inserting Column Headers etc. LINESIZE 'l' determines the number of characters in a row that are printed in one line, with the result that a row retrieving more than 'l' characters gets split across multiple lines (thus, soon hitting the 'n' limit of PAGESIZE).

Whenever I run queries, I normally use PAGESIZE 600 and LINESIZE 132. The defaults are inadequate for most output to be properly readable.

Another parameter, ARRAYSIZE, allows you to specify how many rows SQLPlus will request from the database server at each fetch. With small rows, you can set a larger ARRAYSIZE. (Note : Do not go overboard with ARRAYSIZE. If you set it to too large a value and you are retrieving "slightly large" rows, the resultset (arraysize 'n' rows X rowsize 'm' bytes) may exceed the SQLPlus buffer size and you'd get an error in the midst of fetching your data.)
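The trade-off can be sketched with a little arithmetic : round trips shrink as ARRAYSIZE grows, while the per-fetch payload grows with it. The row count and average row length here are illustrative figures matching the test table used in this post :

```shell
rows=377280         # illustrative row count
avg_row_bytes=98    # illustrative average row length
for arraysize in 15 100 5000; do
  trips=$(( (rows + arraysize - 1) / arraysize ))   # ceiling division
  fetch_kb=$(( arraysize * avg_row_bytes / 1024 ))  # rough payload per fetch
  echo "arraysize=${arraysize}: ${trips} fetches, ~${fetch_kb} KB per fetch"
done
```

At arraysize 5000 each fetch would carry nearly half a megabyte, which is where the buffer-size caveat above starts to bite.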

Here I've created a Test case to show how these parameters can impact "perceived" performance.

I start with a table MY_TEST_TABLE which has the same structure as DBA_OBJECTS but has been populated by repeatedly querying DBA_OBJECTS, getting a total of 377,280 rows of 98 bytes each. The table is 42MB (in terms of the sum of allocated extents).


SQL> select num_rows,avg_row_len from user_tables where table_name = 'MY_TEST_TABLE';

NUM_ROWS AVG_ROW_LEN
---------- -----------
377280 98

SQL>
SQL> select blocks, bytes/1048576 from user_segments where segment_name = 'MY_TEST_TABLE';

BLOCKS BYTES/1048576
---------- -------------
5376 42

SQL>


I now query this table using the defaults for ARRAYSIZE, PAGESIZE and LINESIZE.

With default ARRAYSIZE, PAGESIZE and LINESIZE

ARRAYSIZE 15
PAGESIZE 14
LINESIZE 80

select * from my_test_table;
...
377280 rows selected.

Elapsed: 00:08:58.68

Statistics
----------------------------------------------------------
482 recursive calls
0 db block gets
30143 consistent gets
5263 physical reads
0 redo size
42462373 bytes sent via SQL*Net to client
277129 bytes received via SQL*Net from client
25153 SQL*Net roundtrips to/from client
4 sorts (memory)
0 sorts (disk)
377280 rows processed


Output file size : 397655892 bytes




I have done 25,152 round trips -- or array fetches (377,280 rows / arraysize of 15 : 377280/15 = 25152).
Note that this also increases consistent gets as some blocks are fetched repeatedly. (If the 'n'th fetch stopped at the 10th row in block 'X', then the 'n+1'th fetch would do another 'consistent get' of block 'X' for the remaining rows.)


What happens if I "tune" the ARRAYSIZE and reduce the Formatting overheads imposed by PAGESIZE and LINESIZE ?

With "tuned" values for ARRAYSIZE, PAGESIZE and LINESIZE

set ARRAYSIZE 100
SET LINESIZE 132
SET PAGESIZE 6000

select * from my_test_table;
...
377280 rows selected.

Elapsed: 00:04:00.85



Statistics
----------------------------------------------------------
480 recursive calls
0 db block gets
9049 consistent gets
5263 physical reads
0 redo size
38485879 bytes sent via SQL*Net to client
41960 bytes received via SQL*Net from client
3774 SQL*Net roundtrips to/from client
4 sorts (memory)
0 sorts (disk)
377280 rows processed

Output file size : 200917333 bytes



The query completes in less than half the time and does fewer round-trips between the client SQLPlus and the Database Server.
I have done 3,773 round trips -- or array fetches (377,280 rows / arraysize of 100 : 377280/100 = 3773).
'consistent gets' has also reduced significantly because the number of occasions when I had to request the same block again (having read only some of the rows at the last fetch) are fewer, with larger and fewer fetches.
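A back-of-envelope model (my own approximation, not an exact formula) ties these numbers together : each fetch boundary typically costs one extra consistent get of the block it stopped in, so consistent gets ≈ blocks scanned + round trips :

```shell
# Rough model: consistent gets ~ blocks scanned + one re-visit per round trip.
blocks=5263           # physical reads, roughly the table's blocks in this run
trips_default=25153   # SQL*Net round trips with the default arraysize of 15
trips_tuned=3774      # SQL*Net round trips with arraysize 100
echo "predicted (default): $(( blocks + trips_default ))"   # observed: 30,143
echo "predicted (tuned):   $(( blocks + trips_tuned ))"     # observed: 9,049
```

The predictions (30,416 and 9,037) land within about 1% of the observed figures, which is good enough to explain where the extra consistent gets come from.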

Had I done an event 10046 trace, I would have seen the individual Fetches as well.
Array processing overheads can be significant if there is a very slow network between the client and the server and the number of round trips is very high -- each round trip suffering the latency of the network.

Note : The output file size also reduces because there are fewer Column Header entries in the output.

03 June, 2008

MVs with Refresh ON COMMIT cannot be used for Synchronous Replication

The question of why Materialized Views with Refresh ON COMMIT cannot be created across databases (in effect providing Synchronous Replication) has come up a few times on the web.
It came up again in this forums posting.

This is my response :
Multi-Master (Advanced) Replication works on a PUSH method.
You insert into Table "T_A" in database "D_A" and Oracle puts the transaction in the queue tables in "D_A" to be propagated to "D_B". The transactions are then "pushed" from "D_A" to "D_B" via a DBLink. If you opt for Synchronous replication, the "push" is immediate with the transaction on "T_A" and becomes a "2-Phase Commit".


Materialized Views across DBLinks use a PULL method.
An MV "MV_A" in database "D_B" "pulls" data from "T_A" and "T_A"'s Snapshot Log via a DBLink from "D_B" to "D_A". There is no DBLink from "D_A" to "D_B".
Therefore, a transaction that inserts into "T_A" has no way of pushing the data to "D_B" because there is no DBLink. Therefore it is not possible to have Synchronous MVs across databases.
If the MV "MV_A", built to REFRESH ON COMMIT, is in the same database "D_A", then the transaction inserting into "T_A" merely has to update "MV_A" in the same database -- it just becomes a transaction on *two* *LOCAL* tables inside "D_A".

Aggregated by orafaq.com
