
31 May, 2008

Ever heard of "_simple_view_merging" ?

We've all (or most of us, I guess) heard of, and some of us have even realised being hit by, "_complex_view_merging" issues in 9i and 10g.

A couple of days ago, I came across Bug#6653652 "Wrong results from simple view merging". (I "came across" the Bug only when going through recent notes in the "Knowledge Base" section and do NOT know if any of my databases / applications has suffered or does suffer this bug !)

Ostensibly it is a 10g bug but I ran the test script in 9.2.0.6 and saw the same results -- except (uh oh !) that I could not run an "alter session set "_simple_view_merging"=FALSE;" in 9.2.0.6 !

I've taken the very simple test case from Note# 6653652.8 and have expanded it with a few more "verification" queries.
The tests below were run in 9.2.0.6 and 10.2.0.3. I plan to upgrade one of my test environments to 10.2.0.4 this weekend.

UPDATE 01-Jun : Testing in a 10.2.0.4 environment, the first query DID return the correct results (as did the one where I used a DECODE to translate a NULL on organizer -- I got the correct value "1024"). The bug IS fixed in 10.2.0.4.


SQL>
SQL> REM Bug 6653652. Note 6653652.8
SQL> REM title : "Wrong results from simple view merging"
SQL> REM description :
SQL> REM "Wrong results are possible from queries that undergo
SQL> rem simple view merging if the query contains a scalar
SQL> rem subquery in an OR branch."
SQL>
SQL> REM Fixed In 10.2.0.4, 11.1.0.7, 11.2.x.x (future release)
SQL>
SQL>
SQL>
SQL>
SQL> drop table test1;
drop table test1
*
ERROR at line 1:
ORA-00942: table or view does not exist


SQL> drop table test2;
drop table test2
*
ERROR at line 1:
ORA-00942: table or view does not exist


SQL> drop table test3;
drop table test3
*
ERROR at line 1:
ORA-00942: table or view does not exist


SQL>
SQL> -- using exactly the same structures and data as in the Oracle test case
SQL> -- just change the string 'organizer' to 'is org record' to reduce confusion
SQL> create table test1 (id varchar2(128), creator number);

Table created.

SQL> create table test2 (id varchar2(128), key varchar2(64),value number);

Table created.

SQL> create table test3 (id number, value number);

Table created.

SQL> insert into test1 values('a',2);

1 row created.

SQL> rem insert into test2 values('a','organizer',1024);
SQL> insert into test2 values('a','is org record',1024);

1 row created.

SQL> insert into test3 values(1024,2);

1 row created.

SQL>
SQL>
SQL>
SQL> -- using exactly the same query as in the Oracle test case
SQL> -- this query (with the scalar subquery in the OR) returns incorrect results
SQL> select creator, organizer
2 from (select creator,
3 (select value from test2 where id=test1.id) as organizer
4 from test1) view1
5 where (creator=2 or organizer in (select id from test3 group by id));

CREATOR ORGANIZER
---------- ----------
2

SQL>
SQL>
SQL> -- let's verify tables test1 and test2
SQL> select creator, organizer
2 from (select creator,
3 (select value from test2 where id=test1.id) as organizer
4 from test1) view1
5 where (creator=2 )
6 -- or organizer in (select id from test3 group by id));

CREATOR ORGANIZER
---------- ----------
2 1024

SQL>
SQL>
SQL> -- let's verify table test3
SQL> select id from test3 group by id;

ID
----------
1024

SQL>
SQL> -- test3's group by returns the value
SQL> -- therefore the subquery in the test case should return TRUE and succeed !
SQL>
SQL>
SQL> -- let's explicitly define how the OR is in the WHERE
SQL> -- handle a null for organizer with a decode
SQL> select creator, decode(organizer,NULL,9999,organizer)
2 from (select creator,
3 (select value from test2 where id=test1.id) as organizer
4 from test1) view1
5 where
6 (
7 (creator=2)
8 or organizer in (select id from test3 group by id)
9 );

CREATOR DECODE(ORGANIZER,NULL,9999,ORGANIZER)
---------- -------------------------------------
2 9999

SQL>
SQL>
SQL> --- retry original query with _simple_view_merging=FALSE
SQL>
SQL> alter session set "_simple_view_merging"=FALSE;

Session altered.

SQL>
SQL> select creator, organizer
2 from (select creator,
3 (select value from test2 where id=test1.id) as organizer
4 from test1) view1
5 where (creator=2 or organizer in (select id from test3 group by id));

CREATOR ORGANIZER
---------- ----------
2 1024

SQL>
SQL>
SQL>
Note : alter session set "_simple_view_merging"=FALSE; was not possible in 9.2.0.6
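For reference, here is what the query *should* return, sketched outside the database. This is a rough Python simulation of the test case's semantics (the table and column names mirror the script above; it is an illustration of the expected answer, not of Oracle's execution path):

```python
# Simulate the Note 6653652.8 test case: the scalar subquery populates
# "organizer" in the inline view, then the OR predicate is applied.
test1 = [{"id": "a", "creator": 2}]
test2 = [{"id": "a", "key": "is org record", "value": 1024}]
test3 = [{"id": 1024, "value": 2}]

def scalar_subquery(outer_id):
    # (select value from test2 where id = test1.id)
    matches = [r["value"] for r in test2 if r["id"] == outer_id]
    return matches[0] if matches else None

# the inline view "view1"
view1 = [{"creator": r["creator"], "organizer": scalar_subquery(r["id"])}
         for r in test1]

# select id from test3 group by id
group_by_ids = {r["id"] for r in test3}

# where (creator=2 or organizer in (...))
result = [(r["creator"], r["organizer"]) for r in view1
          if r["creator"] == 2 or r["organizer"] in group_by_ids]
print(result)  # correct answer: [(2, 1024)] -- the buggy versions lose the 1024
```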

27 May, 2008

Tracing a DBMS_STATS run

As 10g has introduced some 'auto'matic behaviour as defaults (AUTO_SAMPLE_SIZE for estimate_percent and FOR ALL COLUMNS SIZE AUTO for method_opt), I decided to trace how these work.
In a production database, I had noticed Oracle gathering statistics on a single column for 20 minutes, after which it began the same task again, but with a sample size 10 times as large.

MetaLink Note#343849.1 provides some explanation for this behaviour.

Since I'd already created a large table of 83million rows (see my previous postings), I decided to see how the 'auto'mation worked.
(So far, in my test runs I always specify an estimate_percent and method_opt, not leaving them to default in 10g).

So I ran two DBMS_STATS.GATHER_TABLE_STATS executions on table TEST_APPEND, which has 83million rows. The first execution was with "defaults" and the second one was with estimate_percent=>1.

For the 'auto' sample size, a 10046 trace showed Oracle gathering samples *twice*. On the first pass, it gathered statistics on *all* columns, using a sample of 0.0058412006 % (yes, 6 thousandths of 1 percent !). Oracle seemed to have been dissatisfied with what it gathered on a few of the columns, so it re-ran a gather for that subset of columns, this time with a sample of 0.0584120060 % (that is, 6 hundredths of 1 percent !).
With such small sample sizes, you would think that gathering statistics would be fast.

This is what it did :

select /*+ no_parallel(t) no_parallel_index(t) dbms_stats
cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring
*/ count(*),count("OWNER"),count(distinct "OWNER"),
sum(sys_op_opnsize("OWNER")),substrb(dump(min("OWNER"),16,0,32),1,120),
substrb(dump(max("OWNER"),16,0,32),1,120),count("OBJECT_NAME"),
count(distinct "OBJECT_NAME"),sum(sys_op_opnsize("OBJECT_NAME")),
substrb(dump(min(substrb("OBJECT_NAME",1,32)),16,0,32),1,120),
substrb(dump(max(substrb("OBJECT_NAME",1,32)),16,0,32),1,120),
count("SUBOBJECT_NAME"),count(distinct "SUBOBJECT_NAME"),
sum(sys_op_opnsize("SUBOBJECT_NAME")),substrb(dump(min("SUBOBJECT_NAME"),16,
0,32),1,120),substrb(dump(max("SUBOBJECT_NAME"),16,0,32),1,120),
count("OBJECT_ID"),count(distinct "OBJECT_ID"),
sum(sys_op_opnsize("OBJECT_ID")),substrb(dump(min("OBJECT_ID"),16,0,32),1,
120),substrb(dump(max("OBJECT_ID"),16,0,32),1,120),count("DATA_OBJECT_ID"),
count(distinct "DATA_OBJECT_ID"),sum(sys_op_opnsize("DATA_OBJECT_ID")),
substrb(dump(min("DATA_OBJECT_ID"),16,0,32),1,120),
substrb(dump(max("DATA_OBJECT_ID"),16,0,32),1,120),count("OBJECT_TYPE"),
count(distinct "OBJECT_TYPE"),sum(sys_op_opnsize("OBJECT_TYPE")),
substrb(dump(min("OBJECT_TYPE"),16,0,32),1,120),
substrb(dump(max("OBJECT_TYPE"),16,0,32),1,120),count("CREATED"),
count(distinct "CREATED"),substrb(dump(min("CREATED"),16,0,32),1,120),
substrb(dump(max("CREATED"),16,0,32),1,120),count("LAST_DDL_TIME"),
count(distinct "LAST_DDL_TIME"),substrb(dump(min("LAST_DDL_TIME"),16,0,32),
1,120),substrb(dump(max("LAST_DDL_TIME"),16,0,32),1,120),count("TIMESTAMP"),
count(distinct "TIMESTAMP"),sum(sys_op_opnsize("TIMESTAMP")),
substrb(dump(min("TIMESTAMP"),16,0,32),1,120),substrb(dump(max("TIMESTAMP"),
16,0,32),1,120),count("STATUS"),count(distinct "STATUS"),
sum(sys_op_opnsize("STATUS")),substrb(dump(min("STATUS"),16,0,32),1,120),
substrb(dump(max("STATUS"),16,0,32),1,120),count("TEMPORARY"),
count(distinct "TEMPORARY"),sum(sys_op_opnsize("TEMPORARY")),
substrb(dump(min("TEMPORARY"),16,0,32),1,120),substrb(dump(max("TEMPORARY"),
16,0,32),1,120),count("GENERATED"),count(distinct "GENERATED"),
sum(sys_op_opnsize("GENERATED")),substrb(dump(min("GENERATED"),16,0,32),1,
120),substrb(dump(max("GENERATED"),16,0,32),1,120),count("SECONDARY"),
count(distinct "SECONDARY"),sum(sys_op_opnsize("SECONDARY")),
substrb(dump(min("SECONDARY"),16,0,32),1,120),substrb(dump(max("SECONDARY"),
16,0,32),1,120)
from
"HEMANT"."TEST_APPEND" sample ( .0058412006) t


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.02 0.04 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 63.66 150.85 585686 8845 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 63.69 150.89 585686 8845 0 1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 61 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT GROUP BY (cr=8845 pr=585686 pw=0 time=150851071 us)
4798 TABLE ACCESS SAMPLE TEST_APPEND (cr=8845 pr=585686 pw=0 time=149699664 us)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file scattered read 4587 0.30 134.90
db file sequential read 170 0.06 1.01
latch: shared pool 5 0.01 0.03
latch: cache buffers lru chain 2 0.00 0.00
********************************************************************************

and

select /*+ no_parallel(t) no_parallel_index(t) dbms_stats
cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring
*/ count(*),count("OBJECT_NAME"),count(distinct "OBJECT_NAME"),
count("SUBOBJECT_NAME"),count(distinct "SUBOBJECT_NAME"),
sum(sys_op_opnsize("SUBOBJECT_NAME")),substrb(dump(min("SUBOBJECT_NAME"),16,
0,32),1,120),substrb(dump(max("SUBOBJECT_NAME"),16,0,32),1,120),
count("OBJECT_ID"),count(distinct "OBJECT_ID"),count("DATA_OBJECT_ID"),
count(distinct "DATA_OBJECT_ID"),sum(sys_op_opnsize("DATA_OBJECT_ID")),
substrb(dump(min("DATA_OBJECT_ID"),16,0,32),1,120),
substrb(dump(max("DATA_OBJECT_ID"),16,0,32),1,120)
from
"HEMANT"."TEST_APPEND" sample ( .0584120060) t


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.01 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 95.72 176.11 1136088 85933 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 95.73 176.12 1136088 85933 0 1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 61 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT GROUP BY (cr=85933 pr=1136088 pw=0 time=176113365 us)
49017 TABLE ACCESS SAMPLE TEST_APPEND (cr=85933 pr=1136088 pw=0 time=153397014 us)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file scattered read 8994 0.27 152.61
db file sequential read 23 0.00 0.01
latch: cache buffers lru chain 1 0.00 0.00
********************************************************************************




Oracle actually read 585,686 blocks and used 8,845 blocks to get a
sample of 4,798 rows in 150.89 seconds in its first 'auto' run. In the second 'auto' run (for columns OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID and DATA_OBJECT_ID only -- presumably it was satisfied with the statistics it had on columns OWNER, OBJECT_TYPE etc !), it read 1,136,088 blocks and used 85,933 of them to sample 49,017 rows in 176.12 seconds. Those 1,136,088 blocks meant that it had actually read the whole table !
Thus, AUTO_SAMPLE_SIZE resulted in reading 1.5 times the table size and discarding 96.87% of what it had read. The number of rows sampled was less than 0.07% of the table.
The gathering of statistics took 326.97 seconds in total.


Could I do better with even a 1% sample size (note : Oracle's 'auto' runs had actually sampled only 0.07% of the rows) ?
So estimate_percent=>1 was my second execution.


select /*+ no_parallel(t) no_parallel_index(t) dbms_stats
cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring
*/ count(*),count("OWNER"),count(distinct "OWNER"),
sum(sys_op_opnsize("OWNER")),substrb(dump(min("OWNER"),16,0,32),1,120),
substrb(dump(max("OWNER"),16,0,32),1,120),count("OBJECT_NAME"),
count(distinct "OBJECT_NAME"),sum(sys_op_opnsize("OBJECT_NAME")),
substrb(dump(min(substrb("OBJECT_NAME",1,32)),16,0,32),1,120),
substrb(dump(max(substrb("OBJECT_NAME",1,32)),16,0,32),1,120),
count("SUBOBJECT_NAME"),count(distinct "SUBOBJECT_NAME"),
sum(sys_op_opnsize("SUBOBJECT_NAME")),substrb(dump(min("SUBOBJECT_NAME"),16,
0,32),1,120),substrb(dump(max("SUBOBJECT_NAME"),16,0,32),1,120),
count("OBJECT_ID"),count(distinct "OBJECT_ID"),
sum(sys_op_opnsize("OBJECT_ID")),substrb(dump(min("OBJECT_ID"),16,0,32),1,
120),substrb(dump(max("OBJECT_ID"),16,0,32),1,120),count("DATA_OBJECT_ID"),
count(distinct "DATA_OBJECT_ID"),sum(sys_op_opnsize("DATA_OBJECT_ID")),
substrb(dump(min("DATA_OBJECT_ID"),16,0,32),1,120),
substrb(dump(max("DATA_OBJECT_ID"),16,0,32),1,120),count("OBJECT_TYPE"),
count(distinct "OBJECT_TYPE"),sum(sys_op_opnsize("OBJECT_TYPE")),
substrb(dump(min("OBJECT_TYPE"),16,0,32),1,120),
substrb(dump(max("OBJECT_TYPE"),16,0,32),1,120),count("CREATED"),
count(distinct "CREATED"),substrb(dump(min("CREATED"),16,0,32),1,120),
substrb(dump(max("CREATED"),16,0,32),1,120),count("LAST_DDL_TIME"),
count(distinct "LAST_DDL_TIME"),substrb(dump(min("LAST_DDL_TIME"),16,0,32),
1,120),substrb(dump(max("LAST_DDL_TIME"),16,0,32),1,120),count("TIMESTAMP"),
count(distinct "TIMESTAMP"),sum(sys_op_opnsize("TIMESTAMP")),
substrb(dump(min("TIMESTAMP"),16,0,32),1,120),substrb(dump(max("TIMESTAMP"),
16,0,32),1,120),count("STATUS"),count(distinct "STATUS"),
sum(sys_op_opnsize("STATUS")),substrb(dump(min("STATUS"),16,0,32),1,120),
substrb(dump(max("STATUS"),16,0,32),1,120),count("TEMPORARY"),
count(distinct "TEMPORARY"),sum(sys_op_opnsize("TEMPORARY")),
substrb(dump(min("TEMPORARY"),16,0,32),1,120),substrb(dump(max("TEMPORARY"),
16,0,32),1,120),count("GENERATED"),count(distinct "GENERATED"),
sum(sys_op_opnsize("GENERATED")),substrb(dump(min("GENERATED"),16,0,32),1,
120),substrb(dump(max("GENERATED"),16,0,32),1,120),count("SECONDARY"),
count(distinct "SECONDARY"),sum(sys_op_opnsize("SECONDARY")),
substrb(dump(min("SECONDARY"),16,0,32),1,120),substrb(dump(max("SECONDARY"),
16,0,32),1,120)
from
"HEMANT"."TEST_APPEND" sample ( 1.0000000000) t


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.03 0.03 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 118.46 205.36 1253727 895705 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 118.49 205.40 1253727 895705 0 1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 61 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT GROUP BY (cr=895705 pr=1253727 pw=0 time=205366708 us)
836541 TABLE ACCESS SAMPLE TEST_APPEND (cr=895705 pr=1253727 pw=0 time=184883528 us)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file scattered read 9855 0.31 151.24
db file sequential read 1 0.00 0.00
********************************************************************************




Aah ! The 1% sample executed in 205.4 seconds -- faster than the 326.97 seconds of the two 'auto' runs combined --
going through 1,253,727 blocks and using 895,705 of them to sample 836,541 rows.
The estimate_percent=>1 was *faster* and sampled many *more* rows than AUTO_SAMPLE_SIZE !

Here’s a summary :


Gather Stats run     Sample %age     Disk Blocks Actually Read   Rows Sampled   Time taken (sec)
Run 1 (all cols)     .0058412006                       585,686          4,798             150.85
Run 2 (some cols)    .0584120060                     1,136,088         49,017             176.12
Auto : total         --                              1,721,774         53,815             326.97

Estimate percent 1   1.0000000000                    1,253,727        836,541             205.40
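As a quick sanity check on those sample percentages, a back-of-envelope Python sketch of the expected row counts, assuming simple row-level (Bernoulli-style) sampling -- an approximation of what Oracle's SAMPLE clause does, not its actual implementation:

```python
# Expected vs observed rows for each sample percentage seen in the traces.
total_rows = 82_499_184  # rows in TEST_APPEND

for pct, observed in [(0.0058412006, 4_798),
                      (0.0584120060, 49_017),
                      (1.0, 836_541)]:
    expected = total_rows * pct / 100
    print(f"{pct:>12.10f}% -> expected ~{expected:,.0f} rows, observed {observed:,}")
```

The observed counts are close to the expectation in each case, which suggests the tiny percentages themselves (not the row counts) are what make the 'auto' behaviour surprising: the whole table is read to sample a handful of rows.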

Creating a COMPRESSed Table

I haven't yet used Table Compression in Oracle (see this OTN article) but wanted to see what space savings I could achieve.

I started off with a table with 82.5 million rows. (Note : The table is called "TEST_APPEND" because I had been running a bunch of tests with INSERT /*+ APPEND */ against LOGGING, NOLOGGING and the presence/absence of an index earlier. I had built up the table as a multiplied copy of DBA_OBJECTS, modifying data as it was inserted into TEST_APPEND).

I then created 4 tables from TEST_APPEND, using 8KB and 16KB block sizes, without and with ordering the data before inserting.

As I expected, the target table in a 16KB tablespace with data pre-ordered by the most compressible columns achieved the best space savings. So, might this be a case for larger block sizes ? Table Compression works at the Block Level -- compressing repeated values it finds within a data block, irrespective of whether the same values exist in contiguous blocks. Therefore, fitting the largest number of rows into a block with the greatest possible repetition (ie ordered by low cardinality keys) achieves the best compression.
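The block-level repetition argument can be sketched with a toy simulation. The owner names and block size below are made up for illustration; Oracle's actual compression is symbol-table based per block, but the "fewer distinct values per block" intuition carries over:

```python
import random

# Toy model: a "block" holds 100 rows; compressibility is proxied by how
# few distinct values each block contains.  Sorting the data first packs
# identical values into the same blocks.
random.seed(42)
rows = [random.choice(["SYS", "SYSTEM", "HEMANT", "SCOTT"])  # hypothetical owners
        for _ in range(10_000)]
ROWS_PER_BLOCK = 100

def avg_distinct_per_block(data):
    blocks = [data[i:i + ROWS_PER_BLOCK]
              for i in range(0, len(data), ROWS_PER_BLOCK)]
    return sum(len(set(b)) for b in blocks) / len(blocks)

print("unordered:", avg_distinct_per_block(rows))          # roughly 4 per block
print("ordered:  ", avg_distinct_per_block(sorted(rows)))  # close to 1 per block
```

With unordered data, nearly every block sees all four values; once sorted, all but the run-boundary blocks hold a single value, which is exactly the repetition block-level compression exploits.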

SQL>
SQL>
SQL> set numformat 99,999,999,999
SQL> set timing on
SQL> -- get the current size of TEST_APPEND
SQL> rem exec dbms_Stats.gather_table_stats(ownname=>'HEMANT',tabname=>'TEST_APPEND',method_opt=>'FOR ALL COLUMNS SIZE 1',estimate_percent=>100,degree=>4);
SQL> select blocks, bytes/1048576 from user_segments where segment_name = 'TEST_APPEND';

BLOCKS BYTES/1048576
--------------- ---------------
1,244,672 9,724

Elapsed: 00:00:05.79
SQL> select blocks, num_rows,avg_row_len,num_rows*avg_row_len*1.2/8192 Expected_block from user_tables where table_name = 'TEST_APPEND';

BLOCKS NUM_ROWS AVG_ROW_LEN EXPECTED_BLOCK
--------------- --------------- --------------- ---------------
1,242,880 82,499,184 103 1,244,739

Elapsed: 00:00:00.09
SQL>
SQL> rem Setting a large S_A_S manually (Temporary tablespace has been recreated with 100M extents)
SQL> rem alter session set workarea_size_policy='MANUAL';
SQL> rem alter session set sort_Area_size=104857600;
SQL>
SQL>
SQL>
SQL> drop table ta_comp_8k_noord;

Table dropped.

Elapsed: 00:00:00.87
SQL> drop table ta_comp_8k_ordered;

Table dropped.

Elapsed: 00:00:00.24
SQL>
SQL> drop table ta_comp_16k_noord;

Table dropped.

Elapsed: 00:00:00.13
SQL> drop table ta_comp_16k_ordered;

Table dropped.

Elapsed: 00:00:00.17
SQL>
SQL> create table ta_comp_8k_noord tablespace test_compress_8k as select * from test_append where 1=2;

Table created.

Elapsed: 00:00:00.32
SQL> create table ta_comp_8k_ordered tablespace test_compress_8k as select * from test_append where 1=2;

Table created.

Elapsed: 00:00:00.19
SQL>
SQL> create table ta_comp_16k_noord tablespace test_compress_16k as select * from test_append where 1=2;

Table created.

Elapsed: 00:00:02.28
SQL> create table ta_comp_16k_ordered tablespace test_compress_16k as select * from test_append where 1=2;

Table created.

Elapsed: 00:00:00.11
SQL>
SQL> alter table ta_comp_8k_noord compress;

Table altered.

Elapsed: 00:00:00.10
SQL> alter table ta_comp_8k_ordered compress;

Table altered.

Elapsed: 00:00:00.01
SQL>
SQL> alter table ta_comp_16k_noord compress;

Table altered.

Elapsed: 00:00:00.04
SQL> alter table ta_comp_16k_ordered compress;

Table altered.

Elapsed: 00:00:00.03
SQL>
SQL> alter table ta_comp_8k_noord nologging;

Table altered.

Elapsed: 00:00:00.02
SQL> alter table ta_comp_8k_ordered nologging;

Table altered.

Elapsed: 00:00:00.01
SQL>
SQL> alter table ta_comp_16k_noord nologging;

Table altered.

Elapsed: 00:00:00.02
SQL> alter table ta_comp_16k_ordered nologging;

Table altered.

Elapsed: 00:00:00.01
SQL>
SQL> alter session enable parallel dml;

Session altered.

Elapsed: 00:00:00.01
SQL>
SQL> insert /*+ APPEND PARALLEL (2) */ into ta_comp_8k_noord select /*+ PARALLEL (test_a 2) */ * from test_append test_a;

82499184 rows created.

Elapsed: 00:11:10.02
SQL> commit;

Commit complete.

Elapsed: 00:00:00.13
SQL>
SQL> insert /*+ APPEND PARALLEL (2) */ into ta_comp_16k_noord select /*+ PARALLEL (test_a 2) */ * from test_append test_a;

82499184 rows created.

Elapsed: 00:13:20.69
SQL> commit;

Commit complete.

Elapsed: 00:00:00.05
SQL>
SQL>
SQL> select sn.name, s.value
2 from v$statname sn, v$mystat s
3 where sn.statistic#=s.statistic#
4 and sn.name in ('CPU used by this session', 'consistent gets', 'db block changes', 'db block gets', 'physical reads direct','physical writes direct','redo entries', 'redo size','undo change vector size')
5 order by sn.name;

NAME VALUE
---------------------------------------------------------------- ---------------
CPU used by this session 188,251
consistent gets 2,503,958
db block changes 39,542
db block gets 406,400
physical reads direct 2,482,350
physical writes direct 360,009
redo entries 42,086
redo size 4,791,160
undo change vector size 825,280

9 rows selected.

Elapsed: 00:00:00.11
SQL>
SQL> -- get the current size of TA_COMP_8K_NOORD
SQL> exec dbms_Stats.gather_table_stats(ownname=>'HEMANT',tabname=>'TA_COMP_8K_NOORD',method_opt=>'FOR ALL COLUMNS SIZE 1',estimate_percent=>100,degree=>4);

PL/SQL procedure successfully completed.

Elapsed: 00:14:45.67
SQL> select blocks, bytes/1048576 from user_segments where segment_name = 'TA_COMP_8K_NOORD';

BLOCKS BYTES/1048576
--------------- ---------------
243,200 1,900

Elapsed: 00:00:00.03
SQL> select blocks, num_rows,avg_row_len from user_tables where table_name = 'TA_COMP_8K_NOORD';

BLOCKS NUM_ROWS AVG_ROW_LEN
--------------- --------------- ---------------
242,938 82,499,184 103

Elapsed: 00:00:00.01
SQL>
SQL> -- get the current size of TA_COMP_16K_NOORD
SQL> exec dbms_Stats.gather_table_stats(ownname=>'HEMANT',tabname=>'TA_COMP_16K_NOORD',method_opt=>'FOR ALL COLUMNS SIZE 1',estimate_percent=>100,degree=>4);

PL/SQL procedure successfully completed.

Elapsed: 00:15:06.52
SQL> select blocks, bytes/1048576 from user_segments where segment_name = 'TA_COMP_16K_NOORD';

BLOCKS BYTES/1048576
--------------- ---------------
118,272 1,848

Elapsed: 00:00:00.23
SQL> select blocks, num_rows,avg_row_len from user_tables where table_name = 'TA_COMP_16K_NOORD';

BLOCKS NUM_ROWS AVG_ROW_LEN
--------------- --------------- ---------------
118,138 82,499,184 103

Elapsed: 00:00:00.07
SQL>
SQL>
SQL>
SQL> insert /*+ APPEND PARALLEL (2) */ into ta_comp_8k_ordered select /*+ PARALLEL (test_a 2) */ * from test_append test_a order by owner,object_type,object_name;

82499184 rows created.

Elapsed: 02:42:06.73
SQL> commit;

Commit complete.

Elapsed: 00:00:00.09
SQL>
SQL> -- get the current size of TA_COMP_8K_ORDERED
SQL> exec dbms_Stats.gather_table_stats(ownname=>'HEMANT',tabname=>'TA_COMP_8K_ORDERED',method_opt=>'FOR ALL COLUMNS SIZE 1',estimate_percent=>100,degree=>4);

PL/SQL procedure successfully completed.

Elapsed: 00:10:50.87
SQL> select blocks, bytes/1048576 from user_segments where segment_name = 'TA_COMP_8K_ORDERED';

BLOCKS BYTES/1048576
--------------- ---------------
132,096 1,032

Elapsed: 00:00:00.14
SQL> select blocks, num_rows,avg_row_len from user_tables where table_name = 'TA_COMP_8K_ORDERED';

BLOCKS NUM_ROWS AVG_ROW_LEN
--------------- --------------- ---------------
131,950 82,499,184 103

Elapsed: 00:00:00.04
SQL>
SQL>
SQL>
SQL> insert /*+ APPEND PARALLEL (2) */ into ta_comp_16k_ordered select /*+ PARALLEL (test_a 2) */ * from test_append test_a order by owner,object_type,object_name;

82499184 rows created.

Elapsed: 02:44:44.25
SQL> commit;

Commit complete.

Elapsed: 00:00:00.04
SQL>
SQL> -- get the current size of TA_COMP_16K_ORDERED
SQL> exec dbms_Stats.gather_table_stats(ownname=>'HEMANT',tabname=>'TA_COMP_16K_ORDERED',method_opt=>'FOR ALL COLUMNS SIZE 1',estimate_percent=>100,degree=>4);

PL/SQL procedure successfully completed.

Elapsed: 00:10:59.63
SQL> select blocks, bytes/1048576 from user_segments where segment_name = 'TA_COMP_16K_ORDERED';

BLOCKS BYTES/1048576
--------------- ---------------
64,512 1,008

Elapsed: 00:00:00.10
SQL> select blocks, num_rows,avg_row_len from user_tables where table_name = 'TA_COMP_16K_ORDERED';

BLOCKS NUM_ROWS AVG_ROW_LEN
--------------- --------------- ---------------
64,291 82,499,184 103

Elapsed: 00:00:00.04
SQL>
SQL>




As we can see in the listing, table TA_COMP_16K_ORDERED is, effectively, 1,008MB (64,512 blocks of 16KB) against the original table TEST_APPEND at 9,724MB (1,244,672 blocks of 8KB).
Note : The test_compress_8k and test_compress_16k tablespaces were created with UNIFORM SIZE 4M instead of AUTOALLOCATE, thus saving space on the "last extent" (which could be 64K in AUTOALLOCATE).


TableName             BlkSz         HWM     SegBlks   SegSzMB
TEST_APPEND           8KB     1,242,880   1,244,672     9,724

TA_COMP_8K_NOORD      8KB       242,938     243,200     1,900
TA_COMP_8K_ORDERED    8KB       131,950     132,096     1,032

TA_COMP_16K_NOORD     16KB      118,138     118,272     1,848
TA_COMP_16K_ORDERED   16KB       64,291      64,512     1,008




The above table summarises the results. Note, though, that it can take a long time to get the data ordered if the source is very large.
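From the segment sizes above, the compression ratios work out as follows (a small arithmetic sketch; the MB figures are taken from the listing):

```python
# Compression ratio and space saving for each variant, relative to the
# uncompressed TEST_APPEND segment.
sizes_mb = {
    "TEST_APPEND (uncompressed)": 9_724,
    "TA_COMP_8K_NOORD":           1_900,
    "TA_COMP_8K_ORDERED":         1_032,
    "TA_COMP_16K_NOORD":          1_848,
    "TA_COMP_16K_ORDERED":        1_008,
}
base = sizes_mb["TEST_APPEND (uncompressed)"]
for name, mb in sizes_mb.items():
    print(f"{name:<28} {mb:>6} MB   ratio {base / mb:5.1f}x   saving {1 - mb / base:6.1%}")
```

The 16KB ordered table comes out at nearly a 10x ratio, and ordering alone (8KB ordered vs 8KB unordered) buys almost as much as the larger block size does.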

25 May, 2008

Passwords are One Way Hashes, Not Encrypted

Here's a simple demonstration of how Oracle Database passwords are One Way Hashes, not Encrypted.


SQL>
SQL> create user HEMA identified by SOMEBODY_2 ;

User created.

SQL> create user HEMASOME identified by BODY_2 ;

User created.

SQL> create user HEMASOMEBOD identified by Y_2 ;

User created.

SQL>
SQL> create user HEMAS identified by OMEBODY_3 ;

User created.

SQL>
SQL> select username, password from dba_users
2 where
3 (username != 'HEMANT')
4 and
5 (
6 (username like 'HEMA%')
7 )
8 order by username;

USERNAME PASSWORD
------------------------------ ------------------------------
HEMA 8E8C633A6DAEC8E4
HEMAS 9DB550C5CAA21E55
HEMASOME 8E8C633A6DAEC8E4
HEMASOMEBOD 8E8C633A6DAEC8E4

SQL>


The "password" that is stored in DBA_USERS is actually a Hash of
the Username + Password (with an Oracle secret "magic" salt).
Thus, since "HEMA" + "SOMEBODY_2" is the same as "HEMASOME" + "BODY_2" and also "HEMASOMEBOD" + "Y_2" (in all cases it is "HEMASOMEBODY_2" !), the so-called password visible in DBA_USERS is the same for all three database accounts.

However, for the account "HEMAS", since the concatenation of the username with the password results in a different string, "HEMASOMEBODY_3" (only the last character is different !), the resulting "Hash" is quite different, and so is the "password" in DBA_USERS !

When a user enters his Username and Password, Oracle does not attempt any encryption of the password for comparison with the stored password, or decryption of the stored password for comparison with the user-supplied password. It simply computes the Hash value for Username+Password and compares that with the stored Hash value.

That is how users with very different usernames and passwords can seemingly have the "same" password stored in DBA_USERS -- it is not really a password but a Hash.
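The same property can be demonstrated with a generic hash function. SHA-256 here is only a stand-in (the actual pre-11g Oracle algorithm is a DES-based hash of the upper-cased username+password concatenation, with its own "magic" constant), but the collision behaviour is identical in spirit:

```python
import hashlib

def fake_password_hash(username, password):
    # Stand-in for Oracle's scheme: hash the upper-cased concatenation.
    # (Illustrative only -- NOT Oracle's real algorithm.)
    return hashlib.sha256((username + password).upper().encode()).hexdigest()[:16]

print(fake_password_hash("HEMA", "SOMEBODY_2"))
print(fake_password_hash("HEMASOME", "BODY_2"))      # identical: same concatenation
print(fake_password_hash("HEMASOMEBOD", "Y_2"))      # identical again
print(fake_password_hash("HEMAS", "OMEBODY_3"))      # different concatenation, different hash
```

Any scheme that hashes the bare concatenation has this quirk: shifting characters between the username and the password leaves the stored value unchanged.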

20 May, 2008

APPEND, NOLOGGING and Indexes

Running some tests on 10gR2, I find that the presence of Indexes can have a very significant impact on Direct Path Inserts done with the APPEND Hint, even if the target table is NOLOGGING.

Thus, for example, a Normal INSERT on an unindexed table is :

SQL> insert /* normal insert */ into test_append ta
2 select * from source_table;

301974 rows created.

Elapsed: 00:00:03.22
SQL> commit;

Commit complete.

Elapsed: 00:00:00.01
SQL> select sn.name, s.value
2 from v$statname sn, v$mystat s
3 where sn.statistic#=s.statistic#
4 and sn.name in ('CPU used by this session', 'consistent gets', 'db block changes', 'db block gets', 'physical reads direct','physical writes direct','redo entries', 'redo size','undo change vector size')
5 order by sn.name;

NAME VALUE
---------------------------------------------------------------- ---------------
CPU used by this session 145
consistent gets 12,608
db block changes 30,725
db block gets 36,672
physical reads direct 0
physical writes direct 0
redo entries 22,796
redo size 33,808,976
undo change vector size 1,122,508

While a Direct Path INSERT (APPEND Hint) with the table set to NOLOGGING is :

SQL> insert /*+ APPEND */ into test_append ta
2 select * from source_table;

301980 rows created.

Elapsed: 00:00:01.61
SQL> commit;

Commit complete.

Elapsed: 00:00:00.03
SQL> select sn.name, s.value
2 from v$statname sn, v$mystat s
3 where sn.statistic#=s.statistic#
4 and sn.name in ('CPU used by this session', 'consistent gets', 'db block changes', 'db block gets', 'physical reads direct','physical writes direct','redo entries', 'redo size','undo change vector size')
5 order by sn.name;

NAME VALUE
---------------------------------------------------------------- ---------------
CPU used by this session 129
consistent gets 4,353
db block changes 194
db block gets 4,353
physical reads direct 0
physical writes direct 4,146
redo entries 260
redo size 26,816
undo change vector size 4,664

However, if I have a "large" Index (a multi-column, concatenated index with a large key size) on the target table, I get :

SQL> insert /*+ APPEND */ into test_append ta
2 select * from source_table;

301974 rows created.

Elapsed: 00:00:24.45
SQL> commit;

Commit complete.

Elapsed: 00:00:00.18
SQL> select sn.name, s.value
2 from v$statname sn, v$mystat s
3 where sn.statistic#=s.statistic#
4 and sn.name in ('CPU used by this session', 'consistent gets', 'db block changes', 'db block gets', 'physical reads direct','physical writes direct','redo entries', 'redo size','undo change vector size')
5 order by sn.name;

NAME VALUE
---------------------------------------------------------------- ---------------
CPU used by this session 1,107
consistent gets 21,592
db block changes 186,229
db block gets 194,635
physical reads direct 4,775
physical writes direct 8,921
redo entries 100,228
redo size 152,819,412
undo change vector size 72,298,884


When a Direct Path INSERT is executed, Index Maintenance is deferred. However, Index Maintenance itself generates Undo and is NOT a NoLogging operation.

Thus, my INSERT that normally generates 33MB of Redo, and can be optimized down to 26KB with NOLOGGING set at the Table level and the APPEND Hint (directive), becomes a 152MB Redo generator if I have an Index.

YMMV, depending on the table columns, Index key length etc. But it is important to note that an Index can really hurt your happiness when you are doing bulk loads.
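To put the three runs side by side, a small sketch of the redo volumes from the session statistics above:

```python
# Redo generated per test (bytes, from the "redo size" statistic above).
redo = {
    "normal insert, no index":            33_808_976,
    "APPEND into NOLOGGING, no index":        26_816,
    "APPEND into NOLOGGING, with index": 152_819_412,
}
base = redo["normal insert, no index"]
for name, nbytes in redo.items():
    print(f"{name:<36} {nbytes / 1_048_576:8.2f} MB  ({nbytes / base:8.4f}x of normal)")
```

The APPEND insert without an index generates well under 0.1% of the normal redo, while the same insert with the index generates about 4.5 times *more* redo than the plain conventional insert.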

08 May, 2008

RMAN Consistent ("COLD" ?) Backup and Restore

The RMAN documentation differentiates between "inconsistent" and "consistent" backups on the basis of whether the database is OPEN (inconsistent) or cleanly shut down and mounted (consistent) during the Backup.
The nomenclature "inconsistent backup" makes me nervous. Why not call it the good old "HOT" Backup ?

To do a "consistent" backup with RMAN, the database must be mounted, as RMAN needs to access and update the controlfiles. With an OS-scripted backup pre-RMAN, the database could be truly "COLD" -- there would be no Oracle processes running.
RMAN does not back up the Online Redo Log files. With an OS-scripted backup, you had the option -- you could choose to include these files in your backup if you were careful about how you planned to use Cold Backups for Roll-Forward recoveries with ArchiveLogs.

Since RMAN does not back up the Online Redo Log files, you must, perforce, OPEN RESETLOGS on a Restore. With a scripted backup, if you had also included your Online Redo Log files in the backup and restore (provided that you did not plan to apply any ArchiveLogs), you could simply STARTUP the database and continue the LogSequenceNumbers. (Of course, you might then be duplicating LogSequenceNumbers if the database had been active in ArchiveLog mode since the backup, so you have to be careful to distinguish the two "streams" of ArchiveLogs.)


Here below is the simplest "consistent" Backup and Restore using RMAN :



C:\>rman

Recovery Manager: Release 10.2.0.3.0 - Production on Thu May 8 22:20:17 2008

Copyright (c) 1982, 2005, Oracle. All rights reserved.

RMAN> connect target /

connected to target database (not started)

RMAN> startup mount

Oracle instance started
database mounted

Total System Global Area 536870912 bytes

Fixed Size 1291652 bytes
Variable Size 297798268 bytes
Database Buffers 234881024 bytes
Redo Buffers 2899968 bytes

RMAN> backup database;

Starting backup at 08-MAY-08
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=58 devtype=DISK
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=C:\OR10G2DB\SYSTEM01.DBF
input datafile fno=00003 name=C:\OR10G2DB\SYSAUX01.DBF
input datafile fno=00002 name=F:\OR10G2DB\TEST_TBS_01.DBF
input datafile fno=00004 name=C:\OR10G2DB\USERS01.DBF
input datafile fno=00005 name=C:\OR10G2DB\EXAMPLE01.DBF
input datafile fno=00007 name=C:\OR10G2DB\UNDO.DBF
channel ORA_DISK_1: starting piece 1 at 08-MAY-08
channel ORA_DISK_1: finished piece 1 at 08-MAY-08
piece handle=C:\OR10G2DB_FLASH\OR10G2DB\BACKUPSET\2008_05_08\O1_MF_NNNDF_TAG200
0508T222041_42631X6C_.BKP tag=TAG20080508T222041 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:47
Finished backup at 08-MAY-08

Starting Control File and SPFILE Autobackup at 08-MAY-08
piece handle=C:\OR10G2DB_FLASH\OR10G2DB\AUTOBACKUP\2008_05_08\O1_MF_S_654214767
426358M1_.BKP comment=NONE
Finished Control File and SPFILE Autobackup at 08-MAY-08

RMAN>
RMAN> shutdown

database dismounted
Oracle instance shut down

RMAN>

******************* BACKUP COMPLETED *****************
******************************************************

========= database files deleted ====================
++++++++++++++++++++++++++++++++++++++++++++++++++++++
======================================================



C:\>rman target /

Recovery Manager: Release 10.2.0.3.0 - Production on Thu May 8 22:32:07 2008

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database (not started)

RMAN> startup nomount

Oracle instance started

Total System Global Area 536870912 bytes

Fixed Size 1291652 bytes
Variable Size 301992572 bytes
Database Buffers 230686720 bytes
Redo Buffers 2899968 bytes

RMAN>
RMAN> restore controlfile from autobackup;

Starting restore at 08-MAY-08
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=57 devtype=DISK

recovery area destination: \OR10G2DB_FLASH
database name (or database unique name) used for search: OR10G2DB
channel ORA_DISK_1: autobackup found in the recovery area
channel ORA_DISK_1: autobackup found: C:\OR10G2DB_FLASH\OR10G2DB\AUTOBACKUP\2008
_05_08\O1_MF_S_654214767_426358M1_.BKP
channel ORA_DISK_1: control file restore from autobackup complete
output filename=C:\OR10G2DB\CONTROL01.CTL
output filename=C:\OR10G2DB\CONTROL02.CTL
output filename=C:\OR10G2DB\CONTROL03.CTL
Finished restore at 08-MAY-08

RMAN>
RMAN> restore database;

Starting restore at 08-MAY-08
using channel ORA_DISK_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 05/08/2008 22:33:28
ORA-01507: database not mounted

RMAN> alter database mount;

database mounted
released channel: ORA_DISK_1

RMAN>
RMAN> restore database;

Starting restore at 08-MAY-08
Starting implicit crosscheck backup at 08-MAY-08
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=57 devtype=DISK
Crosschecked 45 objects
Finished implicit crosscheck backup at 08-MAY-08

Starting implicit crosscheck copy at 08-MAY-08
using channel ORA_DISK_1
Finished implicit crosscheck copy at 08-MAY-08

searching for all files in the recovery area
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: C:\OR10G2DB_FLASH\OR10G2DB\AUTOBACKUP\2008_05_08\O1_MF_S_654214767_42
6358M1_.BKP

using channel ORA_DISK_1

skipping datafile 2; already restored to file F:\OR10G2DB\TEST_TBS_01.DBF
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to C:\OR10G2DB\SYSTEM01.DBF
restoring datafile 00003 to C:\OR10G2DB\SYSAUX01.DBF
restoring datafile 00004 to C:\OR10G2DB\USERS01.DBF
restoring datafile 00005 to C:\OR10G2DB\EXAMPLE01.DBF
restoring datafile 00007 to C:\OR10G2DB\UNDO.DBF
channel ORA_DISK_1: reading from backup piece C:\OR10G2DB_FLASH\OR10G2DB\BACKUPS
ET\2008_05_08\O1_MF_NNNDF_TAG20080508T222041_42631X6C_.BKP
channel ORA_DISK_1: restored backup piece 1
piece handle=C:\OR10G2DB_FLASH\OR10G2DB\BACKUPSET\2008_05_08\O1_MF_NNNDF_TAG2008
0508T222041_42631X6C_.BKP tag=TAG20080508T222041
channel ORA_DISK_1: restore complete, elapsed time: 00:02:05
Finished restore at 08-MAY-08

RMAN>
RMAN> alter database open;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of alter db command at 05/08/2008 22:36:48
ORA-01589: must use RESETLOGS or NORESETLOGS option for database open

RMAN>
RMAN> alter database open resetlogs;

database opened

RMAN>



The OPEN RESETLOGS is necessary because RMAN does not back up and restore
Online Redo Logs.

07 May, 2008

DBAs working long hours

A forums thread on why and how DBAs should handle the need to work long hours.

03 May, 2008

One Thing Leads to Another ....

I had written my post on Row Sizes and SORTs following the same forums posting that Jonathan Lewis used to build his test case in his blog post called "Manual Optimisation".

On reading the first version of Manual Optimisation (hey, did you notice that he is British ?!), I was then going to update my own post and reference his method. Quite cool and, at the same time, very complicated. However, I held back on that (and wrote another post on something else).

A few days later, I came back to his blog to read the new comments, and found a reference to Tom Kyte's response. Yes, I agree with Tom Kyte and do feel that depending on what could be a side-effect of behaviour isn't strictly usable in most scenarios -- but, maybe, Jonathan Lewis has found an exception. (Will I use it ? Probably not -- I wouldn't be able to explain it to others !).

So, all things going around, what is THIS posting about ? Well actually, I found the example of the Employee Names and Telephone Numbers lookup that Tom Kyte has quoted as "Optimize to Your Most Frequently Asked Questions" quite intriguing. I've seen a similar application in Lotus Notes and wondered how it could be done in Oracle. Now, I know how it can be done !

So, all things going around : from a forum posting, to my blog, to Jonathan Lewis's posting, to Tom Kyte's response, and to an example which gives me an insight into a solution for a problem not strictly related to the first one (although there is a good link between the two -- the fact that the row size can be an issue in fetching data !).

01 May, 2008

TEMPORARY Segments in Data/Index Tablespaces

Some misconceptions about Temporary Segments in a forums thread led me to run this demo to show how and when 'TEMPORARY' segments may be created in a "normal" tablespace -- one other than the TEMP tablespace.

I ran some CREATE, INSERT, REBUILD and MOVE commands from one session and monitored segments in that tablespace from another session concurrently. The timestamps in the outputs show how and when the "TEMPORARY" segments are created by Oracle. {Note : Pay particular attention when using Parallel operations -- eg CREATE or REBUILD operations using Parallelism -- there are at least as many Temporary Segments created as the number of Parallel Slaves, as each slave creates its own Segment (and extents), and these are merged by the Query Co-ordinator on completion.}
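The Parallel case in the Note above can be observed with something like the following sketch (the Parallel degree of 4 is my own assumption, not part of the test case) :

```sql
-- Session 1 : rebuild the index in parallel (degree 4 is an assumption)
alter index test_table_ndx_1 rebuild parallel 4;

-- Session 2 : while the rebuild runs, you should catch roughly one
-- TEMPORARY segment per Parallel Slave in the target tablespace
select segment_name, segment_type, extents, bytes/1024/1024 as mb
  from dba_segments
 where tablespace_name = 'TEST_TBS'
   and segment_type = 'TEMPORARY';
```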

This is my test session with the CREATE, INSERT, REBUILD and MOVE operations :

18:54:23 SQL>
18:54:23 SQL>
18:54:23 SQL> REM To demonstrate how and when Temporary segments NOT in the TEMP tablespace might get created
18:54:23 SQL>
18:54:23 SQL> rem Deliberately create the TBS with small 64K extents so that the operation is slow
18:54:23 SQL> rem and we can query and find the Temporary segments while existant for a short while
18:54:23 SQL>
18:54:23 SQL> create tablespace test_tbs
18:54:23 2 datafile 'F:\OR10G2DB\test_tbs_01.dbf' SIZE 200M autoextend on next 10M maxsize 2000M
18:54:23 3 extent management local uniform size 64K segment space management auto;

Tablespace created.

18:54:34 SQL>
18:54:34 SQL> alter session set workarea_size_policy='MANUAL';

Session altered.

18:54:34 SQL> alter session set sort_area_size=65536;

Session altered.

18:54:34 SQL>
18:54:34 SQL> pause press ENTER to proceed
press ENTER to proceed

18:54:43 SQL>
18:54:43 SQL> rem Create a largish table. Do NOT use NOLOGGING or APPEND -- just so that the operation is slower
18:54:43 SQL> pause press ENTER to proceed
press ENTER to proceed

18:54:45 SQL> create table test_table tablespace test_tbs as select * from dba_objects union all select * from dba_objects union all select * from dba_objects;

Table created.

18:54:50 SQL>
18:54:50 SQL>
18:54:50 SQL> rem Enlarge the table
18:54:50 SQL> pause press ENTER to proceed -- see if you can find TEMPORARY segments while the CREATE is running
press ENTER to proceed -- see if you can find TEMPORARY segments while the CREATE is running

18:54:54 SQL> insert into test_table select * from test_table union all select * from test_table;

310044 rows created.

18:55:06 SQL> pause press ENTER to proceed -- see if you can find TEMPORARY segments while the CREATE is running
press ENTER to proceed -- see if you can find TEMPORARY segments while the CREATE is running

18:55:08 SQL> create index test_table_ndx_1 on test_table(owner,object_name) tablespace test_tbs;

Index created.

18:55:47 SQL>
18:55:47 SQL> rem rebuild index and table
18:55:47 SQL> pause press ENTER to proceed -- see if you can find TEMPORARY segments while the REBUILD is running
press ENTER to proceed -- see if you can find TEMPORARY segments while the REBUILD is running

18:55:55 SQL> alter index test_table_ndx_1 rebuild;

Index altered.

18:56:11 SQL> pause press ENTER to proceed -- see if you can find TEMPORARY segments while the MOVE is running
press ENTER to proceed -- see if you can find TEMPORARY segments while the MOVE is running

18:56:16 SQL> alter table test_table move;

Table altered.

18:56:37 SQL> pause press ENTER to proceed -- see if you can find TEMPORARY segments while the REBUILD is running
press ENTER to proceed -- see if you can find TEMPORARY segments while the REBUILD is running

18:56:41 SQL> alter index test_table_ndx_1 rebuild;

Index altered.

18:57:23 SQL>
18:57:23 SQL>



And this is what I see in another session monitoring the first :

18:54:29 SQL> /

no rows selected

18:54:29 SQL> l
1 select segment_name,segment_type,header_file,header_block,extents,bytes/1024/1024 sz
2 from dba_segments
3 where segment_type = 'TEMPORARY'
4 and tablespace_name = 'TEST_TBS'
5 union
6 select segment_name,segment_type,header_file,header_block,extents,bytes/1024/1024 sz
7 from dba_segments
8 where segment_name in ('TEST_TABLE','TEST_TABLE_NDX_1')
9 and tablespace_name = 'TEST_TBS'
10* order by segment_name, header_file
18:54:31 SQL> /

no rows selected

18:54:32 SQL> /

no rows selected

18:54:41 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
2.11 TEMPORARY 2 11 143 8.94

18:54:48 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
TEST_TABLE TABLE 2 11 272 17.00

18:54:50 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
TEST_TABLE TABLE 2 11 375 23.44

18:54:56 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
TEST_TABLE TABLE 2 11 375 23.44

18:54:59 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
TEST_TABLE TABLE 2 11 714 44.63

18:55:05 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
TEST_TABLE TABLE 2 11 812 50.75

18:55:11 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
TEST_TABLE TABLE 2 11 812 50.75

18:55:13 SQL>
18:55:17 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
TEST_TABLE TABLE 2 11 812 50.75

18:55:17 SQL>
18:55:20 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
TEST_TABLE TABLE 2 11 812 50.75

18:55:21 SQL>
18:55:23 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
TEST_TABLE TABLE 2 11 812 50.75

18:55:24 SQL>
18:55:27 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
TEST_TABLE TABLE 2 11 812 50.75

18:55:28 SQL>
18:55:30 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
TEST_TABLE TABLE 2 11 812 50.75

18:55:31 SQL>
18:55:34 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
2.6507 TEMPORARY 2 6507 41 2.56
TEST_TABLE TABLE 2 11 812 50.75

18:55:35 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
2.6507 TEMPORARY 2 6507 114 7.13
TEST_TABLE TABLE 2 11 812 50.75

18:55:37 SQL>
18:55:44 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
2.6507 TEMPORARY 2 6507 271 16.94
TEST_TABLE TABLE 2 11 812 50.75

18:55:45 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
TEST_TABLE TABLE 2 11 812 50.75
TEST_TABLE_NDX_1 INDEX 2 6507 340 21.25

18:55:48 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
2.9227 TEMPORARY 2 9227 2 .13
TEST_TABLE TABLE 2 11 812 50.75
TEST_TABLE_NDX_1 INDEX 2 6507 340 21.25

18:55:57 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
2.9227 TEMPORARY 2 9227 62 3.88
TEST_TABLE TABLE 2 11 812 50.75
TEST_TABLE_NDX_1 INDEX 2 6507 340 21.25

18:56:00 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
2.9227 TEMPORARY 2 9227 259 16.19
TEST_TABLE TABLE 2 11 812 50.75
TEST_TABLE_NDX_1 INDEX 2 6507 340 21.25

18:56:05 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
2.9227 TEMPORARY 2 9227 278 17.38
TEST_TABLE TABLE 2 11 812 50.75
TEST_TABLE_NDX_1 INDEX 2 6507 340 21.25

18:56:08 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
TEST_TABLE TABLE 2 11 812 50.75
TEST_TABLE_NDX_1 INDEX 2 9227 340 21.25

18:56:12 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
2.6507 TEMPORARY 2 6507 175 10.94
TEST_TABLE TABLE 2 11 812 50.75
TEST_TABLE_NDX_1 INDEX 2 9227 340 21.25

18:56:19 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
2.6507 TEMPORARY 2 6507 407 25.44
TEST_TABLE TABLE 2 11 812 50.75
TEST_TABLE_NDX_1 INDEX 2 9227 340 21.25

18:56:25 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
2.6507 TEMPORARY 2 6507 639 39.94
TEST_TABLE TABLE 2 11 812 50.75
TEST_TABLE_NDX_1 INDEX 2 9227 340 21.25

18:56:30 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
TEST_TABLE TABLE 2 6507 814 50.88
TEST_TABLE_NDX_1 INDEX 2 9227 340 21.25

18:56:39 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
TEST_TABLE TABLE 2 6507 814 50.88
TEST_TABLE_NDX_1 INDEX 2 9227 340 21.25

18:56:44 SQL>
18:56:51 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
TEST_TABLE TABLE 2 6507 814 50.88
TEST_TABLE_NDX_1 INDEX 2 9227 340 21.25

18:56:52 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
TEST_TABLE TABLE 2 6507 814 50.88
TEST_TABLE_NDX_1 INDEX 2 9227 340 21.25

18:56:57 SQL>
18:57:02 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
TEST_TABLE TABLE 2 6507 814 50.88
TEST_TABLE_NDX_1 INDEX 2 9227 340 21.25

18:57:02 SQL>
18:57:07 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
2.11 TEMPORARY 2 11 56 3.50
TEST_TABLE TABLE 2 6507 814 50.88
TEST_TABLE_NDX_1 INDEX 2 9227 340 21.25

18:57:08 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
2.11 TEMPORARY 2 11 125 7.81
TEST_TABLE TABLE 2 6507 814 50.88
TEST_TABLE_NDX_1 INDEX 2 9227 340 21.25

18:57:12 SQL>
18:57:19 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
2.11 TEMPORARY 2 11 290 18.13
TEST_TABLE TABLE 2 6507 814 50.88
TEST_TABLE_NDX_1 INDEX 2 9227 340 21.25

18:57:20 SQL> /

Segment SegType File Hdr Blk Extents MB
------------------ ------------ ----- ------- -------- ----------
TEST_TABLE TABLE 2 6507 814 50.88
TEST_TABLE_NDX_1 INDEX 2 11 340 21.25

18:57:25 SQL>


Observations :
1. Temporary Segments are named as 'filenumber.headerblocknumber'.
2. The CREATE TABLE statement creates a TEMPORARY Segment "2.11" which, on successful creation, is converted to a TABLE Segment. (You can see how the filenumber and headerblocknumber are used).
3. When *extending* an already created Table (adding rows to it), the new extents are added to the same Table segment -- they are *NOT* a TEMPORARY Segment.
4. A CREATE INDEX also begins with a TEMPORARY Segment. However, when creating an Index, you would see 2 such segments -- one in the TEMP Tablespace where the SORT operations occur, and one in the _target_ Tablespace where the Index is supposed to be created. In this case, the Index Segment starts with Block 6507.
5. An ALTER INDEX REBUILD creates a new TEMPORARY Segment (Header Block 9227 in my case) and drops the old Index Segment only after successful completion -- thus you see both the Block 6507 "TEST_TABLE_NDX_1" and the Block 9227 TEMPORARY Segment while the REBUILD is running, after which the Block 6507 Segment is dropped.
6. An ALTER TABLE MOVE also creates its own TEMPORARY Segment (you can see that it reuses the Extents, beginning with the one at Block 6507, that were released by the "dropped" Index segment). On successful completion, the "old" Table Segment (Block 11) is dropped and replaced by the new Segment (Block 6507).
7. Another ALTER INDEX REBUILD again behaves in the same manner (reusing the extents, beginning with the one at Block 11).


Why does Oracle use this manner of TEMPORARY Segments for CREATE
and REBUILD/MOVE ?

Consider what happens if the Tablespace doesn't have enough space. Say the tablespace is a single datafile of 100MB and you begin running a CREATE TABLE ... AS SELECT ... which will result in a final table size of 200MB (of, say, 1MB extents). When the CREATE begins, Oracle does *not* know how large the target table will be. It begins with 1MB, grows to 100MB, and then fails to allocate another extent. Since this is a CREATE TABLE, it cannot allow an incomplete table of 100MB to continue to exist. Oracle simply drops the TEMPORARY Segment it was using to hold the table rows and returns a failure on the CREATE TABLE -- the Table does NOT get created.
Exactly the same thing happens with a CREATE INDEX.
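You can simulate this failure with a deliberately tiny tablespace. This is only a sketch -- the tablespace name, size and datafile path are mine, and the exact error depends on your data volumes :

```sql
-- Deliberately small tablespace so the CTAS runs out of space
-- (name, size and path are hypothetical)
create tablespace tiny_tbs
  datafile 'C:\OR10G2DB\tiny_tbs_01.dbf' size 1M
  extent management local uniform size 64K;

create table big_copy tablespace tiny_tbs
  as select * from dba_objects;
-- fails, typically with
-- ORA-01652: unable to extend temp segment ... in tablespace TINY_TBS

select count(*) from big_copy;
-- ORA-00942: table or view does not exist  -- nothing is left behind
```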

How about the REBUILD and MOVE ?
You might have an Index of 100MB Segment Size. However, you may have
deleted, say 90% of the rows from the Table. When you rebuild the Index,
the new Index might be only 10MB. Till such time as the new Index creation
completes, Oracle cannot drop the old index --- even if there is enough free
space for the 10MB segment to co-exist with the 100MB segment. What
would happen if the server were to fail or the database instance were to crash
because of some other bug while the REBUILD was running ? Had Oracle
dropped the original segment, you would end up with *NO* Index. Oracle
cannot allow that so it doesn't drop the original segment till the new one
is successfully built. The "switching" of the Segments is an "atomic" operation.
What if you hadn't deleted any rows from the table and yet were, simply, rebuilding the Index ? The new Index segment might require 100MB of space.
However, when Oracle begins the REBUILD, it doesn't know precisely how
large the new segment *will* really be (it can't rely on the NUM_ROWS
statistics as those statistics might be outdated !). So, it has to first create the
new Index segment (and fail it if the Tablespace doesn't have the additional
100MB free space) before it can "drop" the old segment.
The same manner of behaviour has to occur with an ALTER TABLE ... MOVE.
Oracle cannot be sure, in advance, of how large the new Table segment will finally be.
It has to first create the Table segment successfully before it can "drop" the
old segment. Also, if any other server/instance failure occurred midway, this
method ensures that the "old" Table is still available 100% intact.

Note : Indexes get invalidated on a Table MOVE because the MOVE changes ROWIDs, causing the Indexes to point to incorrect rows.
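You can verify the invalidation using the test case's own object names -- a sketch :

```sql
alter table test_table move;

-- the index status flips to UNUSABLE until it is rebuilt
select index_name, status
  from user_indexes
 where table_name = 'TEST_TABLE';

alter index test_table_ndx_1 rebuild;
```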

Well, then, what about INSERT operations ? When you INSERT into a Table that has already been created (even if it was created as an empty table !), the Table Segment already exists with 1 or more extents. The INSERT operation simply needs to allocate Extents as necessary to "grow" the table. If there is insufficient space mid-way, the INSERT is Rolled Back by running an UNDO on it (the UNDO is a "delete .. where rowid = ..." for each of the rows). The extents that got allocated and the blocks that got "formatted" during the INSERT remain allocated and formatted but are "cleared" of the rows.
Thus, if a Table starts with 100 Extents (and 20,000 rows) and undergoes an 80,000-row Insert that should have grown it to 400 Extents but fails after growing to 250 Extents (because the Tablespace datafiles are full), then at the end of the Rollback the ROW COUNT reverts to the original 20,000 rows but the Table Segment's size *remains* at 250 Extents ! Those Extents do not get deallocated. REMEMBER THAT the next time you attempt a large INSERT !
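A sketch to see this for yourself, reusing the test case's table. (A deliberate ROLLBACK stands in here for the space failure -- the effect on the segment's extents is the same) :

```sql
-- Extent count before
select extents, bytes/1024/1024 as mb
  from user_segments
 where segment_name = 'TEST_TABLE';

insert into test_table select * from test_table;   -- grow the segment
rollback;                                          -- the rows are undone ...

select count(*) from test_table;                   -- ... row count is back

-- ... but the extents allocated by the INSERT remain allocated
select extents, bytes/1024/1024 as mb
  from user_segments
 where segment_name = 'TEST_TABLE';
```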

Aggregated by orafaq.com
