29 April, 2016

Partition Storage -- 6 : Revisiting Partition HWM

After the curious finding in my previous blog post (where a Partition's High Water Mark was noticeably higher than that of a non-Partitioned Table, but then shrank on a MOVE operation), I am retrying the same rows with a different pattern of INSERT statements.

However, I am still sticking to a single session doing the INSERTs (as I don't want ASSM spreading the incoming rows across different non-contiguous blocks).

This is in 12.1.0.2
SQL> connect hemant/hemant
Connected.
SQL> create table part_table_3(id_column number(6), data_column varchar2(100))
  2  partition by range (id_column)
  3  (partition p_100 values less than (101),
  4   partition p_200 values less than (201),
  5   partition p_300 values less than (301),
  6   partition p_400 values less than (401),
  7   partition p_max values less than (maxvalue))
  8  /

Table created.

SQL> insert into part_table_3 values (51,'Fifty One');

1 row created.

SQL> 
SQL> commit;

Commit complete.

SQL> declare
  cntr number;
  begin
   cntr := 0;
   while cntr < 100000
    loop
     insert into part_table_3 values (25, 'New Row') ; 
     commit;
     cntr := cntr + 1;
    end loop;
end;
/

PL/SQL procedure successfully completed.

SQL> 
SQL> declare
  cntr number;
  begin
   cntr := 0;
   while cntr < 500001
    loop
     insert into part_table_3 values (55, 'New Row') ; 
     commit;
     cntr := cntr + 1;
    end loop;
end;
/

PL/SQL procedure successfully completed.

SQL> 
SQL> declare
  cntr number;
  begin
   cntr := 0;
   while cntr < 500001
    loop
     insert into part_table_3 values (45, 'New Row') ; 
     commit;
     cntr := cntr + 1;
    end loop;
end;
/

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL> 
SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_3',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> 
SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_3'
and partition_name = 'P_100'
/

AVG_ROW_LEN   NUM_ROWS    BLOCKS
----------- ---------- ----------
         11    1100003       3022

SQL> 
SQL> alter table part_table_3 move partition p_100 ;

Table altered.

SQL>  exec dbms_stats.gather_table_stats('','PART_TABLE_3',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_3'
and partition_name = 'P_100'
/

AVG_ROW_LEN   NUM_ROWS    BLOCKS
----------- ---------- ----------
         11    1100003       2484

SQL> 
SQL> select extent_id, blocks
from dba_extents
where segment_name = 'PART_TABLE_3'
and segment_type = 'TABLE PARTITION'
and partition_name = 'P_100'
and owner = 'HEMANT'
order by 1
/

 EXTENT_ID     BLOCKS
---------- ----------
         0       1024
         1       1024
         2       1024

SQL> 


So, a Row-By-Row Insert still resulted in the HWM being 3,022 blocks, shrinking to 2,484 blocks after a MOVE.



Let's try the same data-set in 11.2.0.4
SQL> connect hemant/hemant
Connected.
SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE 11.2.0.4.0 Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

SQL> 
SQL> create table part_table_3(id_column number(6), data_column varchar2(100))
  partition by range (id_column)
  (partition p_100 values less than (101),
   partition p_200 values less than (201),
   partition p_300 values less than (301),
   partition p_400 values less than (401),
   partition p_max values less than (maxvalue))
/

Table created.

SQL> show parameter deferr

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
deferred_segment_creation            boolean     TRUE
SQL> 
SQL> insert into part_table_3 values (51,'Fifty One');

1 row created.

SQL> commit;

Commit complete.

SQL> 
SQL> declare
  cntr number;
  begin
   cntr := 0;
   while cntr < 100000
    loop
     insert into part_table_3 values (25, 'New Row') ; 
     commit;
     cntr := cntr + 1;
    end loop;
end;
/

PL/SQL procedure successfully completed.

SQL> 
SQL> declare
  cntr number;
  begin
   cntr := 0;
   while cntr < 500001
    loop
     insert into part_table_3 values (55, 'New Row') ; 
     commit;
     cntr := cntr + 1;
    end loop;
end;
/

PL/SQL procedure successfully completed.

SQL> 
SQL> declare
  cntr number;
  begin
   cntr := 0;
   while cntr < 500001
    loop
     insert into part_table_3 values (45, 'New Row') ; 
     commit;
     cntr := cntr + 1;
    end loop;
end;
/

PL/SQL procedure successfully completed.

SQL> 
SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_3',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_3'
and partition_name = 'P_100'
/

AVG_ROW_LEN   NUM_ROWS    BLOCKS
----------- ---------- ----------
         11    1100003       3022

SQL> 
SQL> alter table part_table_3 move partition p_100 ;

Table altered.

SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_3',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_3'
and partition_name = 'P_100'
/

AVG_ROW_LEN   NUM_ROWS    BLOCKS
----------- ---------- ----------
         11    1100003       2484

SQL> 
SQL> select extent_id, blocks
from dba_extents
where segment_name = 'PART_TABLE_3'
and segment_type = 'TABLE PARTITION'
and partition_name = 'P_100'
and owner = 'HEMANT'
order by 1
/

 EXTENT_ID     BLOCKS
---------- ----------
         0       1024
         1       1024
         2       1024

SQL> 


So, 11.2.0.4 and 12.1.0.2 display the same behaviour for the Partition HWM : a HWM of 3,022 blocks shrinking to 2,484 blocks after a MOVE.
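
As a cross-check that does not depend on gathered statistics, the HWM can also be read directly with DBMS_SPACE.UNUSED_SPACE.  This is only a sketch (the owner, segment and partition names are the ones from this test; adjust them for your schema) :

```sql
-- Report blocks below the HWM for Partition P_100 without relying on
-- DBMS_STATS :  blocks below HWM = total_blocks - unused_blocks
set serveroutput on
declare
  l_total_blocks   number;
  l_total_bytes    number;
  l_unused_blocks  number;
  l_unused_bytes   number;
  l_lu_ext_file_id number;
  l_lu_ext_blk_id  number;
  l_last_used_blk  number;
begin
  dbms_space.unused_space(
    segment_owner             => 'HEMANT',
    segment_name              => 'PART_TABLE_3',
    segment_type              => 'TABLE PARTITION',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_lu_ext_file_id,
    last_used_extent_block_id => l_lu_ext_blk_id,
    last_used_block           => l_last_used_blk,
    partition_name            => 'P_100');
  dbms_output.put_line('Blocks below HWM : ' ||
                       to_char(l_total_blocks - l_unused_blocks));
end;
/
```

On this data-set, the figure reported should be close to the 3,022 (and, after the MOVE, 2,484) blocks seen above.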

The next test would be with a larger AVG_ROW_LEN.
.
.
.


25 April, 2016

Partition Storage -- 5 : Partitioned Table versus Non-Partitioned Table ? (in 12.1)

Reviewing my second blog post in this series, I found it strange that Partition P_100 (populated by Serial Inserts of 1 row, 100,000 rows, 500,000 rows and 500,000 rows) had such a high High Water Mark.

For 1.1million rows of an Average Row Length of 11, the High Water Mark was 3,022 blocks.

In the fourth blog post, a simple ALTER TABLE MOVE PARTITION had brought the High Water Mark down to 2,482 blocks !

This needs further investigation.

Let's compare a single Partition of a Partitioned Table with a Non-Partitioned Table for exactly the same data and same pattern of INSERT statements.

Starting with a new Partitioned Table.

SQL> l
  1  create table new_part_tbl (id_column number(6), data_column varchar2(100))
  2  partition by range (id_column)
  3  (partition p_100 values less than (101),
  4  partition p_200 values less than (201),
  5  partition p_300 values less than (301),
  6  partition p_400 values less than (401),
  7* partition p_max values less than (maxvalue))
SQL> /

Table created.

SQL>
SQL> insert into new_part_tbl values (51,'Fifty One');

1 row created.

SQL> 
SQL> insert into new_part_tbl
  2  select 25, 'New Row'
  3  from dual
  4  connect by level < 100001
  5  /

100000 rows created.

SQL> insert into new_part_tbl
  2  select 45, 'New Row'
  3  from dual
  4  connect by level < 500001
  5  /

500000 rows created.

SQL> /

500000 rows created.

SQL> commit;

Commit complete.

SQL> 
SQL> exec dbms_stats.gather_table_stats('','NEW_PART_TBL',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
  2  from user_tab_partitions
  3  where table_name = 'NEW_PART_TBL'
  4  and partition_name = 'P_100'
  5  /

AVG_ROW_LEN   NUM_ROWS     BLOCKS
----------- ---------- ----------
         11    1100001       3022

SQL> 
SQL> REM  Let's MOVE the Partition
SQL> alter table new_part_tbl move partition P_100;

Table altered.

SQL> exec dbms_stats.gather_table_stats('','NEW_PART_TBL',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> 
SQL> select avg_row_len, num_rows, blocks
  2  from user_tab_partitions
  3  where table_name = 'NEW_PART_TBL'
  4  and partition_name = 'P_100'
  5  /

AVG_ROW_LEN   NUM_ROWS     BLOCKS
----------- ---------- ----------
         11    1100001       2484

SQL> 
SQL> l
  1  select extent_id, blocks
  2  from dba_extents
  3  where segment_name = 'NEW_PART_TBL'
  4  and segment_type = 'TABLE PARTITION'
  5  and partition_name = 'P_100'
  6  and owner = 'HEMANT'
  7* order by 1
SQL> /

 EXTENT_ID     BLOCKS
---------- ----------
        0        1024
        1        1024
        2        1024

SQL> 


As expected (see the first blog post), the Extents are still 8MB each.  But the High Water Mark has "magically" shrunk from 3,022 blocks to 2,484 blocks.

Let's create a Non-Partitioned Table with the same columns and rows.

SQL> create table non_part_tbl (id_column number(6), data_column varchar2(100));

Table created.

SQL> insert into non_part_tbl values (51,'Fifty One');

1 row created.

SQL> insert into non_part_tbl
  2  select 25, 'New Row'
  3  from dual
  4  connect by level < 100001
  5  /

100000 rows created.

SQL> insert into non_part_tbl
  2  select 45, 'New Row'
  3  from dual
  4  connect by level < 500001
  5  /

500000 rows created.

SQL> /

500000 rows created.

SQL> commit;

Commit complete.

SQL> 
SQL> exec dbms_stats.gather_table_stats('','NON_PART_TBL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
  2  from user_tables
  3  where table_name = 'NON_PART_TBL'
  4  /

AVG_ROW_LEN   NUM_ROWS     BLOCKS
----------- ---------- ----------
         11    1100001       2512

SQL> 
SQL> REM Let's MOVE the Table
SQL> alter table non_part_tbl move;

Table altered.

SQL> select avg_row_len, num_rows, blocks
  2  from user_tables
  3  where table_name = 'NON_PART_TBL'
  4  /

AVG_ROW_LEN   NUM_ROWS     BLOCKS
----------- ---------- ----------
         11    1100001       2512

SQL> 
SQL> l
  1  select extent_id, blocks
  2  from dba_extents
  3  where segment_name = 'NON_PART_TBL'
  4  and segment_type = 'TABLE'
  5  and owner = 'HEMANT'
  6* order by 1
SQL> /

 EXTENT_ID     BLOCKS
---------- ----------
         0          8
         1          8
         2          8
         3          8
         4          8
         5          8
         6          8
         7          8
         8          8
         9          8
        10          8
        11          8
        12          8
        13          8
        14          8
        15          8
        16        128
        17        128
        18        128
        19        128
        20        128
        21        128
        22        128
        23        128
        24        128
        25        128
        26        128
        27        128
        28        128
        29        128
        30        128
        31        128
        32        128
        33        128
        34        128

35 rows selected.

SQL> 


The Non-Partitioned Table had a High Water Mark of 2,512 blocks.  This did not change with a MOVE.  The allocation of Extents is also as expected with AutoAllocate.

Why, then, does the Partition behave differently ?  It started with a High Water Mark of 3,022 blocks, which shrank to 2,484 blocks after a MOVE.

Is the Average Row Length or the actual data a factor ?  (Note : I am *not* using Table Compression).
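
A back-of-envelope check of what the HWM "should" be :  with an 8KB block and PCTFREE 10, roughly 90% of each block is usable, and each row costs its AVG_ROW_LEN plus a few bytes of row header and row-directory overhead.  The 5-byte overhead figure below is an approximation, not a measured value :

```sql
-- Rough lower bound on the blocks needed for Partition P_100,
-- assuming an 8KB block, PCTFREE 10 and ~5 bytes per-row overhead
select num_rows, avg_row_len, blocks reported_blocks,
       ceil(num_rows / floor((8192 * 0.9) / (avg_row_len + 5))) estimated_blocks
from   user_tab_partitions
where  table_name = 'NEW_PART_TBL'
and    partition_name = 'P_100'
/
```

That estimate comes to roughly 2,400 blocks -- close to the 2,484 blocks after the MOVE, and well short of the 3,022 blocks before it, which is what makes the pre-MOVE HWM look anomalous.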

To be explored further with a larger row size ...........

Possibly, to be explored with a different pattern of INSERT statements  ......

Possibly to be compared in 11.2 as well. ......
.
.
.

24 April, 2016

Partition Storage -- 4 : Resizing Partitions

Building on Posts 2 (Adding Rows) and 3 (Adding Partitions), where we saw Table Partitions using 8MB Extents ..... is there a way to "resize" Partitions to smaller Extents (and, maybe, less space consumed) without using Compression ?

Let's explore.

Beginning with Partitions P_100 and P_200 ....

SQL> select segment_name, partition_name, bytes/1024, extents
  2  from user_segments
  3  where segment_name like 'MY_PART_%'
  4  and partition_name in ('P_100','P_200')
  5  order by 1,2
  6  /

SEGMENT_NAME                   PARTITION_NA BYTES/1024    EXTENTS
------------------------------ ------------ ---------- ----------
MY_PART_TBL                    P_100             24576          3
MY_PART_TBL                    P_200             32768          4
MY_PART_TBL_NDX                P_100             28672         43
MY_PART_TBL_NDX                P_200             33792         48

SQL> 
SQL> alter table my_part_tbl move partition p_100 storage (initial 64K next 64K);

Table altered.

SQL> alter index my_part_tbl_ndx rebuild partition p_100 storage (initial 64K next 64K)
  2  /

Index altered.

SQL> alter table my_part_tbl move partition p_200 storage (initial 64K next 64K);

Table altered.

SQL> alter index my_part_tbl_ndx rebuild partition p_200 storage (initial 64K next 64K)
  2  /

Index altered.

SQL> 
SQL> select segment_name, partition_name, bytes/1024, extents
  2  from user_segments
  3  where segment_name like 'MY_PART_%'
  4  and partition_name in ('P_100','P_200')
  5  order by 1,2
  6  /

SEGMENT_NAME                   PARTITION_NA BYTES/1024    EXTENTS
------------------------------ ------------ ---------- ----------
MY_PART_TBL                    P_100             20480         35
MY_PART_TBL                    P_200             21504         36
MY_PART_TBL_NDX                P_100             18432         33
MY_PART_TBL_NDX                P_200             19456         34

SQL> 
SQL> select partition_name, blocks, num_rows
  2  from user_tab_partitions
  3  where table_name = 'MY_PART_TBL'
  4  and partition_name in ('P_100','P_200')
  5  order by 1
  6  /

PARTITION_NA     BLOCKS   NUM_ROWS
------------ ---------- ----------
P_100              3022    1100001
P_200              3668    1100001

SQL> exec dbms_stats.gather_table_stats('','MY_PART_TBL',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select partition_name, blocks, num_rows
  2  from user_tab_partitions
  3  where table_name = 'MY_PART_TBL'
  4  and partition_name in ('P_100','P_200')
  5  order by 1
  6  /

PARTITION_NA      BLOCKS  NUM_ROWS
------------ ---------- ----------
P_100              2482    1100001
P_200              2639    1100001

SQL> 
SQL> 
SQL> l
  1  select partition_name, blocks, count(*)
  2  from dba_extents
  3  where owner = 'HEMANT'
  4  and segment_name = 'MY_PART_TBL'
  5  and segment_type = 'TABLE PARTITION'
  6  and partition_name in ('P_100','P_200')
  7  group by partition_name, blocks
  8* order by 1,2
SQL> /

PARTITION_NA     BLOCKS   COUNT(*)
------------ ---------- ----------
P_100                 8         16
P_100               128         19
P_200                 8         16
P_200               128         20

SQL> 


Partition P_100 has shrunk from 3 Extents of 8MB (adding up to 24,576KB) to 35 Extents adding up to 20,480KB, and its High Water Mark has shrunk from 3,022 blocks to 2,482 blocks (remember : P_100 was populated with a Serial Insert).  Partition P_200, which had been populated with a Parallel (DoP=4) Insert, has also shrunk from 32,768KB to 21,504KB, and its High Water Mark from 3,668 blocks to 2,639 blocks.  The Extents are a combination of 64KB (the first 16, adding up to 1MB) and 1MB sizes.
Even the Index Partitions seem to have shrunk.

So, a MOVE/REBUILD (the REBUILD of the Index Partitions was required because I did the Partition MOVE without UPDATE INDEXES) could be used to shrink the Partitions, with newer, smaller Extents allocated.
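
If you'd rather not run the separate REBUILDs, the MOVE itself can maintain the Index Partitions (at the cost of a longer-running MOVE).  A sketch, using the same resizing :

```sql
-- MOVE the Partition and maintain its Index Partitions in one step,
-- avoiding the UNUSABLE state and the separate REBUILD
alter table my_part_tbl
  move partition p_100
  storage (initial 64K next 64K)
  update indexes
/
```

Note that the STORAGE clause here applies to the Table Partition; if the Index Partition, too, must be resized, its storage would still have to be addressed separately.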

But what about the case of SPLIT Partition, where Partitions SPLIT from an 8MB Partition resulted in two 8MB Partitions, even for empty Partitions ?

Here's a workaround.  Before SPLITting the P_MAX Partition, I resize it.

SQL> alter table my_part_tbl move partition p_max storage (initial 64K next 64K);

Table altered.

SQL> alter index my_part_tbl_ndx rebuild partition p_max storage (initial 64K next 64K);

Index altered.

SQL> alter table my_part_tbl
  2  split partition p_max
  3  at (1001)
  4  into (partition p_1000, partition p_max)
  5  /

Table altered.

SQL> alter table my_part_tbl
  2  split partition p_1000
  3  at (901)
  4  into (partition p_900, partition p_1000)
  5  /

Table altered.

SQL> alter table my_part_tbl
  2  split partition p_900
  3  at (801)
  4  into (partition p_800, partition p_900)
  5  /

Table altered.

SQL> 
SQL> l
  1  select segment_name, partition_name, bytes/1024, extents
  2  from user_segments
  3  where segment_name like 'MY_PART_%'
  4* order by 1,2
SQL> 
SQL> /

SEGMENT_NAME                   PARTITION_NA BYTES/1024    EXTENTS
------------------------------ ------------ ---------- ----------
MY_PART_TBL                    P_100             20480         35
MY_PART_TBL                    P_200             21504         36
MY_PART_TBL                    P_300              8192          1
MY_PART_TBL                    P_400              8192          1
MY_PART_TBL                    P_600              8192          1
MY_PART_TBL                    P_680              8192          1
MY_PART_TBL                    P_700              8192          1
MY_PART_TBL                    P_800                64          1
MY_PART_TBL                    P_900                64          1
MY_PART_TBL                    P_1000               64          1
MY_PART_TBL                    P_MAX                64          1
MY_PART_TBL_NDX                P_100             18432         33
MY_PART_TBL_NDX                P_200             19456         34
MY_PART_TBL_NDX                P_300                64          1
MY_PART_TBL_NDX                P_400                64          1
MY_PART_TBL_NDX                P_600                64          1
MY_PART_TBL_NDX                P_680                64          1
MY_PART_TBL_NDX                P_700                64          1
MY_PART_TBL_NDX                P_800                64          1
MY_PART_TBL_NDX                P_900                64          1
MY_PART_TBL_NDX                P_1000               64          1
MY_PART_TBL_NDX                P_MAX                64          1

22 rows selected.

SQL> 


(Note : I have manually relocated Partition P_1000 in the listing above.)
Partitions P_600, P_680 and P_700 had been created by SPLIT PARTITION commands in the previous post, beginning with a P_MAX Partition whose segment had already been created.  However, after rebuilding P_MAX with 64KB Extents, subsequently SPLITted Partitions (P_800 to P_1000) also start at 64KB.

Note : I am not advising that all Partitions have to be 64K.  (Observe how AutoAllocate allocated 1MB Extents to P_100 and P_200 after the first 1MB of space usage, i.e. after the first 16 64KB Extents.)
.
.
.


23 April, 2016

Partition Storage -- 3 : Adding new Range Partitions with SPLIT

Building on the Partitioned Table in the previous two blog posts...

We know that the Table is a Range Partitioned Table.  With a MAXVALUE Partition, the only way to add new Partitions is to use the SPLIT PARTITION command.

First, let's review the Table, Partitions and Segments.

SQL> select table_name, num_rows
  2  from user_tables
  3  where table_name = 'MY_PART_TBL' 
  4  /

TABLE_NAME         NUM_ROWS
---------------- ----------
MY_PART_TBL         2200004

SQL> select partition_name, num_rows, blocks
  2  from user_tab_partitions
  3  where table_name = 'MY_PART_TBL'
  4  order by 1
  5  /

PARTITION_NA   NUM_ROWS     BLOCKS
------------ ---------- ----------
P_100           1100001       3022
P_200           1100001       3668
P_300                 1       1006
P_400                 1       1006
P_MAX                 0          0

SQL>                                   
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
  2  from user_segments
  3  where segment_name like 'MY_PART_%'  
  4  order by 1,2,3 
  5  /

SEGMENT_NAME         SEGMENT_TYPE       PARTITION_NA BYTES/1024    EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL          TABLE PARTITION    P_100             24576          3
MY_PART_TBL          TABLE PARTITION    P_200             32768          4
MY_PART_TBL          TABLE PARTITION    P_300              8192          1
MY_PART_TBL          TABLE PARTITION    P_400              8192          1
MY_PART_TBL_NDX      INDEX PARTITION    P_100             28672         43
MY_PART_TBL_NDX      INDEX PARTITION    P_200             33792         48
MY_PART_TBL_NDX      INDEX PARTITION    P_300                64          1
MY_PART_TBL_NDX      INDEX PARTITION    P_400                64          1

8 rows selected.

SQL> 


So, the table has 5 Partitions, P_100 to P_MAX, but only 4 have had segments created (after one or more rows were populated into them).  P_MAX has no segment created for either the Table Partition or the Index Partition.
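
A quick way to confirm which Partitions are segmentless (without joining to USER_SEGMENTS) is the SEGMENT_CREATED column, available in USER_TAB_PARTITIONS from 11.2 onwards :

```sql
-- 'YES' / 'NO' per Partition : has a segment been created ?
select partition_name, segment_created
from   user_tab_partitions
where  table_name = 'MY_PART_TBL'
order  by partition_position
/
```

At this point, P_MAX should show SEGMENT_CREATED = 'NO' while the other four show 'YES'.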

What happens if we SPLIT P_MAX (an empty, segmentless Partition) to create a new Partition ?

SQL> alter table my_part_tbl
  2  split partition p_max
  3  at (501)
  4  into (partition p_500, partition p_max)
  5  /

Table altered.

SQL> 
SQL> exec dbms_stats.gather_table_stats('','MY_PART_TBL',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select partition_name, high_value, num_rows, blocks
  2  from user_tab_partitions
  3  where table_name = 'MY_PART_TBL'
  4  order by partition_position
  5  /

PARTITION_NA HIGH_VALUE         NUM_ROWS     BLOCKS
------------ ---------------- ---------- ----------
P_100        101                 1100001       3022
P_200        201                 1100001       3668
P_300        301                       1       1006
P_400        401                       1       1006
P_500        501                       0          0
P_MAX        MAXVALUE                  0          0

6 rows selected.

SQL> 
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
  2  from user_segments
  3  where segment_name like 'MY_PART_%'
  4  order by 1,2,3
  5  /

SEGMENT_NAME         SEGMENT_TYPE       PARTITION_NA BYTES/1024    EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL          TABLE PARTITION    P_100             24576          3
MY_PART_TBL          TABLE PARTITION    P_200             32768          4
MY_PART_TBL          TABLE PARTITION    P_300              8192          1
MY_PART_TBL          TABLE PARTITION    P_400              8192          1
MY_PART_TBL_NDX      INDEX PARTITION    P_100             28672         43
MY_PART_TBL_NDX      INDEX PARTITION    P_200             33792         48
MY_PART_TBL_NDX      INDEX PARTITION    P_300                64          1
MY_PART_TBL_NDX      INDEX PARTITION    P_400                64          1

8 rows selected.

SQL> 


So, the process of creating Partition P_500 did not create a segment for it, because P_MAX, which it was SPLIT from, was segmentless.  What happens if I SPLIT a Partition with 1 or more rows ?

SQL> insert into my_part_tbl
  2  select 550, 'Five Hundred Fifty'
  3  from dual 
  4  /

1 row created.

SQL> commit;
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
  2  from user_segments
  3  where segment_name like 'MY_PART_%'
  4  order by 1,2,3
  5  /

SEGMENT_NAME         SEGMENT_TYPE       PARTITION_NA BYTES/1024    EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL          TABLE PARTITION    P_100             24576          3
MY_PART_TBL          TABLE PARTITION    P_200             32768          4
MY_PART_TBL          TABLE PARTITION    P_300              8192          1
MY_PART_TBL          TABLE PARTITION    P_400              8192          1
MY_PART_TBL          TABLE PARTITION    P_MAX              8192          1
MY_PART_TBL_NDX      INDEX PARTITION    P_100             28672         43
MY_PART_TBL_NDX      INDEX PARTITION    P_200             33792         48
MY_PART_TBL_NDX      INDEX PARTITION    P_300                64          1
MY_PART_TBL_NDX      INDEX PARTITION    P_400                64          1
MY_PART_TBL_NDX      INDEX PARTITION    P_MAX                64          1

10 rows selected.

SQL> 
SQL> alter table my_part_tbl
  2  split partition p_max
  3  at (601)
  4  into (partition p_600, partition p_max)
  5  /

Table altered.

SQL> select segment_name, segment_type, partition_name, bytes/1024, extents   
  2  from user_segments
  3  where segment_name like 'MY_PART_%'
  4  order by 1,2,3 
  5  /

SEGMENT_NAME         SEGMENT_TYPE       PARTITION_NA BYTES/1024    EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL          TABLE PARTITION    P_100             24576          3
MY_PART_TBL          TABLE PARTITION    P_200             32768          4
MY_PART_TBL          TABLE PARTITION    P_300              8192          1
MY_PART_TBL          TABLE PARTITION    P_400              8192          1
MY_PART_TBL          TABLE PARTITION    P_600              8192          1
MY_PART_TBL          TABLE PARTITION    P_MAX              8192          1
MY_PART_TBL_NDX      INDEX PARTITION    P_100             28672         43
MY_PART_TBL_NDX      INDEX PARTITION    P_200             33792         48
MY_PART_TBL_NDX      INDEX PARTITION    P_300                64          1
MY_PART_TBL_NDX      INDEX PARTITION    P_400                64          1
MY_PART_TBL_NDX      INDEX PARTITION    P_600                64          1
MY_PART_TBL_NDX      INDEX PARTITION    P_MAX                64          1

12 rows selected.

SQL> 


So, the row for ID_COLUMN=550 created the segment for Partition P_MAX.  Subsequently, SPLITting this Partition into P_600 and P_MAX resulted in two Partitions of 8MB each.
The row for ID_COLUMN=550 is now in the P_600 Partition and P_MAX is the empty Partition.  Yet even P_MAX now takes an 8MB Extent, unlike earlier.

Let's try a SPLIT where, say, P_700 is created empty but P_MAX inherits the row.

SQL> insert into my_part_tbl
  2  select 900, 'Nine Hundred'
  3  from dual
  4  /

1 row created.

SQL> commit;

Commit complete.

SQL> alter table my_part_tbl
  2  split partition p_max
  3  at (701)
  4  into (partition p_700, partition p_max)
  5  /

Table altered.

SQL> 
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
  2  from user_segments
  3  where segment_name like 'MY_PART_%'
  4  order by 1,2,3
  5  /

SEGMENT_NAME         SEGMENT_TYPE       PARTITION_NA BYTES/1024    EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL          TABLE PARTITION    P_100             24576          3
MY_PART_TBL          TABLE PARTITION    P_200             32768          4
MY_PART_TBL          TABLE PARTITION    P_300              8192          1
MY_PART_TBL          TABLE PARTITION    P_400              8192          1
MY_PART_TBL          TABLE PARTITION    P_600              8192          1
MY_PART_TBL          TABLE PARTITION    P_700              8192          1
MY_PART_TBL          TABLE PARTITION    P_MAX              8192          1
MY_PART_TBL_NDX      INDEX PARTITION    P_100             28672         43
MY_PART_TBL_NDX      INDEX PARTITION    P_200             33792         48
MY_PART_TBL_NDX      INDEX PARTITION    P_300                64          1
MY_PART_TBL_NDX      INDEX PARTITION    P_400                64          1
MY_PART_TBL_NDX      INDEX PARTITION    P_600                64          1
MY_PART_TBL_NDX      INDEX PARTITION    P_700                64          1
MY_PART_TBL_NDX      INDEX PARTITION    P_MAX                64          1

14 rows selected.

SQL> select count(*) from my_part_tbl partition (P_700);

  COUNT(*)
----------
         0

SQL> 


Again, both Partitions (P_700 and P_MAX) have a segment of 8MB.
This means that, once a Segment for a Partition is created, any SPLIT of that Partition results in two Segments inheriting the same 8MB Extent Size, irrespective of the fact that one of the two may be empty.

SQL> alter table my_part_tbl
  2  split partition p_700
  3  at (681)
  4  into (partition p_680, partition p_700)
  5  /

Table altered.

SQL> 
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
  2  from user_segments
  3  where segment_name like 'MY_PART_%'
  4  order by 1,2,3
  5  /

SEGMENT_NAME         SEGMENT_TYPE       PARTITION_NA BYTES/1024    EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL          TABLE PARTITION    P_100             24576          3
MY_PART_TBL          TABLE PARTITION    P_200             32768          4
MY_PART_TBL          TABLE PARTITION    P_300              8192          1
MY_PART_TBL          TABLE PARTITION    P_400              8192          1
MY_PART_TBL          TABLE PARTITION    P_600              8192          1
MY_PART_TBL          TABLE PARTITION    P_680              8192          1
MY_PART_TBL          TABLE PARTITION    P_700              8192          1
MY_PART_TBL          TABLE PARTITION    P_MAX              8192          1
MY_PART_TBL_NDX      INDEX PARTITION    P_100             28672         43
MY_PART_TBL_NDX      INDEX PARTITION    P_200             33792         48
MY_PART_TBL_NDX      INDEX PARTITION    P_300                64          1
MY_PART_TBL_NDX      INDEX PARTITION    P_400                64          1
MY_PART_TBL_NDX      INDEX PARTITION    P_600                64          1
MY_PART_TBL_NDX      INDEX PARTITION    P_680                64          1
MY_PART_TBL_NDX      INDEX PARTITION    P_700                64          1
MY_PART_TBL_NDX      INDEX PARTITION    P_MAX                64          1

16 rows selected.

SQL> 


That is confirmation that SPLITting a Partition that has a segment (even an empty one) results in two segmented Partitions, even if both are empty.

Going back to Partition P_500 (which is present but segmentless), what happens if we SPLIT it ?

SQL> alter table my_part_tbl
  2  split partition p_500
  3  at (451)
  4  into (partition p_450, partition p_500)
  5  /

Table altered.

SQL>    
SQL> select partition_name, high_value
  2  from user_tab_partitions
  3  where table_name = 'MY_PART_TBL'
  4  order by partition_position
  5  /

PARTITION_NA HIGH_VALUE
------------ ----------------
P_100        101
P_200        201
P_300        301
P_400        401
P_450        451
P_500        501
P_600        601
P_680        681
P_700        701
P_MAX        MAXVALUE

10 rows selected.

SQL> 
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
  2  from user_segments
  3  where segment_name like 'MY_PART_%'
  4  order by 1,2,3
  5  /

SEGMENT_NAME         SEGMENT_TYPE       PARTITION_NA BYTES/1024    EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL          TABLE PARTITION    P_100             24576          3
MY_PART_TBL          TABLE PARTITION    P_200             32768          4
MY_PART_TBL          TABLE PARTITION    P_300              8192          1
MY_PART_TBL          TABLE PARTITION    P_400              8192          1
MY_PART_TBL          TABLE PARTITION    P_600              8192          1
MY_PART_TBL          TABLE PARTITION    P_680              8192          1
MY_PART_TBL          TABLE PARTITION    P_700              8192          1
MY_PART_TBL          TABLE PARTITION    P_MAX              8192          1
MY_PART_TBL_NDX      INDEX PARTITION    P_100             28672         43
MY_PART_TBL_NDX      INDEX PARTITION    P_200             33792         48
MY_PART_TBL_NDX      INDEX PARTITION    P_300                64          1
MY_PART_TBL_NDX      INDEX PARTITION    P_400                64          1
MY_PART_TBL_NDX      INDEX PARTITION    P_600                64          1
MY_PART_TBL_NDX      INDEX PARTITION    P_680                64          1
MY_PART_TBL_NDX      INDEX PARTITION    P_700                64          1
MY_PART_TBL_NDX      INDEX PARTITION    P_MAX                64          1

16 rows selected.

SQL> 


SPLITting segmentless Partition P_500 into P_450 and P_500 did *not* result in new Segments.

This has implications for your SPLIT Partition strategy.  If you need to do a recursive SPLIT to create, say, 90 one-day Partitions and you start with a Partition that has a segment (even if empty), you get 90 new segments as well.  Thus, the table would suddenly "grow" by 720MB on the day you create these 90 Partitions, without a single row having been inserted.  You may get some questions from IT Operations / Support about the sudden "growth" in one day.
On the other hand, starting with a segmentless Partition, you get 90 new segmentless Partitions.  Their segments will be created only when they are populated.
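
So, for the 90-Partition scenario, the safer sequence is to make sure the Partition being SPLIT is segmentless (or at least resized small, as shown above) before running the recursive SPLITs.  A sketch of the recursive SPLIT itself -- the high values and the P_nnn naming convention here are hypothetical, carried over from this test table :

```sql
-- Hypothetical sketch : carve 90 new Partitions out of P_MAX by
-- recursive SPLIT.  If P_MAX is segmentless, each new Partition is
-- also segmentless -- no space is allocated until rows arrive.
declare
  hi_val number := 1001;   -- hypothetical first high value
begin
  for i in 1 .. 90
  loop
    execute immediate
      'alter table my_part_tbl split partition p_max at (' || hi_val ||
      ') into (partition p_' || to_char(hi_val - 1) || ', partition p_max)';
    hi_val := hi_val + 100;
  end loop;
end;
/
```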
.
.

.

20 April, 2016

Partition Storage -- 2 : New Rows Inserted in 12.1 Partitioned Table

Following up from the previous post,  given the current sizes of the Partitions (The underlying Tablespace is AUTOALLOCATE)  :

SQL> col segment_name format a30
SQL> col partition_name format a12
SQL> l
  1  select segment_name, partition_name, segment_type, bytes/1024
  2  from user_segments
  3  where segment_name like 'MY_PART_%'
  4* order by 1,2
SQL> /

SEGMENT_NAME         PARTITION_NA SEGMENT_TYPE       BYTES/1024
------------------------------ ------------ ------------------ ----------
MY_PART_TBL         P_100     TABLE PARTITION      8192
MY_PART_TBL         P_200     TABLE PARTITION      8192
MY_PART_TBL         P_300     TABLE PARTITION      8192
MY_PART_TBL         P_400     TABLE PARTITION      8192
MY_PART_TBL_NDX         P_100     INDEX PARTITION        64
MY_PART_TBL_NDX         P_200     INDEX PARTITION        64
MY_PART_TBL_NDX         P_300     INDEX PARTITION        64
MY_PART_TBL_NDX         P_400     INDEX PARTITION        64

8 rows selected.

SQL> 


I shall now insert rows so that a Partition has to grow beyond the first extent.

SQL> insert into my_part_tbl                      
  2  select 25, 'New Row'
  3  from dual 
  4  connect by level < 100001;

100000 rows created.

SQL> insert into my_part_tbl
  2  select 45, 'New Row'
  3  from dual
  4  connect by level < 500001;

500000 rows created.

SQL> /

500000 rows created.

SQL> commit;

Commit complete.

SQL> 
SQL> select segment_name, partition_name, segment_type, bytes/1024
  2  from user_segments
  3  where segment_name like 'MY_PART_%'
  4  order by 1,2;

SEGMENT_NAME         PARTITION_NA SEGMENT_TYPE       BYTES/1024
------------------------------ ------------ ------------------ ----------
MY_PART_TBL         P_100     TABLE PARTITION     24576
MY_PART_TBL         P_200     TABLE PARTITION      8192
MY_PART_TBL         P_300     TABLE PARTITION      8192
MY_PART_TBL         P_400     TABLE PARTITION      8192
MY_PART_TBL_NDX         P_100     INDEX PARTITION     28672
MY_PART_TBL_NDX         P_200     INDEX PARTITION        64
MY_PART_TBL_NDX         P_300     INDEX PARTITION        64
MY_PART_TBL_NDX         P_400     INDEX PARTITION        64

8 rows selected.

SQL> 


So, now, the first Partition P_100 has grown to 24MB (three 8MB Extents). Let's check the Extent Sizes.

SQL> select extent_id, blocks, blocks*8192/1024 Size_KB
  2  from user_extents
  3  where segment_name = 'MY_PART_TBL'
  4  and segment_type = 'TABLE PARTITION'
  5  and partition_name = 'P_100'
  6  order by 1;

 EXTENT_ID     BLOCKS  SIZE_KB
---------- ---------- ----------
  0  1024     8192
  1  1024     8192
  2  1024     8192

SQL> 
SQL> l
  1  select extent_id, blocks, blocks*8192/1024 Size_KB
  2  from user_extents
  3  where segment_name = 'MY_PART_TBL_NDX'
  4  and segment_type = 'INDEX PARTITION'
  5  and partition_name = 'P_100'
  6* order by 1
SQL> /

 EXTENT_ID     BLOCKS  SIZE_KB
---------- ---------- ----------
  0     8       64
  1     8       64
  2     8       64
  3     8       64
  4     8       64
  5     8       64
  6     8       64
  7     8       64
  8     8       64
  9     8       64
 10     8       64
 11     8       64
 12     8       64
 13     8       64
 14     8       64
 15     8       64
 16   128     1024
 17   128     1024
 18   128     1024
 19   128     1024
 20   128     1024
 21   128     1024
 22   128     1024
 23   128     1024
 24   128     1024
 25   128     1024
 26   128     1024
 27   128     1024
 28   128     1024
 29   128     1024
 30   128     1024
 31   128     1024
 32   128     1024
 33   128     1024
 34   128     1024
 35   128     1024
 36   128     1024
 37   128     1024
 38   128     1024
 39   128     1024
 40   128     1024
 41   128     1024
 42   128     1024

43 rows selected.

SQL> 


So, the Table Partition grew in 8MB Extents, but the Index Partition followed the normal AutoAllocate pattern: 64KB Extents for the first 1MB (16 Extents), followed by 1MB Extents.
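That 64KB-then-1MB progression can be modelled as below.  This mimics only the pattern observed in this transcript, not Oracle's full AUTOALLOCATE algorithm (which also steps up to 8MB and 64MB extents for larger segments):

```python
def autoallocate_extents(n):
    """Model of the extent sizes (KB) seen above: the first 16 extents
    are 64KB (totalling 1MB), subsequent extents are 1MB each.
    Observed pattern only, not Oracle's actual allocation code."""
    return [64 if i < 16 else 1024 for i in range(n)]

sizes = autoallocate_extents(43)   # the index partition above has 43 extents
print(sum(sizes))                  # 28672 KB, matching BYTES/1024 for P_100
```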

What might have been the expected size of the Table Partition ?   ** see the UPDATE at the end of this post.

SQL> exec dbms_stats.gather_table_stats('','MY_PART_TBL',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select partition_name, trunc(1.2*avg_row_len*num_rows/1024) Expected_KB
  2  from user_tab_partitions
  3  where table_name = 'MY_PART_TBL'
  4  order by 1 
  5  /

PARTITION_NA EXPECTED_KB
------------ -----------
P_100     14179
P_200         0
P_300         0
P_400         0
P_MAX         0

SQL> 


I use a 20% factor to account for PCTFREE and block/row overheads.  ** See the UPDATE at the end of this post.
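As a sanity check, the same estimate can be reproduced outside the database (using the AVG_ROW_LEN=11 and NUM_ROWS=1100001 that DBMS_STATS reports for P_100 later in this post):

```python
import math

def expected_size_kb(avg_row_len, num_rows, overhead=1.2):
    """Estimated segment size in KB: raw row data plus a 20% allowance
    for PCTFREE and block/row overheads (the factor used in the query)."""
    return math.trunc(overhead * avg_row_len * num_rows / 1024)

# Stats reported for P_100 in this post
print(expected_size_kb(11, 1100001))   # 14179 KB
```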

Next, what if I use Parallel Insert ?

SQL> alter table my_part_tbl parallel 4;

Table altered.

SQL> alter session enable parallel dml ;

Session altered.

SQL> insert /*+ PARALLEL (MANUAL) */
  2  into my_part_tbl 
  3  select 125, 'New Row'
  4  from dual
  5  connect by level < 100001;

100000 rows created.

SQL> commit;

Commit complete.

SQL> insert /*+ PARALLEL (MANUAL) */
  2  into my_part_tbl
  3  select 125, 'New Row'
  4  from dual         
  5  connect by level < 500001;

500000 rows created.

SQL> commit;

Commit complete.

SQL> insert /*+ PARALLEL (MANUAL) */
  2  into my_part_tbl
  3  select 125, 'New Row'
  4  from dual
  5  connect by level < 500001;

500000 rows created.

SQL> commit;

Commit complete.

SQL> 
SQL> l
  1  select segment_name, partition_name, segment_type, extents, bytes/1024
  2  from user_segments
  3  where segment_name like 'MY_PART_%'
  4* order by 1,2
SQL> /

SEGMENT_NAME    PARTITION_NA SEGMENT_TYPE      EXTENTS BYTES/1024
------------------------- ------------ ------------------ ---------- ----------
MY_PART_TBL    P_100        TABLE PARTITION     3   24576
MY_PART_TBL    P_200        TABLE PARTITION     4   32768
MY_PART_TBL    P_300        TABLE PARTITION     1    8192
MY_PART_TBL    P_400        TABLE PARTITION     1    8192
MY_PART_TBL_NDX    P_100        INDEX PARTITION    43   28672
MY_PART_TBL_NDX    P_200        INDEX PARTITION    48   33792
MY_PART_TBL_NDX    P_300        INDEX PARTITION     1      64
MY_PART_TBL_NDX    P_400        INDEX PARTITION     1      64

8 rows selected.

SQL> 


Note the size of P_200.  Parallel Insert allocated more extents because each PX server process created at least one extent for the rows it inserted.  (For a smaller row-count, not every PX server would necessarily insert rows; some might insert none.)  So, Parallel Insert can result in more allocated space.
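A rough model of why Degree=4 produced four 8MB extents in P_200.  The `rows_per_extent` figure here is a made-up capacity assumption for illustration, not an Oracle constant; the point is simply that each PX server allocates its own initial extent:

```python
import math

DEFAULT_INITIAL_EXTENT_MB = 8

def parallel_insert_extents(rows, degree, rows_per_extent=550000):
    """Model the minimum allocation of a parallel insert: rows are spread
    across PX servers and each server that receives rows allocates its
    own extent(s). rows_per_extent is an illustrative assumption."""
    per_server = math.ceil(rows / degree)
    extents_per_server = max(1, math.ceil(per_server / rows_per_extent))
    return degree * extents_per_server

# ~1.1 million rows at Degree 4 -> 4 extents, i.e. 32MB for P_200
print(parallel_insert_extents(1100000, 4) * DEFAULT_INITIAL_EXTENT_MB)
```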

** UPDATE :  What about the "expected size" computation based on AVG_ROW_LEN x NUM_ROWS ?

SQL> select partition_name, avg_row_len, num_rows, blocks, trunc(1.2*avg_row_len*num_rows/8192) Expected_Blocks
  2  from user_tab_partitions
  3  where table_name = 'MY_PART_TBL'
  4  order by 1;

PARTITION_NA AVG_ROW_LEN   NUM_ROWS BLOCKS EXPECTED_BLOCKS
------------ ----------- ---------- ---------- ---------------
P_100        11    1100001   3022    1772
P_200        12    1100001   3668    1933
P_300        26   1   1006       0
P_400        28   1   1006       0
P_MAX         0   0      0       0

SQL> 


This shows how any "formula" based on AVG_ROW_LEN and NUM_ROWS can be inaccurate in forecasting the expected size (blocks used).  Also note that P_100 was populated with a Serial Insert while P_200 was populated with a Parallel (Degree=4) Insert.
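The gap between forecast and reality is easy to quantify from the figures above, comparing P_100's actual BLOCKS against the formula's estimate:

```python
import math

def expected_blocks(avg_row_len, num_rows, block_size=8192, overhead=1.2):
    """The same 20%-overhead estimate as before, expressed in blocks."""
    return math.trunc(overhead * avg_row_len * num_rows / block_size)

actual_blocks = 3022                  # P_100, from USER_TAB_PARTITIONS above
est = expected_blocks(11, 1100001)    # 1772
print(est, round(actual_blocks / est, 2))   # the estimate is off by ~70%
```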
.
.
.



19 April, 2016

Partition Storage -- 1 : Default Partition Sizes in 12c

11gR2 (11.2.0.2) introduced a change whereby the default Initial Extent of a Table Partition became 8MB.  However, this did not apply to Index Partitions, which could still start with 64KB extents in an AutoAllocate Tablespace.

12cR1 now introduces a parameter to enable a large Initial Extent for Index Partitions as well.

SQL> connect / as sysdba                          
Connected.
SQL> select banner from v$version where banner like 'Oracle Database%';

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

SQL> show parameter deferred

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
deferred_segment_creation            boolean     TRUE
SQL> 
SQL> connect hemant/hemant
Connected.
SQL> create table my_part_tbl (id_column number(6), data_column varchar2(100))
  2  partition by range(id_column)
  3  (partition p_100 values less than (101),
  4   partition p_200 values less than (201),
  5   partition p_300 values less than (301),
  6   partition p_400 values less than (401),
  7   partition p_max values less than (maxvalue))
  8  /

Table created.

SQL> create index my_part_tbl_ndx on my_part_tbl(id_column) local;

Index created.

SQL> 
SQL> select segment_name, partition_name, segment_type, bytes/1024
  2  from user_segments
  3  where segment_name like 'MY_PART_%'
  4  order by 1,2;

no rows selected

SQL> 
SQL> insert into my_part_tbl values (51,'Fifty One');

1 row created.

SQL> insert into my_part_tbl values (151,'One Hundred Fifty One');

1 row created.

SQL> insert into my_part_tbl values (251, 'Two Hundred Fifty One');

1 row created.

SQL> select segment_name, partition_name, segment_type, bytes/1024
  2  from user_segments
  3  where segment_name like 'MY_PART_%'
  4  order by 1,2;

SEGMENT_NAME                   PARTITION_NA SEGMENT_TYPE       BYTES/1024
------------------------------ ------------ ------------------ ----------
MY_PART_TBL                    P_100        TABLE PARTITION          8192
MY_PART_TBL                    P_200        TABLE PARTITION          8192
MY_PART_TBL                    P_300        TABLE PARTITION          8192
MY_PART_TBL_NDX                P_100        INDEX PARTITION            64
MY_PART_TBL_NDX                P_200        INDEX PARTITION            64
MY_PART_TBL_NDX                P_300        INDEX PARTITION            64

6 rows selected.

SQL> 


I can enable a large Index Partition Initial Extent with a parameter (which can be set with ALTER SESSION) :


SQL> alter session set "_index_partition_large_extents"=TRUE;

Session altered.

SQL> insert into my_part_tbl values (351,'Three Hundred Fifty One');

1 row created.

SQL> select segment_name, partition_name, segment_type, bytes/1024
  2  from user_segments
  3  where segment_name like 'MY_PART_%'
  4  order by 1,2;

SEGMENT_NAME                   PARTITION_NA SEGMENT_TYPE       BYTES/1024
------------------------------ ------------ ------------------ ----------
MY_PART_TBL                    P_100        TABLE PARTITION          8192
MY_PART_TBL                    P_200        TABLE PARTITION          8192
MY_PART_TBL                    P_300        TABLE PARTITION          8192
MY_PART_TBL                    P_400        TABLE PARTITION          8192
MY_PART_TBL_NDX                P_100        INDEX PARTITION            64
MY_PART_TBL_NDX                P_200        INDEX PARTITION            64
MY_PART_TBL_NDX                P_300        INDEX PARTITION            64
MY_PART_TBL_NDX                P_400        INDEX PARTITION          8192

8 rows selected.

SQL> 


However, I can rebuild the Index Partition with a smaller Extent as well :

SQL> alter index my_part_tbl_ndx rebuild partition p_400 storage (initial 64K);

Index altered.

SQL> select segment_name, partition_name, segment_type, bytes/1024
  2  from user_segments
  3  where segment_name like 'MY_PART_%'
  4  order by 1,2;

SEGMENT_NAME                   PARTITION_NA SEGMENT_TYPE       BYTES/1024
------------------------------ ------------ ------------------ ----------
MY_PART_TBL                    P_100        TABLE PARTITION          8192
MY_PART_TBL                    P_200        TABLE PARTITION          8192
MY_PART_TBL                    P_300        TABLE PARTITION          8192
MY_PART_TBL                    P_400        TABLE PARTITION          8192
MY_PART_TBL_NDX                P_100        INDEX PARTITION            64
MY_PART_TBL_NDX                P_200        INDEX PARTITION            64
MY_PART_TBL_NDX                P_300        INDEX PARTITION            64
MY_PART_TBL_NDX                P_400        INDEX PARTITION            64

8 rows selected.

SQL> 


In the next post, we'll see more Extents for the Partitions.
.
.
.

17 April, 2016

Online Relocation of Database File : ASM to FileSystem and FileSystem to ASM

There have been a few published examples of the online datafile relocation feature in 12c.  The examples I've seen are all on filesystem.

Here I show online relocation to/from ASM and FileSystem.

SQL> connect system/oracle
Connected.
SQL> create tablespace test_relocate;

Tablespace created.

SQL> create table test_relocate_tbl                         
  2  tablespace test_relocate
  3  as select * from dba_objects;

Table created.

SQL> select tablespace_name, bytes/1024
  2  from user_segments
  3  where segment_name = 'TEST_RELOCATE_TBL';

TABLESPACE_NAME         BYTES/1024
------------------------------ ----------
TEST_RELOCATE       13312

SQL> select file_name, bytes/1024
  2  from dba_data_files
  3  where tablespace_name = 'TEST_RELOCATE';

FILE_NAME
--------------------------------------------------------------------------------
BYTES/1024
----------
+DATA/NONCDB/DATAFILE/test_relocate.260.909444793
    102400


SQL> 
SQL> alter database move datafile                                                        
  2  '+DATA/NONCDB/DATAFILE/test_relocate.260.909444793'
  3  to '/oradata/NONCDB/test_relocate_01.dbf';

Database altered.

SQL> !ls -l /oradata/NONCDB
total 102408
-rw-r----- 1 oracle asmdba 104865792 Apr 17 23:39 test_relocate_01.dbf

SQL> 
SQL> alter database move datafile  
  2  '/oradata/NONCDB/test_relocate_01.dbf'
  3  to '+DATA';

Database altered.

SQL> select file_name, bytes/1024
  2  from dba_data_files
  3  where tablespace_name = 'TEST_RELOCATE';

FILE_NAME
--------------------------------------------------------------------------------
BYTES/1024
----------
+DATA/NONCDB/DATAFILE/test_relocate.260.909445261
    102400


SQL>
SQL> !ls -l /oradata/NONCDB
total 0

SQL> 


Note that I was courageous enough to not use the KEEP keyword (which is optional !).
.
.
.

10 April, 2016

FBDA -- 6 : Some Bug Notes

Some MoS documents on FBDA Bugs

1.  Bug 16454223  :  Wrong Results  (more rows than expected)

2.  Bug 16898135  :  FBDA does not split partitions  (resulting in rows not being purged)

3.  Bug 18294320  :   ORA-01555 (ORA-2475) on SMON_SCN_TIME

4.  Bug 22456983  :   Limit on SMON_SCN_TIME affecting FBDA

5.  Document 2039070.1 :  Known Issues with Flashback Data Archive
.
.
.




Recent Blog Series on Compression

FBDA -- 5 : Testing AutoPurging

Tracking data changes after one row added (ID_COLUMN=2000) on 06-Apr

SQL> select systimestamp from dual;

SYSTIMESTAMP
---------------------------------------------------------------------------
06-APR-16 10.53.20.328132 PM +08:00

SQL> select scn_to_timestamp(startscn), scn_to_timestamp(endscn), count(*)
  2  from sys_fba_hist_93250
  3  group by scn_to_timestamp(startscn), scn_to_timestamp(endscn)
  4  order by 1,2;

SCN_TO_TIMESTAMP(STARTSCN)
---------------------------------------------------------------------------
SCN_TO_TIMESTAMP(ENDSCN)
---------------------------------------------------------------------------
  COUNT(*)
----------
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM
       450

02-APR-16 11.32.55.000000000 PM
03-APR-16 11.45.24.000000000 PM
       550

02-APR-16 11.46.11.000000000 PM
03-APR-16 11.41.33.000000000 PM
  5

02-APR-16 11.46.11.000000000 PM
03-APR-16 11.45.24.000000000 PM
       445

03-APR-16 11.41.33.000000000 PM
03-APR-16 11.45.24.000000000 PM
  5

03-APR-16 11.45.24.000000000 PM
04-APR-16 11.05.33.000000000 PM
      1000

06-APR-16 10.40.43.000000000 PM
06-APR-16 10.42.54.000000000 PM
  1


7 rows selected.

SQL> 
SQL> select count(*) from sys_fba_tcrv_93250;    

  COUNT(*)
----------
      1002

SQL> 


More changes on 07-Apr


SQL> insert into test_fbda
  2  select 3000, to_char(3000), trunc(sysdate)
  3  from dual;

1 row created.

SQL> commit;

Commit complete.

SQL> update test_fbda
  2  set date_inserted=date_inserted
  3  where id_column=3000;

1 row updated.

SQL> delete test_fbda
  2  where id_column < 1001 ;

1000 rows deleted.

SQL> commit;

Commit complete.

SQL> 
SQL> l
  1  select scn_to_timestamp(startscn) starttime, scn_to_timestamp(endscn) endtime, count(*)
  2  from sys_fba_hist_93250
  3  group by scn_to_timestamp(startscn), scn_to_timestamp(endscn)
  4* order by 1,2,3
SQL> /

STARTTIME    ENDTIME       COUNT(*)
-------------------------------- -------------------------------- ----------
02-APR-16 11.32.55.000000000 PM  02-APR-16 11.46.11.000000000 PM  450
02-APR-16 11.32.55.000000000 PM  03-APR-16 11.45.24.000000000 PM  550
02-APR-16 11.46.11.000000000 PM  03-APR-16 11.41.33.000000000 PM    5
02-APR-16 11.46.11.000000000 PM  03-APR-16 11.45.24.000000000 PM  445
03-APR-16 11.41.33.000000000 PM  03-APR-16 11.45.24.000000000 PM    5
03-APR-16 11.45.24.000000000 PM  04-APR-16 11.05.33.000000000 PM 1000
04-APR-16 11.09.43.000000000 PM  07-APR-16 10.28.03.000000000 PM 1000
06-APR-16 10.40.43.000000000 PM  06-APR-16 10.42.54.000000000 PM    1
07-APR-16 10.27.35.000000000 PM  07-APR-16 10.28.03.000000000 PM    1
07-APR-16 10.28.03.000000000 PM  07-APR-16 10.28.03.000000000 PM 1000

10 rows selected.

SQL> 
SQL> l
  1  select id_column, trunc(date_inserted), count(*)
  2  from test_fbda
  3  group by id_column, trunc(date_inserted)
  4* order by 1
SQL> /

 ID_COLUMN TRUNC(DAT   COUNT(*)
---------- --------- ----------
      2000 06-APR-16       1
      3000 07-APR-16       1

SQL> 


I see two new 1000-row sets (04-Apr and 07-Apr) where I should have expected only one.

Now that rows for ID_COLUMN less than 1001 have been deleted on 07-Apr, we have to see if and when they get purged from the History table.
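With RETENTION 3 DAYS, the purge-eligibility cutoff is simple date arithmetic.  A sketch, assuming purge eligibility is driven purely by the retention window:

```python
from datetime import datetime, timedelta

def purge_cutoff(now, retention_days):
    """History rows whose ENDSCN timestamp falls before this cutoff
    are past retention and eligible for purge."""
    return now - timedelta(days=retention_days)

now = datetime(2016, 4, 10, 8, 51)          # the 10-Apr morning check below
cutoff = purge_cutoff(now, 3)               # 07-Apr 08:51
endscn_time = datetime(2016, 4, 4, 23, 5)   # one of the 04-Apr history rows
print(endscn_time < cutoff)                 # True: past retention
```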


On 09-Apr:

SQL> insert into test_fbda
  2  select 4000, to_char(4000),trunc(sysdate)
  3  from dual;

1 row created.

SQL> commit;

Commit complete.

SQL> update test_fbda
  2  set date_inserted=date_inserted
  3  where id_column=4000;

1 row updated.

SQL> commit;

Commit complete.

SQL> l
  1  select scn_to_timestamp(startscn) starttime, scn_to_timestamp(endscn) endtime, count(*)
  2  from sys_fba_hist_93250
  3  group by scn_to_timestamp(startscn), scn_to_timestamp(endscn)
  4* order by 1,2,3
SQL> /

STARTTIME    ENDTIME       COUNT(*)
-------------------------------- -------------------------------- ----------
02-APR-16 11.32.55.000000000 PM  02-APR-16 11.46.11.000000000 PM  450
02-APR-16 11.32.55.000000000 PM  03-APR-16 11.45.24.000000000 PM  550
02-APR-16 11.46.11.000000000 PM  03-APR-16 11.41.33.000000000 PM    5
02-APR-16 11.46.11.000000000 PM  03-APR-16 11.45.24.000000000 PM  445
03-APR-16 11.41.33.000000000 PM  03-APR-16 11.45.24.000000000 PM    5
03-APR-16 11.45.24.000000000 PM  04-APR-16 11.05.33.000000000 PM 1000
04-APR-16 11.09.43.000000000 PM  07-APR-16 10.28.03.000000000 PM 1000
06-APR-16 10.40.43.000000000 PM  06-APR-16 10.42.54.000000000 PM    1
07-APR-16 10.27.35.000000000 PM  07-APR-16 10.28.03.000000000 PM    1
07-APR-16 10.28.03.000000000 PM  07-APR-16 10.28.03.000000000 PM 1000
09-APR-16 11.10.25.000000000 PM  09-APR-16 11.10.48.000000000 PM    1

11 rows selected.

SQL> 
SQL> select * from user_flashback_archive
  2  /

OWNER_NAME
------------------------------
FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
FLASHBACK_ARCHIVE# RETENTION_IN_DAYS
------------------ -----------------
CREATE_TIME
---------------------------------------------------------------------------
LAST_PURGE_TIME
---------------------------------------------------------------------------
STATUS
-------
SYSTEM
FBDA
   1     3
02-APR-16 11.24.39.000000000 PM
02-APR-16 11.24.39.000000000 PM



SQL> 


As on the morning of 10-Apr (after leaving the database instance running overnight) :

SQL> select scn_to_timestamp(startscn) starttime, scn_to_timestamp(endscn) endtime, count(*)
  2  from sys_fba_hist_93250
  3  group by scn_to_timestamp(startscn), scn_to_timestamp(endscn)
  4  order by 1,2,3
  5  /

STARTTIME    ENDTIME       COUNT(*)
-------------------------------- -------------------------------- ----------
02-APR-16 11.32.55.000000000 PM  02-APR-16 11.46.11.000000000 PM  450
02-APR-16 11.32.55.000000000 PM  03-APR-16 11.45.24.000000000 PM  550
02-APR-16 11.46.11.000000000 PM  03-APR-16 11.41.33.000000000 PM    5
02-APR-16 11.46.11.000000000 PM  03-APR-16 11.45.24.000000000 PM  445
03-APR-16 11.41.33.000000000 PM  03-APR-16 11.45.24.000000000 PM    5
03-APR-16 11.45.24.000000000 PM  04-APR-16 11.05.33.000000000 PM 1000
04-APR-16 11.09.43.000000000 PM  07-APR-16 10.28.03.000000000 PM 1000
06-APR-16 10.40.43.000000000 PM  06-APR-16 10.42.54.000000000 PM    1
07-APR-16 10.27.35.000000000 PM  07-APR-16 10.28.03.000000000 PM    1
07-APR-16 10.28.03.000000000 PM  07-APR-16 10.28.03.000000000 PM 1000
09-APR-16 11.10.25.000000000 PM  09-APR-16 11.10.48.000000000 PM    1

11 rows selected.

SQL> select systimestamp from dual
  2  /

SYSTIMESTAMP
---------------------------------------------------------------------------
10-APR-16 08.51.29.398107 AM +08:00

SQL> 
SQL> select * from user_flashback_archive
  2  /

OWNER_NAME
------------------------------
FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
FLASHBACK_ARCHIVE# RETENTION_IN_DAYS
------------------ -----------------
CREATE_TIME
---------------------------------------------------------------------------
LAST_PURGE_TIME
---------------------------------------------------------------------------
STATUS
-------
SYSTEM
FBDA
   1     3
02-APR-16 11.24.39.000000000 PM
02-APR-16 11.24.39.000000000 PM



SQL> 


So auto-purge of the data from earlier days (02-Apr to 06-Apr) hasn't yet kicked in?  Let's try a manual purge.

SQL> alter flashback archive fbda purge before timestamp (sysdate-4);

Flashback archive altered.

SQL> select * from user_flashback_archive;

OWNER_NAME
------------------------------
FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
FLASHBACK_ARCHIVE# RETENTION_IN_DAYS
------------------ -----------------
CREATE_TIME
---------------------------------------------------------------------------
LAST_PURGE_TIME
---------------------------------------------------------------------------
STATUS
-------
SYSTEM
FBDA
   1     3

05-APR-16 11.52.16.000000000 PM



SQL> 
SQL> ! sleep 300
SQL> l
  1  select scn_to_timestamp(startscn) starttime, scn_to_timestamp(endscn) endtime, count(*)
  2  from sys_fba_hist_93250
  3  group by scn_to_timestamp(startscn), scn_to_timestamp(endscn)
  4* order by 1,2,3
SQL> /

STARTTIME    ENDTIME       COUNT(*)
-------------------------------- -------------------------------- ----------
02-APR-16 11.32.55.000000000 PM  02-APR-16 11.46.11.000000000 PM  450
02-APR-16 11.32.55.000000000 PM  03-APR-16 11.45.24.000000000 PM  550
02-APR-16 11.46.11.000000000 PM  03-APR-16 11.41.33.000000000 PM    5
02-APR-16 11.46.11.000000000 PM  03-APR-16 11.45.24.000000000 PM  445
03-APR-16 11.41.33.000000000 PM  03-APR-16 11.45.24.000000000 PM    5
03-APR-16 11.45.24.000000000 PM  04-APR-16 11.05.33.000000000 PM 1000
04-APR-16 11.09.43.000000000 PM  07-APR-16 10.28.03.000000000 PM 1000
06-APR-16 10.40.43.000000000 PM  06-APR-16 10.42.54.000000000 PM    1
07-APR-16 10.27.35.000000000 PM  07-APR-16 10.28.03.000000000 PM    1
07-APR-16 10.28.03.000000000 PM  07-APR-16 10.28.03.000000000 PM 1000
09-APR-16 11.10.25.000000000 PM  09-APR-16 11.10.48.000000000 PM    1

11 rows selected.

SQL> 


Although USER_FLASHBACK_ARCHIVE shows that a purge till 05-Apr (the 11:52pm timestamp is strange) has been done, I still see older rows in the History table.  The query on the active table does correctly exclude the rows that should not be available. 


SQL> select scn_to_timestamp(startscn) starttime, scn_to_timestamp(endscn) endtime, count(*)
  2  from sys_fba_hist_93250
  3  group by scn_to_timestamp(startscn), scn_to_timestamp(endscn)
  4  order by 1,2,3;

STARTTIME      ENDTIME    COUNT(*)
---------------------------------- ---------------------------------- ----------
02-APR-16 11.32.55.000000000 PM    02-APR-16 11.46.11.000000000 PM      450
02-APR-16 11.32.55.000000000 PM    03-APR-16 11.45.24.000000000 PM      550
02-APR-16 11.46.11.000000000 PM    03-APR-16 11.41.33.000000000 PM        5
02-APR-16 11.46.11.000000000 PM    03-APR-16 11.45.24.000000000 PM      445
03-APR-16 11.41.33.000000000 PM    03-APR-16 11.45.24.000000000 PM        5
03-APR-16 11.45.24.000000000 PM    04-APR-16 11.05.33.000000000 PM     1000
04-APR-16 11.09.43.000000000 PM    07-APR-16 10.28.03.000000000 PM     1000
06-APR-16 10.40.43.000000000 PM    06-APR-16 10.42.54.000000000 PM        1
07-APR-16 10.27.35.000000000 PM    07-APR-16 10.28.03.000000000 PM        1
07-APR-16 10.28.03.000000000 PM    07-APR-16 10.28.03.000000000 PM     1000
09-APR-16 11.10.25.000000000 PM    09-APR-16 11.10.48.000000000 PM        1

11 rows selected.

SQL> select * from user_flashback_archive;

OWNER_NAME
------------------------------
FLASHBACK_ARCHIVE_NAME
------------------------------------------------------------------------------------------------------------------------------------
FLASHBACK_ARCHIVE# RETENTION_IN_DAYS CREATE_TIME
------------------ ----------------- ---------------------------------------------------------------------------
LAST_PURGE_TIME            STATUS
--------------------------------------------------------------------------- -------
SYSTEM
FBDA
   1     3
05-APR-16 11.52.16.000000000 PM


SQL> select systimestamp from dual;

SYSTIMESTAMP
---------------------------------------------------------------------------
10-APR-16 10.52.12.361412 PM +08:00

SQL> select count(*) from test_fbda as of timestamp (sysdate-3);

  COUNT(*)
----------
  2

SQL> 
SQL> select partition_position, high_value
  2  from user_tab_partitions
  3  where table_name = 'SYS_FBA_HIST_93250'
  4  order by 1;

PARTITION_POSITION HIGH_VALUE
------------------ --------------------------------------------------------------------------------
   1 MAXVALUE

SQL> 



Support Document 16898135.1 states that if Oracle isn't maintaining partitions for the History table, it may not be purging data properly.  Even an ALTER FLASHBACK ARCHIVE ... PURGE doesn't purge data (unless PURGE ALL is issued).  I'd seen this behaviour in 11.2.0.4.  The bug is supposed to have been fixed in 12.1.0.2, but my 12.1.0.2 environment shows the same behaviour.  The fact that my test database has very little activity (very few SCNs being incremented) shouldn't matter.  The "Workaround" is, of course, unacceptable.
.
.
.

05 April, 2016

FBDA -- 4 : Partitions and Indexes

Continuing our investigation of the FBDA architecture.

[oracle@ora12102 Desktop]$ sqlplus hemant/hemant

SQL*Plus: Release 12.1.0.2.0 Production on Tue Apr 5 23:25:10 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Tue Apr 05 2016 23:23:47 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> select table_name, def_tablespace_name, partitioning_type, partition_count, status
  2  from user_part_tables
  3  order by 1;

TABLE_NAME
--------------------------------------------------------------------------------
DEF_TABLESPACE_NAME        PARTITION PARTITION_COUNT STATUS
------------------------------ --------- --------------- --------
SYS_FBA_HIST_93250
FBDA          RANGE         1 VALID


SQL> 
SQL> set pages600
SQL> set long 10000
SQL> select dbms_metadata.get_ddl('TABLE','SYS_FBA_HIST_93250') from dual;

DBMS_METADATA.GET_DDL('TABLE','SYS_FBA_HIST_93250')
--------------------------------------------------------------------------------

  CREATE TABLE "HEMANT"."SYS_FBA_HIST_93250"
   ( "RID" VARCHAR2(4000),
 "STARTSCN" NUMBER,
 "ENDSCN" NUMBER,
 "XID" RAW(8),
 "OPERATION" VARCHAR2(1),
 "ID_COLUMN" NUMBER,
 "DATA_COLUMN" VARCHAR2(15),
 "DATE_INSERTED" DATE,
 "D_1729869_NEW_COL_1" VARCHAR2(5)
   ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
  STORAGE(
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "FBDA"
  PARTITION BY RANGE ("ENDSCN")
 (PARTITION "HIGH_PART"  VALUES LESS THAN (MAXVALUE) SEGMENT CREATION IMMEDIATE

  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
  STORAGE(INITIAL 8388608 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "FBDA" )


SQL> 


So, although my active table (TEST_FBDA) is not partitioned, the History table is Range Partitioned on the ENDSCN column, with a single partition!  All the rows in the History table are in the MAXVALUE partition.  I wonder if and when it will ever be split?
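Range-partition routing with a lone MAXVALUE partition is trivial.  This sketch (a hypothetical model of range routing, not Oracle internals) shows why every history row must land in HIGH_PART:

```python
def route_to_partition(endscn, partitions):
    """partitions: list of (name, upper_bound) ordered by bound; a row
    goes to the first partition whose bound exceeds its key.
    float('inf') stands in for MAXVALUE."""
    for name, bound in partitions:
        if endscn < bound:
            return name
    raise ValueError("no partition accepts this key")

# The History table above has only one partition: HIGH_PART (MAXVALUE)
hist_partitions = [("HIGH_PART", float("inf"))]
print(route_to_partition(1234567, hist_partitions))   # HIGH_PART, always
```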

What if I create a partitioned table and then define Flashback Archiving on it ?

SQL> create table test_fbda_partitioned (
  2  id_column number,
  3  data_column varchar2(15),
  4  date_inserted date)
  5  partition by range (id_column)
  6  (partition p_100 values less than (101),
  7  partition p_200 values less than (201),
  8  partition p_300 values less than (301),
  9  partition p_400 values less than (401),
 10  partition p_max values less than (MAXVALUE))
 11  /

Table created.

SQL> alter table test_fbda_partitioned flashback archive fbda;

Table altered.

SQL> insert into test_fbda_partitioned
  2  select rownum, to_char(rownum), trunc(sysdate)
  3  from dual connect by level < 701;

700 rows created.

SQL> commit;

Commit complete.

SQL> update test_fbda_partitioned
  2  set data_column=data_column;

700 rows updated.

SQL> commit;

Commit complete.

SQL> col subobject_name format a15
SQL> select object_type, subobject_name, object_id
  2  from user_objects
  3  where object_name = 'TEST_FBDA_PARTITIONED'
  4  order by 3,1;

OBJECT_TYPE  SUBOBJECT_NAME  OBJECT_ID
----------------------- --------------- ----------
TABLE          93342
TABLE PARTITION  P_100       93343
TABLE PARTITION  P_200       93344
TABLE PARTITION  P_300       93345
TABLE PARTITION  P_400       93346
TABLE PARTITION  P_MAX       93347

6 rows selected.

SQL> 
SQL> select table_name     
  2  from user_tables 
  3  where table_name like '%93342%'
  4  order by 1;

TABLE_NAME
--------------------------------------------------------------------------------
SYS_FBA_DDL_COLMAP_93342
SYS_FBA_HIST_93342
SYS_FBA_TCRV_93342

SQL> 
SQL> select dbms_metadata.get_ddl('TABLE','SYS_FBA_HIST_93342') from dual;

DBMS_METADATA.GET_DDL('TABLE','SYS_FBA_HIST_93342')
--------------------------------------------------------------------------------

  CREATE TABLE "HEMANT"."SYS_FBA_HIST_93342"
   ( "RID" VARCHAR2(4000),
 "STARTSCN" NUMBER,
 "ENDSCN" NUMBER,
 "XID" RAW(8),
 "OPERATION" VARCHAR2(1),
 "ID_COLUMN" NUMBER,
 "DATA_COLUMN" VARCHAR2(15),
 "DATE_INSERTED" DATE
   ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
  STORAGE(
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "FBDA"
  PARTITION BY RANGE ("ENDSCN")
 (PARTITION "HIGH_PART"  VALUES LESS THAN (MAXVALUE) SEGMENT CREATION IMMEDIATE

  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
  STORAGE(INITIAL 8388608 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "FBDA" )


SQL> 


So, even though my active table (TEST_FBDA_PARTITIONED) is created as a Range Partitioned Table partitioned on ID_COLUMN, the corresponding History table is Range Partitioned on ENDSCN with a single MAXVALUE partition.
Therefore, Oracle ignores my partitioning definition when creating the History table.  The History table seems to always be Range Partitioned on ENDSCN and starts with the MAXVALUE partition up-front.  (When will this Partition be split ?  I could search MoS for Docs / Bugs, but I'll leave that exercise for later.)

Now that we know that the History table doesn't use our Partition Key, we must wonder about Partition Pruning when running AS OF queries on the active table that need to access the History Table.  If we can't Partition Prune, can we Index the History table ?
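A quick sanity check (a sketch; I have not run this against the plan output here) would be to look at the execution plan of an AS OF query and see whether the optimizer visits the SYS_FBA_HIST table and, if so, which of its partitions :

explain plan for
select count(*)
from test_fbda_partitioned as of timestamp (sysdate - 1)
where id_column < 101;

select * from table(dbms_xplan.display);

Since the History table is partitioned only on ENDSCN, any pruning we see there could only be on the SCN range implied by the flashback time, not on our ID_COLUMN predicate.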

SQL> select table_name                   
  2  from user_tables
  3  where table_name like 'SYS_FBA_HIST%'
  4  /

TABLE_NAME
--------------------------------------------------------------------------------
SYS_FBA_HIST_93250
SYS_FBA_HIST_93342

SQL> create index sys_fba_hist_93250_ndx_1 on sys_fba_hist_93250(id_column) tablespace fbda;

Index created.

SQL> create index sys_fba_hist_93342_ndx_1 on sys_fba_hist_93342(id_column) tablespace fbda;

Index created.

SQL> select dbms_metadata.get_ddl('INDEX','SYS_FBA_HIST_93250_NDX_1') from dual;

DBMS_METADATA.GET_DDL('INDEX','SYS_FBA_HIST_93250_NDX_1')
--------------------------------------------------------------------------------

  CREATE INDEX "HEMANT"."SYS_FBA_HIST_93250_NDX_1" ON "HEMANT"."SYS_FBA_HIST_932
50" ("ID_COLUMN")
  PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "FBDA"


SQL> select dbms_metadata.get_ddl('INDEX','SYS_FBA_HIST_93342_NDX_1') from dual;

DBMS_METADATA.GET_DDL('INDEX','SYS_FBA_HIST_93342_NDX_1')
--------------------------------------------------------------------------------

  CREATE INDEX "HEMANT"."SYS_FBA_HIST_93342_NDX_1" ON "HEMANT"."SYS_FBA_HIST_933
42" ("ID_COLUMN")
  PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "FBDA"


SQL> 


Yes !  Oracle does allow us to build custom indexes on the History tables.  (I believe I saw this somewhere in the documentation or a note).

Next Post (possibly) : More DML operations and whether auto-purging occurs based on the defined RETENTION 3 DAY period.  (Note : The Retention Period is defined at the Flashback Archive level, not at the individual table (active/History) level.  If you need tables with different Retention Periods, you must define different Flashback Archives, although they can all be in the same Tablespace).
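For example (the archive and table names here are illustrative, not from my test environment), two archives with different retentions can share one tablespace :

create flashback archive fba_short tablespace fbda retention 3 day;
create flashback archive fba_long  tablespace fbda retention 1 year;

alter table orders_recent  flashback archive fba_short;
alter table orders_history flashback archive fba_long;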
.
.
.

04 April, 2016

FBDA -- 3 : Support for TRUNCATEs

One of the points in the previous post was how ADD and DROP Column commands are supported in FBDA.

Let's look at support for TRUNCATEs.

(Just to prove that I am not using the Undo tablespace for the AS OF query, I drop the Undo tablespace) :

SQL> startup
ORACLE instance started.

Total System Global Area  402653184 bytes
Fixed Size      2924928 bytes
Variable Size    260050560 bytes
Database Buffers   134217728 bytes
Redo Buffers      5459968 bytes
Database mounted.
Database opened.
SQL> show parameter undo

NAME         TYPE  VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled       boolean  FALSE
undo_management        string  AUTO
undo_retention        integer  900
undo_tablespace        string  UNDOTBS1
SQL> show parameter spfile

NAME         TYPE  VALUE
------------------------------------ ----------- ------------------------------
spfile         string  +DATA/NONCDB/PARAMETERFILE/spf
       ile.267.896483727
SQL> show parameter undo

NAME         TYPE  VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled       boolean  FALSE
undo_management        string  AUTO
undo_retention        integer  900
undo_tablespace        string  UNDOTBS1
SQL> create undo tablespace undotbs2;

Tablespace created.

SQL> alter system set undo_tablespace='UNDOTBS2';

System altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area  402653184 bytes
Fixed Size      2924928 bytes
Variable Size    260050560 bytes
Database Buffers   134217728 bytes
Redo Buffers      5459968 bytes
Database mounted.
Database opened.
SQL> drop tablespace undotbs1 including contents and datafiles;

Tablespace dropped.

SQL> shutdown immediate 
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area  402653184 bytes
Fixed Size      2924928 bytes
Variable Size    260050560 bytes
Database Buffers   134217728 bytes
Redo Buffers      5459968 bytes
Database mounted.
Database opened.
SQL> show parameter undo

NAME         TYPE  VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled       boolean  FALSE
undo_management        string  AUTO
undo_retention        integer  900
undo_tablespace        string  UNDOTBS2
SQL> 


I now TRUNCATE my test table and then run an AS OF query against it.

SQL> connect hemant/hemant
Connected.
SQL> select count(*) from test_fbda;

  COUNT(*)
----------
      1000

SQL> truncate table test_fbda;

Table truncated.

SQL> select count(*) from test_fbda;

  COUNT(*)
----------
  0

SQL> select count(*) from test_fbda as of timestamp (sysdate-1);

  COUNT(*)
----------
      1000

SQL> 


The AS OF query was satisfied by the FBDA History table  (and not from Undo).

However, because of yesterday's test of ADD and DROP columns (see yesterday's post), the current table structure doesn't match what it was AS OF yesterday.  So, I work around the difference by explicitly naming the columns.

SQL> insert into test_fbda select * from test_fbda as of timestamp (sysdate-1);
insert into test_fbda select * from test_fbda as of timestamp (sysdate-1)
            *
ERROR at line 1:
ORA-00913: too many values


SQL> 
SQL> desc test_fbda
 Name        Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID_COLUMN         NUMBER
 DATA_COLUMN         VARCHAR2(15)
 DATE_INSERTED         DATE

SQL> insert into test_fbda
  2  select id_column, data_column, date_inserted
  3  from test_fbda as of timestamp (sysdate-1);

1000 rows created.

SQL> commit;

Commit complete.

SQL>          


Thus, when my column list matches the table structure as of the flashback time, I can re-insert the data from the previous version of the table.

Note how I did NOT use the SYS_FBA_HIST_93250 table as I'd shown in my previous posts.  The History table is NOT to be accessed directly, but I demonstrated queries against it to show the underlying architecture in FBDA.
.
.
.



FBDA -- 2 : FBDA Archive Table Structure

Following up on my earlier post, I look at the FBDA Archive Tables.

[oracle@ora12102 Desktop]$ sqlplus hemant/hemant

SQL*Plus: Release 12.1.0.2.0 Production on Sun Apr 3 23:26:27 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Sat Apr 02 2016 23:32:30 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> select table_name from user_tables;

TABLE_NAME
--------------------------------------------------------------------------------
TEST_FBDA
SYS_FBA_DDL_COLMAP_93250
SYS_FBA_HIST_93250
SYS_FBA_TCRV_93250

SQL> desc test_fbda
 Name        Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID_COLUMN         NUMBER
 DATA_COLUMN         VARCHAR2(15)
 DATE_INSERTED         DATE

SQL> 
SQL> desc sys_fba_hist_93250
 Name        Null?    Type
 ----------------------------------------- -------- ----------------------------
 RID          VARCHAR2(4000)
 STARTSCN         NUMBER
 ENDSCN          NUMBER
 XID          RAW(8)
 OPERATION         VARCHAR2(1)
 ID_COLUMN         NUMBER
 DATA_COLUMN         VARCHAR2(15)
 DATE_INSERTED         DATE

SQL> desc sys_fba_ddl_colmap_93250
 Name        Null?    Type
 ----------------------------------------- -------- ----------------------------
 STARTSCN         NUMBER
 ENDSCN          NUMBER
 XID          RAW(8)
 OPERATION         VARCHAR2(1)
 COLUMN_NAME         VARCHAR2(255)
 TYPE          VARCHAR2(255)
 HISTORICAL_COLUMN_NAME        VARCHAR2(255)

SQL> desc sys_fba_tcrv_93250
 Name        Null?    Type
 ----------------------------------------- -------- ----------------------------
 RID          VARCHAR2(4000)
 STARTSCN         NUMBER
 ENDSCN          NUMBER
 XID          RAW(8)
 OP          VARCHAR2(1)

SQL> 


The HIST table is the History table for my active table.  It adds columns that track the Rowid, Start SCN and End SCN for a range of rows copied into the History Table, the Transaction Identifier and Operation, and then the actual columns of the active table.
The DDL_COLMAP table seems to track Column Mappings.  See the example below.
The TCRV table seems to track Transactions ?

Let's try some DDL to ADD and DROP columns to the active table.

SQL> alter table test_fbda add (new_col_1 varchar2(5));

Table altered.

SQL> desc test_fbda
 Name        Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID_COLUMN         NUMBER
 DATA_COLUMN         VARCHAR2(15)
 DATE_INSERTED         DATE
 NEW_COL_1         VARCHAR2(5)

SQL> desc sys_fba_93250
ERROR:
ORA-04043: object sys_fba_93250 does not exist


SQL> desc sys_fba_hist_93250
 Name        Null?    Type
 ----------------------------------------- -------- ----------------------------
 RID          VARCHAR2(4000)
 STARTSCN         NUMBER
 ENDSCN          NUMBER
 XID          RAW(8)
 OPERATION         VARCHAR2(1)
 ID_COLUMN         NUMBER
 DATA_COLUMN         VARCHAR2(15)
 DATE_INSERTED         DATE
 NEW_COL_1         VARCHAR2(5)

SQL> 
SQL> select * from sys_fba_ddl_colmap_93250
  2  /

  STARTSCN     ENDSCN XID        O
---------- ---------- ---------------- -
COLUMN_NAME
--------------------------------------------------------------------------------
TYPE
--------------------------------------------------------------------------------
HISTORICAL_COLUMN_NAME
--------------------------------------------------------------------------------
   1697151
ID_COLUMN
NUMBER
ID_COLUMN

   1697151
DATA_COLUMN
VARCHAR2(15)
DATA_COLUMN

   1697151
DATE_INSERTED
DATE
DATE_INSERTED

   1728713
NEW_COL_1
VARCHAR2(5)
NEW_COL_1


SQL> 


The new column added to the active table is also now reflected in the History Table.  The DDL_COLMAP shows the effective start of each column (notice the STARTSCN mapped to each COLUMN_NAME).

Let's set some values in the new column and see if they appear in the History Table.

SQL> update test_fbda set new_col_1 = 'New'
  2  where id_column < 6;

5 rows updated.

SQL> commit;

Commit complete.

SQL> select id_column, new_col_1, scn_to_timestamp(startscn), scn_to_timestamp(endscn)
  2  from sys_fba_hist_93250
  3  where id_column < 6 
  4  order by 1,3;

 ID_COLUMN NEW_C
---------- -----
SCN_TO_TIMESTAMP(STARTSCN)
---------------------------------------------------------------------------
SCN_TO_TIMESTAMP(ENDSCN)
---------------------------------------------------------------------------
  1
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

  2
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

  3
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

  4
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

  5
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM


SQL> 


The rows copied into the History Table are the *prior* image rows (copied from the Undo Area).  The STARTSCN and ENDSCN are of *yesterday* (02-April).

Let me DROP the new column.

SQL> alter table test_fbda drop (new_col_1);

Table altered.

SQL> desc test_fbda
 Name        Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID_COLUMN         NUMBER
 DATA_COLUMN         VARCHAR2(15)
 DATE_INSERTED         DATE

SQL> desc sys_fba_hist_93250;
 Name        Null?    Type
 ----------------------------------------- -------- ----------------------------
 RID          VARCHAR2(4000)
 STARTSCN         NUMBER
 ENDSCN          NUMBER
 XID          RAW(8)
 OPERATION         VARCHAR2(1)
 ID_COLUMN         NUMBER
 DATA_COLUMN         VARCHAR2(15)
 DATE_INSERTED         DATE
 D_1729869_NEW_COL_1        VARCHAR2(5)

SQL>
SQL> select * from sys_fba_ddl_colmap_93250;

  STARTSCN     ENDSCN XID        O
---------- ---------- ---------------- -
COLUMN_NAME
--------------------------------------------------------------------------------
TYPE
--------------------------------------------------------------------------------
HISTORICAL_COLUMN_NAME
--------------------------------------------------------------------------------
   1697151
ID_COLUMN
NUMBER
ID_COLUMN

   1697151
DATA_COLUMN
VARCHAR2(15)
DATA_COLUMN

   1697151
DATE_INSERTED
DATE
DATE_INSERTED

   1728713    1729869
D_1729869_NEW_COL_1
VARCHAR2(5)
NEW_COL_1


SQL> 


The dropped column is no longer in the active table and has been renamed in the History table.  (The data in the column has to be preserved, but the column is renamed.)  Notice how the DDL_COLMAP table now shows an ENDSCN for this column, with the new (renamed) column name as in the History table.  The new column name seems to include the SCN (ENDSCN ?).

Let's confirm what data is now present in the History table  (remember : Our earlier query showed the pre-update image for this column).

SQL> select id_column, D_1729869_NEW_COL_1, scn_to_timestamp(startscn), scn_to_timestamp(endscn)
  2  from sys_fba_hist_93250
  3  where (id_column < 6 OR D_1729869_NEW_COL_1 is not null)
  4  order by 1,3;

 ID_COLUMN D_172
---------- -----
SCN_TO_TIMESTAMP(STARTSCN)
---------------------------------------------------------------------------
SCN_TO_TIMESTAMP(ENDSCN)
---------------------------------------------------------------------------
  1
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

  1
02-APR-16 11.46.11.000000000 PM
03-APR-16 11.41.33.000000000 PM

  1 New
03-APR-16 11.41.33.000000000 PM
03-APR-16 11.45.24.000000000 PM

  2
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

  2
02-APR-16 11.46.11.000000000 PM
03-APR-16 11.41.33.000000000 PM

  2 New
03-APR-16 11.41.33.000000000 PM
03-APR-16 11.45.24.000000000 PM

  3
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

  3
02-APR-16 11.46.11.000000000 PM
03-APR-16 11.41.33.000000000 PM

  3 New
03-APR-16 11.41.33.000000000 PM
03-APR-16 11.45.24.000000000 PM

  4
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

  4
02-APR-16 11.46.11.000000000 PM
03-APR-16 11.41.33.000000000 PM

  4 New
03-APR-16 11.41.33.000000000 PM
03-APR-16 11.45.24.000000000 PM

  5
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

  5
02-APR-16 11.46.11.000000000 PM
03-APR-16 11.41.33.000000000 PM

  5 New
03-APR-16 11.41.33.000000000 PM
03-APR-16 11.45.24.000000000 PM


15 rows selected.

SQL> 
SQL> select scn_to_timestamp(1729869) from dual;

SCN_TO_TIMESTAMP(1729869)
---------------------------------------------------------------------------
03-APR-16 11.45.27.000000000 PM

SQL> 


Why do we now have 3 rows in the History table for each row in the Active Table ?  Take ID_COLUMN=1.  The first row -- for the time range 02-Apr 11:32pm to 02-Apr 11:46pm -- is as of yesterday, the same row we saw in the History table after the update in the active table.  The second row -- for the time range 02-Apr 11:46pm to 03-Apr 11:41pm -- preserves the row to support AS OF queries up to the time of the UPDATE.  The third row -- for the time range 03-Apr 11:41pm to 03-Apr 11:45pm -- presents the UPDATEd value ('New') in the column up to the last transaction updating it before the column was dropped at 03-Apr 11:45:27pm.

Thus, Oracle maintains multiple versions of the same row, including versions for DROPped columns, in the History Table.

Note :  The History Table is not supposed to be directly queried in the manner I have shown here.  The proper query against the active table would be an AS OF query which is automatically rewritten / redirected to "hit" the History table when necessary.
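The supported way to see these multiple row versions, without touching SYS_FBA_HIST directly, is a Flashback Version Query against the active table.  A sketch (the time window is illustrative) :

select versions_startscn, versions_endscn, versions_operation, id_column
from test_fbda
     versions between timestamp (systimestamp - interval '2' day)
                  and systimestamp
where id_column = 1
order by versions_startscn;

Oracle transparently fetches the older versions from Undo or from the History table, as needed.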

What about the third table -- the TCRV table ?

SQL> l
  1  select scn_to_timestamp(startscn), op , count(*)
  2  from sys_fba_tcrv_93250
  3  group by scn_to_timestamp(startscn), op
  4* order by 2,1
SQL> /

SCN_TO_TIMESTAMP(STARTSCN)          O
--------------------------------------------------------------------------- -
  COUNT(*)
----------
03-APR-16 11.45.24.000000000 PM          U
      1000


SQL>
SQL> select count(distinct(rid)) from sys_fba_tcrv_93250;

COUNT(DISTINCT(RID))
--------------------
  1000

SQL> 
 


It shows 1000 rows as having been UPDATEd ?  (Does OP='U' mean 'UPDATE' ?)  We do know that ADD and DROP column are changes to the table.  But are they UPDATEs ?

Next post : Continuing with DML operations (more rows, some updates).  We'll see if we can decipher anything from the TCRV table as well.  (Changed to showing support for TRUNCATEs.)
.
.
.



02 April, 2016

FBDA -- 1 : Testing Flashback Data Archive in 12c (NonCDB)

Note : At the bottom of this post, you'll find links to more (subsequent) posts on this topic.

Some testing I'd done with Flashback Data Archive (henceforth called FBDA in this and subsequent posts, if any) in 11.2.0.4 left me with uncertainty about the automatic purging of data beyond the Retention Period.  I might return to testing 11.2.0.4, but here I shall begin testing in 12.1.0.2  (NonCDB).

Setting up FBDA :

[oracle@ora12102 ~]$ sqlplus system/oracle

SQL*Plus: Release 12.1.0.2.0 Production on Sat Apr 2 23:23:53 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Sat Apr 02 2016 23:20:47 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> create tablespace fbda ;

Tablespace created.

SQL> create flashback archive fbda tablespace fbda  retention 3 day; 

Flashback archive created.

SQL> create tablespace hemant;

Tablespace created.

SQL> create user hemant identified by hemant
  2  default tablespace hemant;

User created.

SQL> grant create table to hemant;

Grant succeeded.

SQL> grant create session to hemant;

Grant succeeded.

SQL> alter user hemant quota unlimited on hemant;

User altered.

SQL> alter user hemant quota unlimited on fbda;

User altered.

SQL> grant flashback archive administer to hemant;

Grant succeeded.

SQL> grant flashback archive on fbda to hemant;

Grant succeeded.

SQL>
SQL> connect / as sysdba
Connected.
SQL> grant execute on dbms_flashback_archive to hemant;

Grant succeeded.

SQL> 
SQL> connect hemant/hemant
Connected.
SQL> create table test_fbda (id_column number, data_column varchar2(15), date_inserted date) tablespace hemant;

Table created.

SQL> alter table test_fbda flashback archive fbda;

Table altered.

SQL> select table_name from user_tables;

TABLE_NAME
--------------------------------------------------------------------------------
TEST_FBDA

SQL> 


Note the Flashback Archive history table corresponding to TEST_FBDA doesn't get created immediately.

SQL> connect hemant/hemant
Connected.
SQL> insert into test_fbda
  2  select rownum , to_char(rownum), trunc(sysdate)
  3  from dual connect by level < 1001;

1000 rows created.

SQL> commit;

Commit complete.

SQL> select table_name from user_tables;

TABLE_NAME
--------------------------------------------------------------------------------
TEST_FBDA

SQL> select count(*) from test_fbda;

  COUNT(*)
----------
      1000

SQL> select flashback_archive_name, retention_in_days, status
  2  from user_flashback_archive;

FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
RETENTION_IN_DAYS STATUS
----------------- -------
FBDA
  3


SQL> select table_name, flashback_archive_name, archive_table_name, status
  2  from user_flashback_archive_tables;

TABLE_NAME
--------------------------------------------------------------------------------
FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
ARCHIVE_TABLE_NAME          STATUS
----------------------------------------------------- -------------
TEST_FBDA
FBDA
SYS_FBA_HIST_93250          ENABLED


SQL> 
SQL> !sleep 300

SQL> select table_name from user_tables;

TABLE_NAME
--------------------------------------------------------------------------------
TEST_FBDA
SYS_FBA_DDL_COLMAP_93250
SYS_FBA_HIST_93250
SYS_FBA_TCRV_93250

SQL> 
SQL> select object_id
  2  from user_objects
  3  where object_name = 'TEST_FBDA'
  4  and object_type = 'TABLE'
  5  
SQL> /

 OBJECT_ID
----------
     93250

SQL> 


So, it took some time for the flashback archive history table (identified on the basis of the OBJECT_ID) to appear.  The background fbda process seems to run (wake up) every 5 minutes, although it may wake up more frequently if there is more activity in the database.

SQL> select trunc(date_inserted), count(*)
  2  from test_fbda
  3  group by trunc(date_inserted)
  4  /

TRUNC(DAT   COUNT(*)
--------- ----------
02-APR-16 1000

SQL> select trunc(date_inserted), count(*)
  2  from sys_fba_hist_93250
  3  group by trunc(date_inserted)
  4  /

no rows selected

SQL> update test_fbda
  2  set data_column = data_column
  3  where rownum < 451;

450 rows updated.

SQL> commit;

Commit complete.

SQL> select trunc(date_inserted), count(*)
  2  from sys_fba_hist_93250
  3  group by trunc(date_inserted)
  4  /

no rows selected

SQL> 
SQL> !sleep 180

SQL> select trunc(date_inserted), count(*)
  2  from sys_fba_hist_93250
  3  group by trunc(date_inserted)
  4  /

TRUNC(DAT   COUNT(*)
--------- ----------
02-APR-16  450

SQL> 
SQL> select scn_to_timestamp(startscn), scn_to_timestamp(endscn), date_inserted, count(*)
  2  from sys_fba_hist_93250
  3  group by scn_to_timestamp(startscn), scn_to_timestamp(endscn), date_inserted
  4  order by 1;

SCN_TO_TIMESTAMP(STARTSCN)
---------------------------------------------------------------------------
SCN_TO_TIMESTAMP(ENDSCN)
---------------------------------------------------------------------------
DATE_INSE   COUNT(*)
--------- ----------
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM
02-APR-16  450


SQL> 


Notice that not all 1000 rows got copied into the History table.  Only the 450 rows that I updated were copied in.  They are tracked by SCN and Timestamp.  (The "DATE_INSERTED" column is my own date column; Oracle wouldn't be using that column to track DML dates for rows, as the values in that column are controlled by me -- the application or developer, not Oracle.)

Note :  The History Table is not supposed to be directly queried in the manner I have shown here.
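The supported access path is an AS OF (or VERSIONS) query against the active table itself, which Oracle transparently redirects to the History table when the requested time is beyond what Undo can serve.  For example (a sketch, not run in this session) :

select count(*)
from test_fbda as of timestamp (systimestamp - interval '15' minute);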

Tomorrow :  More Rows, and some DDLs as well.

Post 2 here.  It covers some of the architectural components and support for ADD / DROP column DDL.

Post 3 here.  It shows support for TRUNCATEs.

Post 4 here.  On Partitions and Indexes.

Post 5 here.  On (Auto)Purging.

Post 6 here.  On Bug Notes

Post 7 here.  Maintaining Partitioned Source Table
.
.
.


Aggregated by orafaq.com

Top 50 Oracle SQL Blogs 2016