A YouTube Video on Restore and Recovery from Incremental Backups.
30 May, 2016
23 May, 2016
Recent Blog Series on Partition Storage
These have been my recent posts on Partition Storage.
1. Default Partition Sizes in 12c (Apr-16)
2. New Rows Inserted in 12.1 Partitioned Table (Apr-16)
3. Adding new Range Partitions with SPLIT (Apr-16)
4. Resizing Partitions (Apr-16)
5. Partitioned Table versus Non-Partitioned Table ? (in 12.1) (Apr-16)
6. Revisiting Partition HWM (Apr-16)
7. Revisiting HWM -- 2 (again) (May-16)
8. Manually Sizing Partitions (May-16)
.
.
.
22 May, 2016
TRUNCATEing a Table makes an UNUSABLE Index VALID again
Here's something I learned from Jonathan Lewis sometime ago.
If you set an Index to be UNUSABLE but later issue a TRUNCATE TABLE, the Index becomes VALID again --- i.e. the Index gets updated with rows subsequently inserted.
SQL> connect hemant/hemant Connected. SQL> drop table target_data purge; Table dropped. SQL> create table target_data as select * from source_data where 1=2; Table created. SQL> create index target_data_ndx_1 2 on target_data(owner, object_type, object_name); Index created. SQL> insert /*+ APPEND */ into target_data 2 select * from source_data; 367156 rows created. SQL> commit; Commit complete. SQL> col segment_name format a30 SQL> select segment_name, segment_type, bytes/1048576 2 from user_segments 3 where segment_name like 'TARGET_DATA%' 4 order by 1; SEGMENT_NAME SEGMENT_TYPE BYTES/1048576 ------------------------------ ------------------ ------------- TARGET_DATA TABLE 49 TARGET_DATA_NDX_1 INDEX 19 SQL> SQL> col index_name format a30 SQL> select index_name, status 2 from user_indexes 3 where table_name = 'TARGET_DATA'; INDEX_NAME STATUS ------------------------------ -------- TARGET_DATA_NDX_1 VALID SQL>
So, I have a VALID Index on my Table.
I now make it UNUSABLE and add rows to it.
SQL> alter index target_Data_ndx_1 unusable; Index altered. SQL> select status 2 from user_indexes 3 where index_name = 'TARGET_DATA_NDX_1'; STATUS -------- UNUSABLE SQL> insert /*+ APPEND */ into target_data 2 select * from source_data; 367156 rows created. SQL> commit; Commit complete. SQL> select index_name, status 2 from user_indexes 3 where table_name = 'TARGET_DATA'; INDEX_NAME STATUS ------------------------------ -------- TARGET_DATA_NDX_1 UNUSABLE SQL> select segment_name, segment_type, bytes/1048576 2 from user_segments 3 where segment_name like 'TARGET_DATA%' 4 order by 1; SEGMENT_NAME SEGMENT_TYPE BYTES/1048576 ------------------------------ ------------------ ------------- TARGET_DATA TABLE 104 SQL>
Oracle actually drops the Index segment (so you don't see it in USER_SEGMENTS) when it is set to UNUSABLE, although the Index definition is still present. The Index doesn't "grow" because the Segment doesn't exist.
Let me TRUNCATE the table.
SQL> truncate table target_data; Table truncated. SQL> select segment_name, segment_type, bytes/1048576 2 from user_segments 3 where segment_name like 'TARGET_DATA%' 4 order by 1; SEGMENT_NAME SEGMENT_TYPE BYTES/1048576 ------------------------------ ------------------ ------------- TARGET_DATA TABLE .0625 TARGET_DATA_NDX_1 INDEX .0625 SQL> select index_name, status 2 from user_indexes 3 where table_name = 'TARGET_DATA'; INDEX_NAME STATUS ------------------------------ -------- TARGET_DATA_NDX_1 VALID SQL>
Immediately after the TRUNCATE TABLE, the Index Segment is re-instantiated and the Index becomes VALID again, so subsequent inserts will update the Index. My last explicit command against the Index was ALTER INDEX ... UNUSABLE, but that is no longer its current state !
SQL> insert /*+ APPEND */ into target_data 2 select * from source_data; 367156 rows created. SQL> commit; Commit complete. SQL> select segment_name, segment_type, bytes/1048576 2 from user_segments 3 where segment_name like 'TARGET_DATA%' 4 order by 1; SEGMENT_NAME SEGMENT_TYPE BYTES/1048576 ------------------------------ ------------------ ------------- TARGET_DATA TABLE 49 TARGET_DATA_NDX_1 INDEX 19 SQL>
So, repopulating the Table has expanded the Index again.
.
.
.
18 May, 2016
Partition Storage -- 8 : Manually Sizing Partitions
As demonstrated in the first post in this series, the default for Table Partitions is to start with an Initial 8MB Extent. The fourth post demonstrated how to resize a Partition that has already been created and populated.
Here is a method to pre-size the Initial Extent. I start with 3 partitions of different sizes.
SQL> create table my_part_tbl_init_sized (id_column number(6), data_column varchar2(100)) 2 partition by range (id_column) 3 (partition p_100 values less than (101) segment creation immediate 4 storage (initial 64K next 64K) tablespace hemant, 5 partition p_200 values less than (201) segment creation immediate 6 storage (initial 1M next 1M) tablespace hemant, 7 partition p_max values less than (maxvalue) segment creation immediate 8 storage (initial 8M next 1M) tablespace hemant) 9 / Table created. SQL> SQL> select partition_name, initial_extent, next_extent, tablespace_name 2 from user_tab_partitions 3 where table_name = upper('my_part_tbl_init_sized') 4 order by partition_position 5 / PARTITION_NA INITIAL_EXTENT NEXT_EXTENT TABLESPACE_NAME ------------ -------------- ----------- ------------------------------ P_100 65536 65536 HEMANT P_200 1048576 1048576 HEMANT P_MAX 8388608 1048576 HEMANT SQL> SQL> select partition_name, bytes/1024, extents 2 from user_segments 3 where segment_name = upper('my_part_tbl_init_sized') 4 and segment_type = 'TABLE PARTITION' 5 order by 1 6 / PARTITION_NA BYTES/1024 EXTENTS ------------ ---------- ---------- P_100 64 1 P_200 1024 1 P_MAX 8192 1 SQL>
However, the caveat about Partition Extent Sizes when SPLIT is used as shown in the third post still applies.
SQL> alter table my_part_tbl_init_sized 2 split partition p_max 3 at (301) 4 into (partition p_300, partition p_max) 5 / Table altered. SQL> SQL> select partition_name, initial_extent, next_extent, tablespace_name 2 from user_tab_partitions 3 where table_name = upper('my_part_tbl_init_sized') 4 order by partition_position 5 / PARTITION_NA INITIAL_EXTENT NEXT_EXTENT TABLESPACE_NAME ------------ -------------- ----------- ------------------------------ P_100 65536 65536 HEMANT P_200 1048576 1048576 HEMANT P_300 8388608 1048576 HEMANT P_MAX 8388608 1048576 HEMANT SQL> SQL> select partition_name, bytes/1024, extents 2 from user_segments 3 where segment_name = upper('my_part_tbl_init_sized') 4 and segment_type = 'TABLE PARTITION' 5 order by 1 6 / PARTITION_NA BYTES/1024 EXTENTS ------------ ---------- ---------- P_100 64 1 P_200 1024 1 P_300 8192 1 P_MAX 8192 1 SQL>
So, SPLITting P_MAX to create P_300 created P_300 with an 8MB extent because the sizing is inherited from P_MAX. How do I change the Partition Extent sizes on SPLIT ?
SQL> alter table my_part_tbl_init_sized 2 split partition p_max 3 at (501) 4 into (partition p_500 storage (initial 64K next 64K) tablespace hemant, partition p_max) 5 / Table altered. SQL> SQL> alter table my_part_tbl_init_sized 2 split partition p_500 3 at (401) 4 into (partition p_400, partition p_500) 5 / Table altered. SQL> SQL> select partition_name, initial_extent, next_extent, tablespace_name 2 from user_tab_partitions 3 where table_name = upper('my_part_tbl_init_sized') 4 order by partition_position 5 / PARTITION_NA INITIAL_EXTENT NEXT_EXTENT TABLESPACE_NAME ------------ -------------- ----------- ------------------------------ P_100 65536 65536 HEMANT P_200 1048576 1048576 HEMANT P_300 8388608 1048576 HEMANT P_400 65536 65536 HEMANT P_500 65536 65536 HEMANT P_MAX 8388608 1048576 HEMANT 6 rows selected. SQL> select partition_name, bytes/1024, extents 2 from user_segments 3 where segment_name = upper('my_part_tbl_init_sized') 4 and segment_type = 'TABLE PARTITION' 5 order by 1 6 / PARTITION_NA BYTES/1024 EXTENTS ------------ ---------- ---------- P_100 64 1 P_200 1024 1 P_300 8192 1 P_400 64 1 P_500 64 1 P_MAX 8192 1 6 rows selected. SQL>
This time, when I split P_MAX I deliberately specified a size for P_500. Then, when I split P_500 further, the new P_400 inherited the sizing from P_500.
.
.
.
09 May, 2016
Compression -- 7 : Updating after BASIC Compression
In the first blog post on compression, I had shown BASIC Compression and also the effect of executing an UPDATE on a table with BASIC compression.
To expand on the theme of UPDATEs of BASIC compression blocks ....
SQL> select count(*) from source_data; COUNT(*) ---------- 367156 SQL> create table target_comp row store compress basic as select * from source_data where 1=2; Table created. SQL> select pct_free from user_tables where table_name = 'TARGET_COMP'; PCT_FREE ---------- 0 SQL> insert /*+ APPEND */ into target_comp 2 select * from source_data; 367156 rows created. SQL> commit; Commit complete. SQL> insert /*+ APPEND */ into target_comp 2 select * from source_data; 367156 rows created. SQL> commit; Commit complete. SQL> insert /*+ APPEND */ into target_comp 2 select * from source_data; 367156 rows created. SQL> commit; Commit complete. SQL> SQL> analyze table target_comp compute statistics; Table analyzed. SQL> select chain_cnt, blocks from user_tables where table_name = 'TARGET_COMP'; CHAIN_CNT BLOCKS ---------- ---------- 0 4452 SQL> SQL> exec dbms_stats.gather_table_stats('','TARGET_COMP'); PL/SQL procedure successfully completed. SQL> select num_rows, chain_cnt, blocks from user_tables where table_name = 'TARGET_COMP'; NUM_ROWS CHAIN_CNT BLOCKS ---------- ---------- ---------- 1101468 0 4452 SQL>
So we have a table with 1.1 million rows and no Row Chaining.
What happens if we update about 20% of the rows ?
SQL> begin 2 for rec in (select rowid from target_comp where rownum < 220001) 3 loop 4 update target_comp set owner=owner where rowid=rec.rowid; 5 commit; 6 end loop; 7 end; 8 / PL/SQL procedure successfully completed. SQL> SQL> commit; Commit complete. SQL> analyze table target_comp compute statistics; Table analyzed. SQL> select chain_cnt, blocks from user_tables where table_name = 'TARGET_COMP'; CHAIN_CNT BLOCKS ---------- ---------- 202189 7882 SQL> exec dbms_stats.gather_table_stats('','TARGET_COMP'); PL/SQL procedure successfully completed. SQL> select num_rows, chain_cnt, blocks from user_tables where table_name = 'TARGET_COMP'; NUM_ROWS CHAIN_CNT BLOCKS ---------- ---------- ---------- 1101468 202189 7882 SQL>
I have updated 220,000 rows without actually increasing the notional length of each row (I set OWNER=OWNER). Yet, the CHAIN_CNT is now 202,189 and the table's HighWaterMark has expanded from 4,452 blocks to 7,882 blocks. A significant increase !
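A quick back-of-the-envelope check on those numbers (a Python sketch using only the figures quoted in the transcript above):

```python
# Figures quoted from the two ANALYZE runs above.
num_rows = 1101468
chain_cnt_after = 202189
blocks_before, blocks_after = 4452, 7882

chained_fraction = chain_cnt_after / num_rows
block_growth = (blocks_after - blocks_before) / blocks_before

print(f"{chained_fraction:.1%} of all rows are now chained")
print(f"{block_growth:.0%} growth in blocks below the HWM")
```

So a single no-op-looking UPDATE of 20% of the rows left roughly 18% of all rows chained and grew the segment by about three-quarters.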
(Your mileage may vary in your tests !)
It was Jonathan Lewis who suggested getting the Chain Count (or LIST CHAINED ROWS) to understand the impact of UPDATEs on a table with BASIC compression.
.
.
.
05 May, 2016
Compression -- 6b : Advanced Index Compression (revisited)
Following up on my earlier post on 12.1.0.2 Advanced Index Compression, one of my readers asked what would be the difference if I reversed the order of columns in the chosen index.
My defined index was on (OWNER, OBJECT_TYPE, OBJECT_NAME) --- defined as being from the column with the fewest distinct values to the most. This ordering is best compressible with Index Key Compression (also known as Prefix Compression). If I reverse the order, Index Key Compression for the two leading columns wouldn't deliver the same level of compression. The question is whether Advanced Index Compression can intelligently handle the reversal.
SQL> create index target_data_ndx_3_comp on 2 target_data(object_name, object_type, owner) compress 2; Index created. SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_3_COMP'); PL/SQL procedure successfully completed. SQL> select leaf_blocks 2 from user_indexes 3 where index_name = 'TARGET_DATA_NDX_3_COMP' 4 / LEAF_BLOCKS ----------- 3091 SQL>
Surprisingly, this index with Prefix 2 on (OBJECT_NAME, OBJECT_TYPE) is, at 3,091 leaf blocks, smaller than the previous index with Prefix 2 on (OWNER, OBJECT_TYPE) at 5,508 leaf blocks.
Continuing with Prefix 3
SQL> drop index target_data_ndx_3_comp; Index dropped. SQL> create index target_data_ndx_3_comp on 2 target_data(object_name, object_type, owner) compress 3; Index created. SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_3_COMP'); PL/SQL procedure successfully completed. SQL> select leaf_blocks 2 from user_indexes 3 where index_name = 'TARGET_DATA_NDX_3_COMP' 4 / LEAF_BLOCKS ----------- 2277 SQL>
At 2,277 leaf blocks it is, as expected, the same size with Prefix 3 on (OWNER, OBJECT_TYPE, OBJECT_NAME). Since the entire index key is specified as the Prefix, both indexes would be the same size.
Going on to Advanced Index Compression
SQL> drop index target_data_ndx_3_comp; Index dropped. SQL> create index target_data_ndx_4_advcomp on 2 target_data(object_name, object_type, owner) 3 compress advanced low 4 / Index created. SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_4_ADVCOMP'); PL/SQL procedure successfully completed. SQL> select leaf_blocks 2 from user_indexes 3 where index_name = 'TARGET_DATA_NDX_4_ADVCOMP' 4 / LEAF_BLOCKS ----------- 2277 SQL>
This is, again, as expected. Advanced Index Compression results in the same size irrespective of the ordering of the columns.
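Putting the measured leaf-block counts side by side (the 7,289-block figure is the uncompressed index reported in the earlier "Compression -- 6" post), a quick sketch of the relative savings:

```python
# Leaf-block counts as measured in these posts.
uncompressed = 7289
sizes = {
    "prefix 2 (OWNER, OBJECT_TYPE)":       5508,
    "prefix 2 (OBJECT_NAME, OBJECT_TYPE)": 3091,
    "prefix 3 / COMPRESS ADVANCED LOW":    2277,
}
for label, leaf_blocks in sizes.items():
    saving = 1 - leaf_blocks / uncompressed
    print(f"{label:40s} {leaf_blocks:5d} leaf blocks ({saving:.0%} smaller)")
```

Advanced Compression matched the best manually-chosen prefix in both column orderings, at roughly a two-thirds reduction against the uncompressed index.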
The advantage of Advanced Index Compression over Key or Prefix Compression is that the DBA does not need to determine the Prefix for compression. He does not have to spend time to analyze the data and compare the number of distinct values for each of the columns in the composite index.
.
.
.
03 May, 2016
Compression -- 6 : Advanced Index Compression
Earlier, I had covered Index (Key) Compression which is included in the Enterprise Edition.
In Key Compression, the DBA must specify the Prefix (i.e. number of leading columns in a composite index) that must be used as the compression key.
12.1.0.2 Advanced Index Compression does not require the DBA to manually identify the prefix key length. Advanced Index Compression auto(magically) identifies the optimal prefix columns.
(Note : Advanced Index Compression requires the Advanced Compression Option licence and is available only in 12.1.0.2 and higher)
SQL> create table target_data as select * from source_data where 1=2; Table created. SQL> insert /*+ APPEND */ into target_data select * from source_data; 367156 rows created. SQL> commit; Commit complete. SQL> insert /*+ APPEND */ into target_data select * from source_data; 367156 rows created. SQL> commit; Commit complete. SQL> insert /*+ APPEND */ into target_data select * from source_data; 367156 rows created. SQL> commit; Commit complete. SQL>
Creating a Key Compression Index by specifying the Prefix size (the leading 2 columns) :
SQL> create index target_data_ndx_1_comp on 2 target_data (owner, object_type, object_name) compress 2; Index created. SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_1_COMP'); PL/SQL procedure successfully completed. SQL> select leaf_blocks 2 from user_indexes 3 where index_name = 'TARGET_DATA_NDX_1_COMP' 4 / LEAF_BLOCKS ----------- 5508 SQL>
Note how I specified "2" as the Prefix size as I want to compress on repeated values of the two leading columns, (OWNER, OBJECT_TYPE).
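For illustration only, here is a toy model of what Prefix (Key) Compression does: within a sorted run of keys, each distinct prefix of the leading columns is stored once and only the suffixes repeat. The function and sample keys below are hypothetical and do not reflect Oracle's actual leaf-block format:

```python
from itertools import groupby

def prefix_compressed_entries(keys, prefix_len):
    """Toy model of index prefix compression: count how many distinct
    prefixes of the first prefix_len columns must be stored, versus
    one suffix entry per key. Returns (prefix_count, suffix_count)."""
    keys = sorted(keys)
    groups = groupby(keys, key=lambda k: k[:prefix_len])
    n_prefixes = sum(1 for _ in groups)
    return n_prefixes, len(keys)

# Hypothetical sample keys shaped like (owner, object_type, object_name)
keys = [
    ("HEMANT", "TABLE", "T1"), ("HEMANT", "TABLE", "T2"),
    ("HEMANT", "INDEX", "I1"), ("SYS",    "TABLE", "DUAL"),
]
print(prefix_compressed_entries(keys, 2))  # (3, 4): 3 stored prefixes, 4 suffixes
```

The more repetition there is in the leading columns, the fewer prefixes need to be stored, which is why a low-cardinality leading column compresses best.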
Using Advanced Index Compression by specifying "COMPRESS ADVANCED LOW" and letting Oracle decide on the compression strategy in each leaf block :
SQL> drop index target_data_ndx_1_comp; Index dropped. SQL> create index target_data_ndx_2_advcomp on 2 target_data (owner, object_type, object_name) 3 compress advanced low; Index created. SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_2_ADVCOMP'); PL/SQL procedure successfully completed. SQL> select leaf_blocks 2 from user_indexes 3 where index_name = 'TARGET_DATA_NDX_2_ADVCOMP' 4 / LEAF_BLOCKS ----------- 2277 SQL>
Wow, that's significantly smaller. What's more, I did not have to spend time analyzing the data and the index definition to identify the "correct" Prefix size.
However, it is now possible to specify the entire composite key as the Prefix, although that is not what I would have done in earlier versions. Identifying the Prefix size requires analyzing the data.
SQL> create index target_data_ndx_1_comp on 2 target_data (owner, object_type, object_name) compress 3 3 / Index created. SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_1_COMP'); PL/SQL procedure successfully completed. SQL> select leaf_blocks 2 from user_indexes 3 where index_name = 'TARGET_DATA_NDX_1_COMP' 4 / LEAF_BLOCKS ----------- 2277 SQL>
So, the Advanced option allows me to let Oracle automatically decide the appropriate mechanism to compress the Index keys.
UPDATE : Also see the subsequent test with a reversal of the columns in the composite index.
Just for comparison, here is a regular index :
SQL> drop index target_data_ndx_2_advcomp; Index dropped. SQL> create index target_data_ndx_3_nocomp on 2 target_data (owner, object_type, object_name) 3 / Index created. SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_3_NOCOMP'); PL/SQL procedure successfully completed. SQL> select leaf_blocks 2 from user_indexes 3 where index_name = 'TARGET_DATA_NDX_3_NOCOMP' 4 / LEAF_BLOCKS ----------- 7289 SQL>
That is a much larger regular index !
.
.
.
02 May, 2016
FBDA -- 7 : Maintaining Partitioned Source Table
Taking up the TEST_FBDA_PARTITIONED table, let's look at a couple of Partition Maintenance operations.
SQL> select partition_name, high_value, num_rows 2 from user_tab_partitions 3 where table_name = 'TEST_FBDA_PARTITIONED' 4 order by partition_position 5 / PARTITION_NAME HIGH_VALUE NUM_ROWS ---------------- ------------------------- ---------- P_100 101 100 P_200 201 100 P_300 301 100 P_400 401 100 P_MAX MAXVALUE 301 SQL>
Let's try a TRUNCATE PARTITION
SQL> alter table test_fbda_partitioned truncate partition p_100; Table truncated. SQL>
So, that's supported.
Let's try a SPLIT PARTITION
SQL> alter table test_fbda_partitioned 2 split partition p_max at (501) 3 into (partition p_500, partition p_max) 4 / alter table test_fbda_partitioned * ERROR at line 1: ORA-55610: Invalid DDL statement on history-tracked table SQL>
So, a SPLIT PARTITION fails. We need to DISASSOCIATE the Flashback Archive.
SQL> execute dbms_flashback_archive.disassociate_fba('HEMANT','TEST_FBDA_PARTITIONED'); PL/SQL procedure successfully completed. SQL> select table_name, flashback_archive_name, archive_table_name, status 2 from user_flashback_archive_tables 3 where table_name = 'TEST_FBDA_PARTITIONED' 4 / TABLE_NAME -------------------------------------------------------------------------------- FLASHBACK_ARCHIVE_NAME -------------------------------------------------------------------------------- ARCHIVE_TABLE_NAME STATUS ----------------------------------------------------- ------------- TEST_FBDA_PARTITIONED FBDA SYS_FBA_HIST_93342 DISASSOCIATED SQL> SQL> alter table test_fbda_partitioned 2 split partition p_max at (501) 3 into (partition p_500, partition p_max) 4 / Table altered. SQL> execute dbms_flashback_archive.reassociate_fba('HEMANT','TEST_FBDA_PARTITIONED'); PL/SQL procedure successfully completed. SQL> SQL> select table_name, flashback_archive_name, archive_table_name, status 2 from user_flashback_archive_tables 3 where table_name = 'TEST_FBDA_PARTITIONED' 4 / TABLE_NAME -------------------------------------------------------------------------------- FLASHBACK_ARCHIVE_NAME -------------------------------------------------------------------------------- ARCHIVE_TABLE_NAME STATUS ----------------------------------------------------- ------------- TEST_FBDA_PARTITIONED FBDA SYS_FBA_HIST_93342 ENABLED SQL>
While a Table is disassociated from its Flashback Archive, DDL that would not normally be permitted may be done under strict control to ensure that there is no data divergence.
.
.
.
Partition Storage -- 7 : Revisiting HWM - 2 (again)
Revisiting the previous test case, but with a larger AVG_ROW_LEN
SQL> create table part_table_large
       (id_column number(6), data_col_1 varchar2(100), data_col_2 varchar2(100))
     partition by range (id_column)
       (partition p_100 values less than (101),
        partition p_200 values less than (201),
        partition p_300 values less than (301),
        partition p_400 values less than (401),
        partition p_max values less than (maxvalue))
     /

Table created.

SQL> insert into part_table_large
     values (51, rpad('String',75,'X'), rpad('Another',60,'Y'))
     /

1 row created.

SQL> commit;

Commit complete.

SQL> declare
       cntr number;
     begin
       cntr := 0;
       while cntr < 100000
       loop
         insert into part_table_large
         values (25, rpad('String',75,'X'), rpad('Another',60,'Y'));
         commit;
         cntr := cntr + 1;
       end loop;
     end;
     /

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL> declare
       cntr number;
     begin
       cntr := 0;
       while cntr < 500001
       loop
         insert into part_table_large
         values (45, rpad('String',75,'X'), rpad('Another',60,'Y'));
         commit;
         cntr := cntr + 1;
       end loop;
     end;
     /

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL> declare
       cntr number;
     begin
       cntr := 0;
       while cntr < 500001
       loop
         insert into part_table_large
         values (55, rpad('String',75,'X'), rpad('Another',60,'Y'));
         commit;
         cntr := cntr + 1;
       end loop;
     end;
     /

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_LARGE',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
     from user_tab_partitions
     where table_name = 'PART_TABLE_LARGE'
     and partition_name = 'P_100'
     /

AVG_ROW_LEN   NUM_ROWS     BLOCKS
----------- ---------- ----------
        140    1100003      22349

SQL> alter table part_table_large move partition p_100 ;

Table altered.

SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_LARGE',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
     from user_tab_partitions
     where table_name = 'PART_TABLE_LARGE'
     and partition_name = 'P_100'
     /

AVG_ROW_LEN   NUM_ROWS     BLOCKS
----------- ---------- ----------
        140    1100003      22626

SQL> select extent_id, blocks
     from dba_extents
     where segment_name = 'PART_TABLE_LARGE'
     and segment_type = 'TABLE PARTITION'
     and partition_name = 'P_100'
     and owner = 'HEMANT'
     order by 1
     /

 EXTENT_ID     BLOCKS
---------- ----------
         0       1024
         1       1024
         2       1024
         3       1024
         4       1024
         5       1024
         6       1024
         7       1024
         8       1024
         9       1024
        10       1024
        11        512
        12       1024
        13       1024
        14       1024
        15       1024
        16       1024
        17       1024
        18       1024
        19       1024
        20       1024
        21       1024
        22       1024

23 rows selected.

SQL>
Aha ! Unlike the previous case (where, with an AVG_ROW_LEN of 11, a MOVE reduced the HWM from 3,022 to 2,484), with a larger row size the HWM has moved from 22,349 to 22,626.
So, space consumption is a factor of both the AVG_ROW_LEN and the manner in which the rows are inserted / relocated.
SQL> l
  1  select avg_row_len*num_rows*1.2/8192 Expected_Blocks, Blocks
  2  from user_tab_partitions
  3  where table_name = 'PART_TABLE_LARGE'
  4* and partition_name = 'P_100'
SQL> /

EXPECTED_BLOCKS     BLOCKS
--------------- ----------
     22558.6553      22626

SQL>
Also, see how the "Expected Blocks" count seems more accurate than earlier.
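The "Expected Blocks" estimate in that query is simply the total row bytes with a 20% allowance for overhead, divided by the 8KB block size; reproduced here as a quick check against the numbers above:

```python
# Figures from the post-MOVE statistics above; 8192 is the 8KB block size.
avg_row_len, num_rows, block_size = 140, 1100003, 8192

expected_blocks = avg_row_len * num_rows * 1.2 / block_size
print(round(expected_blocks, 4))  # 22558.6553, vs the actual 22626 blocks
```

With the larger AVG_ROW_LEN the simple estimate lands within about 0.3% of the actual block count, much closer than in the earlier small-row test.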
.
.
.