In the previous blog, I demonstrated creating an empty table with BASIC Compression and then populating it.
What if you have a pre-existing table with data that you want to compress?
Here I start with a table that already has rows but doesn't have compression configured.
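(The creation of this table isn't shown here. As a hypothetical sketch, assuming DBA_OBJECTS as the row source, a comparable starting point could be built along these lines; the row source and doubling steps are illustrative, not the original setup:)

-- hypothetical setup : an ordinary, uncompressed table with some data
create table regular_1 as select * from dba_objects;
-- double the row count a couple of times to get a reasonably sized segment
insert into regular_1 select * from regular_1;
insert into regular_1 select * from regular_1;
commit;
exec dbms_stats.gather_table_stats('','REGULAR_1')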
PDB1@ORCL> select tablespace_name, segment_type, bytes/1048576
  2  from user_segments
  3  where segment_name = 'REGULAR_1';

TABLESPACE_NAME                SEGMENT_TYPE       BYTES/1048576
------------------------------ ------------------ -------------
HEMANT                         TABLE                         49

PDB1@ORCL> select pct_free, blocks, num_rows, blocks*8192/1048576
  2  from user_tables
  3  where table_name = 'REGULAR_1';

  PCT_FREE     BLOCKS   NUM_ROWS BLOCKS*8192/1048576
---------- ---------- ---------- -------------------
        10       6224     364496              48.625

PDB1@ORCL>
I then proceed to rebuild it as a COMPRESSed table.
PDB1@ORCL> alter table regular_1 move compress;

Table altered.

PDB1@ORCL> exec dbms_stats.gather_table_stats('','REGULAR_1');

PL/SQL procedure successfully completed.

PDB1@ORCL> select tablespace_name, segment_type, bytes/1048576
  2  from user_segments
  3  where segment_name = 'REGULAR_1';

TABLESPACE_NAME                SEGMENT_TYPE       BYTES/1048576
------------------------------ ------------------ -------------
HEMANT                         TABLE                         12

PDB1@ORCL> select pct_free, blocks, num_rows, blocks*8192/1048576
  2  from user_tables
  3  where table_name = 'REGULAR_1';

  PCT_FREE     BLOCKS   NUM_ROWS BLOCKS*8192/1048576
---------- ---------- ---------- -------------------
         0       1448     364496             11.3125

PDB1@ORCL>
Note that not only did the table shrink from about 49MB to less than 12MB, but PCT_FREE was also reset to 0! (BASIC compression assumes the data will not be routinely updated, so Oracle defaults PCT_FREE to 0 for the moved segment.)
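To confirm the attribute change (a check that wasn't part of the session above), the standard COMPRESSION and COMPRESS_FOR columns of USER_TABLES can be queried; for this table they should now show ENABLED and BASIC:

select table_name, compression, compress_for
  from user_tables
 where table_name = 'REGULAR_1';

Also bear in mind that an ALTER TABLE ... MOVE leaves any existing indexes on the table UNUSABLE, so each would need an ALTER INDEX ... REBUILD afterwards.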
.
.
.
2 comments:
How would it impact performance? And is the execution plan also expected to change post compression?
Rohit,
A Full Table Scan has fewer blocks to read, which reduces the I/O cost (provided the statistics are correctly updated). There may be some CPU overhead for decompressing the blocks, but the gain in I/O would outweigh that.
On the other hand, if you are doing indexed lookups, it makes no difference.
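(A sketch, not part of the original exchange: one way to see the reduced Full Table Scan cost for yourself is with the standard EXPLAIN PLAN / DBMS_XPLAN tools, run once before and once after the MOVE COMPRESS and statistics gathering:)

explain plan for select count(*) from regular_1;
-- display the plan; the full scan cost should be lower once the
-- table occupies fewer blocks and statistics have been refreshed
select * from table(dbms_xplan.display);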