Last Sunday, I wrote a note {for developers and analysts within my organisation} on the impact of INSERT and DELETE operations on UNDO and REDO requirements. As the note also included actual SQL statements against a demo table, I could show the Undo size from V$TRANSACTION and the Redo size from V$SESSTAT.
I included a few options (CTAS, APPEND, NOLOGGING, plain INSERT) and scenarios (new table, without indexes, with 2 indexes). The presence of indexes made a significant difference to the total REDO.
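For reference, the kind of measurement I used can be reproduced with queries like these (a minimal sketch; here V$MYSTAT is only used to identify the current session's SID, and the formatting is illustrative):

  -- Redo generated so far by the current session :
  SELECT sn.name, st.value
    FROM v$sesstat st, v$statname sn
   WHERE st.statistic# = sn.statistic#
     AND sn.name = 'redo size'
     AND st.sid = (SELECT sid FROM v$mystat WHERE ROWNUM = 1);

  -- Undo used by the current (open) transaction, in undo blocks and undo records :
  SELECT t.used_ublk, t.used_urec
    FROM v$transaction t, v$session s
   WHERE t.addr = s.taddr
     AND s.sid = (SELECT sid FROM v$mystat WHERE ROWNUM = 1);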
Learnings :
The UNDO for a DELETE is larger than that for an INSERT.
That is because the UNDO for a DELETE is actually the re-Insert of the whole row. So, it has to preserve the values of each of the columns of the row to be able to "Undo" the DELETE.
Conversely, the UNDO for an INSERT is a Delete of the row by ROWID – which does not have to preserve the values of the row.
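A rough way to see this for yourself (DEMO_TAB here is a hypothetical table already holding a few thousand rows):

  INSERT INTO demo_tab SELECT * FROM demo_tab;
  -- check USED_UBLK / USED_UREC in V$TRANSACTION : relatively small, roughly one
  -- "delete by rowid" undo record per inserted row
  ROLLBACK;

  DELETE FROM demo_tab;
  -- USED_UBLK is noticeably larger for a similar number of rows, because each undo
  -- record must carry the full column values needed to re-insert the row
  ROLLBACK;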
A DELETE causes more REDO than an INSERT.
Since the UNDO itself is also written to the REDO stream, the larger UNDO of a DELETE enlarges the REDO as well.
The presence of Indexes also increases the amount of UNDO and REDO.
If the Table has one or more Indexes present, an INSERT also has to update the Index(es). Similarly, a DELETE has to remove entries from the Index(es). Like the Table, the Index(es) is/are also protected by UNDO and REDO.
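As an illustration (index and column names are hypothetical), re-running the same DELETE after adding two indexes shows a clear jump in 'redo size' and in USED_UBLK:

  CREATE INDEX demo_tab_ndx1 ON demo_tab (object_id);
  CREATE INDEX demo_tab_ndx2 ON demo_tab (object_name);

  DELETE FROM demo_tab;
  -- the undo and redo now cover changes to the table blocks and to the blocks
  -- of both indexes
  ROLLBACK;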
DELETEs always cause REDO (i.e. they are always "logging").
There are NOLOGGING methods of achieving INSERTs, but DELETEs are almost always normal DML, protected by Redo {except in the case where the deletion is achieved by executing a TRUNCATE}. See {link to 9i Doc on nologging} for information on NOLOGGING operations.
Normal DML always causes REDO.
Even if a Table has been created or configured with the NOLOGGING option, all normal DML – including INSERTs – still causes logging. Again, see {link to 9i Doc on nologging}.
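A sketch of the difference (the table and source names are illustrative, and the redo saving from APPEND on a NOLOGGING table also assumes the database is not in FORCE LOGGING mode):

  ALTER TABLE demo_tab NOLOGGING;

  -- Direct-path (APPEND) insert : minimal redo for the table data
  -- (any indexes on the table are still maintained with full undo and redo)
  INSERT /*+ APPEND */ INTO demo_tab SELECT * FROM source_tab;
  COMMIT;

  -- Conventional insert into the very same NOLOGGING table : full redo is still generated
  INSERT INTO demo_tab SELECT * FROM source_tab;
  COMMIT;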
Practical Uses :
When doing large DELETEs, consider whether you can drop one or some of the Indexes on the Table to reduce the REDO (see the sketch below)
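For example (the index, column and predicate are illustrative):

  DROP INDEX demo_tab_ndx2;

  DELETE FROM demo_tab WHERE created_date < ADD_MONTHS(SYSDATE, -36);
  COMMIT;

  CREATE INDEX demo_tab_ndx2 ON demo_tab (object_name) NOLOGGING;
  ALTER INDEX demo_tab_ndx2 LOGGING;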
If doing a very large purge of a Table, it can be faster to use this method (a SQL sketch follows the steps) :
Create an empty new Table and set it to NOLOGGING
Use an INSERT with an APPEND hint to copy selective data from the original Table (the one to be purged) – i.e. copy the data that is to be retained
Build Indexes on the new Table (or build after the rename in the next step)
Drop the original Table and rename the new table to take the same name as the original Table
Build Indexes if not done earlier
Create Triggers
Recompile dependent objects
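A sketch of the steps above (all object names, the retention predicate and the index definition are illustrative; grants and constraints would also need to be carried over):

  CREATE TABLE big_tab_new NOLOGGING
    AS SELECT * FROM big_tab WHERE 1 = 0;          -- empty NOLOGGING copy

  INSERT /*+ APPEND */ INTO big_tab_new
    SELECT * FROM big_tab
     WHERE txn_date >= ADD_MONTHS(SYSDATE, -12);   -- data to be retained
  COMMIT;

  DROP TABLE big_tab;
  RENAME big_tab_new TO big_tab;

  CREATE INDEX big_tab_ndx1 ON big_tab (txn_date) NOLOGGING;
  -- recreate Triggers and recompile dependent objects as in the steps above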
When "bulk-copying" data from between two existing Tables (or to a new Table), you can use the same method as above
Rebuild Indexes after bulk deletes
Use the NOLOGGING option when rebuilding Indexes
After all such activities are completed, reset the Tables and Indexes to LOGGING
(although, actually, normal DML would ignore the NOLOGGING attribute on the Table, a table which appears to be NOLOGGING can worry a DBA who knows something about Nologging but not enough!)
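For the NOLOGGING rebuild and the reset to LOGGING described above, a sketch (index and table names are illustrative):

  ALTER INDEX big_tab_ndx1 REBUILD NOLOGGING;

  -- once the maintenance is over, reset the attributes :
  ALTER INDEX big_tab_ndx1 LOGGING;
  ALTER TABLE big_tab LOGGING;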
Avoid using NOLOGGING operations when Online Backups are running (particularly on OLTP)
Identify all scheduled jobs and ad-hoc executions which use NOLOGGING, and track them in case a Restoration and Recovery of the database needs to be executed: a Recovery through a NOLOGGING job would mark the affected blocks as "logically corrupt", requiring manual re-execution of the NOLOGGING job.
Critical databases could be set to FORCE LOGGING. It is good to periodically check for nologging operations (query V$DATAFILE for UNRECOVERABLE_CHANGE# and UNRECOVERABLE_TIME).
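For example:

  ALTER DATABASE FORCE LOGGING;

  -- datafiles touched by unrecoverable (NOLOGGING) operations :
  SELECT file#, name, unrecoverable_change#, unrecoverable_time
    FROM v$datafile
   WHERE unrecoverable_change# > 0;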
1 comment:
Thank you, the DBAs on the board are going to be grateful to you for this.