When you see ORA-1555s, do you just go ahead and increase UNDO_RETENTION?
Recently, I had a string of ORA-1555s occurring frequently through the day.
One quick suggestion was to increase UNDO_RETENTION. Some DBAs might do that.
However, that would have been jumping to a conclusion.
This was a database that had been running 9.2 for more than 2 years. I had one other
9.2 database with the same schema, usage and comparable size (for another business
unit) and wasn't seeing ORA-1555s there.
The trace files showed me that these ORA-1555s were always for the same SQL.
I knew that this SQL was a Refresh Query being executed to refresh a Materialized View
in another database.
Apparently, only this query was reporting ORA-1555s, at about 1 out of every 4
refreshes (the refresh being hourly). However, later, a developer also reported a
similar query on the same tables taking a long time (but not yet erroring out with ORA-1555).
So, the fix was not to increase UNDO_RETENTION but to tune/optimize the tables
and specific queries. The rest of the database schema had no issues.
13 comments:
What if the query was already tuned to its maximum? Under those circumstances, would an UNDO_RETENTION increase be the answer?
Thanks.
MG
Yes, you may need to increase UNDO_RETENTION
(and that is what "oerr" or the Error Messages manual may tell you). Do so if you really know that your UNDO_RETENTION is too low _and_ you cannot tune the query.
Your UNDO_RETENTION would be seen to be too low if you have _multiple_ queries reporting ORA-1555.
If you have only 1 query that reports ORA-1555 then you know that the rest of the database (application) is comfortable with the UNDO_RETENTION. So go back and look at that query again.
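If you are on Automatic Undo Management, V$UNDOSTAT is a quick way to check that "multiple queries" condition before touching UNDO_RETENTION. A minimal sketch (V$UNDOSTAT is a standard view; each row covers a 10-minute interval):

SELECT begin_time,
       end_time,
       maxquerylen,  -- duration (in seconds) of the longest query in the interval
       ssolderrcnt   -- ORA-1555 ("snapshot too old") errors raised in the interval
  FROM v$undostat
 WHERE ssolderrcnt > 0
 ORDER BY begin_time;

If the errors are spread across many intervals and many queries, UNDO_RETENTION deserves a look; if they cluster around one query's schedule, go back and tune that query.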
ORA-1555 can also come from a long-running query, especially a cursor, no matter how much of the rollback segment is free, even with 70% left.
I used to hit the ORA-1555 error every day; it made me sleepless. But we split that cursor into 3 sub-programs and ran them at the same time.
Luckily, it worked.
The ORA-1555 error message in earlier versions used to have two components "snapshot too old OR rollback segment too small". The first component made it clear that this error could occur in times of significant transaction volumes when the running query takes so long that, because of the occurrence of a large number of updates, it is unable to recreate a Read Consistent image of the data.
See Tom Kyte's explanation at http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:275215756923 (or check my post of 04-Apr-07 for the link)
In earlier versions, before AUM and UNDO_RETENTION, DBAs knew that Rollback Segment sizing and Query Tuning went hand in hand. Many of the ORA-1555 error situations were fixed by Query Tuning (redesigning the tables and indexes, rewriting the SQL and predicates, breaking up the SQL into multiple statements, identifying non-peak periods), not by resizing to larger (or more) rollback segments.
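A contrived two-session illustration of that first component (table T and the small-undo setup are assumptions for the demo, not a script to run as-is):

-- Session 1: start a long-running query; it must reconstruct a
-- read-consistent image of every block as of the query's start time
SELECT owner, COUNT(*) FROM t GROUP BY owner;

-- Session 2, concurrently: many small committed transactions; each
-- COMMIT makes that transaction's undo eligible for reuse
BEGIN
  FOR i IN 1 .. 100000 LOOP
    UPDATE t SET object_name = object_name WHERE ROWNUM = 1;
    COMMIT;
  END LOOP;
END;
/
-- Once Session 1 needs undo that has already been overwritten,
-- it fails with ORA-01555: snapshot too old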
I have a big table with 3 billion rows and would like to rebuild it in as little time as possible.
I ran 3 PL/SQL scripts; each script does a full table scan to get a different range of data and inserts it into a new table, committing every 200,000 rows (FORALL). After 16 hours I got the ORA-1555 (Oracle 9.2), but I am only doing an insert and I am alone in the database. How can I avoid the ORA-1555 on an insert SQL?
Marcel,
A PL/SQL script is likely the slowest way to copy rows from one table to another, even if you do FORALL operations. You would use PL/SQL only if you need additional processing / transformation on the rows that SQL cannot handle. It is likely that you don't close the cursor at every commit, so it remains open and the ORA-1555s are caused by a Fetch Across Commit (a Google search, or a search on support.oracle.com or asktom.oracle.com, will help you).
To help speed up the insert and reduce the undo and redo that it creates (see the sketch after this list):
Set the target table to NOLOGGING and use Direct Path (with the APPEND Hint).
If you are using the Enterprise Edition, you can do a Parallel Insert.
Ensure that you don't have indexes present on the new table before you insert.
See http://hemantoracledba.blogspot.com/2008/05/append-nologging-and-indexes.html
If you have the Partitioning Option, create the new table as a Partitioned table.
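For example, a minimal sketch with placeholder names (NEW_TABLE, SOURCE_TABLE and SOME_COLUMN are illustrations, not from your system):

ALTER TABLE new_table NOLOGGING;

-- Direct-path insert: loads above the high-water mark, generating
-- minimal undo for the table data and (with NOLOGGING) minimal redo
INSERT /*+ APPEND */ INTO new_table
SELECT * FROM source_table;

COMMIT;

-- build indexes only after the load completes
CREATE INDEX new_table_n1 ON new_table (some_column) NOLOGGING;

Remember that NOLOGGING operations are not recoverable from the redo stream, so take a backup after the load.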
Thanks Hemant,
I am using PL/SQL to run more than one process, to decrease the execution time.
The 3 PL/SQL scripts have the same logic; only the WHERE clause changes.
The table being rebuilt is an IOT. The new table was created with NOLOGGING, the insert has the APPEND hint, and the cursor is opened with the parallel option. Unfortunately, I still get ORA-1555 after 17 hours of processing (350,000,000 rows, out of 3 billion in total).
Marcel,
Why do it with 3 PL/SQL scripts? Why not a single SQL statement in one pass (or even with PARALLEL hints on the INSERT and the SELECT)?
Are you deleting from the source table as you insert into the target?
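Something like this, as a sketch only (it reuses the table names and predicate from your scripts; note that whether the APPEND hint is honoured for an IOT target is version-dependent):

ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL ( T , 20 ) */ INTO costedevent_test_manual T
SELECT /*+ PARALLEL ( A , 20 ) */ *
  FROM costedevent A
 WHERE event_type_id = 6;

COMMIT;  -- parallel DML requires a commit (or rollback) before the
         -- session can query the table again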
Hemant K Chitale
Hi,
I cannot use one SQL to insert .. select parallel .., because this would spend 3TB for the UNDO tablespace.
Reasons to use 3 PL/SQL scripts:
1- trying to do it in less time
2- committing the work to avoid 3TB for the UNDO tablespace
Example:
declare
TYPE ARRAY IS TABLE OF costedevent%ROWTYPE;
l_data ARRAY;
CURSOR c IS
SELECT /*+ PARALLEL ( A , 20 ) */
ACCOUNT_NUM, EVENT_SEQ, EVENT_SOURCE, EVENT_TYPE_ID, EVENT_REF, CLUSTER_SUB_ID, CREATED_DTM, EVENT_DTM, EVENT_COST_MNY,
MANAGED_FILE_ID, ROW_NUMBER, MODIFIED_BOO, COST_CENTRE_ID, TARIFF_ID, EVENT_ATTR_1, EVENT_ATTR_2, EVENT_ATTR_3, EVENT_ATTR_4,
EVENT_ATTR_5, EVENT_ATTR_6, EVENT_ATTR_7, EVENT_ATTR_8, EVENT_ATTR_9, EVENT_ATTR_10, EVENT_ATTR_11, EVENT_ATTR_12, EVENT_ATTR_13,
EVENT_ATTR_14, EVENT_ATTR_15, EVENT_ATTR_16, EVENT_ATTR_17, EVENT_ATTR_18, EVENT_ATTR_19, EVENT_ATTR_20, EVENT_ATTR_21,
EVENT_ATTR_22, EVENT_ATTR_23, EVENT_ATTR_24, LOYALTY_POINTS, COMPETITOR_COST_MNY, INTERNAL_COST_MNY, EXTERNAL_COST_MNY,
FRAGMENT_NUMBER, IMPORTED_COST_MNY, IMPORTED_CURRENCY_CODE, RATING_DISCOUNTED_USAGE_TOTAL, PRE_DISCOUNTED_COST_MNY,
HIGHEST_PRIORITY_DISCOUNT_ID, TAX_OVERRIDE_ID, UST_CATEGORY_ID, UST_CODE_ID, ORIGINAL_ACCOUNT_NUM, RULE_NUMBER,
EVENT_ATTR_25, EVENT_ATTR_26, EVENT_ATTR_27, EVENT_ATTR_28, EVENT_ATTR_29, EVENT_ATTR_30, EVENT_ATTR_31, EVENT_ATTR_32,
EVENT_ATTR_33, EVENT_ATTR_34, EVENT_ATTR_35, EVENT_ATTR_36, HIGHEST_PRIORITY_PROD_SEQ, RECEIVABLE_CLASS_ID, REVENUE_CODE_ID, DISCOUNT_DATA,
PRIMARY_EVENT_REF,
INHOST_COSTED_BOO
from costedevent A where EVENT_TYPE_ID=6;
BEGIN
OPEN c;
LOOP
FETCH c BULK COLLECT INTO l_data LIMIT 200000;
FORALL i IN 1..l_data.COUNT
insert into COSTEDEVENT_TEST_manual values l_data(i);
-- this commit happens while cursor c is still open: the
-- "fetch across commit" that exposes the next fetch to ORA-1555
commit;
EXIT WHEN c%NOTFOUND;
END LOOP;
CLOSE c;
END;
/
commit;
declare
TYPE ARRAY IS TABLE OF costedevent%ROWTYPE;
l_data ARRAY;
CURSOR c IS
SELECT /*+ PARALLEL ( A , 20 ) */
ACCOUNT_NUM, EVENT_SEQ, EVENT_SOURCE, EVENT_TYPE_ID, EVENT_REF, CLUSTER_SUB_ID, CREATED_DTM, .. , INHOST_COSTED_BOO
from costedevent A where EVENT_TYPE_ID=6;
BEGIN
OPEN c;
LOOP
FETCH c BULK COLLECT INTO l_data LIMIT 200000;
FORALL i IN 1..l_data.COUNT
insert into COSTEDEVENT_TEST_manual values l_data(i);
commit;
EXIT WHEN c%NOTFOUND;
END LOOP;
CLOSE c;
END;
/
Hi,
I am not deleting or updating, only inserting.
I cannot use one SQL command (insert .. select PARALLEL ..)
because I don't have the space to create a 3TB undo tablespace.
I am using 3 PL/SQL scripts to try to decrease the total time for the reorg of the table.
Thanks.
Is it really an IOT with that many columns? There would be an overflow segment.
Yes, inserting into an IOT would generate Undo and Redo because it is effectively an index.
Your only choice would be to copy the rows in batches and have some way to identify or mark the rows already copied so that the next batch run doesn't copy the same rows again.
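A minimal sketch of what I mean (the key column ID, the target COSTEDEVENT_NEW and the one-row tracking table COPY_PROGRESS are assumptions for illustration):

DECLARE
  l_last_id NUMBER;
  l_step    CONSTANT NUMBER := 1000000;  -- width of the key range per batch
BEGIN
  -- resume from wherever the previous run stopped
  SELECT last_copied_id INTO l_last_id FROM copy_progress;

  INSERT /*+ APPEND */ INTO costedevent_new
  SELECT * FROM costedevent
   WHERE id >  l_last_id
     AND id <= l_last_id + l_step;

  UPDATE copy_progress SET last_copied_id = l_last_id + l_step;

  COMMIT;  -- one transaction per batch; no cursor is held open across
           -- the commit, so there is no fetch across commit
END;
/
-- rerun (or schedule) this block until the whole key range is covered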
Hemant K Chitale