06 March, 2007

Using Normal Tables for Temporary Data

Many application vendors still use regular (normal) tables to hold Temporary (Transient) Data.

Recently, I came across code like this being executed by a particular section of the application that allows users to run their own queries:
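The original snippet is not reproduced here; based on the description that follows (a DELETE, an INSERT and a SELECT against TEMP_TABLE, keyed on TID), the pattern was roughly this. This is a hypothetical reconstruction, not the vendor's actual code; the column and bind names are my own:

```sql
-- Hypothetical sketch of the vendor's pattern (not the actual code):
-- clear this user's previous working set, repopulate it, then query it
DELETE FROM temp_table WHERE tid = :tid;

INSERT INTO temp_table (tid, col1, col2)
  SELECT :tid, s.col1, s.col2
    FROM source_table s
   WHERE s.key_col = :user_key;

SELECT col1, col2 FROM temp_table WHERE tid = :tid;
```

With no index on TID, both the DELETE and the SELECT in such a pattern must scan every block below the table's High Water Mark.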

Since the data is "temporary" (it is DELETEd and INSERTed every time the table is accessed), the vendor did not think an Index was necessary -- after all, the table was supposed to be [very ?] small.
There being no Index on TEMP_TABLE, both the DELETE and the SELECT had to do Full-Table-Scans of TEMP_TABLE. Unfortunately, at some time in the recent past, TEMP_TABLE had grown to 49,000 blocks -- because the particular "key" values that the user queries by can retrieve tens to thousands of rows, i.e. the INSERT operation for some user in the past had inserted hundreds of thousands of rows. Although this part of the application was "easy to use", every user executing it thereafter suffered two Full-Table-Scans of TEMP_TABLE very frequently.
The solution comprised two steps :
a) Index TEMP_TABLE on TID
b) TRUNCATE TEMP_TABLE to reset the High Water Mark
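In SQL, the two steps above would look something like this (TEMP_TABLE and TID are from the post; the index name is my own):

```sql
-- a) index the "key" column the application filters on,
--    so the DELETE and SELECT can use an index range scan
CREATE INDEX temp_table_tid_i ON temp_table (tid);

-- b) reset the High Water Mark so any remaining full scans
--    no longer have to read 49,000 (mostly empty) blocks
TRUNCATE TABLE temp_table;
```

Note that a plain DELETE does not reset the High Water Mark; only TRUNCATE (or a rebuild/move) does, which is why step (b) matters even when the table holds 0 rows.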
I couldn't get permission for the Truncate in a live environment, so I just created the Index alone.
That made a significant difference for the user! Still waiting for the TRUNCATE to help further.

UPDATE : 21-Sep : Again, last week, I was able to make a significant difference to a report. I found that the report (a query running for 1 hour) was repeatedly scanning a "temp" table which was expected to, and did, contain 0 rows. Unfortunately, the table had grown to a couple of hundred MBs. A quick TRUNCATE of the table reset the High Water Mark, and the query time was down to less than 10 seconds.
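A quick way to spot this situation -- a "temporary" table whose allocated space has grown far beyond its row count -- is to compare the segment size against the rows it currently holds (table name here is a placeholder; USER_TABLES figures assume statistics have been gathered):

```sql
-- Allocated segment size (an upper bound on the High Water Mark)
SELECT bytes/1024/1024 AS mb, blocks
  FROM user_segments
 WHERE segment_name = 'TEMP_TABLE';

-- Rows and blocks as of the last statistics gathering
SELECT num_rows, blocks
  FROM user_tables
 WHERE table_name = 'TEMP_TABLE';
```

A table reporting hundreds of MBs but 0 rows is a candidate for the TRUNCATE described above.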


Anonymous said...

I have a similar problem. We have a transient table which sees frequent inserts and deletes. The table size does not go beyond ~5K records. The query on this table uses a full table scan. I explored the possibility of indexing. To test, I wrote a simple procedure which:

inserts x records at once
then, in a loop:
inserts 1 record
queries the record
deletes the just-inserted record

deletes all records
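A minimal PL/SQL sketch of the test loop described above (the table name, columns, and counts are all assumptions, not the commenter's actual code):

```sql
-- Hypothetical version of the test harness described in the comment
DECLARE
  v_row  transient_tab%ROWTYPE;
BEGIN
  -- seed the table with x records at once (x = 5000 here)
  INSERT INTO transient_tab (id, payload)
    SELECT level, 'seed' FROM dual CONNECT BY level <= 5000;

  FOR i IN 1 .. 1000 LOOP
    INSERT INTO transient_tab (id, payload)
      VALUES (10000 + i, 'txn');                                -- insert 1 record
    SELECT * INTO v_row
      FROM transient_tab WHERE id = 10000 + i;                  -- query the record
    DELETE FROM transient_tab WHERE id = 10000 + i;             -- delete it
  END LOOP;

  DELETE FROM transient_tab;                                    -- delete all records
  COMMIT;
END;
/
```

On a table this small, the cost of maintaining the index on every insert/delete can indeed outweigh the saving on the lookup, which would explain the result reported below.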

The performance with the index was a little worse than what we see without the index.

Is there any way I can improve performance here? Would caching this table help? How does caching work with tables that see frequent inserts/updates?

Hemant K Chitale said...

Indexing helps if it reduces the number of Buffer Gets that are required -- IF the table
is queried very frequently.

Caching the table keeps the table in the Buffer Cache rather than ageing it out immediately (as is the default for FullTableScans). (However, there is also a "_small_table_threshold" parameter that Oracle uses.)
If you know that your reads
are frequently waiting on 'db file scattered read' (doing a FullTableScan), caching the table {IF it is SMALL} will help.

However, that does not reduce the number of Logical Reads and Latch Gets that are still required.

Anonymous said...

Thanks for the reply. Yes, I see table scans happening in the queries.

Also the read is frequent.

It's like this -

An insert happens at the start of the transaction.
The record remains there for an average of 10 mins.
At the end of the transaction the record is queried (some 5 times) and ultimately deleted.

So do you see caching helping there? How does the caching work? Does it work like indexing (where I did not get any performance improvement)?
Also, should I try the "alter table cache" way, or is there a better way of doing it?
We are on Oracle 9 currently, though we are moving to Oracle 10 (with Data Guard) very soon.

Hemant K Chitale said...

Yes, the command is "ALTER TABLE tablename CACHE".
However, if you were to trace the session, waits on 'db file scattered read' would indicate that FullTableScans are looking for blocks not in the Buffer Cache. If you only ever see 'library cache' waits, the blocks are most likely already in the Buffer Cache. Query X$BH or V$BH to see if the table is present in the buffer cache.
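One way to run that check against V$BH, joining on the data object id (the table name here is a placeholder for your transient table):

```sql
-- Count the buffers currently in the Buffer Cache that belong to the table.
-- V$BH.OBJD corresponds to DBA_OBJECTS.DATA_OBJECT_ID.
SELECT COUNT(*) AS cached_buffers
  FROM v$bh bh
  JOIN dba_objects obj
    ON bh.objd = obj.data_object_id
 WHERE obj.object_name = 'TRANSIENT_TAB';
```

A count close to the table's block count suggests the scans are already being satisfied from the cache, in which case CACHE/KEEP would not reduce the Logical Reads.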
Or you could define a separate KEEP pool (you'd need to restart the instance) and ALTER the table to be in the KEEP pool.
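For the KEEP pool route, the steps would be along these lines (the cache size and table name are placeholders; as noted above, defining the pool requires an instance restart on this version):

```sql
-- In the init.ora / spfile, size the KEEP pool, e.g.:
--   db_keep_cache_size = 16M
-- then, after the restart, place the table in it:
ALTER TABLE transient_tab STORAGE (BUFFER_POOL KEEP);
```

Blocks of a table assigned to the KEEP pool are cached there on every read, so small, hot transient tables stay resident instead of competing with the DEFAULT pool.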