Many application vendors still use regular (normal) tables to hold Temporary (Transient) Data.
Recently, I came across code like this being executed by a particular section of the application that allows users to run their own queries:
DELETE FROM TEMP_TABLE;
INSERT INTO TEMP_TABLE SELECT TID FROM A, B WHERE ....;
SELECT TID, C2, C3 FROM X, Y, TEMP_TABLE WHERE X.C1=TEMP_TABLE.TID ...
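(As an aside, this is exactly the pattern Oracle's Global Temporary Tables are meant to replace. A minimal sketch, assuming TID is a NUMBER, of recreating TEMP_TABLE as a GTT; each session then sees only its own rows, and the space is released automatically:)

CREATE GLOBAL TEMPORARY TABLE TEMP_TABLE (TID NUMBER)
ON COMMIT PRESERVE ROWS;  -- rows survive COMMITs but disappear at session end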
Since the data is "temporary" (it is DELETEd and re-INSERTed every time the table is accessed), the vendor did not think an Index was necessary: after all, the table was supposed to be [very?] small.
There being no Index on TEMP_TABLE, both the DELETE and the SELECT had to do Full Table Scans of TEMP_TABLE. Unfortunately, at some time in the recent past, TEMP_TABLE had grown to 49,000 blocks, because the particular "key" values that users query by can retrieve tens to thousands of rows; i.e., some INSERT in the past had loaded hundreds of thousands of rows. And since a DELETE does not lower the High Water Mark, every Full Table Scan thereafter still had to read all 49,000 blocks, however few rows the table actually held. So, although this part of the application was "easy to use", every user executing it suffered two such Full Table Scans, very frequently.
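A quick way to spot this symptom is to compare how far the segment has grown against the rows it currently holds. A sketch, assuming access to USER_SEGMENTS (the names match the example above):

SELECT S.BLOCKS AS ALLOCATED_BLOCKS,
       (SELECT COUNT(*) FROM TEMP_TABLE) AS CURRENT_ROWS
FROM   USER_SEGMENTS S
WHERE  S.SEGMENT_NAME = 'TEMP_TABLE';

A large ALLOCATED_BLOCKS against a small CURRENT_ROWS is the giveaway.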
The solution comprised two steps (sketched in SQL below):
a) Index TEMP_TABLE on TID
b) TRUNCATE TEMP_TABLE to reset the High Water Mark
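In SQL, the two steps would look like this (the index name is mine, purely illustrative):

CREATE INDEX TEMP_TABLE_TID_NDX ON TEMP_TABLE (TID);

TRUNCATE TABLE TEMP_TABLE;  -- unlike DELETE, this resets the High Water Mark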
I couldn't get permission for the TRUNCATE in a live environment, so I created the Index alone.
That made a significant difference for the user! I am still waiting for the TRUNCATE to help further.
UPDATE, 21-Sep: Again, last week, I was able to make a significant difference to a report. I found that the report (a query running for 1 hour) was repeatedly scanning a "temp" table which was expected to contain, and did contain, 0 rows. Unfortunately, the table had grown to a couple of hundred MBs. A quick TRUNCATE of the table reset the High Water Mark, and the query time was down to less than 10 seconds.
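For anyone who wants to see the effect for themselves, here is a small, illustrative demonstration (table name and sizes are made up; run it in a scratch schema):

CREATE TABLE HWM_DEMO (ID NUMBER, PADDING VARCHAR2(1000));

INSERT INTO HWM_DEMO
  SELECT LEVEL, RPAD('x', 1000, 'x')
  FROM DUAL CONNECT BY LEVEL <= 100000;
COMMIT;

DELETE FROM HWM_DEMO;  -- all 100,000 rows gone, but the High Water Mark stays put
COMMIT;
-- a Full Table Scan of HWM_DEMO still reads every block below the HWM

TRUNCATE TABLE HWM_DEMO;  -- the HWM is reset; the same scan now reads almost nothing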