I will occasionally post "Some Common Errors" over the next few months.
1. Using a SELECT COUNT to test performance after adding an index.
Say a query has been running poorly, doing Full Table Scans. The DBA adds an index on the correct columns for the query predicates. Now, instead of running the whole query "as is", he substitutes all the columns of the SELECT clause with a single "COUNT(*)".
This is absolutely the wrong thing to do, as Oracle may well just use the index and not need to access the table at all. The "COST" and the number of block gets of such an operation would be much smaller than those of an operation that has to access at least one table column not present in the index!
Here is a simple demo (I've excluded the portion about how I created the STORES_LIST table) :
This is the STORES_LIST Table :
SQL> REM Describe the table
SQL> select blocks, avg_row_len, num_rows, sample_size from user_tables where table_name = 'STORES_LIST';
BLOCKS AVG_ROW_LEN NUM_ROWS SAMPLE_SIZE
---------- ----------- ---------- -----------
4930 80 405884 405884
SQL> desc stores_list
Name Null? Type
------------------------------------------------------------------------ -------- -------------------------------------------------
STORE_ID NOT NULL NUMBER
COUNTRY NOT NULL VARCHAR2(30)
STATE NOT NULL VARCHAR2(23)
STORE_NAME NOT NULL VARCHAR2(128)
STORE_OWNER VARCHAR2(128)
START_DATE DATE
SQL>
The Table has 405,884 rows (with 100% sampling).
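As a quick sanity check on those statistics, the table's size can be estimated in a couple of lines. This is a rough sketch; the 8 KB block size is an assumption (the post does not state the database's block size):

```python
# Back-of-the-envelope check of the USER_TABLES statistics shown above.
# An 8192-byte block size is an ASSUMPTION -- common, but not stated in the post.
BLOCK_SIZE = 8192

blocks = 4930          # BLOCKS from user_tables
avg_row_len = 80       # AVG_ROW_LEN
num_rows = 405_884     # NUM_ROWS

segment_bytes = blocks * BLOCK_SIZE      # space a full table scan must read
row_data_bytes = num_rows * avg_row_len  # raw row data held in those blocks

print(f"segment  ~ {segment_bytes / 2**20:.1f} MB")   # ~38.5 MB
print(f"row data ~ {row_data_bytes / 2**20:.1f} MB")  # ~31.0 MB
```

The two figures being in the same ballpark suggests the table is reasonably well packed, i.e. the 4,930-block full scan really is driven by the data volume.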
Currently, there is an Index on STORE_ID (which is a Unique Key). However, the user's query is not based on STORE_ID: (I've added some relevant statistics which the DBA/Developer might have looked at)
SQL> select country, state, store_name from stores_list where country = 'INDIA' and state like 'VI%';
23408 rows selected.
Elapsed: 00:00:00.50
Execution Plan
----------------------------------------------------------
Plan hash value: 3986898903
---------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 13377 | 587K| 1347 (1)| 00:00:17 |
|* 1 | TABLE ACCESS FULL| STORES_LIST | 13377 | 587K| 1347 (1)| 00:00:17 |
---------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("COUNTRY"='INDIA' AND "STATE" LIKE 'VI%')
4899 consistent gets
4845 physical reads
(The DBA/Developer has possibly ignored some statistics?)
The user requests the DBA's (or the Developer's) assistance to "tune" the query. The DBA/Developer determines that an index on (COUNTRY, STATE) would be useful. He then acts accordingly:
SQL> REM The DBA creates this index
SQL> create index stores_list_cntry_stt_ndx on stores_list (country,state);
Index created.
SQL> select index_name, num_rows, sample_size, distinct_keys, leaf_blocks, clustering_factor
2 from user_indexes where table_name = 'STORES_LIST';
INDEX_NAME NUM_ROWS SAMPLE_SIZE DISTINCT_KEYS LEAF_BLOCKS CLUSTERING_FACTOR
------------------------------ ---------- ----------- ------------- ----------- -----------------
STORES_LIST_STORE_ID_NDX 405884 405884 405884 869 5242
STORES_LIST_CNTRY_STT_NDX 405884 405884 253 1709 16629
SQL>
The DBA/Developer thinks he will get better performance. To test the user's query, he runs it again. However, he wishes to avoid displaying the 23,408 rows of output (after all, who wants to wait for the terminal to finish displaying twenty-thousand-odd rows?). He figures that he can run the same query but put a count(*) or count(1) "wrapper" around it! "Surely, it still has to fetch all the rows from the table," he says.
So, his "test" query is :
SQL> select count(1) from
2 (select country, state, store_name from stores_list where country = 'INDIA' and state like 'VI%');
Elapsed: 00:00:00.04
Execution Plan
----------------------------------------------------------
Plan hash value: 32650011
-----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 20 | 59 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 20 | | |
|* 2 | INDEX RANGE SCAN| STORES_LIST_CNTRY_STT_NDX | 13377 | 261K| 59 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("COUNTRY"='INDIA' AND "STATE" LIKE 'VI%')
filter("STATE" LIKE 'VI%')
88 consistent gets
102 physical reads
Once again, the DBA/Developer has ignored some information which he could have gleaned from the Execution Plan and the Statistics. All he sees is that execution time is down from 0.50 seconds to 0.04 seconds, and that the counts of consistent gets and physical reads have dropped from more than 4,800 to around 100.
He is happy that he has "tuned" the query and informs the user that query runtime is now only 8% of the previous runtime!
The user now runs his own query. .....
.... (and, surely, you know why I posted this) .... there's a catch! There is some performance improvement, but not on the same scale!
SQL> select country, state, store_name from stores_list where country = 'INDIA' and state like 'VI%';
23408 rows selected.
Elapsed: 00:00:00.30
Execution Plan
----------------------------------------------------------
Plan hash value: 2588565413
---------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 13377 | 587K| 609 (1)| 00:00:08 |
| 1 | TABLE ACCESS BY INDEX ROWID| STORES_LIST | 13377 | 587K| 609 (1)| 00:00:08 |
|* 2 | INDEX RANGE SCAN | STORES_LIST_CNTRY_STT_NDX | 13377 | | 59 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("COUNTRY"='INDIA' AND "STATE" LIKE 'VI%')
filter("STATE" LIKE 'VI%')
911 consistent gets
1393 physical reads
Runtime has improved by only 40% --- it is still 60% of what it was before, not the 8% that the DBA/Developer claimed.
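The percentages come straight from the measured timings; a quick check of the arithmetic:

```python
# Runtimes (in seconds) from the three executions shown above.
before = 0.50       # user's query, full table scan
after = 0.30        # user's query, with the new index
count_test = 0.04   # the DBA/Developer's COUNT(1) "test"

print(f"real runtime:    {after / before:.0%} of the original")       # 60%
print(f"claimed runtime: {count_test / before:.0%} of the original")  # 8%
```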
What has gone wrong? The analysis, for one.
The use of a count(*) wrapper, for another.
Let's take the second point -- the use of a count(*) wrapper -- first. If you look closely at the Execution Plan, you can see that Oracle
a) decides to read the index alone, and
b) expects only 1 row in the final result set.
The user's query fetches three columns, of which only two are in the index, so there is no way the real result set can be generated without going to the table to retrieve the values of the third column (STORE_NAME), which is not in the index. Those values exist only in the table rows. Yet Oracle doesn't go to the table at all!
Oracle sees the query as a request for a count(*) as the final result. The optimizer figures out that it can use the Index to read all the ROWIDs and do a count against the index. It doesn't need to know what the STORE_NAMEs are, because STORE_NAME is not a query predicate. The optimizer can satisfy the query submitted by the DBA/Developer by using the Index alone!
The expected Row Count of 1 should have alerted the DBA/Developer! A COUNT(*) operation returns 1 row -- a row that displays a count of rows! It does NOT return 23,408 rows.
The optimizer has been smart (and correct) in generating an execution plan to satisfy what the DBA/Developer requested -- a count of rows, not the actual rows themselves!
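The same covering-index behaviour can be reproduced outside Oracle. Here is a minimal sketch using Python's bundled sqlite3 module (table and index names mirror the post; the data is invented, and the exact plan wording varies by SQLite version): the plan for the count(1) wrapper reports a covering-index scan, while the real query's plan does not.

```python
import sqlite3

# A toy STORES_LIST with the same shape as in the post (data is made up).
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE stores_list (
                   store_id   INTEGER PRIMARY KEY,
                   country    TEXT NOT NULL,
                   state      TEXT NOT NULL,
                   store_name TEXT NOT NULL)""")
con.executemany(
    "INSERT INTO stores_list (country, state, store_name) VALUES (?, ?, ?)",
    [("INDIA", "VIDARBHA", f"Store {i}") for i in range(1000)])
con.execute("CREATE INDEX stores_list_cntry_stt_ndx "
            "ON stores_list (country, state)")

def plan(sql):
    # Column 3 of EXPLAIN QUERY PLAN output is the human-readable detail.
    return "; ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

real_query = ("SELECT country, state, store_name FROM stores_list "
              "WHERE country = 'INDIA' AND state LIKE 'VI%'")
wrapped = "SELECT COUNT(1) FROM (" + real_query + ")"

print(plan(real_query))  # no "COVERING" -- store_name must come from the table
print(plan(wrapped))     # "USING COVERING INDEX" -- the index alone answers it
```

The design choice SQLite makes here is exactly the one described above: once the outer count discards STORE_NAME, every referenced column lives in the index, so the table is never touched.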
Returning to the first point: what was wrong -- actually, missing -- in the analysis? Many a time, a person doesn't read all the output presented by an EXPLAIN PLAN (an example of which we just noted above) or all the Statistics presented by AUTOTRACE.
In the user's first query execution, these statistics were presented :
Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
4899 consistent gets
4845 physical reads
0 redo size
544997 bytes sent via SQL*Net to client
998 bytes received via SQL*Net from client
48 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
23408 rows processed
In his second query execution (after the index was added), these were the statistics :
Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
911 consistent gets
1393 physical reads
0 redo size
544997 bytes sent via SQL*Net to client
998 bytes received via SQL*Net from client
48 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
23408 rows processed
However, in the DBA/Developer's execution, the statistics were :
Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
88 consistent gets
102 physical reads
0 redo size
517 bytes sent via SQL*Net to client
492 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
We've already seen what that "1 rows processed" means: it is the size of the final result set. The database server sent only 1 row to the client program, sqlplus, in the query executed by the DBA/Developer. However, it sent 23,408 rows in each of the user's two executions.
What else is significant? Look at "bytes sent via SQL*Net to client" and "SQL*Net roundtrips to/from client"! In both executions by the user, 545 thousand bytes were sent in 48 round-trips. Both the database server and the client (sqlplus) incurred some overhead (albeit very slight, considering modern microprocessor speeds) in sending, receiving, presenting (and writing, if spooling to disk) 545 kilobytes of data! Furthermore, network latency impinged on each of the 48 round-trips. Assuming a latency of 2ms, that adds 96ms. If this were a WAN link with a higher latency -- say 10ms or 15ms -- we are talking of noticeably higher elapsed time to complete the result set! The DBA/Developer completely avoided both overheads -- the volume of data and the number of round-trips -- with his count(*) trick!
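The latency arithmetic is worth making explicit. A small sketch, using the round-trip counts reported by AUTOTRACE above (the latency figures themselves are illustrative assumptions):

```python
# Network round-trip overhead alone, for the AUTOTRACE figures shown above.
FULL_QUERY_TRIPS = 48  # SQL*Net roundtrips for the 23,408-row result
COUNT_TRIPS = 2        # roundtrips for the single-row COUNT(1) result

for latency_ms in (2, 10, 15):  # LAN vs. WAN latency assumptions
    full = FULL_QUERY_TRIPS * latency_ms
    count = COUNT_TRIPS * latency_ms
    print(f"{latency_ms:2d} ms latency: full query +{full} ms, "
          f"COUNT(1) test +{count} ms")
# At 2 ms per round-trip the real query pays ~96 ms in latency alone;
# at 15 ms it pays ~720 ms, while the COUNT(1) test pays only ~30 ms.
```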
So, the next time you, as a DBA or a Developer, try to use a count(*) wrapper around a user's query, remember that the query you submit to the optimizer is *different* from the one the user actually runs. The optimizer will identify that your requirements are not the same as the user's requirements and will generate a different execution plan, which may well run much faster than what the user's session sees.