I have been able to use ARRAYSIZE in SQL*Plus to fetch more rows with each call and reduce the number of round-trips between the SQL*Plus client and the database (see the example below).
However, this does not work when fetching rows from one database to another via a Database Link (SET ARRAYSIZE is a SQL*Plus command, not an Oracle Database / SQL command).
If I were to fetch a few tens of thousands or hundreds of thousands of rows across a DBLink -- e.g. for a Materialized View refresh -- how can I tune the size of each fetch and reduce the number of round-trips between the two databases?
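To make the scenario concrete, the sort of cross-database fetch I have in mind is of this shape (the table name and the DBLink name REMOTEDB are only illustrative):

insert into local_copy
select * from test_txn_table@remotedb ;

-- or, equivalently, a Materialized View defined on the remote table
-- and refreshed with DBMS_MVIEW.REFRESH

Here the local database is itself the "client" fetching rows from the remote database, so a SET ARRAYSIZE in my SQL*Plus session has no bearing on those fetches.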
For example: in SQL*Plus I run the query
====================================================
select *
from
test_txn_table
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.03        368         85          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch    10284      0.82       4.42       1853      12300          0      154239
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    10286      0.84       4.45       2221      12385          0      154239
Rows     Row Source Operation
-------  ---------------------------------------------------
 154239  TABLE ACCESS FULL TEST_TXN_TABLE (cr=12300 pr=1853 pw=0 time=797235 us)
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                   10284        0.00          0.02
  db file scattered read                        143        0.48          3.29
  SQL*Net message from client                 10284        0.35        151.98
  db file sequential read                        15        0.08          0.20
I see 10,284 round-trips between the SQL*Plus client and the database server, matching the count of FETCH calls. That is what the default ARRAYSIZE of 15 produces: 154,239 rows at 15 rows per fetch is roughly 10,283 fetches.
====================================================
====================================================
If I re-run it with ARRAYSIZE 100 and PAGESIZE 600 (the larger PAGESIZE reduces the overhead that SQL*Plus incurs formatting page and column titles every 16 lines), the round-trip count drops sharply.
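The settings themselves are just two SQL*Plus commands:

SET ARRAYSIZE 100
SET PAGESIZE 600

With those in place, the same query traces as follows: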
select *
from
test_txn_table
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.33        348        186          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch     1544      0.67       2.81       1851       3680          0      154239
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1546      0.67       3.14       2199       3866          0      154239
Rows     Row Source Operation
-------  ---------------------------------------------------
 154239  TABLE ACCESS FULL TEST_TXN_TABLE (cr=3680 pr=1851 pw=0 time=466682 us)
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                    1544        0.00          0.00
  db file scattered read                        140        0.08          2.12
  SQL*Net message from client                  1544        0.68         55.60
  SQL*Net more data to client                  3976        0.00          0.08
  db file sequential read                         9        0.05          0.09
I now have only 1,544 round-trips (154,239 rows at 100 rows per fetch is roughly 1,543 FETCH calls).
====================================================
How can I get similar savings when the query is to fetch data from one database to another via a DBLink?
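One possibility I intend to test is driving the pull from PL/SQL with BULK COLLECT ... LIMIT, which at least makes the fetch array size explicit. This is only a sketch -- the remote table TEST_TXN_TABLE@REMOTEDB and the local table LOCAL_COPY are hypothetical names, and whether each LIMIT-sized fetch maps to a single round-trip between the two databases is precisely what a trace would have to confirm:

declare
   cursor c is select * from test_txn_table@remotedb ;
   type t_rows is table of c%rowtype ;
   l_rows  t_rows ;
begin
   open c ;
   loop
      -- fetch 500 rows per call instead of the default row-at-a-time
      fetch c bulk collect into l_rows limit 500 ;
      exit when l_rows.count = 0 ;
      -- array-insert the batch into the local copy
      forall i in 1 .. l_rows.count
         insert into local_copy values l_rows(i) ;
   end loop ;
   close c ;
   commit ;
end ;
/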
3 comments:
LOL! I found this posting and figured you had a solution :) We are having the same issue. When we trace on the remote side, we see a fetch for every row (or at least every other row).
We changed the SDU/MTU (Session Data Unit / network MTU) hoping to push more through the pipe at a time, but no joy. The fetches themselves took almost no time (less than 4% of the total), but I have to think the number of round-trips is related to the number of fetches, at least indirectly.
I will post back if I find something.
I know this doesn't answer our question, but we ended up fixing our problem by creating a pipe on the remote node, and exporting with the pipe as the destination file.
We then ssh'd from our destination to this remote host and cat'd this pipe to a local pipe.
We then imported using this local pipe as the source file. It was three times faster (30% of original time) than selecting over a database link.
Yes, that's quite an effective method for data transfers. If I understand the flow correctly, the mechanics are roughly these (host names, pipe paths, and the exp/imp parameters are illustrative only):
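## on the remote host : export into a named pipe
mkfifo /tmp/exp_pipe
exp scott/tiger tables=TEST_TXN_TABLE file=/tmp/exp_pipe &

## on the local host : stream the remote pipe over ssh into a local pipe ...
mkfifo /tmp/imp_pipe
ssh remotehost 'cat /tmp/exp_pipe' > /tmp/imp_pipe &

## ... and import from the local pipe
imp scott/tiger fromuser=SCOTT touser=SCOTT file=/tmp/imp_pipe

The two named pipes let exp, ssh and imp run concurrently, so the dump never lands on disk on either side.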
However, I have two requirements:
1. MV refreshes
2. Custom code that periodically pulls data from a remote database.
I could use a convoluted script which synchronises export-imports (e.g. rsh calls?) and replaces the local "MV" with the imported table -- something like the swap sketched below.
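For requirement 1, the "replace" step could be a three-statement swap once the import completes (again only a sketch -- the object names are hypothetical, and this forfeits the MV's own refresh machinery entirely):

-- import the fresh data as LOCAL_COPY_NEW, then swap it in
rename local_copy to local_copy_old ;
rename local_copy_new to local_copy ;
drop table local_copy_old ;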