February 20, 2006 at 8:09 am
Hello,
We have a DTS job that writes (data pump) from a SQL 2000 table to an Oracle 9i table. I found that, when writing any number of records (zero up to any number of SQL Server rows), SQL Server 2000 appears to buffer the Oracle table data prior to writing (dtsrun memory usage increases each day as the Oracle table grows). I tried running the DTS job against a test Oracle database with zero rows in the Oracle table, and the job ran fast (5 seconds). When I repeated the job (zero SQL rows written to a 1M-row Oracle table), it took several hours.
Any thoughts?
Thank you
Randy
February 20, 2006 at 9:36 am
Is this the same phenomenon as when doing a select with a filter on a linked server table? I used to have the problem of SQL Server downloading the entire table (6 mil+ records) and then filtering it locally.
February 20, 2006 at 9:59 am
I don't know. The SQL is simply a select <field1>, <field2>, etc ... from <table in SQL>. I tried running the SQL from Query Analyzer outside DTS, and it runs in <1 second when no records are present. It appears to be the transformation step buffering all Oracle records before figuring out there are no SQL records to write (or doing the same buffering when there are SQL records to write).
Thank you for responding.
Randy
February 20, 2006 at 10:00 am
A better option would be to use DTS to dump the data into a text file (this will be pretty fast) and then load it into Oracle using SQL*Loader (either conventional path or, if your operations allow, direct path). That will be much more robust and much faster.
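For anyone trying that route, here is a minimal sketch of what the SQL*Loader side might look like. The file names, table name, column names, and delimiter below are assumptions for illustration, not something from this thread:

```
-- load_target.ctl (hypothetical control file for the DTS text export)
LOAD DATA
INFILE 'export.txt'
APPEND
INTO TABLE target_table
FIELDS TERMINATED BY '|'
(field1, field2)
```

You would then invoke it with something like `sqlldr userid=user/pwd control=load_target.ctl` (add `direct=true` for a direct path load, subject to its constraints on indexes and triggers).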
February 20, 2006 at 10:07 am
I agree your suggestion is a better approach. I was just curious if anyone else had experienced the "buffer" issue (where dtsrun uses SQL Server memory proportional to the number of records in the receiving Oracle table).
Thank you
Randy
February 20, 2006 at 10:51 am
If you set it up as a linked server, I believe you can do a simple
INSERT INTO [fully qualified name] (field list)
SELECT field list FROM vw_Table --format fields and such
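If the linked-server route runs into the same problem of pulling the whole remote table over first, OPENQUERY is one way to keep the work on the Oracle side. A hedged sketch (the linked server name ORA_LINK, the table, and the column names are hypothetical):

```
-- The inner SELECT is executed on Oracle; SQL Server inserts
-- through its result set instead of scanning the remote table locally
INSERT INTO OPENQUERY(ORA_LINK, 'SELECT field1, field2 FROM target_table')
SELECT field1, field2
FROM vw_Table;
```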
February 21, 2006 at 6:53 am
How is the transfer designed in DTS?
I suspect that it may have something to do with unique constraints (PKs) or something similar and that SQL Server wants to check ahead of time for conflicts. I can't imagine any other reason for a transfer in one direction to require transfer in the other first.
I do transfers from Oracle to SQL but I have to go through SQL*Plus scripts running on a separate server because our admins are very afraid of putting Oracle code onto our production SQL Server.
SQL*Loader has its benefits, but DTS is not a bad environment if most of the work is done in SQL Server anyway.
--peter