Bringing Large Oracle Tables to SQL Server through DTS

  • I am trying to import 100+ tables from Oracle 10g, with varying numbers of records, into SQL Server 2000. The largest table has 40.1 million records and the smallest has 500.

    I am running into performance issues: it takes forever to bring the data across. Also, once 1.6 million records have been brought over to SQL 2000, I get an Oracle error. The Oracle DBA isn't concerned about this and shrugs it off as a SQL Server problem. Any suggestions?

    Can I do anything with parallel DTS packages?

    How can I interface with Oracle more efficiently?

    Any suggestions are appreciated.

  • Sounds like a lot of work!

    For the larger tables, I would suggest using DTS to create a file on the SQL Server machine, and then either bcp in the file or use the BULK INSERT SQL statement to load it into the destination table. Use a batch size of perhaps 100,000 rows, and use the tablock hint if possible. Set the maximum error count higher than the default of 10, and if using bcp, use the -eErrorFileName parameter so you know which rows fail. BULK INSERT is supposed to be faster, but bcp is a little more flexible. Either will let you pick up where you left off, via the firstrow parameter, in the event of a failure.
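
    For instance, a minimal BULK INSERT sketch along those lines; the table name, file path, and delimiters are just placeholders for whatever your DTS export produces (bcp has the equivalent -b, -m, -e and -F switches):

    BULK INSERT dbo.MyBigTable
    FROM 'D:\staging\MyBigTable.dat'
    WITH (
        FIELDTERMINATOR = '|',
        ROWTERMINATOR = '\n',
        BATCHSIZE = 100000,   -- commit every 100,000 rows
        MAXERRORS = 1000,     -- don't stop at the default of 10
        TABLOCK,              -- table lock; allows faster, minimally logged loads
        FIRSTROW = 1          -- bump this up to resume after a failure
    )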

    You may want to rtrim() any character columns in the Oracle query to reduce network traffic, destination size, etc., especially if they are the char datatype (padded to length with spaces). You may also want to have Oracle order the results (order by) if the destination table has a clustered index.
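
    Something along these lines in the Oracle-side source query would do it (table and column names are only placeholders):

    select rtrim(cust_name) as cust_name,
           rtrim(cust_addr) as cust_addr,
           order_date,
           amount
    from   myschema.my_big_table
    order  by order_date   -- match the clustered index key on the SQL Server side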

    Is this a one-time import, or will you be doing it regularly? Parallelization is possible and beneficial, either by using unconnected tasks within the package or by starting separate packages (see the dtsrun sketch below).
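
    If you go the separate-packages route, a small batch file can kick several of them off at once with dtsrun; the server and package names here are placeholders, and /E means a trusted connection:

    start dtsrun /S MySqlServer /E /N OracleLoad_Group1
    start dtsrun /S MySqlServer /E /N OracleLoad_Group2
    start dtsrun /S MySqlServer /E /N OracleLoad_Group3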

    It may be easier to do some of the smaller tables in SQL with the openquery function:

    insert mySQLTable select * from openquery(myOracleServer, 'select * from myOracleTable')

    The data can be massaged on either the Oracle end of the query or, if necessary (SQL Server-only datatypes), on the SQL Server end.
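
    If the linked server isn't set up yet, something like this should do it; the linked server name, TNS alias, and login are placeholders, and it assumes the Microsoft Oracle OLE DB provider (MSDAORA):

    EXEC sp_addlinkedserver @server = 'myOracleServer',
        @srvproduct = 'Oracle',
        @provider = 'MSDAORA',
        @datasrc = 'MyOracleTNS'
    EXEC sp_addlinkedsrvlogin @rmtsrvname = 'myOracleServer',
        @useself = 'false',
        @rmtuser = 'scott',
        @rmtpassword = 'tiger'

    insert dbo.mySQLTable
    select * from openquery(myOracleServer, 'select * from myschema.myOracleTable')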

    Optimizing disk I/O will be a key factor for performance; it will help if you have lots of drives. For more ideas, see:

    http://www.sqlsavior.com/8HoursTo2.html

    Lastly, if the Oracle people are willing, it would probably be faster if they created the larger files for you - perhaps not worth asking if this is a one-time load, but if you are doing it regularly...

    Hope this helps.

  • Also check that your SQL Server database is large enough to receive all the Oracle data. It's probably set to autogrow (the default), but dynamically expanding a database file takes server resources while the load is running.
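
    You can pre-grow the files before the load, roughly like this (the database, logical file names, and sizes are placeholders):

    ALTER DATABASE MyStagingDB MODIFY FILE (NAME = MyStagingDB_Data, SIZE = 150000MB)
    ALTER DATABASE MyStagingDB MODIFY FILE (NAME = MyStagingDB_Log, SIZE = 20000MB)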

    Are you getting an Oracle error, i.e. one whose message starts with ORA-? If so, it's definitely an Oracle error and the Oracle DBA should acknowledge this. It may be a client-side error (e.g. timeout, buffers exceeded), but it's still an Oracle error!

    Maybe do the really massive tables (the ones over, say, 10 million rows) in their own DTS packages and, as the previous respondent says, do it in two stages: Oracle -> file, then file -> SQL Server.

    I agree this sounds like a massive job. I hope your peers and managers realise this and are giving you sufficient time and support rather than just dumping it on you and expecting it to be done yesterday.

  • Hi

    I'm transferring data from Oracle to SQL Server via plain Transact-SQL, not DTS.

    That way I was able to transfer a table with about 192 million records; the final size was 105 GB.

    with

    select * from openquery([TNSNames], 'select * from [tablespace].[tablename]')

    I transferred the records using a cursor that loads them into SQL Server batch by batch. It took time, about two days, but ran without any timeouts. The rough idea is sketched below.
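
    A simplified sketch of the idea, not the exact code; it assumes the source table has a numeric key column (id) you can range over, and the linked server and table names are placeholders:

    declare @lo int, @hi int, @step int, @sql nvarchar(4000)
    select @lo = 1, @step = 100000
    select @hi = maxid
    from openquery([TNSNames], 'select max(id) as maxid from myschema.mytable')

    while @lo <= @hi
    begin
        set @sql = N'insert dbo.myDestTable '
                 + N'select * from openquery([TNSNames], '
                 + N'''select * from myschema.mytable where id between '
                 + cast(@lo as nvarchar(12)) + N' and '
                 + cast(@lo + @step - 1 as nvarchar(12)) + N''')'
        exec (@sql)        -- each chunk runs as its own batch
        set @lo = @lo + @step
    end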

    Roger
