What is the most efficient way to move data from DB2 on a mainframe to SQL Server?

  • What is the most efficient way to move data from DB2 on a mainframe to SQL Server?

    Currently we are using DTS to do this process in two steps.

    1. Load the data from DB2 into a flat file on the server.

    2. Delete the contents of the table on SQL Server and then import the data from the flat file.

    I have discovered several things since "taking over" this project. Typically the upload from the text file takes twice as long as the download from the mainframe. Through investigation I determined that the programmer who set these up initially used DELETE rather than TRUNCATE TABLE commands; I found that out on a table with 5+ million rows. We would like to move only the records that changed during the day, but this doesn't appear to be a quick solution because we cannot guarantee that the timestamps on the mainframe are being updated correctly.
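
    For illustration, the switch amounts to something like this (the table name here is hypothetical):

        -- Fully logged, row by row: slow on a 5+ million row table
        DELETE FROM dbo.CustomerStage;

        -- Deallocates whole pages instead: much faster for a full wipe
        TRUNCATE TABLE dbo.CustomerStage;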

    Does anyone have any tips, tricks, or suggestions that might speed up importing the text files into SQL Server?
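
    For context, the upload step boils down to a bulk load roughly along these lines (the path, table, and delimiters are illustrative, not our actual setup):

        -- Hypothetical bulk load of the extracted flat file
        BULK INSERT dbo.Customer
        FROM '\\fileserver\extracts\customer.txt'
        WITH (
            FIELDTERMINATOR = '|',  -- column delimiter in the extract
            ROWTERMINATOR = '\n',   -- one record per line
            TABLOCK                 -- table lock allows a faster, minimally logged load
        );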

  • Basically, the difference between DELETE and TRUNCATE is that TRUNCATE does not record any entry in the transaction log, which could create a problem if you have to roll back. Other than that, both do the same thing.

    You can use an ODBC connection to set up the transfer between DB2 and SQL Server. It will take some time to set up, but I think it is a better way than writing a flat file and then running the DTS package.
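
    A minimal sketch of that approach, assuming a linked server defined over the DB2 ODBC driver (the server name, DSN, and table names here are made up):

        -- Register the DB2 ODBC data source as a linked server
        EXEC sp_addlinkedserver
            @server = 'DB2PROD',
            @srvproduct = 'DB2',
            @provider = 'MSDASQL',
            @datasrc = 'DB2_ODBC_DSN';

        -- Pull rows straight from DB2 into SQL Server, with no intermediate flat file
        INSERT INTO dbo.CustomerStage (CustId, CustName)
        SELECT CustId, CustName
        FROM OPENQUERY(DB2PROD, 'SELECT CUSTID, CUSTNAME FROM PRODLIB.CUSTOMER');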

    We run something similar from a very old system called BBX. We install the BBX driver and then run the task using DTS.

    We have around 100 databases, and most of them get a daily update. We use ODBC for some and incremental loads for others. The incremental file is a flat file created on the BBX system and then imported into SQL Server, but not by a DTS job; a code-based script runs the incrementals.

    This is just a small contribution. Good luck!

  • According to my handy-dandy SQL 2000 book I got from the library... 🙂 TRUNCATE TABLE not only avoids writing each row to the log, it also removes data page by page instead of row by row as a DELETE does. If you are trying to wipe the whole table out and refresh it totally, I would look into it.

    Right now everything works fine in our current system, but we are trying to decrease run times on the server and are looking for any options we could change to do that. The reason we go to a text file first is that we refresh everything each day; if there were a problem with the connection to DB2, we would have no data in our reporting tables until the job could be executed again. With our two-step process we are fairly sure we have data in each area at any given time, because if the download from DB2 errors, the job doesn't try to upload an empty text file to SQL Server.


  • Truncate does write to the log; it just logs the page deallocations rather than row-by-row deletes. If you run it inside a transaction and roll back, all your data will still be there. Definitely recommended if you want to wipe the table.
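
    A quick way to see this for yourself (table name hypothetical):

        BEGIN TRAN;
        TRUNCATE TABLE dbo.CustomerStage;  -- logs page deallocations, not individual rows
        ROLLBACK TRAN;                     -- the truncate is undone

        -- Row count is unchanged after the rollback
        SELECT COUNT(*) FROM dbo.CustomerStage;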

    Andy

    http://www.sqlservercentral.com/columnists/awarren/

  • As an alternative, have you considered loading the data from DB2 into SQL Server (via ODBC) into a temporary table? Then load your production table with the contents of the temporary table (whose row count you could check before truncating your production table).
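
    Roughly, with hypothetical table names, a permanent staging table standing in for the temporary one, and a linked server like the one sketched earlier, the pattern would be:

        -- Load into the staging table first
        TRUNCATE TABLE dbo.CustomerStage;

        INSERT INTO dbo.CustomerStage (CustId, CustName)
        SELECT CustId, CustName
        FROM OPENQUERY(DB2PROD, 'SELECT CUSTID, CUSTNAME FROM PRODLIB.CUSTOMER');

        -- Only touch production if the staging load actually returned rows
        IF (SELECT COUNT(*) FROM dbo.CustomerStage) > 0
        BEGIN
            TRUNCATE TABLE dbo.Customer;

            INSERT INTO dbo.Customer (CustId, CustName)
            SELECT CustId, CustName
            FROM dbo.CustomerStage;
        END;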

