March 22, 2004 at 6:48 am
We are trying to replicate a large amount of data from one SQL Server to another. We cannot do this through transactional, snapshot, or merge replication because the overhead is too great and the tables have no primary key columns. We currently use a DTS package to load information from our AS400 to the local SQL Server, a process that can take up to 30 minutes. Because of the lack of primary keys, we must delete all information out of all of the tables and rebuild them with the new information from the AS400.

The problem is that when we try to use replication, it takes too long to deliver all transactions and data to the production environment. We must have all data up to date every 4 hours and cannot have any downtime. Since each table depends on the others, and the data determines pricing, we cannot update only part of the tables and risk someone getting an invalid price.

We have batted around many ideas, including detaching the database, zipping the files, FTPing them to production, and attaching them as a new database. When this is complete, our pricing routines can point to the information in the new database and we can delete the old one. I was wondering if anyone has any other ideas that could help in this situation? Thank you.
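For reference, here is roughly what we had in mind for the detach/attach swap. This is only a sketch; the database names, file paths, and the Pricing_New/Pricing_Old naming are placeholders, not our actual setup:

    -- On the staging server: detach so the files can be zipped and copied
    EXEC sp_detach_db @dbname = N'Pricing';

    -- (zip the .mdf/.ldf files, FTP them to production, unzip)

    -- On production: attach the copy under a new name
    EXEC sp_attach_db @dbname = N'Pricing_New',
        @filename1 = N'D:\Data\Pricing.mdf',
        @filename2 = N'D:\Data\Pricing_log.ldf';

    -- Once the pricing routines have been repointed to Pricing_New,
    -- the previous copy can be dropped
    DROP DATABASE Pricing_Old;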
March 22, 2004 at 7:03 am
You could create a process which bcp's out all of the tables you need to .txt files. If you can filter the data using a date column, so much the better, since that minimizes the amount of data you export.
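For example (the server name, database name, table, and date column here are just placeholders):

    REM Export a full table in character mode, pipe-delimited
    bcp Pricing.dbo.Items out C:\export\Items.txt -c -t"|" -S SOURCESERVER -T

    REM Or export only recent rows by filtering on a date column
    bcp "SELECT * FROM Pricing.dbo.Items WHERE UpdatedDate >= '20040322'" queryout C:\export\Items.txt -c -t"|" -S SOURCESERVER -T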
Then zip all of the .txt files, copy them to your destination server, and bcp them in to temp tables with the same structure as the destination tables. This avoids deleting the data in your destination tables (though you will need more disk space). Then insert only the data you need, using a left join between the temp table and the destination table.
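Something like this on the destination side. The table and column names are placeholders, and the join assumes some combination of columns can identify a row, since you have no primary keys:

    REM Load the text file into a staging table with the same structure
    bcp Pricing.dbo.Items_stage in C:\import\Items.txt -c -t"|" -S DESTSERVER -T

Then in T-SQL:

    -- Insert only the rows not already present in the destination table
    INSERT INTO dbo.Items (ItemCode, PriceDate, Price)
    SELECT s.ItemCode, s.PriceDate, s.Price
    FROM dbo.Items_stage AS s
    LEFT JOIN dbo.Items AS d
        ON d.ItemCode = s.ItemCode
        AND d.PriceDate = s.PriceDate
    WHERE d.ItemCode IS NULL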
But if you have a good network connection, I would still use transactional replication in continuous mode, or scheduled every 2 or 3 hours.