September 26, 2013 at 7:41 pm
Hello
I have to transfer 300 GB of data from a table in one database to a table in another database on the same server. I've done some research into data loading, and from testing I found the quickest and most efficient way was to set the recovery model to BULK_LOGGED, turn on trace flag 610, and take a table lock on the target table. I have also ordered the data set on the primary key of the target table.
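For reference, the same-server copy described above can be sketched like this. Database, table, and column names are placeholders, and it assumes the target table's clustered index is on the primary key:

```sql
-- Assumes the target database is already in BULK_LOGGED recovery and
-- trace flag 610 has been enabled, e.g. DBCC TRACEON (610, -1);
-- TABLOCK on the target is what allows the insert to be minimally logged.
INSERT INTO TargetDb.dbo.TargetTable WITH (TABLOCK)
       (Id, Col1, Col2)
SELECT Id, Col1, Col2
FROM   SourceDb.dbo.SourceTable
ORDER BY Id;  -- matches the target's clustered primary key
```

With the table lock in place and the data arriving in clustered-key order, the insert can append to the end of the index rather than splitting pages.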
The current transfer rate I'm getting is around 1 GB per hour.
My question is: would I get a quicker transfer rate if I were to unload the data to a file using bcp and then load it back into the target database and table with BULK INSERT?
September 27, 2013 at 12:14 am
It might be an avenue worth pursuing, but to me it seems like you're adding an extra step.
How are you loading the data now? If you sort the data on the primary key, you need to declare that order somewhere (for example with an ORDER hint), so that SQL Server doesn't have to sort the data again itself.
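If the file route is taken, the sort order can be declared like this. The file path is hypothetical, and Id is assumed to be the clustered primary key:

```sql
-- Assumes the data file was exported pre-sorted in native format, e.g.:
--   bcp "SELECT Id, Col1, Col2 FROM SourceDb.dbo.SourceTable ORDER BY Id"
--       queryout D:\export\source_table.dat -n -T
BULK INSERT TargetDb.dbo.TargetTable
FROM 'D:\export\source_table.dat'
WITH (
    TABLOCK,                 -- table lock, allows minimal logging
    ORDER (Id ASC),          -- tells SQL Server the file is already sorted on the PK
    DATAFILETYPE = 'native'  -- file was produced with bcp -n
);
```

The ORDER hint only helps if the file really is sorted on the target's clustered key; otherwise SQL Server falls back to sorting the data anyway.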
Need an answer? No, you need a question
My blog at https://sqlkover.com.
MCSE Business Intelligence - Microsoft Data Platform MVP