March 7, 2008 at 9:09 am
I have a set of about 10 million records that I need to move from one database to another. This will be done off-hours, so performance is not a critical issue, but it needs to be taken into account. (Also note that some of this data is XML stored in text datatype columns.)
We tried a DTS package and after 2 days it was still running, so we had to roll it back. One suggestion was dropping the indexes, but I was wondering if there were any other suggestions on how to get the data over as quickly as possible. I would think 10 million rows could be bulk moved over a weekend.
Anyone have any ideas?
Thanks!
Michael
March 7, 2008 at 9:22 am
I agree you want to drop the indexes on the table. It should speed things up.
If you are looking at a big bulk load, like you mentioned, I would also suggest you look at using the command-line BCP tool. It tends to be a lot faster for larger data sets.
March 7, 2008 at 2:04 pm
Drop indexes first on the target server;
*****
Here is the place to play games.
You can use either BCP or BULK INSERT;
You can export the data from your source into several flat files;
Import these files in parallel - see the sketch below;
*****
Re-create indexes after completion;
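For what it's worth, a minimal sketch of that workflow (the table, index, and file names here are made up - adjust to your own schema):
-- 1. Drop the non-clustered indexes on the target table (SQL 2000 syntax).
DROP INDEX BigTable.IX_BigTable_SystemTs
-- 2. Bulk load each flat file; TABLOCK allows a minimally logged load.
BULK INSERT dbo.BigTable
FROM 'C:\export\BigTable_1.dat'
WITH (DATAFILETYPE = 'native', TABLOCK)
-- 3. Re-create the indexes once all the files are in.
CREATE NONCLUSTERED INDEX IX_BigTable_SystemTs ON dbo.BigTable (SystemTs)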
March 10, 2008 at 2:17 am
Michael,
I'm currently working on an application that copies data between SQL2K servers. It uses BCP. One of the large tables has 60 million rows, and it is imported into the target database within 40 minutes. What I had to do was drop the non-clustered indexes on the target table before the import. The XML data may have some influence on performance in your case, but I would definitely drop the indexes.
Marek
March 10, 2008 at 4:17 am
And, since you're moving data from SQL Server TO SQL Server, if you use the "Native" data format, it'll absolutely scream.
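For example (the server, database, and path names here are just placeholders):
bcp MyDB.dbo.MyTable out C:\export\MyTable.out -n -T -S SourceServer
The -n switch writes the file in SQL Server's native binary format, so nothing has to be converted to and from text on the way through.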
--Jeff Moden
Change is inevitable... Change for the better is not.
March 10, 2008 at 6:38 am
Thanks all...but it seems BCP and BULK INSERT require the data to come from data files, not from an instance of SQL Server. Is there a quick way to copy it from SQLA to SQLB without moving it to a text file first?
March 10, 2008 at 6:48 am
You may consider the SqlBulkCopy class from .Net Framework 2.0:
http://msdn2.microsoft.com/en-us/library/system.data.sqlclient.sqlbulkcopy.aspx
an older version of the same idea: SQL-DMO objects,
or just a simple INSERT statement (INSERT INTO ... SELECT * FROM linked_server_table).
In all cases you should drop indexes on the destination table.
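For the linked server variant, a minimal sketch (the linked server, database, and table names are placeholders):
INSERT INTO dbo.TargetTable WITH (TABLOCK)
SELECT *
FROM SQLA.SourceDB.dbo.SourceTable
Note that a plain INSERT ... SELECT is fully logged, so it will usually be slower than BCP for 10 million rows.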
Marek
March 10, 2008 at 6:54 am
michael.esposito (3/10/2008)
Thanks all...but it seems BCP and BULK INSERT require the data to come from data files, not from an instance of SQL Server. Is there a quick way to copy it from SQLA to SQLB without moving it to a text file first?
Not sure why you're opposed to getting it done in just a couple of minutes 😉 And, I don't suppose replication would suit you. Have you tried a linked server and SELECT/INTO?
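Something like this (linked server and table names are placeholders; SELECT/INTO creates the destination table for you):
SELECT *
INTO dbo.TargetTable
FROM SQLA.SourceDB.dbo.SourceTable
With the database in the SIMPLE or BULK_LOGGED recovery model, SELECT/INTO is minimally logged... that's where the speed comes from.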
--Jeff Moden
Change is inevitable... Change for the better is not.
March 10, 2008 at 7:08 am
Jeff is right. If you start BCP at 5:00 PM on Friday, you will get the job done in 10 minutes (so you can still have a few beers in the evening). If you want to write a .Net program to do the job, you may need a day to do it (including testing). You've already spent a few days on the problem, and now you want to do it faster than 10 minutes :)?
Marek
March 10, 2008 at 11:26 am
I do not know a whole lot about BCP. Can someone maybe give me some sample code that I could work with to accomplish what I need?
Thanks!!!!
Michael
P.S. I am not opposed to speed, just not sure how it works, so I was skeptical 😛
March 10, 2008 at 11:36 am
You may take a look at samples available on the Internet, such as:
http://www.databasejournal.com/features/mssql/article.php/3391761
or http://www.windowsitlibrary.com/Content/77/14/1.html (look at the second page - there are examples there)
or directly to Microsoft documentation: http://msdn2.microsoft.com/en-us/library/aa174646(SQL.80).aspx
Marek
March 11, 2008 at 11:40 am
Here are two sample bcp commands I use to transfer data from one database to another:
bcp Database..TableName out Drive:\FolderName\table.out -T -S SourceServerName -n
bcp Database..TableName in Drive:\folderName\table.out -T -S DestinationServerName -n -E -h "TABLOCK"
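For reference, the switches: -T uses a trusted (Windows) connection, -S names the server, -n uses SQL Server's native format, -E keeps the identity values from the source, and -h "TABLOCK" takes a table lock so the load can be minimally logged.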
I use other osql commands to disable the triggers on the destination tables and to truncate them prior to transferring the data.
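For example (the table and server names are placeholders):
osql -S DestinationServerName -E -Q "ALTER TABLE dbo.TargetTable DISABLE TRIGGER ALL; TRUNCATE TABLE dbo.TargetTable"
and a matching ENABLE TRIGGER ALL once the load finishes.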
March 11, 2008 at 4:36 pm
Michael... look up BCP OVERVIEW in Books Online and take a look at each parameter that Ross used in the code just above... it's the only way you'll start to learn this very powerful, frequently overlooked tool.
--Jeff Moden
Change is inevitable... Change for the better is not.
March 12, 2008 at 8:24 am
Ross,
Thanks a ton for those commands. Helped me out a lot. BCP'd about 40 million rows in like 10 minutes.
March 26, 2008 at 7:25 am
Ladies/Gentlemen,
I ran the following BCP commands this weekend
bcp "select AuditRecUID,TrackingUID,PolicySymbol,PolicyNumber,MasterCompany,Mod,LOB,ContractNumber,AuditType,AuditStep,null as AuditXml,ProcessingSecs,wUser,UserType,SystemTs from DB..PolicyProcessAudit2 (nolock) where systemts >= '1/1/2008'" queryout C:\PPA1\Table.out -T -S PRODDB -n
bcp DB_Archive..PolicyProcessAudit in C:\PPA1\Table.out -T -S PRODDB -n -E -h "TABLOCK"
and the export seemed to work OK, but when I went to run the import, I got the following errors:
Starting copy...
SQLState = 22001, NativeError = 0
Error = [Microsoft][ODBC SQL Server Driver]String data, right truncation
SQLState = 22001, NativeError = 0
Error = [Microsoft][ODBC SQL Server Driver]String data, right truncation
SQLState = S1000, NativeError = 0
Error = [Microsoft][ODBC SQL Server Driver]Unexpected EOF encountered in BCP data-file
0 rows copied.
Network packet size (bytes): 4096
Clock Time (ms.): total 16
Any help/ideas?
Thanks!
Michael