March 4, 2013 at 12:56 pm
Hi,
I need to copy a big table (about 250 GB, basically used to store images) from one MSSQL Server to another MSSQL Server. I don't need to transform the data, but I don't know if this command will work:
INSERT INTO Temp SELECT * FROM [SourceServerName].DBName.dbo.Temp
Thanks for your help
March 4, 2013 at 1:05 pm
If space isn't an issue, I'd restore a backup to the other server and drop the things you don't need, keeping only that table.
______________________________________________________________________________
Never argue with an idiot; they'll drag you down to their level and beat you with experience
March 4, 2013 at 1:07 pm
Hi, I need to insert the data into an existing database, so a restore on its own won't do it. Thanks
March 4, 2013 at 1:13 pm
Your command will work; there's just a whole bunch of things to think about. The first thing that comes to mind is that the entire process will be fully logged and will generate one heck of a large transaction! Is your DB in simple, bulk-logged, or full recovery?
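If you're not sure, something like this will tell you (a quick sketch; the database name is a placeholder):

SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'YourDatabase';   -- 'YourDatabase' is a placeholder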
______________________________________________________________________________
Never argue with an idiot; they'll drag you down to their level and beat you with experience
March 4, 2013 at 1:19 pm
Hi,
The database is in SIMPLE recovery mode.
Thanks
March 4, 2013 at 3:53 pm
Don't use the insert command as you currently have it. Contrary to popular belief, having the database in the SIMPLE recovery mode does NOT prevent the log file from being used. Since you're trying to transfer all 250GB in a single query, your log file will explode to at least that size.
You could write a WHILE loop to transfer the data in smaller chunks but it'll still take a while and fill the pipe while you're doing it.
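Something along these lines would do it (just a sketch; it assumes the table has an ascending integer key, here called ImageID, which is a made-up column name you'd need to swap for your own):

DECLARE @BatchSize INT = 100000;       -- tune so each transaction stays small

WHILE 1 = 1
BEGIN
    INSERT INTO dbo.Temp (ImageID, ImageData)                   -- local target table
    SELECT TOP (@BatchSize) s.ImageID, s.ImageData
    FROM [SourceServerName].DBName.dbo.Temp AS s                -- linked-server source
    WHERE s.ImageID > (SELECT ISNULL(MAX(ImageID), 0) FROM dbo.Temp)
    ORDER BY s.ImageID;

    IF @@ROWCOUNT = 0 BREAK;                                     -- nothing left to copy
END;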
It's time for a "carrier pigeon" to do the first load. Before we get into that, are both servers on a SAN, and are both SANs of the same make and model?
--Jeff Moden
Change is inevitable... Change for the better is not.
March 4, 2013 at 6:20 pm
Hi,
They are on the same SAN. Thanks
March 4, 2013 at 7:17 pm
In that case, you might try what SQL Server itself uses for replication if you have the disk space: BCP in native format. Export the table to a file, then import it in half-million (or fewer, depending on the size of your rows) row chunks using the batch size setting. It will keep your log file from exploding.
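For example, something like this (just a sketch; the server names, file path, and batch size are placeholders, and -T assumes Windows authentication):

bcp DBName.dbo.Temp out D:\Transfer\Temp.bcp -n -S SourceServerName -T
bcp DBName.dbo.Temp in  D:\Transfer\Temp.bcp -n -b 500000 -S DestinationServerName -T

The -n switch keeps everything in native format (no text conversion of the image data), and -b commits the import in 500,000-row batches so the log can clear between batches.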
As an alternative, you could have the infrastructure folks do what I believe is called a "SAN Snapshot" to copy the mdf and ldf files and then simply attach the database.
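Once the files are copied, attaching them would look something like this (the database name and file paths are placeholders):

CREATE DATABASE ImagesCopy
ON (FILENAME = N'D:\Data\Images.mdf'),         -- path to the copied data file
   (FILENAME = N'L:\Logs\Images_log.ldf')      -- path to the copied log file
FOR ATTACH;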
--Jeff Moden
Change is inevitable... Change for the better is not.
March 4, 2013 at 7:36 pm
Hi,
In the past I had problems with BCP, because when I tried to copy the data to a file, SQL sometimes introduced line breaks in the data, and when I tried to reconstruct the images I got errors.
The second option is valid, but I need to copy the table into a specific database, so if I move the mdf and ldf I'd still have to move the table between two local databases.
Thanks
March 4, 2013 at 7:44 pm
csalazar-607991 (3/4/2013)
Hi, In the past I had problems with BCP, because when I tried to copy the data to a file, SQL sometimes introduced line breaks in the data, and when I tried to reconstruct the images I got errors.
Yes, but if you actually set it up to replicate the data over (meaning physically create a publication/subscriber), the snapshot will create multiple bcp chunks for you automatically and should keep everything in sync.
If you can't use snapshot replication as a one-time deal, you really only have 2 options: 1) have the SAN guys take a snap of the LUN (as Jeff has mentioned) or 2) break it up into a batch-style job, transferring over x number of rows at a time.
I can't think of anything else that hasn't already been mentioned.
______________________________________________________________________________
Never argue with an idiot; they'll drag you down to their level and beat you with experience
March 4, 2013 at 8:42 pm
Jeff Moden (3/4/2013)
In that case, you might try what SQL Server itself uses for replication if you have the disk space: BCP in native format. Export the table to a file, then import it in half-million (or fewer, depending on the size of your rows) row chunks using the batch size setting. It will keep your log file from exploding. As an alternative, you could have the infrastructure folks do what I believe is called a "SAN Snapshot" to copy the mdf and ldf files and then simply attach the database.
As Jeff suggested, taking a SAN snapshot and then attaching the mdf/ldf files could save time and would be a good approach.
"More Green More Oxygen !! Plant a tree today"
March 4, 2013 at 9:55 pm
I have some recommendations here:
1. Choose a point in time when your server is not working hard.
2. Separate your table into smaller pieces (maybe multiple files) and use an SSIS package to move the data.
March 6, 2013 at 4:10 am
Did you try the SQL Server Import/Export wizard when the server is not too busy?
March 6, 2013 at 8:23 am
Hi,
We decided to insert the data in chunks using something like this:
DECLARE @ultimo INT = 0;       -- last key already copied
DECLARE @cont INT = 1000;      -- upper bound of the current chunk

WHILE @ultimo < (SELECT MAX(CodigoCaso) FROM DB1.dbo.[Table])
BEGIN
    BEGIN TRANSACTION;
    INSERT INTO DB2.dbo.[Table]
    SELECT *
    FROM DB1.dbo.[Table]
    WHERE CodigoCaso < @cont AND CodigoCaso >= @ultimo;   -- >= so the boundary row isn't skipped
    COMMIT TRANSACTION;

    SET @ultimo = @cont;
    SET @cont = @cont + 1000;
END;
We do it this way in order to avoid a huge increase in the log file.
I will post the results.
Thanks