May 5, 2008 at 9:02 am
Hi all,
Can anyone tell me the fastest way to copy a backup file from server 1 to server 2? I am copying through a mapped drive and it is too slow: a 50 GB backup file has been going for 36 hours and is still only 75% copied.
Thanks
May 5, 2008 at 9:15 am
Are you copying manually? Check your network bandwidth; perhaps dedicate some ports to this task, or use a separate network on a switch.
May 5, 2008 at 9:20 am
Yes, I am copying manually, and I don't know how to dedicate the ports you are talking about.
May 5, 2008 at 9:54 am
Check with your network admin. 36 hours for 50 GB seems very slow. Of course, it is all bandwidth dependent.
The more switches between server A and server B, the more network has to be traversed, and the packets you send compete with all the other packets on the network at the time.
On some switches you can assign priority to certain traffic (normally done by the network admin). Hopefully you are running on a gigabit switch and have gigabit cards in both servers.
There is nothing in SQL Server that can programmatically increase your bandwidth; it is all hardware dependent, so your first step should be to see your system/network admin.
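For scale, a rough back-of-the-envelope check of the numbers in the original post:

```bat
rem 50 GB in 36 hours:
rem   50 * 1024 MB / (36 * 3600 s)  ~=  0.4 MB/s  ~=  3 Mbit/s
rem Realistic gigabit Ethernet throughput is on the order of 100 MB/s,
rem so a healthy gigabit link should move the same file in well under
rem 10 minutes. Something other than raw bandwidth is the bottleneck.
```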
Marvin Dillard
Senior Consultant
Claraview Inc
May 5, 2008 at 10:04 am
As suggested, talk to your network admin. They can help you determine why the copy takes so long.
May 6, 2008 at 1:26 am
Windows is poor at file copying. Back in the old NetWare days there were optimised file copy commands (NCOPY) for copying between servers; in all the time Microsoft has been dominant, their server-to-server copy has remained less than optimal.
If you run the copy from your client, the data comes down from Server A to your client and then goes back up to Server B (the old NetWare NCOPY would send it straight from Server A to Server B even if the command was issued from your client), so make sure you issue the command from one of the servers.
Also, if you are copying to a UNC path (e.g. \\ServerB\BackupShare), Windows copy can be slow because it keeps reassessing security rights. Mapping a drive first can be a lot quicker, i.e. NET USE Z: \\ServerB\BackupShare and then copy to Z:.
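A sketch of that mapping step (server, share, and file names are placeholders):

```bat
rem Map the share once, then copy to the drive letter instead of the UNC path.
net use Z: \\ServerB\BackupShare

copy D:\Backups\MyDB_Full.bak Z:\

rem Remove the mapping when done.
net use Z: /delete
```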
XCopy is quicker than Copy (it seems to use better-sized chunks). Better still is RoboCopy (see the Windows Resource Kit, or just Google it).
In most cases where server CPU is more available than network bandwidth, it is worth compressing the file on the source server, copying it, and then expanding it on the target server. The order here is important: you must run the expand command on the target server itself, otherwise the data has to come back across the network to wherever you are running the command. gzip seems popular for this sort of command-line compression.
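The compress, copy, expand sequence above might look like this (paths are placeholders, and it assumes a gzip command-line build is installed on both servers):

```bat
rem On the SOURCE server: compress the backup (produces MyDB_Full.bak.gz).
gzip -9 D:\Backups\MyDB_Full.bak

rem Copy only the compressed file across the network.
copy D:\Backups\MyDB_Full.bak.gz \\ServerB\BackupShare\

rem On the TARGET server: expand it locally, so the data is not pulled
rem back across the wire a second time.
gzip -d E:\Backups\MyDB_Full.bak.gz
```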
Wouldn't it make sense if Windows could send compressed data streams in a copy .... maybe Win 2020 Server ...
James Horsley
Workflow Consulting Limited
May 6, 2008 at 8:03 pm
robocopy is a good utility for this; you can also specify retry and wait switches, for example /r:1 /w:5.
As James said, run the copy from one of the servers, not via your own workstation/client.
May 7, 2008 at 12:16 am
Try zipping your file with a zip utility such as WinZip; for a file this large you will need its command-line tool (WZZIP).
Secondly, if it is a plain .bak file, you can try taking a LiteSpeed backup with compression level 7, which can reduce the file size by roughly seven times.
Finally, use robocopy in restartable mode; I believe that is the /z option.
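Putting the robocopy suggestions from this thread together (paths and file names are placeholders; /Z enables restartable mode, /R and /W control retries and the wait between them):

```bat
rem Restartable copy of one file, with 1 retry and a 5-second wait.
robocopy D:\Backups \\ServerB\BackupShare MyDB_Full.bak /Z /R:1 /W:5
```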
May 7, 2008 at 3:52 pm
The bottleneck when copying files between servers may be a firewall in between. In my experience, copying a 6 GB file once took over 10 hours.
Possible solutions:
1. Compress the file as part of the backup;
2. Zip the file;
3. Find a way around the firewall; you may be able to use a third server as a bridge to copy your file;
4. Copy the file to an external disk and take it to the target server to restore it. That is not a joke: a couple of years ago we had a 300+ GB database that we could not copy across the network, so we copied it to an external disk and brought it to the target server.
May 21, 2008 at 7:05 pm
Guys,
There's a little bit of misinformation in some of the posts here - copying to a UNC target is generally the same as copying to the same target via a mapped drive from a performance perspective. They both use SMB for the transfer, and they both are still bound by regular Windows share-level and NTFS permissions present on that target.
Let's leave aside your network fabric for a second and look at the applications used for the transfer. Windows Explorer (e.g. drag-and-drop, copy and paste) is one of the worst performers; it has a lot of extra overhead and should be avoided where possible. A command-prompt copy using xcopy or RoboCopy is faster, and since these have nice extras like continue-on-error, they are much better. However...
All of the above make use of buffered file IO (CopyFile()) calls. This is great for multiple small copy operations, but is extremely poor for huge files and can actually starve your server of memory during the copy operation. It's MUCH faster to copy big files using unbuffered IO (WriteFile()) - the only problem is there are very few tools out there that use unbuffered IO for file copies. The main one around that most people will have access to is ESEUTIL.EXE from an Exchange Server 2K/2K3 installation. This is normally used for Exchange message store maintenance operations but also has a switch (/Y) that simply copies files using unbuffered IO. If you have access to an Exchange server copy ESEUTIL over plus its DLL and use it on the source server to copy your files to the target UNC path. It can be 3-4 times faster on huge DB files than xcopy.
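If you have access to the Exchange binaries, the unbuffered copy described above looks roughly like this (paths are placeholders, and switch details can vary by Exchange version):

```bat
rem Copy ESEUTIL.EXE (plus its DLL) from an Exchange server to the
rem source server first. /Y performs an unbuffered file copy and /D
rem names the destination.
eseutil /y D:\Backups\MyDB_Full.bak /d \\ServerB\BackupShare\MyDB_Full.bak
```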
More details are here and here.
SMB is generally slow, yes (though much better when using solely Vista or Win2K8 boxes), and if you can significantly reduce the file size before or during transfer, you can also cut the transfer time significantly. So compressing the file beforehand with a fast compression utility (e.g. QuickLZ) and then copying it via ESEUTIL to the target will give you the best performance.
All of the above applies even if you have a poor network fabric, though it's much nicer if you're running on gigabit ethernet.
Regards,
Jacob
May 22, 2008 at 11:52 am
There are a couple of solutions for copying big files, such as backups of 70 GB or more:
1) Robocopy is the best option, and you can use its switches as needed.
2) Zip the file, then transfer it to the server.
3) Use FTP (File Transfer Protocol); you may need to enable the FTP service on both systems (servers).
4) Use a third-party tool designed for copying big files.
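For option 3, a minimal sketch using the built-in Windows command-line FTP client (server name, credentials, and paths are placeholders, and the FTP service must be running on the target server):

```bat
rem ftpcmds.txt contains the scripted session, e.g.:
rem   open ServerB
rem   user backupuser secret
rem   binary
rem   put D:\Backups\MyDB_Full.bak
rem   bye
rem -n suppresses auto-login; -s runs the script file.
ftp -n -s:ftpcmds.txt
```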
MCP, MCTS (GDBA/EDA)
May 22, 2008 at 11:54 am
And the last and most important solution...
If the application is mission critical and the SLA allows 72 hours, you may want to ship a tape backup to your server's location.
Tape backup shipment should not take more than 48 hours.
MCP, MCTS (GDBA/EDA)
May 22, 2008 at 1:26 pm
ESEUTIL seems like a good tool for copying a big file. This is the first time I have heard of it. It would be great for a big database migration and would save a lot of time!
May 22, 2008 at 1:51 pm
Again, if it is a mission-critical application, please go with tape backup shipment.
I would recommend tape backup shipment and FTP.
MCP, MCTS (GDBA/EDA)
October 11, 2008 at 3:11 pm
Hi,
Are both servers in the same location? If so, and if it is feasible (i.e. the server is not in production), you could try moving the hard disk between the two servers.