November 6, 2011 at 1:40 am
Comments posted to this topic are about the item Quick and Dirty DR Solution Using Robocopy
MARCUS. Why dost thou laugh? It fits not with this hour.
TITUS. Why, I have not another tear to shed;
--Titus Andronicus, William Shakespeare
November 7, 2011 at 3:45 am
On top of using RoboCopy, consider RoboCopyPlus, which lets you email job results with a condensed log file and inserts entries into the Windows event log with job-result error levels.
November 7, 2011 at 4:58 am
Be aware that using the /Z switch (restartable mode) for Robocopy slows it down massively; I can't tell you exactly how much, but something like half speed. You are better off not using /Z in general and just recopying the whole file in the rare event it fails part way through.
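For example, a retry-based copy without /Z might look like this sketch; the share and folder names are placeholders, not from the article:

```bat
rem Pull .bak files without /Z; a rare mid-copy failure just means the
rem whole file is re-copied on the next retry (/R retries, /W seconds between).
rem \\primary\SQLBackups and D:\SQLBackups are placeholder paths.
robocopy \\primary\SQLBackups D:\SQLBackups *.bak /R:3 /W:30 /NP /LOG:D:\Logs\robocopy.log
```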
I have just been doing something identical to this, but I found Robocopy to be far too slow for large files, i.e. > 20GB. It also seems to cause silent file corruption somewhere in the range 30GB to 70GB: I can copy 30GB files but not 70GB files. You can prove that a file is identical (or not) easily using the Microsoft utility called FCIV, which computes a hash value for a file:
http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=11533
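As a sketch, hashing the file on each server and comparing the values looks like this; the path is a placeholder, not from the article:

```bat
rem Run on both servers and compare the two hash values; a mismatch
rem means the copy is corrupt. The path is a placeholder.
fciv.exe D:\SQLBackups\MyDB.bak -sha1
```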
There is a much quicker way to copy very large files using a Microsoft utility called ESEUTIL. I have just spent my entire weekend migrating 280GB of backup files across a network with it. It runs much faster than Robocopy and does not corrupt the files.
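ESEUTIL's copy mode is its /Y switch; a minimal sketch, with placeholder paths:

```bat
rem /Y copies a file (using unbuffered I/O); /D names the destination.
rem Both paths are placeholders.
eseutil /y \\primary\SQLBackups\MyDB.bak /d D:\SQLBackups\MyDB.bak
```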
Don't believe me about ESEUTIL? See this for evidence that using ESEUTIL is a sensible thing to do (bottom of page, post by Jacob Luebbers, posted 5/22/2008 2:05 AM):
http://www.sqlservercentral.com/Forums/Topic495042-357-1.aspx
(also see the other posts he refers to; I have seen other posts on the same topic too)
November 7, 2011 at 6:07 am
That is a nice tip, thank you.
I am using something similar, but through Windows scheduled tasks.
Iulian
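For reference, registering such a scheduled task can be sketched like this; the task name, script path and schedule are assumptions, not from the article:

```bat
rem Create a scheduled task that runs a copy script every hour.
rem Task name and script path are placeholder values.
schtasks /create /tn "DR_BackupCopy" /tr "D:\Scripts\copy_backups.cmd" /sc hourly
```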
November 7, 2011 at 8:14 am
Be careful using Robocopy on large files: it uses buffered I/O functions in the Windows API, which can lead to massive paging and negatively impact SQL Server performance.
These two blog posts explain buffered vs. unbuffered I/O in greater detail:
http://blogs.technet.com/b/askperf/archive/2007/05/08/slow-large-file-copy-issues.aspx
The articles also cover other tools that copy files using unbuffered I/O as an alternative to Robocopy.
November 7, 2011 at 12:06 pm
Sean Elliott (UK) (11/7/2011)
Be aware that using the /Z switch (restartable mode) for Robocopy slows it down massively; I can't tell you exactly how much, but something like half speed. You are better off not using /Z in general and just recopying the whole file in the rare event it fails part way through.
I have just been doing something identical to this, but I found Robocopy to be far too slow for large files, i.e. > 20GB. It also seems to cause silent file corruption somewhere in the range 30GB to 70GB: I can copy 30GB files but not 70GB files. You can prove that a file is identical (or not) easily using the Microsoft utility called FCIV, which computes a hash value for a file:
http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=11533
There is a much quicker way to copy very large files using a Microsoft utility called ESEUTIL. I have just spent my entire weekend migrating 280GB of backup files across a network using this. It runs much faster than robocopy and does not corrupt the files.
Don't believe me about ESEUTIL? See this for evidence that using ESEUTIL is a sensible thing to do (bottom of page, post by Jacob Luebbers, posted 5/22/2008 2:05 AM):
http://www.sqlservercentral.com/Forums/Topic495042-357-1.aspx
(also see the other posts he refers to – I have seen other posts on same topic too)
I've heard about ESEUTIL but never been able to get my hands on a copy to try it out.
MARCUS. Why dost thou laugh? It fits not with this hour.
TITUS. Why, I have not another tear to shed;
--Titus Andronicus, William Shakespeare
November 7, 2011 at 1:50 pm
I thought RichCopy replaced RoboCopy as the copy utility of choice. Or am I comparing apples and oranges?
November 7, 2011 at 2:39 pm
You can get ESEUTIL from the web if you search Google for "eseutil download". It's installed with MS Exchange, but all you need are two small files: eseutil.exe and eseutil.dll.
If you are on W2K8 you can use xcopy with the new /J flag, which apparently does the same type of unbuffered copying. I'm not on W2K8.
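Where the /J flag is available (Windows 7 / Server 2008 R2 and later), an unbuffered pull with xcopy might be sketched as follows; the paths are placeholders:

```bat
rem /J copies using unbuffered I/O; /Y suppresses overwrite prompts.
rem Paths are placeholders.
xcopy \\primary\SQLBackups\*.bak D:\SQLBackups\ /J /Y
```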
November 9, 2011 at 12:42 pm
Guys,
Considering an online database, with users running transactions against it, is this procedure able to create a copy of the database that I can use on another server?
For instance, if server1 crashes and I try to attach the copy on server2 using the files I've synced this way, is this going to work?
Regards
November 10, 2011 at 6:58 am
Yes, the copy will work OK as long as neither server1 nor server2 is using the mdf and ldf files while they are being copied over.
Note that mdf and ldf files are many times larger than a backup, especially if backup compression is used. You would be advised to zip (i.e. compress) the files before copying them across the network.
It might be the case that server2 will already need to have had the same database online previously, so that it already knows which mdf and ldf files to use. Recovery might not happen when attaching databases (I could be wrong). Recovery will definitely work if SQL is started with the right files in the right place, i.e. as it has previously known them.
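If it comes to attaching the copied files on server2, the attach can be sketched with sqlcmd; the server, database and file names below are placeholders, not from the article:

```bat
rem Attach the copied data and log files on the standby server.
rem Server, database and file paths are placeholder values.
sqlcmd -S server2 -Q "CREATE DATABASE MyDB ON (FILENAME='D:\Data\MyDB.mdf'), (FILENAME='D:\Data\MyDB_log.ldf') FOR ATTACH"
```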
Try it.
November 10, 2011 at 7:25 am
It copies the backup files, not the .mdf or .ldf; moreover, the article explains some parameters of Robocopy so that it will retry a few times to copy the backup files.
Once the backup files are copied to the second location you should be able to restore the database.
This tool looks to me more like a "quick and dirty" 🙂 extension of the backup, so that it also pushes the database to another location.
Maybe what you are looking for is mirroring, replication, failover clustering, etc.
Regards,
Iulian
November 10, 2011 at 8:59 am
I couldn't agree more 🙂
November 11, 2011 at 9:00 am
You should always run Robocopy from the standby host (the secondary) to copy from primary to secondary. This way Robocopy uses the secondary host's resources and only reads the backup files from the primary host.
If you have money to spend, you can use Double-Take. It copies at the data-block level.
Jason
http://dbace.us
😛
November 11, 2011 at 10:22 am
Yes, this article is about copying backup files, but you can copy mdf and ldf files from one server to another and then attach them.
November 14, 2011 at 4:29 am
Great tip!
One change is needed in my opinion though:
SET /A ERRLEV = "(%ERRLEV1% & %ERRLEV2%) & 24"
Should be "(%ERRLEV1% | %ERRLEV2%) & 24"
(untested)
Otherwise you'll miss error code(s) in cases where, e.g.,
ERRLEV1=8 and ERRLEV2=16
or
ERRLEV1=24 and ERRLEV2=0
Edit:
Hope this example illustrates my point:
select (8 & 16) & 24 as errorcode_lost
select (8 | 16) & 24 as errorcode_active
select (0 | 16) & 24 as errorcode_active
select (8 | 8) & 24 as errorcode_active
select (5 | 6) & 24 as no_errorcode
select (0 | 0) & 24 as no_errorcode
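The same point can be checked directly in a batch file (values chosen to match the ERRLEV1=8, ERRLEV2=16 case above):

```bat
rem With & (bitwise AND) the two error bits cancel out;
rem with | (bitwise OR) both bits survive the final & 24 mask.
SET /A ERRLEV1=8
SET /A ERRLEV2=16
SET /A ERRLEV="(%ERRLEV1% & %ERRLEV2%) & 24"
ECHO AND result: %ERRLEV%
SET /A ERRLEV="(%ERRLEV1% | %ERRLEV2%) & 24"
ECHO OR result: %ERRLEV%
```

The first ECHO prints 0 (both error bits are lost); the second prints 24 (both are preserved).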