September 7, 2009 at 8:38 am
Has anyone used robocopy (or something better) to copy backups? The pipe from our server farm at Raging Wire to our office is 250Mbps, and in testing it's taking many hours (7 with xcopy, 12+ with robocopy in restartable mode) to copy our 300GB full backup file. Today I'm trying robocopy without the /Z "restartable" switch, although I'm not happy with that. This .bak file will be growing significantly over the next year.
We're looking at doing two things: drop log shipping completely and copy backups to a disaster recovery box. This is just an interim step. Longer term, as budget allows, we need a log shipping standby SQL Server powerful enough to actually use. Right now log shipping is giving us a copy of the database, but the standby box isn't powerful enough to actually fail over to.
Since we need to set the database in simple recovery every few months for various projects, I'm tired of recreating log shipping when it really isn't serving the intended purpose. ( simple recovery disables log shipping )
So the interim plan is to 1) move the standby box to our office, away from the Raging Wire facility (geographical distribution), and 2) copy the weekly full .bak, nightly differential backup, and transaction log backups over this pipe to a file server. I realize it may be the pipe bandwidth, and not robocopy, that's the problem.
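For the copy step itself, a scripted copy that verifies the file after transfer can catch silent corruption on a slow or flaky link. A minimal sketch in Python — the paths and function names are illustrative, not anything from this thread:

```python
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in 1MB chunks so large .bak files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def copy_and_verify(src: Path, dest_dir: Path) -> Path:
    """Copy a backup file, then confirm the copy matches the source byte-for-byte."""
    dest = dest_dir / src.name
    shutil.copy2(src, dest)  # preserves timestamps as well as contents
    if sha256_of(src) != sha256_of(dest):
        dest.unlink()  # don't leave a corrupt copy lying around
        raise IOError(f"checksum mismatch copying {src.name}")
    return dest
```

The hash pass doubles the local I/O, but on a multi-hour WAN copy that overhead is small insurance against restoring from a truncated file.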
September 7, 2009 at 11:36 am
Maybe you could use Transactional Replication?
September 8, 2009 at 1:02 am
Indianrock (9/7/2009)
Since we need to set the database in simple recovery every few months for various projects.....
Why?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 8, 2009 at 2:54 am
Indianrock (9/7/2009)
Has anyone used robocopy (or something better) to copy backups? The pipe from our server farm at Raging Wire to our office is 250Mbps, and in testing it's taking many hours (7 with xcopy, 12+ with robocopy in restartable mode) to copy our 300GB full backup file. Today I'm trying robocopy without the /Z "restartable" switch, although I'm not happy with that. This .bak file will be growing significantly over the next year.
If the problem is just one of size, you could look at using a product like Quest LiteSpeed to compress the backups (or upgrade to 2008 Enterprise Edition, which also includes backup compression, though it doesn't compress quite as well). SQL Server backups normally compress extremely well - a factor of five is not uncommon. You might find copying a 60GB file much more manageable.
You could also take a backup to a transportable medium (such as a portable hard drive) and physically ship that to your office.
You could also look at the actual transfer speed you get down the 250Mbps pipe. That should give you around 25MBps of real-world throughput at full tilt, completing a 300GB transfer in roughly three and a half hours. Windows file copy (all versions ever) is notoriously slow at networked file copy operations. We use FTP transfer for exactly this reason.
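That back-of-envelope arithmetic is easy to sanity-check in a few lines. The 80% efficiency figure below is an assumption standing in for protocol overhead, not a measured number:

```python
def transfer_hours(size_gb: float, link_mbps: float, efficiency: float = 0.8) -> float:
    """Estimate wall-clock hours to move size_gb over a link_mbps link.

    `efficiency` discounts protocol overhead; real links rarely sustain
    their rated speed.
    """
    bytes_total = size_gb * 1024**3
    bytes_per_sec = link_mbps / 8 * 1_000_000 * efficiency
    return bytes_total / bytes_per_sec / 3600

# 300 GB over a 250 Mbps pipe at ~80% efficiency:
# transfer_hours(300, 250) -> roughly 3.6 hours
```

Which is why the 7-to-12-hour figures earlier in the thread point at the copy tool (or link contention), not just the pipe.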
Indianrock (9/7/2009)
We're looking at doing two things: drop log shipping completely and copy backups to a disaster recovery box. This is just an interim step. Longer term, as budget allows, we need a log shipping standby sql server powerful enough to actually use. Right now log shipping is giving us a copy of the database but the standby box isn't powerful enough to actually fail over to.
The whole point of log shipping is to maintain a time-delayed copy of the production database by transferring only log backups. Sure, you need a reasonably recent full backup, and maybe a differential, together with an unbroken log chain in case of disaster (such as a restore log operation dying and leaving the database in an unrecoverable state).
But still - the full backup can generally be taken weekly (or even less frequently, depending on the rate of data change). A new differential and log is all that is then required to resurrect log shipping. I would encourage you to share the full details of your requirements with us so we can help you with this.
Indianrock (9/7/2009)
Since we need to set the database in simple recovery every few months for various projects, I'm tired of recreating log shipping when it really isn't serving the intended purpose. ( simple recovery disables log shipping )
Simple recovery breaks the log chain as soon as a checkpoint occurs. This seems an odd thing to do to a log-shipped database, so I assume there is something you haven't told us.
Indianrock (9/7/2009)
So the interim plan is to 1) move the standby box to our office, away from the Raging Wire facility ( geographical distribution ) and 2) copy the weekly full bak, nightly differential backup and transaction log backups over this pipe to a file server. I realize it may be the pipe bandwidth and not robocopy that's the problem.
Now I'm confused about what's where. Perhaps I'm being thick?
Paul
September 8, 2009 at 5:32 am
The database structure is a work in progress. New file groups were added in May along with other schema changes, initially while in full recovery. Despite log shipping and transaction log backups every 15 minutes, the log file filled up, so we went to simple recovery for the duration of that weekend project.
There will be more such work in the near future, and getting the disk space I'd like to see is always a struggle. The bottom line is that until the standby SQL Server can actually support the production load, I don't want to spend any more Sunday afternoons resurrecting log shipping.
September 8, 2009 at 5:41 am
I have suggested LiteSpeed. We'll see if I can get the $$ approved. FTP, now that's interesting. I'll ask our Systems folks about that. Most of our production hardware is at a Raging Wire server facility in Sacramento.
Our office is about ten miles from there so we want to have a current copy of either the database or backup files at our office along with the dev servers. This will save us money that otherwise goes to rack space and energy charges at Raging Wire.
We're doing the full backup Saturday night only, differentials nightly along with the 24/7 every 15 minute transaction log backups for log shipping.
See my other post about why we go to simple recovery -- it's because the log fills up when we do large schema changes.
September 8, 2009 at 6:26 am
Yes, FTP was a real 'win' for us - even the latest 2008 Server doesn't do file copy well. It has a bug where a large copy bloats the file cache to a point where the OS ceases to function correctly. There is a non-public hotfix, but it didn't work for us. Shame, because I'd really like to dump WGET (a free FTP thingy) and use Windows file copy again one day...
LiteSpeed is a worthwhile investment... and no, I don't have any connection with them 😀
I sympathize with your other difficulties - good luck.
Paul
September 8, 2009 at 11:13 am
I've used LiteSpeed before and can recommend it. I would suggest that you get it on a trial basis, or at least find out how much compression would be applied to your databases - that could be a big selling point. I've used robocopy before, but purely for the restart functionality.
FTP could be a good solution, but ideally I would concentrate on getting the backups as small as possible. It might make your current setup more manageable if the backups are significantly smaller.
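One rough way to gauge compressibility before buying anything is to gzip a sample of an existing backup file. This is only a proxy - LiteSpeed and SQL Server native compression use different algorithms - but a highly compressible sample is an encouraging sign. A sketch, with the sample size chosen arbitrarily:

```python
import gzip
from pathlib import Path

def estimate_compression_ratio(path: Path, sample_bytes: int = 64 * 1024 * 1024) -> float:
    """Gzip the first sample_bytes of a file; return original/compressed ratio.

    A crude proxy only: a ratio near 1.0 means the data is already dense
    (e.g. encrypted or pre-compressed), while anything above ~3 suggests
    a backup-compression product would pay off.
    """
    with path.open("rb") as f:
        sample = f.read(sample_bytes)  # reads the whole file if it's smaller
    compressed = gzip.compress(sample, compresslevel=6)
    return len(sample) / len(compressed)
```

Run against a real .bak this gives a concrete number to put in front of whoever approves the LiteSpeed purchase.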
--------------------------------------------------------------------------------------
[highlight]Recommended Articles on How to help us help you and[/highlight]
[highlight]solve commonly asked questions[/highlight]
Forum Etiquette: How to post data/code on a forum to get the best help by Jeff Moden
Managing Transaction Logs by Gail Shaw
How to post Performance problems by Gail Shaw
Help, my database is corrupt. Now what? by Gail Shaw
September 8, 2009 at 3:21 pm
A LiteSpeed trial is an excellent suggestion. I keep banging on about FTP since using it nearly trebled our throughput on a 400Mbps up-country link.