October 19, 2009 at 5:41 pm
This is a fairly general question so all comments are welcome.
I collect all my backups in one location according to the following convention:
\\SERVER1\DBBACKUPS\DBSERVERNAME\DATABASENAME, and the backup files are stored in each subfolder. The backups are fulls and transaction log backups, and the files are date- and time-stamped.
My \\SERVER1\DBBACKUPS\ folder is over 400 GB. My retention policy in this folder is 2-3 days. This folder is written to tape and sent offsite.
We would like to improve this situation from a DR perspective by moving the data offsite over the network/VPN. The proposed solution is a large disk array onsite that syncs to a similar device offsite.
The problem is the size of the folder and the available bandwidth, i.e., the data is too big and the pipe too small.
Any suggestions? I imagine some of you are doing something similar. Thanks, Em.
October 19, 2009 at 6:50 pm
I understand the situation, but I'd like to know how often you are planning to move the data from onsite to offsite.
Our databases are larger, and we do it almost the same way you do: the data movement is scheduled every 20 minutes, so at any point only the latest files in that folder are moved. That way you are not required to move the entire 400 GB in a single shot.
I'm not sure of our bandwidth specs; I'll enquire and let you know. But what about your bandwidth, any idea what you have?
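A minimal sketch of that kind of scheduled movement, assuming the roughly 20-minute window described above (the source/destination paths and the helper name here are placeholders, not the actual environment):

```python
import os
import shutil
import time

def copy_recent_backups(src_root, dst_root, window_minutes=20):
    """Copy only backup files modified within the last `window_minutes`
    from src_root to dst_root, preserving the subfolder layout."""
    cutoff = time.time() - window_minutes * 60
    copied = []
    for dirpath, _dirnames, filenames in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        for name in filenames:
            src = os.path.join(dirpath, name)
            if os.path.getmtime(src) >= cutoff:
                dst_dir = os.path.join(dst_root, rel)
                os.makedirs(dst_dir, exist_ok=True)
                # copy2 preserves the file's timestamps on the copy
                shutil.copy2(src, os.path.join(dst_dir, name))
                copied.append(src)
    return copied
```

Scheduled every 20 minutes (SQL Agent job or Task Scheduler), each run moves only the newest backup files. On Windows, robocopy with the /XO switch (skip files older than the destination copy) achieves the same effect natively.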
Blog -- LearnSQLWithBru
Join on Facebook Page Facebook.com/LearnSQLWithBru
Twitter -- BruMedishetty
October 20, 2009 at 8:51 am
Thanks for your response.
>> like to know how often are you planning to move the data from onsite to offsite.
The system we are evaluating has two modes: a traditional mode where a job runs and moves the data, which we would probably run daily, and a near-real-time sync mode, which we may use for our most critical db.
>>but how about your bandwidth, any idea what do you have?
I'm thinking we may be able to sustain 3-6 Mbps upload. It's not dedicated, so it will vary.
October 20, 2009 at 1:40 pm
I would look at using Hyperbac myself. Or, you could look at the various products available from Redgate, Quest, Idera, etc... that perform backups and allow for compression.
Using Litespeed, I have compressed my 400GB (used) database to a full backup that is less than 80GB. You should get comparable space savings from any of the utilities available - which will help tremendously on the amount of data being transferred across the wire.
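To put rough numbers on that, a back-of-the-envelope calculation (the 3 Mbps figure is the low end of the uplink mentioned earlier in this thread; the sizes are from the posts above, and protocol overhead is ignored):

```python
def transfer_hours(size_gb, mbps):
    """Hours to move `size_gb` gigabytes over an `mbps` megabit/s link
    (decimal units; ignores protocol overhead and link contention)."""
    bits = size_gb * 1000 ** 3 * 8
    return bits / (mbps * 1_000_000) / 3600

# Uncompressed 400 GB vs. an ~80 GB compressed full backup at 3 Mbps:
full_hours = transfer_hours(400, 3)        # roughly 296 hours (~12 days)
compressed_hours = transfer_hours(80, 3)   # roughly 59 hours (~2.5 days)
```

Even compressed, a nightly full over that link is tight, which is another argument for combining compression with moving only new or changed files.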
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
October 20, 2009 at 2:13 pm
emily-1119612 (10/19/2009)
...
The problem is the size of the folder and the available bandwidth, i.e., the data is too big and the pipe too small.
Any suggestions? I imagine some of you are doing something similar. Thanks, Em.
Bigger pipe, smaller data, or both.
October 21, 2009 at 9:08 am
Thanks all. I am going to explore the compression options.
October 21, 2009 at 9:42 am
I like Bru's idea. Create a job to just send all "new" backups across the pipe on a regular basis. That way you'll be moving files as quickly as you can, and if you have staggered backups, even a little, they'll get offsite fairly quickly.
The only other thing I could think of would be to use mirroring, log shipping, or even replication to slide data changes across the wire more regularly. You could set up one larger server, a single SQL instance, and just have multiple databases over there. In a DR scenario you could always move a db to another server at the DR site, but you'd at least have the copy.
Keep in mind a backup has lots of data that hasn't changed (usually) and lots of indexes, which both can add a lot of space to the file size that doesn't need to be copied on a regular basis.
October 21, 2009 at 1:24 pm
Thanks for replying, Steve. I'm not going to have direct access to the pipe, which will essentially be an encrypted tunnel created by the two hardware devices. I'm taking a strong look at Hyperbac. If I can resolve this without modifying my existing backup code, that would be wonderful.
>>Keep in mind a backup has lots of data that hasn't changed
Good point, I don't think I want to get into mirroring or log shipping at this time but differential backups could help as well.
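As a rough illustration of why differentials could help here (the 5% daily change rate is a purely made-up assumption for the sketch): a weekly full plus daily differentials moves far less data across the wire than a full every night.

```python
def weekly_transfer_gb(full_gb, daily_change_fraction, fulls_per_week=1):
    """Approximate GB sent offsite per week with one full backup plus
    differentials on the remaining days. Differentials are cumulative
    since the last full (day 1 diff ~= one day's changes, day 6 ~= six)."""
    diff_days = 7 - fulls_per_week
    diffs = sum(full_gb * daily_change_fraction * d
                for d in range(1, diff_days + 1))
    return full_gb * fulls_per_week + diffs

nightly_fulls = 400 * 7                          # 2800 GB/week
full_plus_diffs = weekly_transfer_gb(400, 0.05)  # 400 + (20+40+...+120) = 820 GB/week
```

The actual differential sizes depend entirely on how much of the data really changes; the point is only that the weekly total drops sharply when most pages are static.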