December 16, 2008 at 9:05 am
We use ArcServe to back up database .bak files to tape nightly. This product has worked well, but we have concerns that it will not be able to perform fast enough for large database backups, such as 500+ GB, which leads to my questions.
What backup software do you use for backing up large .bak files to tape and how fast does it perform?
Do you use Microsoft SQL Server's built-in backup solution for backing up large databases to disk or do you use a third-party product?
Thanks, Dave
December 16, 2008 at 9:15 am
Native backup is the fastest at throwing out pages, but software with compression (Litespeed, SQL Backup, etc.) can save time because less is written to disk, at the expense of more CPU.
For large backups, many people snap them on the SAN, which is quicker than trying to get to disk. Don't forget that you want to throw the backup to disk, then have something move the .bak file to tape, rather than backing up from SQL directly to tape. Tape is slooowwww.
Also, you don't need to get the backup to tape overnight, just any time before the next set of backup files gets written.
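For illustration only (the database name and file paths here are made up), a plain native backup to disk, striped across a couple of files to help throughput, looks something like this:

-- Native backup straight to disk; striping across files can improve throughput
BACKUP DATABASE BigDB
TO  DISK = N'E:\Backups\BigDB_1.bak',
    DISK = N'F:\Backups\BigDB_2.bak'
WITH INIT, STATS = 5;

-- On SQL Server 2008 Enterprise you could also use native compression:
-- BACKUP DATABASE BigDB TO DISK = N'E:\Backups\BigDB.bak' WITH COMPRESSION, INIT, STATS = 5;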
December 16, 2008 at 9:25 am
Hi Steve,
We always back up to disk first, but the problem we see on the horizon is the speed of both backing up to tape and restoring from tape. We will be looking at Microsoft's DPM as a possible solution, but that would also replace SQL Server .bak backups, so we need to do a lot of testing to show it can do what we need it to do.
Can you elaborate on the SNAP to the SAN?
Thanks, Dave
December 16, 2008 at 9:37 am
Boy, life is changing when we have 1TB USB disks for a few hundred bucks.
Back when 500GB was a lot of space, most dbs this size were on the SAN. Typically you can take a LUN (volume) on the SAN and snap a copy of it on the SAN pretty much instantaneously. Not sure I'd do it with live MDFs, but it should copy the backup volume (that's separate, right?) with no trouble. Kind of works like snapshots in SQL, but it gets you a copy.
Large db restores are problematic. If you look at some of the SQL CAT work (www.sqlcat.com or .org), they find they really have to consider filegroup restores, archiving data in separate filegroups, etc., in order to have reasonable recovery for multi-TB databases. If you have 100TB of data, you can't back up or restore without a SAN, and then perhaps not in 24 hours.
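A filegroup (piecemeal) restore sketch, with made-up database, filegroup and file names, goes roughly like this: bring the primary and active filegroups online first, and leave the archive filegroup for later.

-- Bring PRIMARY and the current-data filegroup online first
RESTORE DATABASE BigDB
    FILEGROUP = 'PRIMARY', FILEGROUP = 'Current'
FROM DISK = N'E:\Backups\BigDB_full.bak'
WITH PARTIAL, NORECOVERY;

RESTORE LOG BigDB FROM DISK = N'E:\Backups\BigDB_log.trn' WITH RECOVERY;

-- The archive filegroup can be restored afterwards (roll it forward with log
-- backups if it is read-write under the full recovery model)
RESTORE DATABASE BigDB
    FILEGROUP = 'Archive'
FROM DISK = N'E:\Backups\BigDB_full.bak'
WITH RECOVERY;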
December 16, 2008 at 9:42 am
SAN SNAP is amazing.
Behind the scenes there is a mirror of the data sitting on disk, synchronized with the disk that is presented to your server (EMC used to call it a BCV, others call it a split mirror; there are several acronyms). At a set time you can perform a SNAP backup, whereby a checkpoint is established and written, then the BCV disk is split away from the production disk. From there a backup can be performed on the split disk direct to tape, or it can just be held until you are ready to do the next backup, at which point it is resynchronized and the process repeated.
Direct attached tape drives can back this data up ridiculously fast.
Another huge benefit is that because it is a block-by-block copy of the data, it is an exact copy of your production SQL Server data as of the point in time of the SNAP. You can attach that volume to another server, get the databases into a consistent state, and run your CHECKDBs, fragmentation assessments and the like with zero impact on your production machine. This way you know what shape the data is in and what steps you might need to perform to keep it in tip-top working shape. Detach the dbs and disk when you are done and everything is ready for the next backup.
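On the secondary server that could look something like this (drive letters, file names and the database name are made up for the example):

-- Attach the data and log files from the snapped volume under a scratch name
CREATE DATABASE BigDB_Check
ON (FILENAME = N'S:\Data\BigDB.mdf'),
   (FILENAME = N'S:\Logs\BigDB_log.ldf')
FOR ATTACH;

-- Run the integrity and fragmentation checks here instead of on production
DBCC CHECKDB (BigDB_Check) WITH NO_INFOMSGS;
SELECT * FROM sys.dm_db_index_physical_stats(DB_ID('BigDB_Check'), NULL, NULL, NULL, 'LIMITED');

-- Detach when done so the volume can be resynchronized for the next snap
EXEC sp_detach_db @dbname = N'BigDB_Check';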
There is a cost overhead in the additional disk, but it is well worth it, especially when it comes to very large databases.
Of course there are many backup compression tools out there if SNAP is not an option. I would recommend giving a few of them a try and finding out which fits you best.