June 12, 2003 at 9:37 pm
We are running SQL Server 2000 EE with SP2 in a four-node datacenter cluster, and the database is currently over 500GB. We perform a full backup weekly, a differential backup daily, and a transaction log backup every 30 minutes. These backups are later copied to the SAN. I would like your opinions and solutions on the following issues.
1. How do you minimize data loss?
During the full backup, transaction log backups are blocked until the full backup completes. For such a large database, the full backup takes at least a few hours (in our case, over 6 hours). If something unlucky happens to the server before the full backup completes and the database is corrupted, we lose not only the full backup but also all transactions since the full backup started. More than 30 minutes of data will be lost.
2. How do you reduce the backup time?
To reduce the full database backup time, we back up the database to multiple backup files on different drives.
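The striping described above can be expressed in T-SQL like this; the database name and file paths are placeholders for illustration:

```sql
-- Striped full backup: SQL Server writes to all listed files in parallel,
-- so placing them on separate physical drives shortens the backup window.
-- Database name and paths below are illustrative, not from the original post.
BACKUP DATABASE BigDB
TO  DISK = 'E:\Backup\BigDB_stripe1.bak',
    DISK = 'F:\Backup\BigDB_stripe2.bak',
    DISK = 'G:\Backup\BigDB_stripe3.bak'
WITH INIT, STATS = 10   -- overwrite existing files, report progress every 10%
```

Note that every stripe in the set is required for a restore, so all files must reach the SAN copy together.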
3. How do you verify each full backup can be restored?
We perform a quarterly restoration test to another server, but we did once experience a full backup that was corrupted and could not be restored.
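Between full restore tests, a quick check like the following can catch a backup set that is unreadable; the paths are placeholders. Note that on SQL Server 2000 this only verifies that the backup media is complete and readable, not that every page is intact, so it supplements rather than replaces real restore tests:

```sql
-- Quick sanity check of a striped backup set (paths are illustrative).
-- Confirms the set is complete and readable; does not validate page contents.
RESTORE VERIFYONLY
FROM DISK = 'E:\Backup\BigDB_stripe1.bak',
     DISK = 'F:\Backup\BigDB_stripe2.bak',
     DISK = 'G:\Backup\BigDB_stripe3.bak'
```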
4. What is your recommendation?
We are considering implementing a snapshot backup solution from HP. Log shipping is another option.
5. What is your strategy/solution for backing up such a large database?
Thanks, and I appreciate your input.
June 13, 2003 at 7:01 am
Allen,
Looks to me like you've got the backups tuned as well as they can be. I can only offer some suggestions/experiences:
1. How do you minimize data loss?
Reduce the backup time. See Q2.
2. How do you reduce the backup time?
I'd do striped backups, but you're already doing so. I've never had the need to try out SQL Litespeed (one of this site's sponsors) but it may be worth investigating.
3. How do you verify each full backup can be restored?
For a mission-critical database I've worked on (closer to 150GB than 500GB), we used Compaq's EVM on the DB backup SAN drives. This split/mirror setup let us attach a copy of the backup drive to another server and restore it there daily. It also provided a read-only reporting server.
4. What is your recommendation?
Log shipping is good, knowing that you can fail over in maybe under an hour, allowing for a few glitches. But you must remember to send frequent backups of master and msdb over to the failover server too (for manual restoration if needed).
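The master/msdb point above might look like this on a schedule; the server names and paths are assumptions, not from the original posts:

```sql
-- Periodic system-database backups copied to the standby (paths illustrative).
-- Without current master/msdb, logins and jobs won't survive a failover.
BACKUP DATABASE master TO DISK = '\\Standby\Backup\master.bak' WITH INIT
BACKUP DATABASE msdb   TO DISK = '\\Standby\Backup\msdb.bak'   WITH INIT

-- On the standby, log shipping applies each shipped log with NORECOVERY
-- (or STANDBY for read-only access) so later logs can still be restored:
RESTORE LOG BigDB
FROM DISK = '\\Standby\Logs\BigDB_20030613_0930.trn'
WITH NORECOVERY
```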
Just as good is a good full database backup sitting safely somewhere, so a combination of the 2 - if it can be afforded these days - is optimal.
...and nothing beats a tested/documented restore procedure.
5. What is your strategy/solution for backing up such a large database?
Sorry, haven't worked with a 500GB DB. I'll be coming to you for advice when I do.
Cheers,
- Mark
June 13, 2003 at 10:12 am
Great advice above. One more I'd add - SQL Litespeed. Works great and has reduced our largest backup substantially in size and time. Went from 60GB native to 9GB with Litespeed. Times went down from 90 minutes to around 40 minutes.
http://www.sqlservercentral.com/columnists/sjones/sqllitespeedlightningfastbackups.asp
Steve Jones
June 13, 2003 at 1:06 pm
Thanks both for excellent suggestions.