August 15, 2011 at 12:38 pm
This is not my area, but I heard from a colleague that he is stalled on a backup for an 800GB database. I asked him if there is a way to archive part of the DB, and he said no.
Does anyone have experience with this kind of task? Can a huge DB be split into smaller ones and distributed across different locations? If so, how would the program be revised to read data from the different locations?
The DB itself (not including the transaction log) is huge, and I understand that by nature the whole thing needs to be retained for 2 years, which is why it has grown into a monster.
Thank you very much.
August 15, 2011 at 1:38 pm
It is possible: you can create filegroups and back up the filegroups, and you can find a lot of information on the internet regarding this. In case you want to speed up the backup process, there are tools that can help you. Check out the tools from Red Gate and Quest; there are many other tools, but those two I know to be very good.
Good luck!
August 16, 2011 at 10:10 am
A few other things to consider:
Differential backups (see the sketch after this list)
If a subset of that 2 years of data is read-only, you can archive it off into a separate database; then you only need to back up the "live"/updated data. If necessary, you can join the two DBs together with views (or other processes). We've done this with one of our large DBs and it's working quite well.
Be careful with file/filegroup backups. They're quite finicky IME.
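A minimal sketch of both ideas; the database, table, column, and path names here are all hypothetical, so adjust to your environment:

-- Weekly full backup (the base the differentials build on)
BACKUP DATABASE [BigDB]
    TO DISK = 'S:\Backups\BigDB_Full.bak';

-- Daily differential: captures only extents changed since the last full backup
BACKUP DATABASE [BigDB]
    TO DISK = 'S:\Backups\BigDB_Diff.bak'
    WITH DIFFERENTIAL;

-- Stitching a live table and its read-only archive back together with a view
CREATE VIEW dbo.OrdersAll
AS
    SELECT OrderID, OrderDate FROM dbo.Orders
    UNION ALL
    SELECT OrderID, OrderDate FROM ArchiveDB.dbo.Orders;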
August 16, 2011 at 10:39 am
800GB isn't really a monster. There are 1TB drives these days and lots of backup products are built to handle multi-TB backups.
Best advice: use backup compression first. If you have Enterprise Edition, you can use the native feature, or use a third-party product like Red Gate's SQL Backup to shrink the backups, which will also make them run faster. There are other products that work in a similar manner.
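For example, a native compressed backup on 2008 Enterprise might look like this (database name and path are placeholders):

-- COMPRESSION trades extra CPU during the backup for a smaller,
-- faster-to-write backup file
BACKUP DATABASE [BigDB]
    TO DISK = 'S:\Backups\BigDB_Compressed.bak'
    WITH COMPRESSION;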
If you want to back up less data anyway, and can move some data to separate tables/filegroups, then you have other options. You can separate the data into another database and potentially back the older data up less often, or you can separate things into different filegroups and work with filegroup backups, skipping the older filegroup most of the time.
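A rough sketch of the filegroup approach (the filegroup and path names are assumptions):

-- Back up only the filegroup holding the current, changing data,
-- skipping the filegroup that holds the static archive
BACKUP DATABASE [BigDB]
    FILEGROUP = 'FG_CURRENT'
    TO DISK = 'S:\Backups\BigDB_FG_Current.bak';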
August 17, 2011 at 5:29 am
In general, throw disks at the problem. Assuming a reasonably well-tuned set of queries, good CPU, and enough memory, the most likely issue with that much data is I/O, so the more disks and disk controllers you can distribute the load across, the better.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
August 17, 2011 at 10:32 am
Steve Jones - SSC Editor (8/16/2011)
Best advice, use backup compression first. If you have Enterprise Edition, you can use that
Not available in SQL Server 2005; you will need a 3rd-party product.
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
August 17, 2011 at 10:37 am
I'm in favor of adding disks. I am also in favor of implementing filegroup backups. The drawback is the time it will take you to figure out how to break your tables up into filegroups.
800GB is not too terribly large. With a bunch of disks, and maybe even multiple controllers as Grant suggested, you could back it up reasonably quickly. I would even consider splitting the backup across multiple files.
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
August 17, 2011 at 11:54 am
Perry Whittle (8/17/2011)
Steve Jones - SSC Editor (8/16/2011)
Best advice, use backup compression first. If you have Enterprise Edition, you can use that
Not available in SQL Server 2005; you will need a 3rd-party product.
I know just the tool for the job...
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
August 17, 2011 at 12:07 pm
You should consider moving to SQL Server 2008 (or R2) Enterprise Edition to be able to implement native backup compression, compression of the large tables in the database, and partitioning of the large tables.
Native backup compression should speed up the backups, and table compression will speed up backups and queries against the large tables.
Partitioning the tables will allow you to distribute the data across multiple filegroups, and allow scans against the large tables to be executed in parallel across multiple partitions.
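A minimal sketch of year-based partitioning; the function, scheme, filegroup, table, and column names are all hypothetical:

-- Partition function: RANGE RIGHT puts each boundary date in the later partition
CREATE PARTITION FUNCTION pfByYear (datetime)
    AS RANGE RIGHT FOR VALUES ('2010-01-01', '2011-01-01');

-- Partition scheme: map each partition to its own filegroup
CREATE PARTITION SCHEME psByYear
    AS PARTITION pfByYear TO (FG2009, FG2010, FG2011);

-- Build the large table on the scheme, partitioned by the date column
CREATE TABLE dbo.BigTable
(
    OrderID   int      NOT NULL,
    OrderDate datetime NOT NULL
) ON psByYear (OrderDate);

-- Optionally add page compression as well (Enterprise Edition)
ALTER TABLE dbo.BigTable REBUILD WITH (DATA_COMPRESSION = PAGE);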
Backing up to multiple files is also something you should consider. It lets you back up to multiple locations if you don't have space in a single location, and it lets you spread the I/O workload over multiple disk arrays. You can do this with any SQL Server version and edition.
Sample backup command:
BACKUP DATABASE [MyDatabase]
TO
    DISK = 'S:\MyDir\Full\MyDatabase_001.BAK',
    DISK = 'T:\MyDir\Full\MyDatabase_002.BAK',
    DISK = 'U:\MyDir\Full\MyDatabase_003.BAK',
    DISK = 'V:\MyDir\Full\MyDatabase_004.BAK',
    DISK = 'W:\MyDir\Full\MyDatabase_005.BAK';
-- a full backup is the default, so no WITH option is needed here
August 17, 2011 at 1:20 pm
Grant Fritchey (8/17/2011)
I know just the tool for the job...
😀 Hmm, let me think now, I wonder what it could be 😉
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
August 17, 2011 at 1:23 pm
Michael Valentine Jones (8/17/2011)
You should consider moving to SQL Server 2008 (or R2) Enterprise Edition
Given the costs, it may just be worth spending the few hundred dollars on a licence for SQLBackup. It depends on whether the OP can make use of the partitioning features.
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
August 17, 2011 at 1:30 pm
Grant Fritchey (8/17/2011)
Perry Whittle (8/17/2011)
Steve Jones - SSC Editor (8/16/2011)
Best advice, use backup compression first. If you have Enterprise Edition, you can use that
Not available in SQL Server 2005; you will need a 3rd-party product.
I know just the tool for the job...
I do too. I would also recommend that same tool. :-D
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
August 17, 2011 at 1:41 pm
Grant Fritchey (8/17/2011)
Perry Whittle (8/17/2011)
Steve Jones - SSC Editor (8/16/2011)
Best advice, use backup compression first. If you have Enterprise Edition, you can use that
Not available in SQL Server 2005; you will need a 3rd-party product.
I know just the tool for the job...
I bet it is not LiteSpeed 😛
August 18, 2011 at 6:53 am
I back up more than a few databases over 500GB to tape; one or two are close to 2TB. We have a tape robot with LTO-4 tapes and NetBackup. If I had to do it to disk, I would seriously consider Windows 2008 on both the source and destination servers just for the SMB protocol improvements. The source and destination servers should be at least ProLiant G5s (or whatever the Dell equivalent is) with SATA and/or SAS drives; with the old SCSI drives you are asking for trouble. Don't bother comparing the RPM numbers, since they are useless.
In my case I can do a 1.5TB database backup to tape in around 11 hours.
August 18, 2011 at 7:34 am
alen teplitsky (8/18/2011)
I back up more than a few databases over 500GB to tape; one or two are close to 2TB. We have a tape robot with LTO-4 tapes and NetBackup. If I had to do it to disk, I would seriously consider Windows 2008 on both the source and destination servers just for the SMB protocol improvements. The source and destination servers should be at least ProLiant G5s (or whatever the Dell equivalent is) with SATA and/or SAS drives; with the old SCSI drives you are asking for trouble. Don't bother comparing the RPM numbers, since they are useless. In my case I can do a 1.5TB database backup to tape in around 11 hours.
I would love to work with databases that big some day. The biggest database I have had was about 450GB. Working with very large databases (VLDBs) makes you think outside the box to solve problems like this. How do you do disaster recovery for those databases? Mirroring? SAN replication? Log shipping?