August 22, 2011 at 10:22 am
Just to share my experience with 4.5 TB database backups:
use Red Gate SQL Backup 5 with compression level 2, back up locally first, then move the backup to remote storage.
August 22, 2011 at 10:26 am
Yuri55 (8/22/2011)
Just to share my experience with 4.5 TB database backups: use Red Gate SQL Backup 5 with compression level 2, back up locally first, then move the backup to remote storage.
And does that work well for you? Just curious.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
August 22, 2011 at 10:33 am
Ignacio A. Salom Rangel (8/18/2011)
alen teplitsky (8/18/2011)
I back up more than a few databases over 500 GB to tape; one or two are close to 2 TB. We have a tape robot with LTO-4 tape and NetBackup. If I had to do it to disk I would seriously consider Windows 2008 on the source and destination servers, just for the SMB protocol improvements, and the source and destination servers should be at least ProLiant G5s (or whatever the Dell equivalent is) with SATA and/or SAS drives. With anything on the old SCSI drives you are asking for trouble. Don't bother comparing the RPM numbers, since they are useless. In my case I can do a 1.5 TB database backup to tape in around 11 hours.
I would love to work with databases that big some day. The biggest database I have had was about 450 GB. Working with very large databases (VLDBs) makes you think outside the box to solve problems like this. How do you do disaster recovery for those databases? Mirroring? SAN replication? Log shipping?
None; most of this is static archive data, so if it's offline for a day or so it's not a big deal. We could probably mirror it if we wanted to, since the data only changes once every few days as new data is uploaded.
The craziest thing I've seen lately is that a few servers with 12 SATA hard drives in a RAID 5 configuration have the fastest restore performance. My guess is that I can restore 1.5 TB in 12 hours or so.
With our active databases we do SAN replication, but we are going to start doing it via database mirroring soon.
August 22, 2011 at 10:37 am
For 2-3 years now (I don't remember exactly); no problem at all.
I use it for backups of both pretty big databases (> 1 TB) and small ones.
Before backup compression was available, we really suffered with backups of databases over 1 TB.
August 23, 2011 at 7:30 am
Back in 2001 I was dealing with a customer who had a 14 TB database. Although this was not on SQL Server their approach to backup would work fine with SQL Server. They had other databases, with a total of (I think) 22 TB production data.
Their main problem was that running a backup of 14 TB took longer than 24 hours, but they needed a daily backup for DR purposes.
They split their database into 7 filegroups (actually it was a multiple of 7, but I don't remember the number and it is not that important). On day 1 they did a full backup of Filegroup 1 and differentials of Filegroups 2 to 7. On day 2 they did a full backup of Filegroup 2 and differentials of Filegroups 1, 3, 4, 5, 6, 7. This rotated throughout the week, so they always had a full backup of every filegroup that was no older than 7 days. This reduced their daily backup window to under 6 hours, which they could cope with.
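In SQL Server terms, one day of that rotation can be sketched with filegroup-level BACKUP statements. This is a minimal illustration only; the database name, filegroup names (FG1-FG7), and paths are assumptions, and a real job would parameterize which filegroup gets the full backup each day.

```sql
-- Day 1 of a hypothetical 7-day rotation (names and paths are assumed):
-- full backup of filegroup 1, differential backup of the other six.
BACKUP DATABASE BigDB
    FILEGROUP = 'FG1'
    TO DISK = 'X:\Backup\BigDB_FG1_Full.bak'
    WITH INIT;

BACKUP DATABASE BigDB
    FILEGROUP = 'FG2', FILEGROUP = 'FG3', FILEGROUP = 'FG4',
    FILEGROUP = 'FG5', FILEGROUP = 'FG6', FILEGROUP = 'FG7'
    TO DISK = 'X:\Backup\BigDB_FG2to7_Diff.bak'
    WITH DIFFERENTIAL, INIT;
```

On day 2 the full backup would target FG2 and the differential would cover the remaining six, and so on through the week.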
Another approach that would work with medium-sized databases between maybe 200 GB and 1 TB is to use multiple backup files. In my last job we had a 350 GB database that we wanted to back up each day.
Running a single-threaded backup with LiteSpeed took a bit over 20 hours to complete. We tweaked this to run as 4 threads producing 4 output files. Each thread automatically backs up a different portion of the database, and the entire backup job then took about 3 1/2 hours. Because the backup could complete during slack time in user activity, it ran at the fastest possible speed. When it was single-threaded it spent some of its time competing with peak user activity, which only made the single-threaded backup take longer.
We found that increasing the number of threads above 4 did not really help, but 4 threads ran faster than 3. However, this ran on kit that is now obsolete, so it is definitely worth testing with more threads on a modern 12 or 24 core server.
We also found that native SQL backup would run faster with multiple output files than with a single file, so this approach is likely to work with many backup products.
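With native SQL backup, the multiple-output-file approach means striping one backup across several DISK devices; the server writes to all of the files in parallel. A minimal sketch, with the database name and paths assumed for illustration:

```sql
-- Striped native backup to four files, ideally on separate spindles.
-- Database name and paths are assumptions for illustration.
BACKUP DATABASE BigDB
TO  DISK = 'E:\Backup\BigDB_1.bak',
    DISK = 'F:\Backup\BigDB_2.bak',
    DISK = 'G:\Backup\BigDB_3.bak',
    DISK = 'H:\Backup\BigDB_4.bak'
WITH INIT, STATS = 5;
```

Note that a restore needs all four stripe files present, so they should be moved and retained as a set.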
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
August 23, 2011 at 8:24 am
We use HyperBac here and it works a treat, as we are always having to refresh multiple virtual environments from production.
August 23, 2011 at 8:27 am
I know of a few people, through friends, who get 1 TB backed up in under 60 minutes. You can do this with dedicated drives for backup and by striping the backup across multiple disks.
If you can get some test drives, try putting out a 4-6 drive set, each as a single drive (no RAID), and run a backup striped across all of the drives (4-6 files).
You should get good throughput on that, depending on the drives and the drive tech/speed (SATA/SCSI/10k/15k/etc.).
August 23, 2011 at 8:40 am
Is this on the latest hardware? I have a 400 GB database that we back up to tape and disk every day. The tape is the real backup and the disk copy is a daily dev/QA dump. The tape backup is faster than the disk backup.