January 19, 2016 at 8:42 am
We use SQL Server native backups for our database and transaction log backups. We're curious whether this is still the preferred approach, especially when dealing with terabyte-scale backups, or whether there is a shift towards using some type of VSS-based solution, such as DPM. What do you do at your company?
Also, how do you handle off-site backups? We have an individual who is pushing DPM to reduce the size of the nightly backups written to tape and then sent off-site daily for disaster recovery purposes. His argument is that we are currently performing nightly FULL backups of SQL, which causes 7+ TB of data to be written to tape every night; DPM would write only the changes to tape and drastically reduce the size of the tape backups. However, we could instead switch to SQL differential backups to reduce the size of the nightly backups, but we would first like to know what other DBAs are doing to handle database backups.
Thanks
January 19, 2016 at 9:54 am
I can tell you what we do. It might or might not work for you.
First of all, avoid DPM like the plague. It does the dumbest things you could ever think of and is not suitable for enterprise-grade solutions; it might work for small shops with limited needs. Example: it takes log backups to the same disk where the log file resides, then pushes the tlog backups over the wire. When you're running out of tlog disk space due to heavy activity, the last thing you want is something else writing to your tlog disk. I could go on forever with many other flaws, but this one alone should be enough to tell you what kind of solution this is.
That said, we are using Legato Networker (again, not state of the art, but it works for us) with disk storage for short-term backups (15 days on average). It also has a Data Domain appliance that we use for some database backups and for all monthly backups.
The Data Domain is replicated to another off-site Data Domain, which stores copies for disaster recovery.
The main shortcoming of this solution is the amount of data that travels over the network: backups cannot be compressed, and the deduplication happens on the Data Domain. Solutions such as DD Boost use some CPU cycles on the client machine (the SQL Server, in this case) to perform compression and deduplication. Vendors say it works like a charm; skeptics say it still performs worse than compressing backups natively.
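For comparison, here's a minimal sketch of a natively compressed backup written straight to a network share (the database name and UNC path are made up); the compression happens on the SQL Server before anything crosses the wire:

-- Hypothetical database and share names, for illustration only.
-- COMPRESSION shrinks the backup stream on the SQL Server itself,
-- so far fewer bytes travel over the network than with an
-- uncompressed stream that gets deduplicated at the target.
BACKUP DATABASE [SalesDB]
TO DISK = N'\\backupserver\sqlbackups\SalesDB_FULL.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;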
-- Gianluca Sartori
January 19, 2016 at 10:18 am
I agree: stay away from DPM for backing up databases directly. We use Idera's backup software, and it does a very good job. It compresses and encrypts all the backup files, though all of this can be done with native SQL as well. DPM then comes in and backs up those files to tape. Iron Mountain picks up those tapes and sends them to an off-site location.
January 19, 2016 at 5:59 pm
Also interested in what others are doing.
We have a native backup to a separate LUN, which is picked up by an archiving solution (we currently use both Avamar and Commvault).
Dedupe is fairly pointless with backup compression, and you need to consider how long it will take to recover a database: will it meet the SLA if you need to restore from tape?
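One rough way to put numbers on that, assuming backup history is still in msdb, is to look at how long recent backups took and how big they were. It's only a proxy for restore time, but it gives you something to hold up against the SLA:

-- Recent backup durations and sizes from msdb history (a proxy for restore time).
SELECT  database_name,
        type,                -- D = full, I = differential, L = log
        backup_start_date,
        DATEDIFF(MINUTE, backup_start_date, backup_finish_date) AS duration_minutes,
        backup_size / 1073741824.0            AS size_gb,
        compressed_backup_size / 1073741824.0 AS compressed_gb
FROM msdb.dbo.backupset
WHERE backup_start_date > DATEADD(DAY, -30, GETDATE())
ORDER BY backup_start_date DESC;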
January 20, 2016 at 2:10 am
Disclosure: I work for Redgate Software.
We, obviously, sell a backup product. Prior to SQL 2014, the security advantages of encryption were a bigger selling point. With 2014+, if you don't need heavy compression, you can use native backups and get good compressed, encrypted backups. SQL Backup Pro offers tuning of the compression level (at the expense of CPU) for more or less space used, and correspondingly more or less time and bandwidth needed to move backups off site.
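For anyone who hasn't tried it, a minimal sketch of a compressed, encrypted native backup on 2014+ looks something like this (the certificate name, database name, and path are placeholders, and the master key and certificate need to exist and be backed up somewhere safe first):

-- One-time setup (sketch only): a master key and a certificate to encrypt with.
-- CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
-- CREATE CERTIFICATE BackupCert WITH SUBJECT = 'Backup encryption certificate';

BACKUP DATABASE [SalesDB]
TO DISK = N'X:\Backups\SalesDB_FULL.bak'
WITH COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert),
     CHECKSUM, STATS = 10;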
Prior to working here, I used another third-party tool to perform backups at a large enterprise, and it worked well for SQL 2000/2005/2008. We did notice a CPU hit, but it was a tradeoff that let us get full backups off the box quickly and move them to tape off-site. We did have a few security concerns with PII/PCI information, and encryption was necessary for those systems. The advantage on SQL 2012 and earlier is that third-party tools give you compression first, then encryption.
I've never liked DPM. It's not as reliable or stable as I want as a DBA, but things might have changed in the last few years; it has been a while since I dealt with it.
I do recommend considering differential backups. We crossed the 1 TB threshold for database backups in 2001 and at that time moved from daily fulls to weekly fulls with daily diffs (and hourly or more frequent log backups) to reduce space needs. The tradeoff is RTO. Make sure you let all business users know that it could impact recovery, and let them decide. I try to have numbers available on the frequency of restores to help decide, as well as the cost of continuing full backups.
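As a rough sketch of that kind of schedule with native backups (the database name, paths, and frequencies are placeholders; the real frequencies should fall out of your RPO/RTO):

-- Weekly (e.g. Sunday night): full backup
BACKUP DATABASE [SalesDB]
TO DISK = N'X:\Backups\SalesDB_FULL.bak'
WITH COMPRESSION, CHECKSUM;

-- Nightly: differential (only extents changed since the last full)
BACKUP DATABASE [SalesDB]
TO DISK = N'X:\Backups\SalesDB_DIFF.bak'
WITH DIFFERENTIAL, COMPRESSION, CHECKSUM;

-- Hourly or more often: transaction log
BACKUP LOG [SalesDB]
TO DISK = N'X:\Backups\SalesDB_LOG.trn'
WITH COMPRESSION, CHECKSUM;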
I don't like dedupe solutions. They seem to have restore issues over time (at least some products do). I think they are better suited to non-database work, but as with anything, they might work for some of your databases and not others. I'd certainly want testing over time here to be sure I wasn't introducing another point of failure or a dependency on a separate tape/file that could get damaged.
January 26, 2016 at 12:08 pm
Thanks all. We're setting up a call with Microsoft to get a better understanding of how DPM works with SQL. We'll ask about the impact of I/O being paused during backups, DPM creating temporary backup files on the data and log drives, whether DPM can perform point-in-time restores (we've been told it cannot), and how database mirroring and availability groups are affected, if at all. We may elect to implement differential backups on our larger databases if the size of the daily backups is indeed causing problems for our infrastructure team, who manage the tape/off-site backups.
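For reference, a point-in-time restore with native backups (assuming the full recovery model and an unbroken log chain) looks roughly like this; the names, paths, and STOPAT time are placeholders:

-- Restore the last full, then the last differential, leaving the database restoring.
RESTORE DATABASE [SalesDB]
FROM DISK = N'X:\Backups\SalesDB_FULL.bak'
WITH NORECOVERY;

RESTORE DATABASE [SalesDB]
FROM DISK = N'X:\Backups\SalesDB_DIFF.bak'
WITH NORECOVERY;

-- Roll the log forward and stop at the desired point in time.
RESTORE LOG [SalesDB]
FROM DISK = N'X:\Backups\SalesDB_LOG.trn'
WITH STOPAT = '2016-01-26 11:45:00', RECOVERY;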
Dave