January 25, 2012 at 10:08 am
I'm experimenting with checking the 'Compress backups' box with sample databases and am seeing much better results (more compression, faster backups) than I'd expected. Seems like a feature that's too good to be true.
So I have to ask: are there any good reasons to avoid backup compression? I'm using Standard edition (not Enterprise), and database sizes range from 400 MB to 100 GB (.mdf files).
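For reference, I believe the checkbox maps to roughly this T-SQL (a minimal sketch; the database name and path are placeholders from my tests):

BACKUP DATABASE [MyDatabase]  -- placeholder name
TO DISK = N'D:\Backups\MyDatabase.bak'  -- placeholder path
WITH COMPRESSION, INIT, STATS = 10;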
Appreciate any advice!
Rob Schripsema
Propack, Inc.
January 25, 2012 at 10:28 am
The only real downside is the additional CPU load required to compress the backups (and to decompress them during restore).
That matters most when the backup runs on a system that's already under load.
You're limited on the versions you can restore to as well -- compressed backups can only be restored on SQL Server 2008 or later.
It's a great feature, hence the clamour to get it included in Standard edition on 2k8 R2 (was Enterprise only on vanilla 2k8).
I typically see 60-70% compression when using it, though not all data or data types compress well!
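If you want to see exactly what you're getting, msdb records both sizes for every backup. Something like this works (a sketch -- substitute your own database name):

-- Compare raw vs. compressed backup sizes recorded in msdb
SELECT database_name,
       backup_finish_date,
       backup_size / 1048576.0 AS backup_size_mb,
       compressed_backup_size / 1048576.0 AS compressed_size_mb,
       backup_size * 1.0 / compressed_backup_size AS compression_ratio
FROM msdb.dbo.backupset
WHERE database_name = N'MyDatabase'  -- placeholder
ORDER BY backup_finish_date DESC;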
January 25, 2012 at 11:08 am
Have a look at this article.
http://www.sqlmag.com/article/product-review/sql-server-backup-compression-shootout
January 25, 2012 at 11:17 am
You do need to check with your system admin that they aren't using anything like NetBackup's de-duplication technology when they make tape backups of the SQL backup files. Compressed data looks close to random, so the de-duplication engine finds few repeated blocks, and the net effect of compression plus de-duplication can be larger tape backups than expected.
Have a look at this whitepaper: http://www.datadomain.com/pdf/DataDomain-Microsoft-SQL-Server-Whitepaper.pdf
Cheers
Leo
Leo
Nothing in life is ever so complicated that with a little work it can't be made more complicated.
January 25, 2012 at 4:08 pm
I'm getting great compression on my Data Domain device using 2008 native compression. The backup and restore times are faster as well.
Yes, I read the whitepaper, but testing in my environment showed it wasn't really much of an issue.
January 25, 2012 at 4:48 pm
Leo.Miller (1/25/2012)
You do need to check with your system admin that they aren't using anything like NetBackup's De-duplication technology when they make tape backups of the SQL backup files. The net effect of compression and de-duplication can result in larger tape backups than expected.
Thanks for the heads up -- but we're not using tape, nor are we using any third-party tools in the backup scenarios. Just plain old SQL backup to hard drives.
Rob Schripsema
Propack, Inc.
January 25, 2012 at 4:49 pm
umasingh (1/25/2012)
Have a look at this article. http://www.sqlmag.com/article/product-review/sql-server-backup-compression-shootout
At this point we're trying to avoid using third-party tools for the backups. We've looked at all three of those products, and while they have many benefits, those aren't enough to warrant their use at this point.
But thanks for the info!
Rob Schripsema
Propack, Inc.
January 25, 2012 at 5:18 pm
To answer your actual question, no.
We support multiple SQL Servers for multiple companies, and unless they have a particular reason not to use compression, we have been using it by default. Space and time savings have been reasonably consistent, and so far we haven't had any issues with restores or corruption.
I would watch CPU usage on servers that already run hot during the expected backup window. As the white paper showed, CPU usage can go up, and if you are doing parallel backups or already have a busy CPU this could be a constraint. Use good monitoring and alerts to keep an eye on this.
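For a quick spot-check while a backup is running, something like this shows progress and CPU time consumed so far (a sketch using the standard DMV):

-- Spot-check running backups: progress and CPU time so far
SELECT session_id, command, percent_complete, cpu_time
FROM sys.dm_exec_requests
WHERE command LIKE 'BACKUP%';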
At a previous employer we used LiteSpeed (pre-2008 days). I've found SQL's native compression to be at least as good, without the hassle of changing syntax for LiteSpeed. Restoring LiteSpeed backups on another server also required either having LiteSpeed installed there or pre-extracting the backup to a SQL-compatible format first.
I'd recommend SQL compression for all my clients now.
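If you do standardize on it, you can make compression the instance-wide default so every backup picks it up without touching existing scripts (a minimal sketch):

-- Make compression the default for all backups on this instance
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;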
Leo
Leo
Nothing in life is ever so complicated that with a little work it can't be made more complicated.