January 6, 2009 at 9:53 pm
Comments posted to this topic are about the item SQL Server 2008 and Data Compression
January 7, 2009 at 7:34 am
Thank you for your article. I'd like to know if anyone uses synonyms that point to a different database. I was thinking about moving all of my 'Fact' tables from the general database into a separate database and creating synonyms that point from the general database to the tables in the 'Fact' database. My rationale for this is that I could easily separate and manage the 'Master' tables from the 'Fact' tables and then I could set up the 'Fact' database with compression and leave the general database intact. Has anyone else done anything like this and if so what were your results?
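To make the idea concrete, here is a minimal sketch of the kind of cross-database synonym I have in mind; GeneralDB, FactDB, and dbo.FactSales are just placeholder names, not anything from the article.

-- Hypothetical example: a synonym in the general database that points
-- at a fact table that has been moved into a separate database.
USE GeneralDB;
GO
CREATE SYNONYM dbo.FactSales
    FOR FactDB.dbo.FactSales;
GO
-- Existing queries against dbo.FactSales in GeneralDB keep working,
-- but the data now lives (and can be managed) in FactDB.
SELECT TOP (10) * FROM dbo.FactSales;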
January 7, 2009 at 7:52 am
Compression works at the table level, so why would you need to move it out of the database?
* Noel
January 7, 2009 at 8:18 am
Good point. I was thinking it was at the database level. I have other reasons for moving it into a separate database but it appears that compression is irrelevant. Thanks.
January 7, 2009 at 8:18 am
It doesn't make sense to move fact tables out. As mentioned above, compression works at the table level, and also at the index level. You can compress a table and not the indexes, or vice versa, or compress both.
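To make that concrete, here is a minimal sketch (with placeholder object and index names) of setting compression independently on the table and on its indexes:

-- Compress the table's data with PAGE compression...
ALTER TABLE dbo.FactSales
    REBUILD WITH (DATA_COMPRESSION = PAGE);
GO
-- ...and compress (or not) each nonclustered index independently.
ALTER INDEX IX_FactSales_DateKey ON dbo.FactSales
    REBUILD WITH (DATA_COMPRESSION = ROW);
GO
-- NONE removes compression from an index without touching the table.
ALTER INDEX IX_FactSales_CustomerKey ON dbo.FactSales
    REBUILD WITH (DATA_COMPRESSION = NONE);
GO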
January 7, 2009 at 9:04 am
Hm, good .sql, but it's the biggest I've ever seen, haha; I've never looked at one like this before...
That's because I'm an amateur, haha.
January 7, 2009 at 9:24 am
Good article. It was helpful to know about the allocation units. I am in the process of designing a data mart style application database where we'll be storing billions of rows in some of our fact tables. I have been looking forward to testing out the performance benefits of data compression. Your timing for this article could not have been better. Thanks for sharing.
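For anyone else digging into the allocation unit side of this, here is a hedged sketch of how to see which allocation unit types a table uses along with its current compression setting. The table name is a placeholder; note that row and page compression only apply to in-row data, and that LOB_DATA allocation units join on partition_id rather than hobt_id.

-- Show allocation unit types, page counts, and compression per partition.
SELECT  o.name               AS table_name,
        p.index_id,
        p.data_compression_desc,
        au.type_desc         AS allocation_unit_type,
        au.total_pages,
        au.used_pages
FROM    sys.partitions AS p
JOIN    sys.allocation_units AS au
        ON au.container_id = p.hobt_id   -- IN_ROW_DATA / ROW_OVERFLOW_DATA
JOIN    sys.objects AS o
        ON o.object_id = p.object_id
WHERE   o.name = 'FactSales';            -- placeholder table name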
January 7, 2009 at 9:32 am
The performance benefits and the space savings. I've always found it difficult to get additional storage on our SAN, so compression gives me a lot more leeway from a time perspective to get the disk space I need.
One of the things I neglected to mention was that compressed tables are compressed further as part of a SQL Server 2008 compressed backup (or a third-party backup solution), so your backups could actually be an order of magnitude smaller than what they are currently. This is especially useful when you need to refresh development or test environments from those backups (provided that your dev and test are running a SQL Server edition that supports compression).
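For anyone who wants to try that, here is a hedged sketch of the backup/refresh flow; the database name, logical file names, and paths are all placeholders.

-- Backup compression stacks on top of row/page compression.
BACKUP DATABASE WarehouseDB
    TO DISK = N'X:\Backups\WarehouseDB.bak'
    WITH COMPRESSION, INIT;
GO
-- Restoring into a dev/test environment works as usual, as long as that
-- instance is an edition that supports data compression.
RESTORE DATABASE WarehouseDB_Dev
    FROM DISK = N'X:\Backups\WarehouseDB.bak'
    WITH MOVE 'WarehouseDB'     TO N'X:\Data\WarehouseDB_Dev.mdf',
         MOVE 'WarehouseDB_log' TO N'X:\Data\WarehouseDB_Dev_log.ldf',
         REPLACE;
GO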
January 7, 2009 at 10:15 am
Nice article...:)
January 8, 2009 at 10:47 am
Great article and I especially like all the links to the references. A good follow-up article might be on how compression improves the performance of applications because more data can reside in buffers in memory instead of disk. Microsoft claims data warehouse performance can be increased by up to 40% due to less disk I/O. See the page compression sections of these articles:
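One rough, hedged way to observe that buffer effect yourself is to count how many pages each database is holding in the buffer pool; this sketch assumes nothing beyond the standard sys.dm_os_buffer_descriptors DMV.

-- Rough look at buffer pool usage per database. With row or page
-- compression, pages stay compressed in the buffer pool, so more
-- data fits into the same amount of memory.
SELECT  DB_NAME(database_id)   AS database_name,
        COUNT(*)               AS cached_pages,
        COUNT(*) * 8 / 1024    AS cached_mb     -- 8 KB pages
FROM    sys.dm_os_buffer_descriptors
WHERE   database_id <> 32767                    -- exclude the resource database
GROUP BY database_id
ORDER BY cached_pages DESC;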
January 8, 2009 at 11:17 am
The only problem with that is that in a real-world scenario everyone will get different results. Depending upon the datatypes that you use, the free space in your pages, and the like, you will get differing amounts of compression and different performance gains or losses.
Everyone needs to run their own evaluation; however, it would be great to have a location where folks could put their real-world examples (such as prior space utilization, prior performance, schemas, new utilization, new performance, and a calculation of how much compression has improved (or degraded) their performance).
From a community perspective that could really help folks make a slightly more informed decision prior to going through the work involved in testing it all out.
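As a hedged starting point for that kind of evaluation, SQL Server 2008 ships sp_estimate_data_compression_savings, which samples a table and estimates its size under a given compression setting before you rebuild anything. The schema and table names below are placeholders.

-- Estimate savings for a specific table before committing to a rebuild.
-- NULL for @index_id / @partition_number means "all of them".
EXEC sys.sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'FactSales',   -- placeholder table name
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';        -- or 'ROW' / 'NONE' to compare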
January 8, 2009 at 10:23 pm
I blogged about this, and thoroughly recommend it to anyone who is stuck in production on SQL 2005 (vardecimal option for financial databases), or who has SQL 2008:
[font="Verdana"]Town of Mount Royal, QC
SQL Server DBA since '99
MCDBA, MCITP, PMP, MVP '10, Azure Data Platform Data Engineer
hugo@intellabase.com
https://drive.google.com/file/d/1qnyiGWyGvDz6Q2VtLPGEsRufy9CUqw-t/view (MCDBA 2001, data eng associate coming asap)
January 9, 2009 at 8:02 am
Does this compression methodology play nicely with TDE?
January 9, 2009 at 8:16 am
It's invisible to TDE.
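For context, here is a minimal, hedged sketch of what enabling TDE on an already-compressed database looks like; since TDE encrypts pages as they are written to disk, the row/page compression settings on tables and indexes are left alone. The key, certificate, and database names are placeholders.

USE master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password here>';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
GO
USE WarehouseDB;
GO
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;
GO
ALTER DATABASE WarehouseDB SET ENCRYPTION ON;
GO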
January 9, 2009 at 8:39 am
We have a warehouse database that is normally around 400 GB, which compresses down to 80 GB with row-level compression. Performance of queries varies widely depending on the disk I/O needed to satisfy the query. I do have one very disk I/O intensive query that normally runs in 2 minutes but now runs in 1 minute; cutting the time in half is very impressive. Both the compressed and uncompressed databases are on the same server, running under the same SQL Server instance. The server is one HP loaned us to do some testing with: their latest DL580 with 24 Intel cores and 32 GB of RAM, and it is blazing fast.
Something interesting I noticed, but have not had a chance to dig into, is that once the data is cached in memory and the same queries are run a second time, performance seems to take about 20% longer for the compressed database. That seems odd, since I am pretty sure data in cache does not get compressed. Both data and indexes are using row-level compression. I haven't had a chance to look at the execution plans yet to see if they are different.
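If it helps anyone reproduce that comparison, here is a hedged sketch of the kind of side-by-side test being described; the database, table, and column names are placeholders, and I would compare the STATISTICS IO/TIME output along with the actual execution plans.

-- Run the same query against the compressed and uncompressed copies.
-- Logical reads should drop for the compressed tables, while CPU time
-- may rise because rows are decompressed as they are read.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT d.CalendarYear, SUM(f.SalesAmount) AS total_sales   -- placeholder query
FROM   CompressedDW.dbo.FactSales AS f
JOIN   CompressedDW.dbo.DimDate   AS d ON d.DateKey = f.DateKey
GROUP BY d.CalendarYear;

SELECT d.CalendarYear, SUM(f.SalesAmount) AS total_sales
FROM   UncompressedDW.dbo.FactSales AS f
JOIN   UncompressedDW.dbo.DimDate   AS d ON d.DateKey = f.DateKey
GROUP BY d.CalendarYear;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;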