December 4, 2014 at 12:41 am
I found it pretty interesting. I checked the size of a database before implementing data compression across all the user tables, and I checked the size of the database again after compression.
I did not find any difference. If I expand a table and check Properties -> Storage, I can see that PAGE compression is applied to all the tables, but there is no reduction in the size of the database. It still remains the same.
Am I missing something?
Thanks.
December 4, 2014 at 1:32 am
Database files never reduce their size automatically.
And I would strongly recommend against going and putting page compression on all tables. Have you tested the performance? Is the CPU overhead of the compression acceptable?
Generally you want to test carefully first, put compression only where it makes sense (good reduction in table size) and where the CPU overhead of the decompression isn't a problem.
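One way to do that testing is to estimate the savings per object before changing anything. A minimal sketch using the built-in procedure sp_estimate_data_compression_savings (the schema and table names here are placeholders):

```sql
-- Estimate what PAGE compression would save for one table,
-- without actually compressing anything.
-- 'dbo' and 'YourTable' are placeholders; NULL means all indexes/partitions.
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'YourTable',
    @index_id         = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';
```

Objects that show little size reduction are usually not worth the CPU overhead; run the same estimate with 'ROW' to compare.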
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 4, 2014 at 5:08 am
SQL-DBA-01 (12/4/2014)
I found it pretty interesting. I checked the size of a database before implementing data compression across all the user tables, and I checked the size of the database again after compression. I did not find any difference. If I expand a table and check Properties -> Storage, I can see that PAGE compression is applied to all the tables, but there is no reduction in the size of the database. It still remains the same.
Am I missing something?
As Gail said think very carefully before just blindly implementing compression across all objects.
This will show you the space usage for a given object name in your database:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
GO
SELECT s.name AS SchemaName
    , OBJECT_NAME(o.object_id) AS TableName
    , ISNULL(i.name, 'HEAP') AS IndexName
    , CASE i.[type]
        WHEN 0 THEN 'HEAP'
        WHEN 1 THEN 'Clustered'
        WHEN 2 THEN 'NonClustered'
        WHEN 3 THEN 'XML'
        WHEN 4 THEN 'Spatial'
      END AS IndexType
    , CASE
        WHEN i.data_space_id > 65600 THEN ps.name
        ELSE f.name
      END AS FG_or_PartitionName
    , p.[rows] AS [RowCnt]
    , p.data_compression_desc AS CompressionType
    , au.type_desc AS AllocType
    , au.total_pages / 128 AS TotalMBs
    , au.used_pages / 128 AS UsedMBs
    , au.data_pages / 128 AS DataMBs
FROM sys.indexes i
LEFT OUTER JOIN sys.partition_schemes ps ON i.data_space_id = ps.data_space_id
LEFT OUTER JOIN sys.filegroups f ON i.data_space_id = f.data_space_id
INNER JOIN sys.objects o ON i.object_id = o.object_id
INNER JOIN sys.partitions p ON i.object_id = p.object_id AND i.index_id = p.index_id
INNER JOIN sys.schemas s ON o.schema_id = s.schema_id
INNER JOIN sys.allocation_units au ON
    CASE
        WHEN au.[type] IN (1, 3) THEN p.hobt_id
        WHEN au.[type] = 2 THEN p.partition_id
    END = au.container_id
WHERE o.is_ms_shipped <> 1 AND OBJECT_NAME(o.object_id) = 'yourtablename'
ORDER BY TotalMBs DESC
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
December 4, 2014 at 11:37 pm
There is a nice whitepaper that helps you decide which objects to compress and which compression type to use.
http://technet.microsoft.com/en-us/library/dd894051(v=sql.100).aspx
May 22, 2015 at 1:17 pm
Hi, I guess everyone's suggesting to be careful before using compression, but no one's answering your question.
Yes, the database size will show the same as it was before you compressed all the tables in it, because you have not reclaimed the space released by data compression.
Run the following; e.g. if your database name is 'Staging', it will be as follows:
DBCC SHRINKDATABASE (Staging)
GO
Right click the database and check under properties to see the updated DB size.
Please note: shrinking the database will severely fragment its contents. It is a single-threaded operation and might take a long time to complete, depending on how big your database is.
It took me 1 min 20 secs for a 25 GB database (all the tables in the database were already compressed).
Hope that helps!!
May 22, 2015 at 1:38 pm
parag.sv (5/22/2015)
Hi, I guess everyone's suggesting to be careful before using compression, but no one's answering your question. Yes, the database size will show the same as it was before you compressed all the tables in it, because you have not reclaimed the space released by data compression.
Run the following; e.g. if your database name is 'Staging', it will be as follows:
DBCC SHRINKDATABASE (Staging)
GO
Right click the database and check under properties to see the updated DB size.
Please note: shrinking the database will severely fragment its contents. It is a single-threaded operation and might take a long time to complete, depending on how big your database is.
It took me 1 min 20 secs for a 25 GB database (all the tables in the database were already compressed).
Hope that helps!!
And if you do that, follow it with a rebuild of your indexes, which will also cause your database to grow again.
May 22, 2015 at 2:11 pm
parag.sv (5/22/2015)
Hi, I guess everyone's suggesting to be careful before using compression, but no one's answering your question. Yes, the database size will show the same as it was before you compressed all the tables in it, because you have not reclaimed the space released by data compression.
Run the following; e.g. if your database name is 'Staging', it will be as follows:
DBCC SHRINKDATABASE (Staging)
GO
Right click the database and check under properties to see the updated DB size.
Please note: shrinking the database will severely fragment its contents. It is a single-threaded operation and might take a long time to complete, depending on how big your database is.
It took me 1 min 20 secs for a 25 GB database (all the tables in the database were already compressed).
Hope that helps!!
I believe that Gail's comment that it doesn't change the size of the database did answer the question.
I'd also recommend that if you do tell someone to shrink the database (thanks for including the warning there), you be very explicit about how: there is no need to shrink the log file in the process, and they will need to rebuild all indexes on tables large enough for an index to matter.
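That advice can be sketched as shrinking only the data file, never the log, and then rebuilding the affected indexes. The database name 'Staging', the logical file name, the target size, and the table name below are all placeholders:

```sql
USE Staging;
GO
-- Find the logical file names first; shrink only the ROWS (data) file.
SELECT name, type_desc FROM sys.database_files;
GO
-- Shrink the data file toward a target size in MB, leaving the log alone.
DBCC SHRINKFILE (N'Staging_Data', 1024);  -- logical name is a placeholder
GO
-- The shrink fragments indexes, so rebuild them on each large table.
ALTER INDEX ALL ON dbo.YourLargeTable REBUILD;
GO
```

Keep in mind the rebuilds need free space to work in, so the file will grow again; shrinking to the absolute minimum is usually counterproductive.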
--Jeff Moden
Change is inevitable... Change for the better is not.