September 19, 2014 at 9:21 am
Hi everyone --
Would really appreciate any feedback on this problem.
First, I want to say that none of what I describe here is my design...
I have a table that resides on its own filegroup that contains an image column. This column contains compressed (7zip) versions of data from several tables that pertain to client transactions. The table is purged daily based on CreationDate and an Active flag (DDL is at the bottom of this post).
Since July, I've seen much higher growth for this file, but so far am unable to explain why.
I track growth daily, and here are some of the stats:
rundate      row_count   reserved (KB)             data (KB)     index_size (KB)   unused (KB)
2014-04-26   76,029      133,078,048 (~133.1 GB)   133,076,112   1,760             176
2014-09-18   76,099      154,606,576 (~154.6 GB)   154,599,544   3,576             3,456
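For reference, stats like these can be gathered daily with a query along the following lines (a sketch; page counts in sys.dm_db_partition_stats are 8 KB pages, so multiplying by 8 gives KB):

```sql
-- Sketch: daily space tracking for one table via sys.dm_db_partition_stats
SELECT  OBJECT_NAME(ps.object_id)                              AS table_name,
        SUM(CASE WHEN ps.index_id IN (0, 1)
                 THEN ps.row_count ELSE 0 END)                 AS row_count,
        SUM(ps.reserved_page_count) * 8                        AS reserved_kb,
        SUM(ps.lob_reserved_page_count) * 8                    AS lob_reserved_kb,
        CAST(GETDATE() AS DATE)                                AS rundate
FROM    sys.dm_db_partition_stats AS ps
WHERE   ps.object_id = OBJECT_ID('dbo.CompressedDataStore')
GROUP BY ps.object_id;
```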
Rowcount is about the same, so I'm at a loss to explain what the growth is attributable to.
Thinking the growth might be due to space not being reclaimed when rows were deleted, I restored the db on a test server and did the following:
1. created a new temporary filegroup
2. made the new file/filegroup the default
3. copied the table to this new file group via SELECT INTO
4. verified the size of the new table (it was not any different)
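In T-SQL, those steps looked roughly like this (a sketch; the database name, filegroup name, and file path are placeholders):

```sql
-- 1/2: create a new filegroup and file, and make it the default
--      so the SELECT INTO lands there
ALTER DATABASE MyDb ADD FILEGROUP TempFG;
ALTER DATABASE MyDb ADD FILE
    (NAME = 'TempFG_Data', FILENAME = 'D:\Data\TempFG_Data.ndf')
    TO FILEGROUP TempFG;
ALTER DATABASE MyDb MODIFY FILEGROUP TempFG DEFAULT;

-- 3: copy the table; LOB data is rewritten compactly into the new filegroup
SELECT * INTO dbo.CompressedDataStore_Copy
FROM   dbo.CompressedDataStore;

-- 4: compare sizes against the original
EXEC sp_spaceused 'dbo.CompressedDataStore_Copy';
```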
Thanks in advance for any feedback on this issue.
Best wishes to all,
sqlnyc
CREATE TABLE [dbo].[CompressedDataStore]
(
[PId] [INT] IDENTITY(1, 1) NOT NULL
,[Id] [INT] NOT NULL
,[CompressedData] [IMAGE] NULL
,[CreationDate] [DATETIME] NOT NULL
,[Active] [BIT] NOT NULL
,[PGUID] [UNIQUEIDENTIFIER] NOT NULL
,CONSTRAINT [PK_CompressedDataStore] PRIMARY KEY CLUSTERED ( [PId] ASC )
WITH ( PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 100 )
ON [CompressedDataStore]
,CONSTRAINT [UQ_CompressedDataStore_PGUID] UNIQUE NONCLUSTERED
( [PGUID] ASC )
WITH ( PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 100 )
ON [CompressedDataStore]
)
ON [CompressedDataStore] TEXTIMAGE_ON [CompressedDataStoreSeparateFile]
ALTER TABLE [dbo].[CompressedDataStore] ADD CONSTRAINT [DF_CompressedDataStore_Active] DEFAULT ((1)) FOR [Active]
ALTER TABLE [dbo].[CompressedDataStore] ADD CONSTRAINT [DF_CompressedDataStore_PGUID] DEFAULT (NEWSEQUENTIALID()) FOR [PGUID]
September 19, 2014 at 9:30 am
The ghost cleanup should get 100 records at a time after delete, but I'd think over days you'd be fine.
This talks a bit about that process and tracking: http://social.msdn.microsoft.com/forums/sqlserver/en-US/2dc3a15b-684f-4d8e-acaa-4d815ef13543/sql-2008-image-data-type-freeing-db-space
I might refactor this table to use varbinary(max) instead of image (varbinary(max) is the replacement for image, just as varchar(max) replaces text). I don't think that image -> varbinary(max) should affect your app, and while I'm not completely sure of the underlying handling, I know that work has been done on the MAX types, and not on image/text, to make them more efficient (in v. out of row). Not sure if that's back ported or if image/text get mapped to the MAX types internally.
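If that refactor were attempted, the in-place change would be something like the following (an untested sketch; since the column holds zip output, the binary MAX type is the natural target for image):

```sql
-- Sketch: migrate the deprecated LOB type in place (image -> varbinary(max)).
-- ALTER COLUMN alone does not rewrite existing LOB pages; a follow-up update
-- pass is generally needed to move the data into the new storage format.
ALTER TABLE dbo.CompressedDataStore
    ALTER COLUMN CompressedData VARBINARY(MAX) NULL;

-- Force existing values to be rewritten under the new type:
UPDATE dbo.CompressedDataStore
SET    CompressedData = CompressedData;
```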
I wonder if you've hit some bug. Are you just issuing something like:
delete CompressedDataStore
where CreationDate < DATEADD(DAY, -1, GETDATE())
I'd expect this to release space after hours, if not minutes.
Have you tried an index rebuild on this table?
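For clarity, the rebuild in question would be along these lines (a sketch; note that index rebuilds generally do not compact the LOB_DATA allocation unit):

```sql
-- Rebuild every index on the table (clustered PK plus the unique
-- nonclustered index); in-row pages are rewritten, LOB pages are not
ALTER INDEX ALL ON dbo.CompressedDataStore REBUILD;
```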
September 19, 2014 at 11:08 am
Hi Steve,
Thanks very much for your reply.
I will research the link you posted.
Long ago, when this server was running SQL 2008 R2 RTM, there was indeed an issue with ghost cleanup. But now we are running SQL 2008 R2 SP2, which along the way received a number of bug fixes related to the ghost cleanup process. I will verify again to make sure this is not the issue.
The delete is something like:
delete CompressedDataStore
where (CreationDate < @InActiveRecordsDate )
OR (Active = 0 AND CreationDate < @ActiveRecordsDate )
When I copied the table to a new file group, I did an index rebuild, but of course that doesn't touch any of the LOB data.
sys.allocation_units returns:
type_desc     LOB_DATA
total_pages   19,321,790
used_pages    19,321,607
data_pages    0
(19,321,790 pages x 8 KB is roughly 154.6 GB, which matches the reserved size I posted earlier.)
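That output came from a query along these lines (a sketch; for LOB_DATA allocation units, container_id joins to sys.partitions.partition_id):

```sql
-- Page counts for the table's LOB_DATA allocation unit
SELECT  au.type_desc,
        au.total_pages,
        au.used_pages,
        au.data_pages
FROM    sys.allocation_units AS au
JOIN    sys.partitions AS p
        ON p.partition_id = au.container_id
WHERE   p.object_id = OBJECT_ID('dbo.CompressedDataStore')
  AND   au.type_desc = 'LOB_DATA';
```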
The image column stores compressed versions of client records, using some zip library (not my design). If a client needs to roll back to a previous version of their data, this archive is decompressed and voila. I'm not sure that varchar(max) could/would handle this, but also doubt management will change without extremely dire circumstances, as this process is one of the cornerstones of the system.
Thanks,
sqlnyc
September 19, 2014 at 12:16 pm
Sorry I don't have more. The LOB stuff is a bit of magic at times, since it's so poorly documented.
I'm suspecting that something is holding onto allocations for the table, for some reason. No indexing of the LOB column? Does the table have a PK?
I might email Paul at SQLSkills and ask him, maybe see if he will comment here or write a blog on what's happening. Can you repro this easily?
September 19, 2014 at 12:17 pm
Look at this as well, see if there's something you can decode from your system using a few of Kimberly's scripts: http://www.sqlskills.com/blogs/kimberly/understanding-lob-data-20082008r2-2012/
September 19, 2014 at 12:18 pm
You did add 70 rows, so while the growth is a little higher than average, it's not ridiculously high. Yes, you do want to convert to varbinary(max) instead of image when you can, but that's not likely critical either.
You could run sp_clean_db_file_free_space to ensure that all ghost space on that file has been cleaned up.
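A minimal call would look like this (the database name and file id are placeholders; the file id for the LOB filegroup's data file can be looked up in sys.database_files):

```sql
-- Clean ghost records / residual free space on a specific data file
EXEC sp_clean_db_file_free_space
     @dbname = N'MyDb',
     @fileid = 3;
```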
SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
September 19, 2014 at 1:01 pm
Hi Scott and Steve - thanks very much for your replies.
I've emailed Paul Randal about this in the past.
His reply was:
"The issue is that LOB extents are not deallocated or compacted in any way by the DELETE operations, so your backups have been backing up extents with unused (but full of non-compressible bytes) pages in them. With the operations you did, you created new, compact LOB storage with no free, unused pages in them. Hence the drop in size."
This was specifically related to a question I asked him about why a zip of a BAK might be smaller after doing the copy-to-temp-table steps I outlined earlier. I will research this further, but as you mentioned Steve, there's not tons of info about this on the web.
Will report back to this thread with my findings.
Gents - thanks again for your efforts.
sqlnyc
September 19, 2014 at 1:13 pm
Just found this, which may be viable, but is sort of a brute force way to do it:
http://itknowledgeexchange.techtarget.com/sql-server/deleting-lob-data-and-shrinking-the-database/
sqlnyc