June 16, 2009 at 6:25 am
As part of my testing in preparation for migrating a large database (1.8TB) from SQL 2000 to SQL 2005 on new hardware, I restored a copy of the database to the new server, migrated logins, and started some testing. The original database was stored on just two data files... leaving me with HUGE files. My goal was to spread the data across 20 or so data files and then shrink down the original files.
I rebuilt indexes on the largest tables across new filegroups, each with multiple files. The smaller tables I left in the original filegroups, and rebuilt the indexes there for good measure. This part went exactly as expected (thanks to Alex Karmanov and his very useful "Tables moved to proper filegroups" script.)
The shrinking operation is a bit perplexing though.
The first data file shrank from 500GB to 50GB in about 30 minutes- outstanding.
The second data file started at 1.1TB and had about 70GB of space used... I ran dbcc shrinkfile (filename, 100000) for 12 hours, after which I stopped it and ran dbcc shrinkfile (filename, truncateonly).
At that point the file had shrunk to 660GB:
total size MB    SpaceUsedMB     AvailableSpaceMB
658253.187500    69512.437500    588740.750000
So, for the next 24 hours, I ran dbcc shrinkfile (filename, 100000), after which I stopped it and ran dbcc shrinkfile (filename, truncateonly). The file shrank by very little- maybe 10GB.
Thinking I was clever, I figured "well, the file is mostly free space, so maybe it would be faster to empty the file"... so I added an additional file to the original filegroup (sized at 100000MB) and ran dbcc shrinkfile (filename, emptyfile). Still, after another 24 hours, the new file had grown by only about 18GB, and the original file did not shrink any further with a shrink truncateonly.
My thought is that neither of these shrink operations is reaching the pages closest to the end of the data file, and that there are likely huge chunks of free space in the middle of the file... eventually that last little piece will get moved to the front, and once that happens a shrinkfile (filename, truncateonly) will return a huge amount of space to the OS? Any other explanations for such a drastic deceleration of the shrink process?
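For anyone following along, here's roughly what I ran, as a sketch (the database, file, filegroup, and path names are placeholders for the real ones):

```sql
-- Shrink toward a 100000MB target, moving used pages toward the front of the file
DBCC SHRINKFILE ('BigDataFile', 100000);
-- Release the trailing free space to the OS without moving any pages
DBCC SHRINKFILE ('BigDataFile', TRUNCATEONLY);

-- The emptyfile attempt: add a new file to the same filegroup...
ALTER DATABASE MyDB
ADD FILE (NAME = 'SpreadFile01', FILENAME = 'E:\Data\SpreadFile01.ndf', SIZE = 100000MB)
TO FILEGROUP [DataFG];
-- ...then drain the original file's pages into it
DBCC SHRINKFILE ('BigDataFile', EMPTYFILE);
```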
June 16, 2009 at 7:05 am
If you run the following, do you get a reason in the log_reuse_wait_desc column?
select name, log_reuse_wait, log_reuse_wait_desc, recovery_model_desc
from sys.databases where name = 'YourDB'
Randy
June 16, 2009 at 7:07 am
I get NOTHING (literally, the result is the word NOTHING). The db is in simple mode btw.
June 16, 2009 at 8:11 am
Is anything connecting to the file - a service, something you cannot see from SSMS?
-----
I don't know if it would help to visualize how much space is being consumed by the file, but here's a good script to return that info from dbcc showfilestats:
create table #Data(
FileID int NOT NULL,
[FileGroupId] int NOT NULL,
TotalExtents int NOT NULL,
UsedExtents int NOT NULL,
[FileName] sysname NOT NULL,
[FilePath] varchar(max) NOT NULL,
[FileGroup] varchar(MAX) NULL
)
create table #Results(
db sysname NULL ,
FileType varchar(4) NOT NULL,
[FileGroup] sysname not null,
[FileName] sysname NOT NULL,
TotalMB numeric(18,2) NOT NULL,
UsedMB numeric(18,2) NOT NULL,
PctUsed numeric(18,2) NULL,
FilePATH nvarchar(MAX) NULL,
FileID int null
)
create table #Log(
db sysname NOT NULL,
LogSize numeric(18,5) NOT NULL,
LogUsed numeric(18,5) NOT NULL,
Status int NOT NULL,
[FilePath] varchar(max) NULL
)
INSERT #Data (Fileid , [FileGroupId], TotalExtents ,
UsedExtents , [FileName] , [FilePath])
EXEC ('DBCC showfilestats')
update #Data
set #data.Filegroup = sysfilegroups.groupname
from #data, sysfilegroups
where #data.FilegroupId = sysfilegroups.groupid
INSERT INTO #Results (db ,[FileGroup], FileType , [FileName], TotalMB ,
UsedMB , PctUsed , FilePATH, FileID)
SELECT DB_NAME() db,
[FileGroup],
'Data' FileType,
[FileName],
TotalExtents * 64./1024. TotalMB,
UsedExtents *64./1024 UsedMB,
UsedExtents*100. /TotalExtents UsedPct,
[FilePath],
FileID
FROM #Data
order BY 1,2
insert #Log (db,LogSize,LogUsed,Status )
exec('dbcc sqlperf(logspace)')
insert #Results(db, [FileGroup], FileType, [FileName], TotalMB,UsedMB, PctUsed, FilePath, FileID)
select DB_NAME() db,
'Log' [FileGroup],
'Log' FileType,
s.[name] [FileName],
s.Size/128. as LogSize ,
FILEPROPERTY(s.name,'spaceused')/8.00 /16.00 As LogUsedSpace,
(FILEPROPERTY(s.name,'spaceused')/8.00 /16.00) * 100. / (s.Size/128.) UsedPct,
s.FileName FilePath,
s.FileId FileID
from #Log l , master.dbo.sysaltfiles f , dbo.sysfiles s
where f.dbid = DB_ID()
and (s.status &amp; 0x40) <> 0
and s.fileid=f.fileid
and l.db = DB_NAME()
SELECT r.*,
CASE WHEN s.maxsize = -1 THEN null
else CONVERT(decimal(18,2), s.maxsize /128.)
END MaxSizeMB,
CONVERT(decimal(18,2), s.growth /128.) GrowthMB
FROM #Results r
INNER JOIN dbo.sysfiles s
ON r.FileID = s.FileID
ORDER BY 1,2,3,4,5
DROP TABLE #Data
DROP TABLE #Results
DROP TABLE #Log
June 16, 2009 at 8:22 am
There are issues with the VLFs sometimes. You can use this script to try to shrink things. Note that it will mess with your log and backups, so I'd plan on running a full backup when it finishes. It's worked well for me in the past.
http://www.sqlservercentral.com/scripts/Maintenance+and+Management/30026/
June 16, 2009 at 8:53 am
Thanks Steve! Can you provide any specifics when you say "mess up my logs and backups" ?
June 16, 2009 at 9:07 am
Sorry, poor choice of words. No danger, no issues with integrity.
It works by generating lots of small transactions to fill the VLFs inside the log, then trying to shrink. Since the active VLF might be near the end of the file (as you suspect), you need to fill it and get it to roll around to the beginning of the file before you can physically shrink the file.
The "mess" is that your log backups might grow a bit while this runs. Also, since you have lots of bogus transactions, you might not want these for DR; your backup scheme, while it will work, might not be what you want, so I typically run a full backup when it finishes to get back to a set point in time.
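A minimal sketch of that idea for log files (the linked script is more complete; the table and log file names here are placeholders):

```sql
-- Generate many small logged transactions so the active VLF fills and
-- wraps around to the start of the log file; then retry the shrink.
CREATE TABLE dbo.FillerTx (pad char(8000) NOT NULL);
DECLARE @i int;
SET @i = 0;
WHILE @i < 10000
BEGIN
    INSERT dbo.FillerTx VALUES (REPLICATE('x', 8000));
    DELETE dbo.FillerTx;
    SET @i = @i + 1;
END
DROP TABLE dbo.FillerTx;
DBCC SHRINKFILE ('MyDB_log');  -- retry once the active VLF has rolled to the front
```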
June 16, 2009 at 9:21 am
hmm- I just want to make sure I understand you correctly (and vice versa). The file I'm trying to shrink is a datafile- not a log file... but the concept is the same right? I shouldn't expect problems running this against a data file?
June 16, 2009 at 9:32 am
Ah, data file, sorry, no this won't help.
My apologies for not reading closely enough. I'll blame it on a lack of sleep, which is, again, my fault.
If the data file won't shrink, I'd look at rebuilding some clustered indexes. Perhaps they're too fragmented to allow shrinkage? I'll have to ask around more. Actually, I'll drop a note to the Twitter world.
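Something along these lines, as a sketch (SQL 2005 syntax; the table name is a placeholder):

```sql
-- Check fragmentation first with the SQL 2005 DMV
SELECT OBJECT_NAME(object_id) AS TableName, index_id, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED')
WHERE avg_fragmentation_in_percent > 30;

-- Then rebuild the worst offenders
ALTER INDEX ALL ON dbo.SomeLargeTable REBUILD WITH (SORT_IN_TEMPDB = ON);
```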
June 16, 2009 at 9:36 am
Oops- I was talking about the log file, too.
Randy
June 16, 2009 at 9:49 am
ignore this one... posted twice... not sleep related- just user error. 😉
June 16, 2009 at 9:51 am
No prob- I COMPLETELY understand how lack of sleep affects one's posts!
As for rebuilding the clustered indexes.... that's the strange thing. Before any of the shrink operations, I rebuilt all of the clustered indexes (and the non-clustered for that matter)... everything should be pretty well organized. All I can think of is that maybe there is a table or two out there without any indexes (I didnt design the thing!)... maybe scattered across the file.
Anyways- it seems to be going a lot faster today- something like 10GB/hr.
Really, this whole conversation is academic... I was just trying to understand why shrinkfile would start off so fast, then get so much slower. My best guess is that it must be how small pieces of data are arranged in the data file... maybe there was a lot of free space toward the end of the file, which was freed in the first twelve hours, and most of the "spaceused" must be in the middle of the file... something like this (where F is free space and S is SpaceUsed):
SSSSFFFFFFFFFFFFFFFFFFFFFFFFFFSFSFSFSFSFSFSFSFSFSFSFSSSSSFFFFFFFFFFFFFFFFFFFFFFFSSSSS
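If it's heaps (tables without a clustered index) causing the scatter, something like this should find them, since rebuilding their nonclustered indexes doesn't move the heap pages themselves (a sketch against SQL 2005's sys.indexes):

```sql
-- index_id = 0 in sys.indexes means the table is stored as a heap
SELECT OBJECT_NAME(object_id) AS HeapTable
FROM sys.indexes
WHERE index_id = 0
  AND OBJECTPROPERTY(object_id, 'IsUserTable') = 1;
```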