February 4, 2007 at 5:35 am
I have a database with a single 58 GB data file. The server it was housed on finally ran out of disk space due to this app; for several years I moved data off to keep the app running. The end users have finally agreed to deleting data, and after the deletes the data occupies 35 GB of the 58. There are numerous indexes, easily occupying a third of the data file. I started to shrink it by running DBCC SHRINKDATABASE ('dbdata', 15). As there is not enough room on that server, I moved the database to a new server (3.2 GHz, 4 GB RAM, 10,000 RPM RAID 5 drives, 300 GB). The shrink has now been running for 22 hours. I've been watching activity and can see PAGEIOLATCH waits with the numbers changing, so it is doing something. There is no other activity on the server; I even turned off SQL Server Agent. Is it normal for this to take so long?
The server is primarily used as a spare that I restore databases to.
February 4, 2007 at 6:25 am
With 10,000 RPM RAID 5 drives, yes, this will run for a very long time. Do not be surprised if the run time exceeds 40 hours.
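If this happens to be SQL Server 2005, you can at least watch the shrink's progress from another connection, since percent_complete in sys.dm_exec_requests is populated for shrink operations. A rough sketch (the command-name filter is approximate):

-- Rough sketch (SQL Server 2005+ only): watch shrink progress from
-- another connection. percent_complete is populated for DBCC
-- SHRINKFILE/SHRINKDATABASE; the LIKE filter on command is approximate.
SELECT session_id, command, percent_complete, wait_type
FROM sys.dm_exec_requests
WHERE command LIKE 'Dbcc%';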
Recommend:
15K RPM drives
36 GB drives - no larger
For the transaction log, use dedicated RAID-1 disks and do not stripe. If the transaction log needs to be larger than 36 GB, have 2 separate sets of disks and use multiple files for the transaction log (see the sketch after this list).
For the data files, expect to use a maximum of 80% of the space. Striping the disks may or may not help.
You also may need to put tempdb on its own disk.
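If you do split the log across two sets of disks, adding a second log file looks something like this - a sketch only, where the database name, logical file name, path, and sizes are all placeholders:

-- Hypothetical example: add a second transaction log file on a
-- separate RAID-1 set. Database name, file name, path, and sizes
-- are placeholders.
ALTER DATABASE MyAppDb
ADD LOG FILE
(
    NAME = MyAppDb_log2,
    FILENAME = 'F:\SQLLogs\MyAppDb_log2.ldf',
    SIZE = 10GB,
    FILEGROWTH = 1GB
);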
SQL = Scarcely Qualifies as a Language
February 4, 2007 at 4:31 pm
Sometimes it may take many hours...
If you have LOB data in your db, it may not shrink at all... see the following article:
DBCC SHRINKFILE and SHRINKDATABASE commands may not work because of sparsely populated text, ntext, or image columns
http://support.microsoft.com/kb/324432
MohammedU
Microsoft SQL Server MVP
February 6, 2007 at 11:59 am
Use DBCC SHRINKFILE instead of DBCC SHRINKDATABASE.
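A minimal example, assuming the logical data file name is 'dbdata' (from the original post) and a target of 15,000 MB - both values are placeholders to adjust:

-- Minimal sketch: shrink one data file to a target size in MB.
-- 'dbdata' is the logical file name from the original post; the
-- database name and the 15000 MB target are placeholders.
USE MyAppDb;
DBCC SHRINKFILE ('dbdata', 15000);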
February 6, 2007 at 12:34 pm
thanks for all the info.
I ran SHRINKDATABASE and it still wasn't finished after 42 hours; I had to stop it to get the app up and running again. I tried SHRINKFILE previously and that was no faster either after 26 hours.
For the hardware suggestion, that's always something to aim for, but I don't think we'll ever get it.
If I eventually try again, do I need to have the equivalent disk space free (i.e. 58 GB), or could I run it with 8 to 10 GB free?
February 6, 2007 at 2:47 pm
Indexes were actually dropped and rebuilt a week ago. There are daily bulk inserts, so it's possible they need defragging weekly.
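If weekly defragging does turn out to be needed, something like DBCC INDEXDEFRAG can do it online - a minimal sketch, where the database, table, and index names are placeholders:

-- Hypothetical example: defragment one index online (SQL 2000/2005
-- syntax); database, table, and index names are placeholders.
DBCC INDEXDEFRAG (MyAppDb, 'dbo.DailyLoadTable', IX_DailyLoadTable);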
February 7, 2007 at 3:06 am
Do you really need to shrink at all? It sounds like you have plenty of disk space... are you doing this to free up space for something else? If not, bear in mind that shrinking consumes system resources (as you are seeing) and causes index fragmentation as pages are moved, plus file fragmentation on disk if the file later grows again.
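To see how much reclaimable space each file actually has before deciding, a quick sketch using sysfiles and FILEPROPERTY:

-- Rough sketch: size, used, and free space per file in MB for the
-- current database (sysfiles is per-database).
SELECT name,
       size/128. AS SizeMB,
       FILEPROPERTY(name, 'SpaceUsed')/128. AS UsedMB,
       (size - FILEPROPERTY(name, 'SpaceUsed'))/128. AS FreeMB
FROM sysfiles;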
John
February 9, 2007 at 11:03 am
You might try the brute force method: create a new database on the 'spare' server and then create your tables - leave off the indexes and triggers if you can. bcp the data from your production database to the 'spare' server (sketched below). Once completed, install your other objects - triggers, procs, etc. Back up the new database, verify all objects between the two databases, drop the old database, and then restore the new database onto your production system and rebuild the indexes.
Ugly solution, but if you're in a pinch...
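A rough sketch of the bcp step, per table - the server, database, table, and path names are all placeholders (-n keeps native format, -T uses a trusted connection):

rem Hypothetical names throughout; repeat (or script) per table.
bcp ProdDb.dbo.BigTable out D:\bcp\BigTable.dat -n -S PRODSERVER -T
bcp NewDb.dbo.BigTable in D:\bcp\BigTable.dat -n -S SPARESERVER -T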
February 9, 2007 at 1:05 pm
Yeah, I had actually thought about that. I still have the scripts and format files from a previous bcp we had done.
February 9, 2007 at 7:18 pm
You are taking the wrong approach to shrinking the database.
The way to do it is to shrink each database file in small increments in a loop, say 50 MB at a time. It will take a while, but it will make slow, steady progress.
This script should do what you want:
-- Shrink_DB_File.sql
declare @sql varchar(8000)
declare @name sysname
declare @sizeMB int
declare @UsedMB int
declare @FreeMB int
declare @ShrinkMB int

-- Desired free space in MB after shrink
set @FreeMB = 1000

-- Increment to shrink file by in MB
set @ShrinkMB = 50

-- Name of database file to shrink
set @name = 'MyDatabaseFileName'

-- Get current file size in MB
select @sizeMB = size/128. from sysfiles where name = @name

-- Get current space used in MB
select @UsedMB = fileproperty( @name, 'SpaceUsed' )/128.

select [StartFileSize] = @sizeMB, [StartUsedSpace] = @UsedMB, [File] = @name

-- Loop until file is at desired size
while @sizeMB > @UsedMB + @FreeMB + @ShrinkMB
begin

    set @sql = 'dbcc shrinkfile ( ' + @name + ', ' + convert(varchar(20), @sizeMB - @ShrinkMB) + ' )'

    print 'Start ' + @sql
    exec ( @sql )
    print 'Done ' + @sql

    -- Get current file size in MB
    select @sizeMB = size/128. from sysfiles where name = @name

    -- Get current space used in MB
    select @UsedMB = fileproperty( @name, 'SpaceUsed' )/128.

end

select [EndFileSize] = @sizeMB, [EndUsedSpace] = @UsedMB, [File] = @name
February 10, 2007 at 12:38 pm
The shrink script seems to work pretty well. Thanks!
February 12, 2007 at 9:37 am
Cool script. Thanks !
Regards
Rudy Komacsar
Senior Database Administrator
"Ave Caesar! - Morituri te salutamus."
September 2, 2010 at 9:37 am
Hi Michael,
I found your script very useful. Now I am stuck in a situation where I have to shrink a data file to reclaim disk space. The data file I intend to shrink is around 500 GB and the unused free space is around 90 GB. Do you think your script would be helpful in this scenario, or should I just go ahead and shrink the data file the usual way? In my case, what would be the ideal increment? Thanks again for the wonderful script...