March 20, 2009 at 10:45 am
I am trying to find a way of effectively shrinking a database. I know DBCC SHRINKFILE is not ideal, as it fragments the database, with performance implications.
We have introduced an archive mechanism so our database only holds current data, but our database is now 55% empty and we would like to reclaim some of this disk space. We are using SQL 2000.
I have tried running DBCC SHRINKFILE, but after 36 hours thought better of it.
I have also tried backing up the database, removing the database, and restoring from the backup, but this restored the empty pages. Does anyone have any ideas how I can shrink the database without causing too much fragmentation?
March 20, 2009 at 10:48 am
How big is the database currently, and how much space is free?
PS. For SQL 2000 issues you may want to post in the SQL 2000 section.
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
March 20, 2009 at 10:50 am
The database is 28GB with 16GB free - will post on the SQL 2000 forum instead.
March 20, 2009 at 10:58 am
Is that just the data file, or does that include the log as well?
Is it absolutely critical to release the space? Why not just leave it there - it will most likely get used again!
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
March 20, 2009 at 11:03 am
That is just the data file. The database is used for BI purposes and updated once a day, and is therefore set to the simple recovery model.
The machine doesn't have a lot of space remaining on it. It's not critical yet, but it will become a problem if left unchecked!
March 20, 2009 at 12:21 pm
It is often better to shrink database files in small increments so that it can make continuous, incremental progress, instead of trying to shrink by a large amount in one command. This makes it easier to shrink a database that must be shrunk by a large amount, and makes it easier to interrupt without losing all progress.
Shrink DB File by Increment to Target Free Space
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=80355
You will need to re-index or defrag and update statistics after you shrink it.
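The incremental approach can be sketched as a simple loop. This is only a sketch for SQL 2000; the logical file name 'MyDB_Data' and the sizes are placeholders - check sysfiles in your database for the real values.

```sql
-- Shrink a data file toward a target size in 500 MB steps,
-- so progress is incremental and the job can be interrupted safely.
DECLARE @target int, @current int
SET @target = 15000                                -- desired file size in MB (placeholder)
SELECT @current = size / 128                       -- sysfiles.size is in 8 KB pages
FROM sysfiles WHERE name = 'MyDB_Data'             -- logical file name (placeholder)

WHILE @current > @target
BEGIN
    SET @current = CASE WHEN @current - 500 < @target
                        THEN @target ELSE @current - 500 END
    DBCC SHRINKFILE ('MyDB_Data', @current)        -- target is in MB
END
```

Each pass only has to move a small amount of data, so cancelling mid-way keeps whatever has already been released.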
March 21, 2009 at 2:29 am
Run sp_spaceused and post the results.
Also run sp_helpdb - I requested this to check maxsize, initial size, and filegrowth. If the growth is set as a percentage, change it to a fixed 50 or 100 MB.
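For example (database and logical file names here are placeholders):

```sql
-- Inspect size, maxsize, and growth settings for each file:
EXEC sp_helpdb 'MyDB'

-- If growth shows e.g. 10%, switch to a fixed increment so
-- growth events stay predictable as the file gets larger:
ALTER DATABASE MyDB
MODIFY FILE (NAME = 'MyDB_Data', FILEGROWTH = 100MB)
```

Percentage growth on a multi-GB file means ever-larger growth events, which is one reason to prefer a fixed increment.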
March 21, 2009 at 5:09 pm
Ummm.... just a couple of things to clear up here, folks. First, DBCC ShrinkFile does not now nor has it ever nor will it ever cause fragmentation. It's regrowth of a database after it's been shrunk that causes fragmentation.
Second, if you're going to use sp_spaceused, you really need to make sure you use the second parameter of the sproc to force the usage data to be updated, or you can and will get false readings, especially after a shrink.
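For example (the database name below is a placeholder):

```sql
-- Without the second parameter, sp_spaceused can report stale
-- page counts, especially right after a shrink:
EXEC sp_spaceused @updateusage = 'TRUE'

-- Or correct the counts explicitly first, then report:
DBCC UPDATEUSAGE ('MyDB')
EXEC sp_spaceused
```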
Last, but not least, you should have a good disk defragging system on your server. Norton Speed Disk and Diskeeper are both great products and can be set to defrag your hard drives in the background, and only when the server has the bandwidth available to do it. It's truly a Set'n'Forget venture and worth its weight in gold.
--Jeff Moden
Change is inevitable... Change for the better is not.
March 21, 2009 at 9:41 pm
Jeff Moden (3/21/2009)
Ummm.... just a couple of things to clear up here, folks. First, DBCC ShrinkFile does not now nor has it ever nor will it ever cause fragmentation. It's regrowth of a database after it's been shrunk that causes fragmentation...
It is true that it does not cause file fragmentation.
However, it will cause fragmentation of tables and indexes as it moves the data from the end of the file to areas closer to the beginning of the file. That is why I recommend running a re-index or defrag followed by statistics update after performing a large file shrink.
I only recommend shrinking a database file in a situation where a lot of data has been removed and it will not be growing larger.
March 22, 2009 at 3:02 am
cloud (3/20/2009)
The database is used for BI purposes and updated once a day therefore is set to simple recovery mode.The machine doesn't have a lot of space remaining on it although not critical yet if left unchecked !!
So, we have daily inserts, probably auto create/update stats, and I'm guessing you have a maintenance plan that rebuilds indexes! All of this will consume space in the data file. Fragmentation shouldn't occur on the shrink (especially if you rebuild the indexes after), but the subsequent growing action will cause file fragmentation at the disk level (i.e. the data file will not be contiguous on disk). If it's not critical, I personally would leave the space allocated; it sounds as though it will get reused soon enough.
Maybe you should consider increasing your storage space!
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
March 22, 2009 at 5:29 pm
Michael Valentine Jones (3/20/2009)
It is often better to shrink database files in small increments so that it can make continuous, incremental progress, instead of trying to shrink by a large amount in one command.
Theoretically this is great, but it can sometimes cause problems in practice:
I think this is the reason it takes an hour to take 5MB off one of our databases!
March 23, 2009 at 4:59 am
A good rule of thumb is 'never shrink a database if you expect it to grow again within 3 months'. There can be exceptions to this, but you should justify each exception. Each time a database is shrunk and grows again you get NTFS fragmentation, and this will harm your performance.
Lack of available disk space is not a justification for repeated shrink operations. It is a fire-fighting operation while you get more disk space installed. If your database is having data regularly added to it, then it is like pouring gasoline in at one end while trying to put out the fire at the other end. Eventually you will run out of disk space.
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
March 23, 2009 at 5:05 am
I absolutely agree. Databases (MDF, NDF) should only be shrunk when the space isn't going to be required any more. A typical example might be where adequate housekeeping wasn't in place for a new database, this has been fixed, and the database is now twice as large as it ever needs to be.
I don't like shrinking logs at all unless, again, some problem with the backups has caused them to grow larger than they need to be.
March 23, 2009 at 5:57 am
If your database is not configured for replication or log shipping, you can use the following commands to shrink your database log file to death:
backup log with truncate_only
dbcc shrinkfile ( )
But truncating is not a recommended option: your log backup and differential backup chain will be broken, and you will need to take a full backup afterwards. Also, 'WITH TRUNCATE_ONLY' and 'NO_LOG' have been removed from SQL 2008.
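Filled out, the sequence looks something like this. It applies to SQL 2000 only (TRUNCATE_ONLY is gone in SQL Server 2008), and the database name, logical log file name, and backup path are all placeholders:

```sql
-- Discard the log contents, then shrink the log file:
BACKUP LOG MyDB WITH TRUNCATE_ONLY
DBCC SHRINKFILE ('MyDB_Log', 100)    -- target size in MB

-- The log backup chain is now broken, so take a full backup immediately:
BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB_full.bak'
```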
March 23, 2009 at 6:11 am
SUBRAHMANYA HEDGE (3/23/2009)
If your database is not configured for replication or log shipping, you can use the following commands to shrink your database log file to death: backup log with truncate_only
dbcc shrinkfile ( )
But truncating is not a recommended option: your log backup and differential backup chain will be broken, and you will need to take a full backup afterwards. Also, 'WITH TRUNCATE_ONLY' and 'NO_LOG' have been removed from SQL 2008.
You forgot one of the most important reasons not to do that... point in time recovery.
I wouldn't do this to any database that wasn't set to the SIMPLE recovery mode. Rather, do a real backup first and then you can work with the file...
--Jeff Moden
Change is inevitable... Change for the better is not.