January 5, 2010 at 4:40 pm
I've inherited some very large db's (large for my small company's limited resources, anyway). The one I'm dealing with now is around 100GB, and much of that space is wasted. We only need to keep two years' worth of data, but much of what's in there is MUCH older than that. The original designers never thought about moving stale data offline. I've cleaned up the transaction log and switched to "simple recovery". I've also set up jobs to purge this stale data on a regular, frequent basis. Will the space that stale data occupies be returned to the OS automatically once it's purged, or will I have to manually shrink the files?
SQL2K5/WS2003/Simple Recovery.
January 5, 2010 at 4:48 pm
You have to manually shrink the files. This is usually not a good idea, as the files tend to grow back to their previous size during normal processing, but it sounds like you may have a good reason to consider shrinking them since you've purged a bunch of data off. Even though you are purging data regularly, I would still only shrink the data file one time, after your first purge. The file will grow after that to accommodate new data coming into the DB, but you don't want to shrink each time you purge. The space that your scheduled purge frees up should be left in the file for new data, so your data file does not become fragmented on the disk and so you don't have to incur the overhead of growing the file during normal business hours.
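Something like this would do it, run once after the big purge (a rough sketch - the logical file name and target size below are just placeholders, so check sys.database_files for your actual names and pick a target that leaves room for new data):

-- List the logical file names and current sizes (size is stored in 8 KB pages)
SELECT name, size/128 AS size_mb
FROM sys.database_files;

-- One-time shrink of the data file to a target size, in MB
DBCC SHRINKFILE (N'YourDataFile', 25000);   -- placeholder: roughly a 25 GB target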
Also, make sure that simple recovery is really what you want. Understand that this recovery model makes point-in-time recovery of your database impossible. Keep this in mind as you design your backup/recovery strategy for this DB. I'd also get buy-in from management on that strategy, as they should define how much data loss is acceptable in the event of a DB failure. Don't make this decision on your own - let the application owners decide.
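If you want to confirm what the database is currently set to, a quick check looks like this (the database name is just an example):

-- Show the current recovery model
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'YourReportingDB';

-- Only if the business decides it needs point-in-time recovery:
-- ALTER DATABASE YourReportingDB SET RECOVERY FULL;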
January 5, 2010 at 7:18 pm
Thanks John.
My logic here is that I have a 100GB DB that probably only needs to be 20GB at the most. That's quite a waste of resources. I hate to take space away from SQL Server, but I'm pressed for disk space, and nobody wants to upgrade this legacy system.
I totally get the difference between "full" and "simple" recovery. You really would not believe the transaction log mess I had to clean up just to get to this point! Simple is more than adequate for this application; it's really just a reporting db that gets loaded with summarized customer data once a day. I was hoping there might be an easier way to recover disk space under the simple model. Shrinking just doesn't feel like a good thing to do.
Thanks for the quick response!
January 5, 2010 at 8:05 pm
BSavoie (1/5/2010)
Thanks John.
My logic here is that I have a 100GB DB that probably only needs to be 20GB at the most. That's quite a waste of resources. I hate to take it away from SQL Server, but I am pressed for diskspace as nobody wants to upgrade this legacy system.
I totally get the difference between "full" and "simple" recovery. You really would not believe the transaction log mess I had to clean up just to get to this point! Simple is more than adequate for this application. It's really just a reporting db that gets loaded with summarized customer data once a day. I was hoping that there might be an easier way to recover diskspace using the simple model. Shrinking just doesn't feel like a good thing to do.
Thanks for the quick response!
Shrinking a data file in this scenario is okay - where you get into problems is when you schedule the shrink operation. You also need to keep space available in the data file for maintenance operations and growth. So don't shrink the file down fully - leave enough free space in the file to reindex your largest table, which requires approximately 1.5 times the size of that table. Then add whatever growth factor you expect (if known).
Don't use DBCC SHRINKDATABASE - use DBCC SHRINKFILE so you can direct the operation at a specific file and target size. Note: once you perform the shrink, your indexes will be fragmented and you will need to rebuild them.
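As a rough sketch of the whole sequence (the table and file names are placeholders, and the numbers are only illustrations of the 1.5x guideline above):

-- Find the largest table so you know how much free space to leave (reserved_page_count is in 8 KB pages)
SELECT TOP 1 OBJECT_NAME(object_id) AS table_name,
       SUM(reserved_page_count) * 8 / 1024 AS reserved_mb
FROM sys.dm_db_partition_stats
GROUP BY object_id
ORDER BY reserved_mb DESC;

-- Shrink target = data you keep + ~1.5x the largest table + expected growth (value in MB, placeholder here)
DBCC SHRINKFILE (N'YourDataFile', 30000);

-- Shrinking fragments the indexes, so rebuild them afterwards (repeat per table, or script it out)
ALTER INDEX ALL ON dbo.YourLargestTable REBUILD;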
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
January 5, 2010 at 8:33 pm
Awesome, thank you Jeffrey. That's pretty much where I was going. I'd hoped someone had a better answer. 🙂
Is the "Available Free Space" figure on the "Shrink Files" dialog box accurate? Is that the amount I can expect to get back?
January 6, 2010 at 2:06 am
BSavoie (1/5/2010)
Is the "Available Free Space" figure on the "Shrink Files" dialog box accurate? Is that the amount I can expect to get back?
Pretty much, yes. If this was upgraded from SQL 2000, then run DBCC UPDATEUSAGE to make sure it's 100% accurate.
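For example (the database name is just a placeholder):

-- Correct page and row counts that can be off after an upgrade from SQL 2000
DBCC UPDATEUSAGE (N'YourReportingDB');

-- Free space per file - this should line up with what the Shrink Files dialog reports
SELECT name,
       size/128 AS size_mb,
       size/128 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128 AS free_mb
FROM sys.database_files;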
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability