November 26, 2014 at 6:02 pm
Hello,
I've read a lot about why you shouldn't normally shrink databases, in posts such as this:
http://www.brentozar.com/archive/2009/08/stop-shrinking-your-database-files-seriously-now/
But we have a situation where we're required to copy the live DB to various non-production environments for testing. Part of this process involves truncating a number of tables with masses of blob data, so we're freeing up quite a lot of space. The question is how to reclaim it. The database is peculiar in that it has no clustered indexes, so I can't rebuild the clustered indexes on another filegroup (drop to move and rebuild).
I've tried DBCC SHRINKFILE specifying a size, and I've tried shrinking the file in increments. The problem is I'm just not getting much space released; I get maybe 2-3%. I suspect this is because we're dealing with heaps, some of which have sparsely populated blob/image data.
So what's the best approach? Is there an alternative to shrinking? Should I recreate all the DB objects in a new database? It doesn't matter if the process takes a while or has to be done manually.
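For reference, here's roughly the incremental approach I've been running (the logical file name and sizes are placeholders; the real logical name comes from sys.database_files):

```sql
-- Shrink the data file in 5 GB steps rather than in one big jump.
-- 'MyDB_Data' and the sizes below are placeholders.
USE MyDB;

DECLARE @TargetMB  int = 90000;  -- final target size, in MB
DECLARE @StepMB    int = 5000;   -- shrink increment, in MB
DECLARE @CurrentMB int;

SELECT @CurrentMB = size / 128   -- size is stored in 8 KB pages
FROM sys.database_files
WHERE name = N'MyDB_Data';

WHILE @CurrentMB - @StepMB > @TargetMB
BEGIN
    SET @CurrentMB = @CurrentMB - @StepMB;
    DBCC SHRINKFILE (N'MyDB_Data', @CurrentMB);
END;

-- Final step down to the target.
DBCC SHRINKFILE (N'MyDB_Data', @TargetMB);
```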
Regards,
Peter
November 26, 2014 at 6:49 pm
Those articles refer mainly to production DBs, which need to be in excellent shape. I'm not recommending shrinking in general, but you might use it to generate lighter copies for development/testing environments.
How much information will you keep and how much will you remove? If you're not keeping a lot, I would script out the DB and load only the data you need.
Why don't you have clustered indexes? Just curious.
November 26, 2014 at 8:42 pm
Hi Luis,
We're freeing up 25% of the space, as we're removing a lot of image data, but it's still a big database (115 GB) with 150 million rows of data across 3,000 tables.
So will scripting the entire DB and all the objects in it let me reduce the size and avoid fragmentation? Is there a better way to do this than the built-in 'Generate Scripts'?
The database is an ERP system that's been designed with no clustered indexes.
Regards,
Peter
November 26, 2014 at 8:55 pm
peteroc (11/26/2014)
Hi Luis, We're freeing up 25% of the space as we're removing a lot of image data but it's still a big database (115 GB) with 150 million rows of data across 3,000 tables.
So will scripting the entire db and all the objects in it allow me to reduce the size and avoid fragmentation? Is there a better way to do this than the built in 'Generate Scripts'?
The database is an ERP system that's been designed with no clustered indexes.
Regards,
Peter
I know this isn't what you want to hear, but disk space is relatively inexpensive. Wouldn't it just be easier to add some?
As for having a database of this size with no CIs goes, I'll just hold my tongue. 😉
--Jeff Moden
Change is inevitable... Change for the better is not.
November 26, 2014 at 9:16 pm
Hi Jeff,
I'm with you on adding more disk space, but it's not my decision and I have to explore the options. I also think this is complicated by the missing CIs. Do you think that could be why the space isn't being freed up?
Regards,
Peter
November 27, 2014 at 1:18 am
peteroc (11/26/2014)
Hi Jeff, I'm with you regarding the adding more disk space but it's not my decision and I have to explore the options. I also think it's complicated by the fact of the missing CI's. Do you think this could be why the space isn't being freed up?
Heaps shouldn't prevent a shrink from running. They'll slow it down most likely, but not prevent it.
Oh, and for reference, I have a job that runs once a week on my dev server: set all DBs to simple recovery and shrink the logs (the data files are already small).
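A rough sketch of that kind of job, for dev/test servers only (this is an illustration, not the exact job, and it assumes one log file per database):

```sql
-- Loop over user databases: set SIMPLE recovery, then shrink the log.
-- Dev/test servers only; never run this against production.
DECLARE @db sysname, @sql nvarchar(max);

DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
    SELECT name
    FROM sys.databases
    WHERE database_id > 4               -- skip system databases
      AND state_desc = N'ONLINE';

OPEN dbs;
FETCH NEXT FROM dbs INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'ALTER DATABASE ' + QUOTENAME(@db) + N' SET RECOVERY SIMPLE;';
    EXEC (@sql);

    -- Shrink the database's log file down to roughly 128 MB.
    SET @sql = N'USE ' + QUOTENAME(@db) + N';
        DECLARE @log sysname;
        SELECT @log = name FROM sys.database_files WHERE type_desc = N''LOG'';
        DBCC SHRINKFILE (@log, 128);';
    EXEC (@sql);

    FETCH NEXT FROM dbs INTO @db;
END;
CLOSE dbs;
DEALLOCATE dbs;
```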
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 27, 2014 at 2:57 am
Hi Peter,
As you're already truncating the tables with the blob data in them, could you try dropping those tables instead? You could then run the shrink and re-create the tables afterwards.
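The sequence might look something like this (the table, column, and file names are placeholders; you'd script the real table definitions out first, e.g. with Generate Scripts):

```sql
-- 1. Script out the table definition first, then drop the
--    blob-heavy table entirely instead of truncating it.
DROP TABLE dbo.DocumentBlobs;            -- placeholder name

-- 2. With the pages fully deallocated, shrink the data file.
DBCC SHRINKFILE (N'MyDB_Data', 90000);   -- target size in MB, placeholder

-- 3. Re-create the empty table from the scripted definition.
CREATE TABLE dbo.DocumentBlobs
(
    DocumentID int NOT NULL,
    Content    varbinary(max) NULL
);
```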
November 27, 2014 at 4:13 am
peteroc (11/26/2014)
Hello,I've read lots about why you shouldn't normally shrink databases in posts such as this:
http://www.brentozar.com/archive/2009/08/stop-shrinking-your-database-files-seriously-now/
But we have a situation where we are required to copy the live db to various non production environments for testing. Part of this process involves truncating a number of tables with masses of blob data. So we're freeing up quite a lot of space. The question is how to reclaim this? The database is peculiar in that it's got no clustered indexes so I can't rebuild indexes on another filegroup with drop to move and rebuild.
I've tried dbcc shrinkfile specifying a size. I've tried to shrink the file in increments. The problem is I'm just not getting much space released. I get maybe 2-3%. I suspect this is because we're dealing with heaps with some tables that have sparsely populated blob / image data.
So what's the best approach. Is there an alternative to shrinking? Should I recreate all the db objects in a new database? It doesn't matter if the process takes a while or if it has to be done manually.
Regards,
Peter
Have you tried rebuilding the heaps to free up unused pages?
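On SQL Server 2008 and later you can rebuild a heap in place with ALTER TABLE ... REBUILD, which deallocates unused pages. A sketch (the table name is a placeholder), plus a query to generate the statements for every heap in the database:

```sql
-- Rebuild a single heap in place (SQL Server 2008+).
ALTER TABLE dbo.SomeHeap REBUILD;   -- placeholder table name

-- Generate a REBUILD statement for every heap in the database.
SELECT N'ALTER TABLE ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
       + N' REBUILD;'
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
JOIN sys.indexes i ON i.object_id = t.object_id
WHERE i.index_id = 0;               -- index_id 0 means the table is a heap
```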
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉