March 30, 2010 at 4:24 pm
Dear Friend Master SQL Guru, 😀
Ok - before you flame me, I have a legitimate (I think) reason to shrink a database*. We restore a copy of our live database to several development/test environments on a regular basis. The dev environments are much less beefy than production and have less disk space, and to be honest, don't need all of the data from production anyway. So we remove about 85% of it in a staging area before pushing it out to the places it needs to be. Some tables we truncate since they are not useful in a dev/test environment; others we pare down to just the recent data. When we have it as small as we can get it, we do a shrink and then a backup so that the file sizes are small when we do the restore.
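For reference, the core of the process looks roughly like this (the database, table, and path names here are stand-ins, not our real schema):

USE StagingCopy;

-- tables with no value in dev/test
TRUNCATE TABLE dbo.AuditLog;

-- tables pared down to recent data only
DELETE FROM dbo.Orders
WHERE OrderDate < DATEADD(MONTH, -3, GETDATE());

-- shrink, then back up small
DBCC SHRINKDATABASE (StagingCopy, 20);   -- leave ~20% free space as a buffer
BACKUP DATABASE StagingCopy
    TO DISK = N'D:\Backups\StagingCopy_small.bak'
    WITH INIT;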
It takes quite a while to do the whole process, but by far the most time is taken up by the shrink. I've tried giving it a 125% size buffer so it isn't trying to shrink down to the last byte, and tried an incremental shrink where I run the shrink multiple times, taking out 5% each time - neither made a significant difference. I've thought about copying the data I need out into another database, but that seems like it might take a long time to set up and maintain since I have quite a few tables. If that's the last resort, though, I guess I have to go there.
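The incremental version was a loop along these lines (the logical file name and sizes are made up for illustration):

DECLARE @target INT = 10240,   -- final target size in MB (example)
        @size   INT,
        @sql    NVARCHAR(256);

-- current size in MB (sys.database_files.size is in 8 KB pages)
SELECT @size = size * 8 / 1024
FROM sys.database_files
WHERE name = N'StagingCopy_Data';

WHILE @size > @target
BEGIN
    -- take roughly 5% off per pass, but never overshoot the target
    SET @size = CASE WHEN @size * 95 / 100 > @target
                     THEN @size * 95 / 100 ELSE @target END;
    SET @sql = N'DBCC SHRINKFILE (N''StagingCopy_Data'', '
             + CAST(@size AS NVARCHAR(10)) + N');';
    EXEC (@sql);
END;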
Any ideas what I can do to speed up the shrink or another process I can use to get a smaller copy of the database out to multiple locations?
Thanks,
Chad
*Database shrinks, especially on a regular basis, are BAD. If you needed that much space at one point, you will likely need it again, and shrinking the database fragments it to death. In addition, you'll take a performance hit as the database grows back again. If you have removed a significant portion of your database, don't expect it to grow back to its original size, and need the disk space, then a one-time shrink followed by rebuilding your indexes may be beneficial. I think I fall into this latter category for this specific instance.
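If you really are in that one-time boat, the follow-up is something like this (the table name is a placeholder; loop over sys.tables to cover everything):

DBCC SHRINKDATABASE (StagingCopy, 10);   -- one-time shrink, leave 10% free

-- then rebuild to undo the fragmentation the shrink just caused
ALTER INDEX ALL ON dbo.Orders REBUILD;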
April 1, 2010 at 9:53 am
Looks like there are no good ideas. If it gets really bad, I may try to script something, but I feel more comfortable deleting data than programmatically recreating the database - it seems less likely that something will be left out, and more certain to be a perfect copy of the production schema.
Thanks!
Chad
April 1, 2010 at 10:41 am
Two things I do when reloading Production to Dev/Test are changing the recovery model to Simple and then shrinking the transaction log. The log files in production are approximately 8 GB in the Full recovery model. In test/dev, the log doesn't need to be that big, even when they are doing problem resolution testing.
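In T-SQL that's just this (the logical log file name is an example):

ALTER DATABASE StagingCopy SET RECOVERY SIMPLE;

-- shrink the log file; 512 MB is an example target
DBCC SHRINKFILE (N'StagingCopy_Log', 512);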
April 1, 2010 at 1:28 pm
Replace your disks with SSD?
Seriously, I like what Lynn suggested.
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
April 1, 2010 at 1:46 pm
SSD? Oh wow. I'm drooling now.
I do set the DB to Simple before I start and shrink the TLog after all the other scripts have run. The first time it took over 24 hours to run the scripts, but with a few changes (most notably disabling indexes during the deletes) I have the whole thing down to about 10 hours. That's still about 2 hours longer than the window I have, though, so I'm hoping to trim a little more off.
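The index change is basically this pattern (index and table names here are stand-ins for the real ones):

-- disable nonclustered indexes so the big DELETE doesn't maintain them
-- (don't disable the clustered index - that takes the table offline)
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders DISABLE;

DELETE FROM dbo.Orders
WHERE OrderDate < DATEADD(MONTH, -3, GETDATE());

-- rebuilding re-enables the index and recreates it from scratch
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD;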
I'm guessing that the time it takes to shrink is proportional to the used space in the database, right? So if I can get the DB even smaller, the shrink should take less time? It makes sense, because the shrink has to move all the pages from the end of the file toward the front before it can release the space, so that is another direction I might explore.
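To check that theory, something like this shows allocated vs. used space per file (run it in the database itself):

-- sizes come back in 8 KB pages, so convert to MB
SELECT name,
       size * 8 / 1024                            AS allocated_mb,
       FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024 AS used_mb
FROM sys.database_files;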
Thanks Lynn and Jason for your replies.
Chad
April 1, 2010 at 1:55 pm
You're welcome.
How big is the database? And log?
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
April 1, 2010 at 2:01 pm
Did you try a shrink with TRUNCATEONLY first? You might get lucky and release a chunk with no effort that way.
I wonder if a reindex before the shrink (I know, completely back to front from normal) might mean the shrink has less data that needs moving and goes quicker.
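Something along these lines costs almost nothing to try (the logical file name is an example):

-- TRUNCATEONLY only releases free space at the end of the file -
-- no page movement at all - so it is nearly instant when it helps
DBCC SHRINKFILE (N'StagingCopy_Data', TRUNCATEONLY);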
---------------------------------------------------------------------
April 1, 2010 at 2:52 pm
Try moving the tables you are truncating into a new file group.
The files in the new file group should be able to shrink fairly rapidly, since they will be empty or almost empty.
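Roughly this pattern (all names below are placeholders):

-- put the tables that get truncated on their own filegroup
ALTER DATABASE StagingCopy ADD FILEGROUP TruncatedTables;
ALTER DATABASE StagingCopy ADD FILE
    (NAME = N'StagingCopy_Trunc', FILENAME = N'D:\Data\StagingCopy_Trunc.ndf')
    TO FILEGROUP TruncatedTables;

-- moving a table means rebuilding its clustered index on the new filegroup
CREATE UNIQUE CLUSTERED INDEX PK_AuditLog ON dbo.AuditLog (AuditID)
    WITH (DROP_EXISTING = ON) ON TruncatedTables;

-- after TRUNCATE the file is nearly empty and shrinks almost instantly
TRUNCATE TABLE dbo.AuditLog;
DBCC SHRINKFILE (N'StagingCopy_Trunc', 128);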
April 1, 2010 at 2:54 pm
Chad Crawford (3/30/2010)
Dear Friend Master SQL Guru, 😀
Ha, I just barely noticed that.
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
April 1, 2010 at 4:46 pm
CirquedeSQLeil (4/1/2010)
How big is the database? And log?
This particular database is about 80 GB and the log is around 2 GB. I need to get them down to around 10 GB together. The log shrinks nicely, of course. The data has a little way to go still.
george sibbald (4/1/2010)
Did you try a shrink with TRUNCATEONLY first? You might get lucky and release a chunk with no effort that way. I wonder if a reindex before the shrink (I know, completely back to front from normal) might mean the shrink has less data that needs moving and goes quicker.
Good thought - I bet I do get a good chunk right off the bat since I'm removing about 85% of the data. I'll see if that makes a difference in the overall time. I didn't think about reindexing first; I'll try that next and let you know how it ends up.
Michael Valentine Jones (4/1/2010)
Try moving the tables you are truncating into a new file group. The files in the new file group should be able to shrink fairly rapidly, since they will be empty or almost empty.
Good thought, I'll try that too. I haven't done a lot of work with filegroups - would I need to shrink the filegroup the tables came from, or the one they are going to? Maybe I should move all the tables to a new filegroup and then let the old one go? I'll try it a couple of different ways unless you have some specific suggestions.
It will take me a few days to go through all the suggestions and get it tweaked, but I'll let you know my progress and appreciate any feedback you have.
CirquedeSQLeil (4/1/2010)
Chad Crawford (3/30/2010)
Dear Friend Master SQL Guru, 😀
Ha, I just barely noticed that.
Dedicated lurker there. Haven't missed a post for at least a year! Thought about going back and reading the ones I missed, but then decided that wouldn't be something that I wanted to admit to doing anyway.
Chad
April 1, 2010 at 5:05 pm
Chad Crawford (4/1/2010)
CirquedeSQLeil (4/1/2010)
Chad Crawford (3/30/2010)
Dear Friend Master SQL Guru, 😀
Ha, I just barely noticed that.
Dedicated lurker there. Haven't missed a post for at least a year! Thought about going back and reading the ones I missed, but then decided that wouldn't be something that I wanted to admit to doing anyway.
Chad
Reading that entire thread would take a while. There might be some nuggets in it worth reading - but there is a lot to sift through.
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
April 2, 2010 at 8:05 am
1) Give yourself a BIG tlog to start, before doing the deletes and the shrink, so it doesn't have to autogrow.
2) Script out and drop ALL indexes if you are deleting 85% of the data. Do the deletes, shrink the database, grow the data file back out manually, then create the indexes. Now your indexes aren't fragmented all to hell. Now shrink the tlog. (Rough sketch below.)
3) Consider getting Hyperbac. It now allows you (or shortly will) to restore a compressed database backup and use it as a NATIVE SQL Server database - while STILL COMPRESSED!! Pretty damn amazing stuff actually - a poor man's database compression. I got to see it while it was still in beta. I am advising all of my clients to get this product, and several have been using it for quite some time. I don't even have to worry about promoting it here on SQLServerCentral any more since Red-Gate just bought them! :w00t:
Disclaimer: I have a close relationship with Hyperbac, use their products, and recommend them to my clients. Also, if you care to, you can mention TheSQLGuru sent you; you will get a discount and my daughter will get a few coins for her college fund. Feel free to contact JAven@hyperbac.com.
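Rough T-SQL for steps 1 and 2 (every name and size below is an example, not a recommendation):

-- 1) pre-grow the log so the big deletes don't pay for autogrow
ALTER DATABASE StagingCopy
MODIFY FILE (NAME = N'StagingCopy_Log', SIZE = 20GB);

-- 2) drop indexes (script them out first!), delete, shrink,
--    regrow the data file manually, then recreate the indexes
DROP INDEX IX_Orders_OrderDate ON dbo.Orders;

DELETE FROM dbo.Orders
WHERE OrderDate < DATEADD(MONTH, -3, GETDATE());

DBCC SHRINKDATABASE (StagingCopy, 10);

ALTER DATABASE StagingCopy
MODIFY FILE (NAME = N'StagingCopy_Data', SIZE = 12GB);   -- manual regrow

CREATE INDEX IX_Orders_OrderDate ON dbo.Orders (OrderDate);

-- then shrink the tlog last
DBCC SHRINKFILE (N'StagingCopy_Log', 512);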
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
April 2, 2010 at 8:50 am
One potentially efficient approach revolves around the fact that you are deleting 85% of the data.
That must take a while, and then you have the shrink operation too...
I would be tempted to restore the production database to the staging server, and then run a script to set up snapshot replication with row (and possibly column) filters.
No deletion or shrinking would be required: the (relatively small) filtered data is just sent to the subscribers and bulk loaded by replication into pre-sized databases.
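The core of the setup script might look like this (run in the published database; the distributor is assumed to be configured already, and every object name and the filter below are placeholders):

-- snapshot publication with a row filter; sp_addpublication_snapshot
-- and sp_addsubscription would complete the setup
EXEC sp_addpublication
    @publication = N'DevRefresh',
    @repl_freq   = N'snapshot';

EXEC sp_addarticle
    @publication   = N'DevRefresh',
    @article       = N'Orders',
    @source_owner  = N'dbo',
    @source_object = N'Orders',
    @filter_clause = N'OrderDate >= DATEADD(MONTH, -3, GETDATE())';  -- row filter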
Paul
April 2, 2010 at 11:01 am
TheSQLGuru (4/2/2010)
2) Script out and drop ALL indexes if you are deleting 85% of the data. Do the deletes, shrink the database, grow the data file back out manually, then create the indexes. Now your indexes aren't fragmented all to hell. Now shrink the tlog.
3) Consider getting Hyperbac.
Ohhhh... I like that. I was disabling the indexes, but I re-enabled them before doing the shrink. In retrospect, your suggestion seems obvious - you get the benefit of the small size without the fragmentation. Brilliant! Thank you.
I'll check out Hyperbac - sounds like it might be something worth looking at. I appreciate the tip!
Paul White NZ (4/2/2010)
One potentially efficient approach revolves around the fact that you are deleting 85% of the data. That must take a while, and then you have the shrink operation too...
I would be tempted to restore the production database to the staging server, and then run a script to set up snapshot replication with row (and possibly column) filters.
No deletion or shrinking would be required: the (relatively small) filtered data is just sent to the subscribers and bulk loaded by replication into pre-sized databases.
Paul
Nice - I didn't even consider replication. I've only used snapshot replication to get transactional replication started, so it slipped my mind. There would be a little maintenance involved to add new tables and procedures, but we could just make that part of our deployment process.
April 2, 2010 at 11:10 am
Chad Crawford (4/2/2010)
TheSQLGuru (4/2/2010)
2) Script out and drop ALL indexes if you are deleting 85% of the data. Do the deletes, shrink the database, grow the data file back out manually, then create the indexes. Now your indexes aren't fragmented all to hell. Now shrink the tlog.
3) Consider getting Hyperbac.
Ohhhh... I like that. I was disabling the indexes, but I re-enabled them before doing the shrink. In retrospect, your suggestion seems obvious - you get the benefit of the small size without the fragmentation. Brilliant! Thank you.
I'll check out Hyperbac - sounds like it might be something worth looking at. I appreciate the tip!
Paul White NZ (4/2/2010)
One potentially efficient approach revolves around the fact that you are deleting 85% of the data. That must take a while, and then you have the shrink operation too...
I would be tempted to restore the production database to the staging server, and then run a script to set up snapshot replication with row (and possibly column) filters.
No deletion or shrinking would be required: the (relatively small) filtered data is just sent to the subscribers and bulk loaded by replication into pre-sized databases.
Paul
Nice - I didn't even consider replication. I've only used snapshot replication to get transactional replication started, so it slipped my mind. There would be a little maintenance involved to add new tables and procedures, but we could just make that part of our deployment process.
You could do it that way, or you could set up the replication with the snapshot and filters to get what you need. You would recreate it each time you need to rebuild dev.
Just a thought.
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events