March 11, 2014 at 10:24 am
Hey all,
Here's my scenario. We have a 1.2TB database that we are now going to break up into multiple files. Unfortunately the vendor of this database does not yet support filegroups (I know). So, I have to figure out my best path of attack here.
Here's what I've done thus far in a test scenario:
- Created 7 additional files across 4 new drives
- Rebuilt all indexes
- Half of the data split evenly across the 7 new files
- The other half remains in the primary file (I'm assuming blob data types and heaps are the cause?)
So, I now need to figure out how to even out the primary file with the newly created files. My current plan was something like this (rough T-SQL sketch below):
- Run DBCC SHRINKFILE with the EMPTYFILE option to push the remaining data into the other files.
- Once the above has completed, cap the other 7 files and allow the primary to grow back out to match the size of the others
- Remove the cap from the other files
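Roughly, here's the T-SQL I have in mind; the database and file names and the sizes are placeholders, not the real ones:

-- Step 1: push the remaining data out of the primary file
USE [VendorDB];  -- placeholder database name
DBCC SHRINKFILE (N'VendorDB_Primary', EMPTYFILE);

-- Step 2: cap the 7 new files so the primary can grow back out
ALTER DATABASE [VendorDB] MODIFY FILE (NAME = N'VendorDB_Data2', MAXSIZE = 150GB);
-- ...repeat for the other six files...

-- Step 3: remove the caps once the sizes are even
ALTER DATABASE [VendorDB] MODIFY FILE (NAME = N'VendorDB_Data2', MAXSIZE = UNLIMITED);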
With this comes two questions:
1) Is there a better, easier way to do this? Currently I'm over 26 hours into an EMPTYFILE! Not to mention, it's a bit of a babysitting effort that I'd prefer not to go through; however, without the use of filegroups, I'm not sure what else I can do?
2) Can/should I expect blocking to occur when running an EMPTYFILE? I've only done some basic operations; however, they appear to process just fine. Yet this is a very highly utilized database that sees a lot of traffic.
Thanks
March 11, 2014 at 2:17 pm
If it were me, I would not use SHRINKFILE with EMPTYFILE. EMPTYFILE marks a data file so that the database engine will no longer write to it. The only way I know of to allow the engine to write to the file again is a SQL Server restart. A detach and attach of the db might also work, but I've never tried it.
I would just use DBCC SHRINKFILE and lots of online rebuilds (assuming Enterprise Edition). The proportional fill algorithm in SQL Server should do the rest.
Shrink from 1.2TB to 1TB
Rebuild Indexes
Shrink from 1TB to 800GB
Rebuild Indexes
and so on until you have shrunk the mdf to the same size as the other files and rebuilt the indexes one last time.
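Each round would be something like this; the file name is a placeholder and the shrink target is in MB:

-- shrink the primary data file in stages rather than all at once
DBCC SHRINKFILE (N'VendorDB_Primary', 1048576);  -- ~1 TB target, specified in MB
GO
-- then rebuild indexes online so the data redistributes across all the files
ALTER INDEX ALL ON dbo.SomeLargeTable REBUILD WITH (ONLINE = ON);  -- repeat per table
GO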
The shrink operation should not block anything, but it will cause IO and index fragmentation, which could make things run slower.
The other option would be to create a new empty database with the desired number of data files and then script out all the objects and import all the data into the new empty database.
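The per-table import could be as simple as this (names invented; for the really big tables you would probably use BCP or SSIS instead):

-- minimally logged load into the new database, assuming the target table is empty
INSERT INTO NewDB.dbo.SomeTable WITH (TABLOCK) (Id, Payload)
SELECT Id, Payload
FROM OldDB.dbo.SomeTable;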
March 12, 2014 at 6:33 am
Thanks for the suggestions Jeremy,
But rebuilding indexes isn't going to move heaps/blobs, is it?
March 12, 2014 at 6:43 am
Adam Bean (3/12/2014)
Thanks for the suggestions Jeremy, but rebuilding indexes isn't going to move heaps/blobs, is it?
Correct, it won't redistribute the blob pages to the new files. You have no alternative but to shrink/empty the files.
If you check sys.dm_db_partition_stats you will be able to identify for certain how much data you have to move.
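Something like this gives a per-table breakdown (a rough query; the counts are 8 KB pages, so divide by 128 for MB):

SELECT o.name AS table_name,
       SUM(ps.in_row_used_page_count) / 128 AS in_row_mb,
       SUM(ps.lob_used_page_count) / 128 AS lob_mb,
       SUM(ps.row_overflow_used_page_count) / 128 AS row_overflow_mb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.objects AS o ON o.object_id = ps.object_id
WHERE o.is_ms_shipped = 0
GROUP BY o.name
ORDER BY lob_mb DESC;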
March 12, 2014 at 6:55 am
MysteryJimbo (3/12/2014)
Adam Bean (3/12/2014)
Thanks for the suggestions Jeremy, but rebuilding indexes isn't going to move heaps/blobs, is it?
Correct, it won't redistribute the blob pages to the new files. You have no alternative but to shrink/empty the files.
If you check sys.dm_db_partition_stats you will be able to identify for certain how much data you have to move.
Thanks Jimbo. Empty I'd agree with; however, shrink won't accomplish anything, correct? I attempted a SHRINKFILE to a size smaller than the minimum, and it didn't move anything. So it seems I have two options if we want to move the remaining data out of the original file:
1) EMPTYFILE
2) New database
Correct?
March 12, 2014 at 7:04 am
I don't think EMPTYFILE is a viable route though; after 48 hours, I still have 200+ GB in the file ... that's entirely too long on such a high-traffic database, so I can't even pursue that route at this time, as I'm going to assume I will affect users.
March 12, 2014 at 7:24 am
Can you create a duplicate table and trickle-feed a backfill into the duplicate table? I would do this into an alternative filegroup.
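Something along these lines; the table and column names are invented, and the batch size is just a starting point to tune:

-- assumes a hypothetical table dbo.BigTable with an increasing integer key Id
DECLARE @batch int = 10000, @maxId int = 0, @rows int = 1;
WHILE @rows > 0
BEGIN
    INSERT INTO dbo.BigTable_New (Id, Payload)
    SELECT TOP (@batch) Id, Payload
    FROM dbo.BigTable
    WHERE Id > @maxId
    ORDER BY Id;

    SET @rows = @@ROWCOUNT;
    SELECT @maxId = ISNULL(MAX(Id), 0) FROM dbo.BigTable_New;
END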
March 12, 2014 at 11:56 am
With the idea of creating a new db and migrating ... anything better/easier than built-in SSMS functionality and/or RedGate tools?
March 13, 2014 at 5:29 am
Adam Bean (3/11/2014)
Hey all, here's my scenario. We have a 1.2TB database that we are now going to break up into multiple files. Unfortunately the vendor of this database does not yet support filegroups (I know). So, I have to figure out my best path of attack here.
Here's what I've done thus far in a test scenario:
- Created 7 additional files across 4 new drives
- Rebuilt all indexes
- Half of the data split evenly across the 7 new files
- The other half remains in the primary file (I'm assuming blob data types and heaps are the cause?)
So, I now need to figure out how to even out the primary file with the newly created files. My current plan was something like this (rough T-SQL sketch below):
- Run DBCC SHRINKFILE with the EMPTYFILE option to push the remaining data into the other files.
- Once the above has completed, cap the other 7 files and allow the primary to grow back out to match the size of the others
- Remove the cap from the other files
With this comes two questions:
1) Is there a better, easier way to do this? Currently I'm over 26 hours into an EMPTYFILE! Not to mention, it's a bit of a babysitting effort that I'd prefer not to go through; however, without the use of filegroups, I'm not sure what else I can do?
2) Can/should I expect blocking to occur when running an EMPTYFILE? I've only done some basic operations; however, they appear to process just fine. Yet this is a very highly utilized database that sees a lot of traffic.
Thanks
You cannot EMPTYFILE the database's primary file; the first file created in the database cannot be removed, and neither can the first log file.
Essentially, the filegroups should be transparent to the vendor. Do they regularly create objects in the database? That might be the only time they'd see a problem.
If you really want to keep everything in one filegroup, I would create a new temporary filegroup and move objects into it. Shrink/expand the files in the primary filegroup to the same size and growth rate, then move the objects back in. Remove the temporary filegroup when you have finished with it.
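A rough sketch of the first half, with made-up names and paths; note the LOB caveat in the comments:

-- create the temporary filegroup and give it a file (placeholder path and size)
ALTER DATABASE [VendorDB] ADD FILEGROUP [TempFG];
ALTER DATABASE [VendorDB] ADD FILE
    (NAME = N'TempFG_Data', FILENAME = N'E:\Data\TempFG_Data.ndf', SIZE = 100GB)
    TO FILEGROUP [TempFG];

-- move a clustered table by rebuilding its clustered index on the new filegroup
CREATE UNIQUE CLUSTERED INDEX PK_SomeTable ON dbo.SomeTable (Id)
    WITH (DROP_EXISTING = ON, ONLINE = ON) ON [TempFG];

-- caveat: LOB data stays in its original filegroup when an index is moved this
-- way, so the blob-heavy tables may need to be rebuilt as new tables instead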
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
March 13, 2014 at 6:14 am
Hey Perry,
Thanks for the response. You can indeed perform an EMPTYFILE on the primary file; however, it will end in an error because you can't move the system objects, yet it does move the user data to the other files. Still, I don't feel comfortable doing this on such a large file, as current testing puts the process at around 50 hours, which simply isn't OK given the utilization of this database.
As for your idea of a temporary filegroup ... a good idea, but I have questions. Say I went forward with creating 7 additional files and I still wanted the data spread evenly among them, how would I leverage filegroups?
- Would I simply create a FG per file
- Perform a REBUILD allowing whatever will move on its own to move
- Examine what objects are still associated with the primary FG and move them manually thereafter to different FG's
- Drop all FG's
?
I'm simply not overly familiar with using files and filegroups in general.
Thanks!
March 13, 2014 at 7:57 am
Adam Bean (3/13/2014)
You can indeed perform an EMPTYFILE on the primary file; however, it will end in an error because you can't move the system objects, yet it does move the user data to the other files.
Yes, exactly, but you can't delete that file. On a large file it will take some time before you hit the error though 😉
Adam Bean (3/13/2014)
As for your idea of a temporary filegroup ... a good idea
😎 Yes, I know; I've used it myself in the past.
Adam Bean (3/13/2014)
but I have questions. Say I went forward with creating 7 additional files and I still wanted the data spread evenly among them, how would I leverage filegroups?
- Would I simply create a FG per file
- Perform a REBUILD allowing whatever will move on its own to move
- Examine what objects are still associated with the primary FG and move them manually thereafter to different FG's
- Drop all FG's
?
I'm simply not overly familiar with using files and filegroups in general.
Thanks!
Just create a new FG with one file; it's temporary anyway, so don't go overboard 😉
New FG with one file, and move all objects into it. Resize the primary filegroup files and move the objects back, then delete the temporary filegroup and its data file.
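The tail end would look something like this; names are placeholders again, and the temporary file has to be empty before it can be removed:

-- grow the primary file to its final size (shrinking it would need DBCC SHRINKFILE)
ALTER DATABASE [VendorDB] MODIFY FILE
    (NAME = N'VendorDB_Primary', SIZE = 150GB, FILEGROWTH = 1GB);

-- ...move the objects back to [PRIMARY] the same way they were moved out...

-- then drop the now-empty temporary file and filegroup
ALTER DATABASE [VendorDB] REMOVE FILE [TempFG_Data];
ALTER DATABASE [VendorDB] REMOVE FILEGROUP [TempFG];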
Danger, Will Robinson!
Be careful about pushing too many files into one filegroup; you'll get to a point where the overhead required on those files will hamper performance.
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
April 7, 2014 at 7:19 am
So, back to working on this and seeing some very odd behavior.
Test #1) EMPTYFILE - left 20 GB in the primary file
Test #2) EMPTYFILE - left 24 MB in the primary file
Test #3) EMPTYFILE - left 150 GB in the primary file
I'm at a loss as to why I'm seeing different results here. I'm playing with test #3 now, and every additional time I run EMPTYFILE it fails:
Cannot move all contents of file to other places to complete the EMPTYFILE operation.
So it's obviously hitting the system objects; however, why in this run is it ignoring even more user objects?
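To see what's actually left in each file, I've been checking space used with something like this:

-- size and space used per file, in MB (both values are stored as 8 KB pages)
SELECT name,
       size / 128 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
FROM sys.database_files;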
Are there any good tools/scripts out there to completely migrate one database into another? I attempted to use RedGate compare and ran into a multitude of issues. I plan on retrying; however, I was curious whether this would be the best bet at this time, and what tools/scripts, if any, exist to make it easier.
Thanks