November 30, 2007 at 9:56 am
I'm running out of disk on one of my servers running SQL Server 2000. It is home to several databases with numerous files and filegroups. I've added disk, and my desire is to extend the largest filegroups of only the largest, fastest-growing DB by adding files to its filegroups.
To address the space issue and allow all the databases room to grow, I would like to shrink the MDF files in these filegroups and have SOME of the data move to the newly-created NDF files on the new volume without filling it up. To be clear, I am trying to shrink the DATA files, not the log files. I only want to shrink the files enough to provide free space on the original volume for other DBs to grow, and leave some space for growth on the new volume.
I created a test db with one MDF file of 1MB and allowed it to autogrow to 2MB. Then I filled it up and added a second file to the filegroup with a size of 4MB.
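For reference, the test setup looked roughly like this (the database name, logical names, and paths are placeholders):
CREATE DATABASE test_shrink
ON PRIMARY
( NAME = my_data,
  FILENAME = 'C:\data\my_data.mdf',
  SIZE = 1MB,
  MAXSIZE = 2MB,
  FILEGROWTH = 1MB )
GO
-- ...load data until the file autogrows to the 2MB cap, then:
ALTER DATABASE test_shrink
ADD FILE
( NAME = my_data2,
  FILENAME = 'D:\newvolume\my_data2.ndf',
  SIZE = 4MB )
GO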
The size of the MDF file remains unchanged when I attempt to shrink it using:
DBCC SHRINKFILE ('my_data.MDF', 1)
or
DBCC SHRINKFILE ('my_data.MDF', 1, NOTRUNCATE)
GO
DBCC SHRINKFILE ('my_data.MDF', 1, TRUNCATEONLY)
GO
I'm finding I can only get data to move from the MDF to the NDF file by using:
DBCC SHRINKFILE ('my_data.MDF', EMPTYFILE)
I don't want to empty the file, only shrink it.
Any suggestions?
Thanks,
Kevin
November 30, 2007 at 10:02 am
First, you don't want to shrink as a general rule. Keep that in mind and try to avoid it.
Second, in your case it makes sense, but SHRINKFILE doesn't move data between files -- EMPTYFILE is the only option that does, and it empties the file completely.
You need to create objects in the NDF file using the logical name. For sprocs/UDFs, this is a drop/create process.
For tables, you need to rebuild the clustered index ON THE NEW file (strictly, on its filegroup, so the NDF needs to be in a filegroup of its own). So look up the CREATE INDEX ... WITH DROP_EXISTING syntax in BOL and recreate the clustered index (the clustered index is what carries the data) on the NDF.
Once that is done, shrink will work on the MDF. However, once the shrink takes place it can cause lots of fragmentation, so you will want to rebuild your indexes again.
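Something like this, as a rough sketch -- it assumes the NDF was added to its own filegroup (here called FG2); the database, table, column, and index names are placeholders:
USE my_db
GO
-- rebuilds the existing clustered index and moves the table's data to FG2;
-- if the index backs a PRIMARY KEY, keep the same keys and uniqueness
CREATE CLUSTERED INDEX ix_mytable_id
ON dbo.mytable (id)
WITH DROP_EXISTING
ON [FG2]
GO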
November 30, 2007 at 10:18 am
Thanks Steve... I appreciate the quick reply.
I'm in the process of building a new server as a home to these databases. This is a band-aid... I'm trying to do this without scheduling down time and with a minimum of effort. Shrinking seemed the most promising.
In my simple test, there was only one table with no indexes. Still didn't matter. Dropping and re-creating objects will require the database(s) to be down anyway. So, there's no advantage...
It looks like I have to bite the bullet and request the down time. I'll simply detach one of the databases and move it over to the new volume to get my space.
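The move itself should just be detach, copy, re-attach -- something like this (the database name and paths are made up):
USE master
GO
EXEC sp_detach_db 'big_db'
GO
-- copy or move the mdf/ldf files to the new volume at the OS level, then:
EXEC sp_attach_db 'big_db',
    'E:\newvolume\big_db.mdf',
    'E:\newvolume\big_db_log.ldf'
GO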
December 10, 2007 at 10:37 am
I had a similar situation where we wanted to take a large mdf and split it into multiple files. Because this is a vendor database with an obscene number of objects, and the vendor doesn't really recommend trying to manage the objects with filegroups, we had to find another way to split the file and shrink the original mdf.
This is a trick I came up with and tested on SQL Server 2000 Standard Edition with SP3a installed. (A rough T-SQL sketch of the steps follows the list.)
1) Add additional ndf files until the total number of data files -- mdf and ndfs together -- matches what you'd like to end up with; one of the ndfs will serve as a swap file.
2) Use DBCC SHRINKFILE with EMPTYFILE on the mdf. This will spread the data across all of the ndfs.
3) Take the database offline. Bring it immediately back online again. This action seems to take off the 'lock' that EMPTYFILE puts on the mdf file that was emptied.
4) Use DBCC SHRINKFILE with EMPTYFILE on your swap ndf file. This will move the data back to the mdf (as well as the other data files).
5) Drop the swap ndf once it is empty.
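Here's that sketch (the database name, logical file names, paths, and sizes are examples only):
USE master
GO
-- 1) add the new ndfs with fixed sizes and no autogrow
ALTER DATABASE vendor_db
ADD FILE ( NAME = vendor_db2, FILENAME = 'E:\data\vendor_db2.ndf', SIZE = 5GB, FILEGROWTH = 0 )
GO
ALTER DATABASE vendor_db
ADD FILE ( NAME = vendor_db_swap, FILENAME = 'E:\data\vendor_db_swap.ndf', SIZE = 5GB, FILEGROWTH = 0 )
GO
-- 2) empty the mdf; the data spreads across the ndfs
USE vendor_db
GO
DBCC SHRINKFILE (vendor_db_data, EMPTYFILE)
GO
-- 3) bounce the database to clear the EMPTYFILE state on the mdf
USE master
GO
ALTER DATABASE vendor_db SET OFFLINE
GO
ALTER DATABASE vendor_db SET ONLINE
GO
-- 4) empty the swap ndf; the data moves back into the mdf and the other ndfs
USE vendor_db
GO
DBCC SHRINKFILE (vendor_db_swap, EMPTYFILE)
GO
-- 5) drop the now-empty swap file
USE master
GO
ALTER DATABASE vendor_db REMOVE FILE vendor_db_swap
GO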
Also, there were a couple of things I noticed while testing this.
First, don't use autogrow on your data files when using EMPTYFILE. The DBCC SHRINKFILE with EMPTYFILE command will sometimes just quit when an autogrow operation is due, and subsequent attempts to run the DBCC will not shrink the file in question any further, even if a substantial amount of data remains. If this happens to you, taking the database offline and then bringing it back online will allow you to use DBCC SHRINKFILE with EMPTYFILE again.
Second, when you move the data back out of your swap ndf file, it won't necessarily distribute neatly across the remaining data files. In the long run this won't be a problem if you use fixed data file sizes, which you probably should use anyway during the data-moving process described above, because of the autogrow issue.
Finally, this was a pretty lengthy process. It took about 4-5 hours on the 20 GB database on which I tested.
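On the autogrow point: you can turn autogrow off on an existing file before the move with something like this (the database and logical file names are placeholders):
ALTER DATABASE vendor_db
MODIFY FILE ( NAME = vendor_db2, FILEGROWTH = 0 )
GO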
As always, I'd recommend testing this extensively before trying on a production environment.