Primary File won't shrink

  • Hello guys,

    I've been looking for a solution to this issue for quite some time now, but I can't figure one out. My question is quite simple: is there any way to shrink a primary file after using

    DBCC SHRINKFILE(filename, EMPTYFILE)?

    I don't want to delete it, only shrink it to an acceptable size.

    The fact is that I have a 170GB database running, and the server can't handle this DB anymore. We figured we could use SAN capability and other tricks to increase performance by using multiple files in that DB (currently one filegroup, one file... ouch!). So we want to move to multiple files, and DTS takes too long to simply copy the records to a new DB with the files set up properly.

    So we figured we could copy the data to a temp file (170GB) in the primary filegroup, then empty the primary data file, then create 3 files of 45GB each and shrink the primary file to 45GB. Then we could empty the temp file, delete it (that part works) and all the data would be allocated between the four files...

    But the two steps we can't get working are the shrinking of the primary file, and the fact that the reallocation doesn't use this specific file (it grows one of the other files rather than this one).
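    For reference, the two commands involved look roughly like this (the logical file name and the target size in MB are placeholders, not our actual values):

    -- Move all data out of the primary data file into the other files
    -- in the same filegroup (the file itself keeps its current size)
    DBCC SHRINKFILE (N'MyDB_Data', EMPTYFILE)
    GO
    -- Then try to shrink the now-empty file down to a target size in MB
    -- (45GB = 46080 MB); this second step is the one that fails for us
    DBCC SHRINKFILE (N'MyDB_Data', 46080)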

    If someone could help, it would be grrrrrrrreatly appreciated.

    Thx a lot

  • 1st prob:

    - make sure there's enough space in a temporary filegroup before your DBCC SHRINKFILE (increase it by 10%; if space is a serious problem, modify the fill factor of your indexes, keeping a record of the original value, and reindex - see the sketch after these two points)

    - resize the primary filegroup after you've shrunk it to your required 45GB
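    If you go the fill factor route, a rough sketch of that step, assuming SQL Server 2000 (the table name is a placeholder and a 100% fill factor is just one choice):

    -- Record the original fill factor of each index on the table so it
    -- can be restored later (sysindexes is the SQL Server 2000 catalog)
    SELECT name, OrigFillFactor
    FROM sysindexes
    WHERE id = OBJECT_ID('MyBigTable') AND indid BETWEEN 1 AND 254
    GO
    -- Rebuild all indexes on the table with a full fill factor to pack
    -- pages densely and free space inside the data file
    DBCC DBREINDEX ('MyBigTable', '', 100)
    GO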

    2nd prob:

     - restrict the filegrowth of your 3 ndf's to 45GB, which should force your primary datafile to be used.
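     Something along these lines once the new files exist (database and logical file names are examples only):

     -- Cap each of the three new data files at 45GB so that, once they are
     -- full, SQL Server has to place further allocations in the primary file
     ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_Data2, MAXSIZE = 45GB)
     ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_Data3, MAXSIZE = 45GB)
     ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_Data4, MAXSIZE = 45GB)
     GO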

     

    Good luck

    Max

  • Hi,

    Thanks for the answer, but I'll complete my explanation to make the problem more precise. The primary file has actually been emptied; it now only contains 2 MB of data, which I believe is taken up by system tables. But the problem I'm encountering is that I cannot reduce the size of the data file itself, so I end up with a 170 GB data file containing 2 MB of data...

    Thanks for your answer; your second point does seem to work, since I have successfully put data into the formerly locked file.

    Is there a way to rearrange the data so that it is spread equally among the four files?

    Thanks

  • Hi,

    I have recreated your situation and am also unable to shrink my empty data file. It seems the emptied data file can't be shrunk to release its allocated space, which is, I believe, contrary to how DBCC SHRINKFILE is supposed to work, where you should be able to specify the size you want to shrink the data file to.

    In Enterprise Manager, right-click your database, select "View" and then "Taskpad", and you should see your data files graphically. Click the down arrow next to "Space allocated" and select "Shrink Database"; there should be a button at the bottom called "Files...", click this. At the top of the next screen you can select the actual data file you want to shrink, and in the middle you should be able to select "Shrink file to:", with the minimum size displayed next to it.
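    For what it's worth, that dialog seems to correspond to DBCC SHRINKFILE with a target size; you can check the file sizes from T-SQL first (run in the database concerned; names and sizes below are examples only):

    -- List the database's files with their current size, max size and growth
    EXEC sp_helpfile
    GO
    -- Overall allocated vs. used space for the database
    EXEC sp_spaceused
    GO
    -- Then shrink the chosen file to a target size given in MB
    DBCC SHRINKFILE (N'MyDB_Data', 46080)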

    As far as I was aware you could "truncate" the datafiles here, but obviously not, so unfortunately I don't have an answer to your first problem, other than to recreate the db from scratch.

    As for your last question, you have to specify which table or index you want on which data file. If you simply modify that property on an existing table, I am not sure whether the data will be copied to the new data file.
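    Note that in T-SQL the placement can only be specified per filegroup, not per individual data file; a rough sketch with made-up names:

    -- Put a new table on a specific filegroup (all names are examples only)
    CREATE TABLE dbo.ArchiveRows
    (
        RowID   INT          NOT NULL,
        Payload VARCHAR(255) NULL
    ) ON [SECONDARY_FG]
    GO
    -- For an existing table, rebuilding its clustered index on the target
    -- filegroup moves the data there (assumes dbo.BigTable already has a
    -- clustered index IX_BigTable_ID that does not back a constraint)
    CREATE CLUSTERED INDEX IX_BigTable_ID
        ON dbo.BigTable (ID)
        WITH DROP_EXISTING
        ON [SECONDARY_FG]
    GO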

    It seems to me that you have 2 options at this point:

    1. recreate the database, taking some time to analyse the table sizes and projected growth (as well as the indexes), then decide where you're going to put your objects and re-populate the new database using DTS.

    2. if you want to go large, consider some sort of RAID (preferably 1+0), where it shouldn't really matter whether all the data is in one data file or not.

    Of course I'm not in your position, so please don't take my advice without serious consideration, and of course if you can shrink the primary data file, that would be first prize.

    Max


  • Hi,

    Thanks again for your answer.

    Alas, DTS is not an option in my case, because it takes approximately 48 hours, and we cannot shut down the server for that long. As for the RAID part, I believe our disks are already on RAID, but from the server's point of view we have concluded that its thread handling would be improved by using multiple files (and our software vendor told us to do so).

    Actually we managed to shrink a primary data file on a test DB, so it's weird that we cannot reproduce it on the production DB. We are in the middle of trying a backup/restore to see whether the SHRINKFILE command only takes effect after some other action is taken.

    We hope the solution is near, because we have to have this done by Friday.

    If someone sees a solution here, I'll take it!

  • unlucky deadline...

    I also tried to back up and restore the database, to no avail. I found the text pasted below in an MSDN article:

    You cannot shrink an entire database to be smaller than its size when created. Therefore, if a database was created with a size of 10 megabytes (MB) and grew to 100 MB, the smallest the database could be shrunk to, assuming all the data in the database has been deleted, is 10 MB.

    What you can therefore try is to back up the db, create a new one, restore the backup into the new db and then go on your merry way (hopefully).
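    Roughly something like this (database name, paths and logical file names are all placeholders):

    -- Back up the existing database
    BACKUP DATABASE MyDB TO DISK = 'E:\Backups\MyDB.bak'
    GO
    -- Restore it as a new database, relocating the files with MOVE
    RESTORE DATABASE MyDB_New
    FROM DISK = 'E:\Backups\MyDB.bak'
    WITH MOVE 'MyDB_Data' TO 'F:\Data\MyDB_New_Data.mdf',
         MOVE 'MyDB_Log'  TO 'G:\Logs\MyDB_New_Log.ldf'
    GO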

    good luck

    Max

  • Hi,

    I have actually done what I wanted to do, so I figured I would post my solution, as you guys helped me find it.

    For Max: you cannot shrink an entire database to be smaller than its original size, but you are always allowed to SHRINKFILE an individual file to a size smaller than its original.

    The solution to my problem was simply to restart the SQL Server engine. Don't ask me why it helped...

    So if you ever want to convert a large DB to one with more than one file, here is the method I am going to use (tested and approved); a rough T-SQL sketch follows the list:

    1. Create a file as large as the data in your primary file (call it "buffer")
    2. Empty the primary file (DBCC SHRINKFILE (<FILENAME>, EMPTYFILE))
    3. Restart the SQL Server engine
    4. Shrink the primary file to the data size divided by the number of files you're going to create (DBCC SHRINKFILE (<FILENAME>, <NEWSIZE>))
    5. Create all the new files, each with the size of the data divided by the number of files
    6. Restrict their growth so that the primary file gets filled in the next operation
    7. Empty the buffer file (DBCC SHRINKFILE (BUFFER, EMPTYFILE))
    8. Delete the buffer file (ALTER DATABASE <DBNAME> REMOVE FILE BUFFER)
    9. Set the final size of the data files and unrestrict their growth according to the final configuration needed
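
    A sketch of these steps in T-SQL; all database names, logical file names, paths and sizes below are placeholders:

    -- 1. Create a "buffer" file big enough to hold the data currently in
    --    the primary file (names, paths and sizes are examples only)
    ALTER DATABASE MyDB
    ADD FILE (NAME = MyDB_Buffer, FILENAME = 'F:\Data\MyDB_Buffer.ndf', SIZE = 170GB)
    GO
    -- 2. Move all data out of the primary file into the rest of the filegroup
    DBCC SHRINKFILE (N'MyDB_Data', EMPTYFILE)
    GO
    -- 3. Restart the SQL Server service here (done outside T-SQL)
    -- 4. Shrink the now-empty primary file to its share of the data
    --    (target size is in MB; 45GB = 46080 MB)
    DBCC SHRINKFILE (N'MyDB_Data', 46080)
    GO
    -- 5./6. Add the new files at their target size, with growth capped so
    --       the primary file gets used when the buffer is emptied
    ALTER DATABASE MyDB
    ADD FILE
        (NAME = MyDB_Data2, FILENAME = 'F:\Data\MyDB_Data2.ndf', SIZE = 45GB, MAXSIZE = 45GB),
        (NAME = MyDB_Data3, FILENAME = 'G:\Data\MyDB_Data3.ndf', SIZE = 45GB, MAXSIZE = 45GB),
        (NAME = MyDB_Data4, FILENAME = 'H:\Data\MyDB_Data4.ndf', SIZE = 45GB, MAXSIZE = 45GB)
    GO
    -- 7. Empty the buffer file; its data spreads across the four data files
    DBCC SHRINKFILE (N'MyDB_Buffer', EMPTYFILE)
    GO
    -- 8. Remove the now-empty buffer file
    ALTER DATABASE MyDB REMOVE FILE MyDB_Buffer
    GO
    -- 9. Lift the growth restriction on the new files as needed
    ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_Data2, MAXSIZE = UNLIMITED)
    ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_Data3, MAXSIZE = UNLIMITED)
    ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_Data4, MAXSIZE = UNLIMITED)
    GO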

    I also tried to import data after that, and it was allocated randomly (but still balanced) between all the files in the filegroup.

    Thanks a lot guys and see you soon

  • I much prefer your method

    good job

    Max
