SHRINKFILE or REBUILD Index

  • Hi everyone.

    We have a database with a 1.1 TB .mdf file. I want to split it into two data files (add a new .ndf file). As far as I know, there are two methods: rebuild the big clustered indexes and then shrink the .mdf file, or add a new .ndf file and run DBCC SHRINKFILE ('filename', EMPTYFILE), after which the data will end up in both data files. Which method is safer, faster, better? We will do it first in TEST and then in the PRODUCTION environment.

    And maybe there is another way to split the data file?
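
    For reference, a rough sketch of the two approaches (database, file, and table names here are placeholders, not our real ones):

    ```sql
    -- Method 1: rebuild the big clustered indexes, then shrink the .mdf
    ALTER INDEX ALL ON dbo.BigTable REBUILD;
    DBCC SHRINKFILE ('MyDB_Data', 563200);  -- target size in MB

    -- Method 2: add a new .ndf file, then empty the original into it
    ALTER DATABASE MyDB
        ADD FILE (NAME = 'MyDB_Data2', FILENAME = 'D:\Data\MyDB_Data2.ndf');
    DBCC SHRINKFILE ('MyDB_Data', EMPTYFILE);  -- moves pages out of the .mdf
    ```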

  • Will the new file be in the same filegroup as the existing file?  What is the driver for doing this work - ease of administration, performance, something else?

    John

  • Yes, the new file will be in the same filegroup. The driver is that, as far as I know, one big file is not a best practice. Our database keeps growing, and I think this is bad for storage administration too. So we want to split the .mdf into smaller files for performance and for Windows/storage administration.

  • Farik013 wrote:

    Yes, the new file will be in the same filegroup. The driver is that, as far as I know, one big file is not a best practice. Our database keeps growing, and I think this is bad for storage administration too. So we want to split the .mdf into smaller files for performance and for Windows/storage administration.

    There isn't any solid 'best' practice stating you should split a database across multiple files.  Nor is there anything that indicates the size of a file has a negative impact on performance.

    If you are only doing this to satisfy some idea of 'best practice' - I would not recommend moving forward.

    However - if your storage team has identified that having separate files of no more than a certain size would help on their end, then making this change could be considered.  It will not improve performance...but may improve administration of the system.

    If the goal is to improve performance, then splitting a file *might* help - as long as you also look at the full IO stack.  That is, you need to make sure you add HBAs, configure paths appropriately on the fabric, ensure each volume presented from the SAN is either on separate spindles or the SAN has a large enough write cache, etc.

    Now - if you decide to move forward, using SHRINKFILE is not the way.  SHRINKFILE with EMPTYFILE will move all data from the original file to the new file.  You don't want that - you want the data spread equally across both files, and the only way to do that is to rebuild the indexes.  Of course, both files must be the same size to allow for a fairly equal distribution of the data, but that will not happen immediately and probably will never get to a point where the usage is equal across both files.
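
    A minimal sketch of that approach (names and sizes here are illustrative; adjust to your environment):

    ```sql
    -- Add a second file in the same filegroup, sized to match the original,
    -- so proportional fill distributes new pages roughly evenly
    ALTER DATABASE MyDB
        ADD FILE (
            NAME = 'MyDB_Data2',
            FILENAME = 'D:\Data\MyDB_Data2.ndf',
            SIZE = 550GB,
            FILEGROWTH = 10GB
        );

    -- Rebuild the large indexes; the rewritten pages are allocated
    -- across both files by proportional fill
    ALTER INDEX ALL ON dbo.BigTable REBUILD;

    -- Optionally reclaim the space freed in the original .mdf
    DBCC SHRINKFILE ('MyDB_Data', 563200);  -- target size in MB (~550 GB)
    ```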

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • Separate files may have been a best practice on local SCSI arrays, when you would place them on different PHYSICAL SCSI arrays, but they likely have little benefit now.

    There are administration benefits, such as being able to backup the filegroup.
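
    For example, a filegroup can be backed up on its own (database, filegroup, and path names are illustrative) - though note this only pays off if the data is split into separate filegroups, not just separate files in one filegroup:

    ```sql
    BACKUP DATABASE MyDB
        FILEGROUP = 'PRIMARY'
        TO DISK = 'E:\Backups\MyDB_Primary.bak';
    ```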

    Like the other posters, I have to ask: what do you hope to accomplish by doing this?  1.1 TB is not that big.

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/
