Reindex - does it make data contiguous, space permitting

  • I have a large database with several very large files in the PRIMARY filegroup. I am moving 90+ percent of the total data (by volume and by row count) to other filegroups for maintenance reasons, and I then want to shrink the PRIMARY filegroup. I have noticed that, for whatever reason, the SHRINKFILE operation takes forever.

    My question is this: if I reindex the smaller tables remaining in the PRIMARY filegroup, will SQL 2005 consolidate the data, leaving a large block of completely unused space in the file, and will that make the SHRINKFILE operation run faster?
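
    For context, the two operations in question would look roughly like this; the table and logical file names are made up for illustration:

    ```sql
    -- Rebuild the indexes on a remaining table, in the hope that SQL Server
    -- consolidates its pages (it makes no guarantee about where pages land).
    ALTER INDEX ALL ON dbo.SmallTable REBUILD;

    -- Then shrink the data file down to a target size in MB.
    DBCC SHRINKFILE (N'Primary_Data2', 1024);
    ```

    Whether the rebuild actually speeds up the subsequent shrink is exactly the open question here.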

  • Are you aware that SHRINKFILE is going to fragment all of your indexes by 99.5% or more?

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Let me restate my question.

    After moving some huge tables to their own file groups, I will end up with five files in the primary file group with about 95% free space containing about 200 tables with a total of about 15 million rows.

    I want to reduce the primary file group to a single file with a reasonable amount of space.

    How should I proceed?

  • Run SHRINKFILE with EMPTYFILE on the files you want to remove. Then, once those are gone, run SHRINKFILE on the remaining file to get it to the size you want, and then rebuild all indexes in that filegroup.
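
    A sketch of that sequence; the database, file, and table names are hypothetical:

    ```sql
    -- 1. Migrate all data off a file to be removed; EMPTYFILE also stops
    --    new allocations from landing in it.
    DBCC SHRINKFILE (N'Primary_Data2', EMPTYFILE);
    ALTER DATABASE MyDb REMOVE FILE Primary_Data2;
    -- (repeat for each file being removed)

    -- 2. Shrink the one remaining file to the desired size in MB.
    DBCC SHRINKFILE (N'Primary_Data1', 2048);

    -- 3. Rebuild the indexes the shrink just fragmented.
    ALTER INDEX ALL ON dbo.SmallTable REBUILD;
    -- (repeat for each table in the filegroup)
    ```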

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • So, here is an after action report of sorts.

    I received a 2 TB database containing about 225 tables where 10 tables accounted for 90% of the volume of the database. This data was spread across 8 filegroups.

    Due to server capacity issues (only 32 GB RAM), running DB maintenance tasks was a bit of a problem even though I am running a development environment with none of the usual production availability issues. In particular, I noticed that the SHRINKFILE operation takes a very long time to run.

    I moved the largest tables to other filegroups - in total I moved 99% of the data and row count to other filegroups, leaving the original filegroups about 99% free. The total time to move these tables was about 36 hours, with the largest table - a 1 TB gem - taking close to 24 hours: 8 hours for the data and clustered index and 16 hours to build 4 additional indexes.
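
    For reference, moving a table like that is typically done by rebuilding its clustered index onto the target filegroup; the index, column, table, and filegroup names below are made up:

    ```sql
    -- Recreate the clustered index on the new filegroup; DROP_EXISTING
    -- moves the table's data in a single operation.
    CREATE UNIQUE CLUSTERED INDEX PK_BigTable
        ON dbo.BigTable (BigTableID)
        WITH (DROP_EXISTING = ON)
        ON [FG_BigTable];

    -- Nonclustered indexes must then be rebuilt/recreated separately,
    -- which is where the other 16 hours went.
    ```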

    I then ran DBCC DBREINDEX on the tables remaining in the original PRIMARY filegroup, on the assumption that this would consolidate data that was undoubtedly scattered all over the files. My wholly untested theory is that SHRINKFILE will run faster if the data isn't scattered all over the file being shrunk.
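
    One way to sanity-check the state of those indexes after the rebuild (table name is hypothetical; sys.dm_db_index_physical_stats is available from SQL 2005 on):

    ```sql
    -- Report fragmentation and page counts for one table's indexes.
    SELECT  i.name,
            s.avg_fragmentation_in_percent,
            s.page_count
    FROM    sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.SmallTable'),
                                           NULL, NULL, 'LIMITED') AS s
    JOIN    sys.indexes AS i
      ON    i.object_id = s.object_id
     AND    i.index_id  = s.index_id;
    ```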

    Then I ran SHRINKFILE with the EMPTYFILE option on 7 of the 8 files in the PRIMARY file group. This process took about 5 hours.

    I am now shrinking the remaining file and will then run DBCC DBREINDEX one last time to fix the indexes fragmented by the shrink operation.

    So, I will end up with a database that is more maintainable, and one where the most dynamic tables (heavy insert and delete activity) are isolated in their own filegroups.
