Decrease size of a database that is over 400 GB

  • Hello,

Besides shrinking data files, which is not recommended, how else can we decrease the size of the database?  I can think of deleting indexes that are not used and removing duplicate indexes; any other tips?  We are also looking at deleting thousands of rows, but I don't think that will make a dent in the size.  How can space be reclaimed?
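For reference, unused indexes can be identified from the usage DMVs. This is only a sketch: the DMV resets on every instance restart, so it should be run after a full business cycle, and the filters (nonclustered only, ignore PK/unique constraints) are assumptions to adjust.

```sql
-- Sketch: nonclustered indexes with no reads since the last restart.
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name                   AS index_name,
       us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS us
       ON us.object_id   = i.object_id
      AND us.index_id    = i.index_id
      AND us.database_id = DB_ID()
WHERE i.type_desc = 'NONCLUSTERED'
  AND i.is_primary_key = 0
  AND i.is_unique_constraint = 0
  AND COALESCE(us.user_seeks + us.user_scans + us.user_lookups, 0) = 0
ORDER BY us.user_updates DESC;  -- most write overhead first
```

Indexes that are never read but are constantly updated are the strongest candidates for removal.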

Page compression is generally very effective at reducing data size.  But you will need to either: (1) shrink the data file after compressing, which will take a long time and fragment the data; or (2) create a new filegroup and rebuild the tables, with compression, into that new filegroup.  Then it will be much faster to shrink the original / PRIMARY filegroup.
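Option (2) might look something like the sketch below. The database, filegroup, file, table, and index names are all placeholders; rebuilding the clustered index with `DROP_EXISTING = ON` onto the new filegroup moves the table and applies PAGE compression in a single operation.

```sql
-- Add a new filegroup and a data file for it (names/paths are placeholders).
ALTER DATABASE BigDB ADD FILEGROUP FG_Compressed;
ALTER DATABASE BigDB ADD FILE
    (NAME = 'BigDB_Compressed', FILENAME = 'D:\Data\BigDB_Compressed.ndf')
    TO FILEGROUP FG_Compressed;

-- Move a table by rebuilding its clustered index onto the new filegroup,
-- compressing it at the same time.  Repeat per large table.
CREATE CLUSTERED INDEX CIX_BigTable ON dbo.BigTable (Id)
    WITH (DROP_EXISTING = ON, DATA_COMPRESSION = PAGE)
    ON FG_Compressed;

-- With the large tables gone, reclaiming space from the original file is fast.
DBCC SHRINKFILE (BigDB_Data, TRUNCATEONLY);
```

Nonclustered indexes need their own rebuilds with `ON FG_Compressed` if they should move too.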

SQL DBA, SQL Server MVP (07, 08, 09). "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".

  • Thanks ScottPletcher, #2 sounds like a better option.

Old article from Paul Randal, but good reading:

    https://www.sqlskills.com/blogs/paul/why-you-should-not-shrink-your-data-files/

     

    DBASupport

A serious word of caution on the use of PAGE compression...  If you do any index maintenance, PAGE compression will nearly double the time and CPU it takes.  You can also have a perfectly good index that never suffers from page splits/fragmentation, even with INSERTs/UPDATEs in the "hot spot", and page compression can turn it into a serious page-splitting problem.

If you have any in-row LOBs, or even some wide variable-width columns that you're not handling correctly, you can end up with serious space wastage in the form of "trapped short rows" and, yes, there's a fix for that which has nothing to do with row or page compression.  Also seriously consider the COMPRESS/DECOMPRESS functions for LOBs and wide character-based columns, especially on history and log tables, etc.
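As a rough sketch of that last point (table and column names are made up): `COMPRESS()` GZips a value into `VARBINARY(MAX)` on the way in, and `DECOMPRESS()` reverses it, though the result must be cast back to the original type.

```sql
-- Store a wide log message compressed (COMPRESS returns VARBINARY(MAX)).
INSERT INTO dbo.AppLog (LogDate, LogTextCompressed)
VALUES (SYSDATETIME(), COMPRESS(N'...long log message...'));

-- Read it back: DECOMPRESS also returns VARBINARY, so CAST to the original type.
SELECT LogDate,
       CAST(DECOMPRESS(LogTextCompressed) AS NVARCHAR(MAX)) AS LogText
FROM dbo.AppLog;
```

This trades CPU on read/write for storage, which usually suits history and log tables that are written once and read rarely.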

Last but not least, check the page density on all of your indexes.  Low page density is not always a problem, but it can be a major one.
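Page density is reported by `sys.dm_db_index_physical_stats` as `avg_page_space_used_in_percent`; it is only populated in `SAMPLED` or `DETAILED` mode, not `LIMITED`. The sketch below uses `SAMPLED` to keep the scan tolerable on a 400 GB database; the `page_count` threshold is an assumption to filter out tiny indexes.

```sql
-- Sketch: page density (and fragmentation) per index, sampled.
SELECT OBJECT_NAME(ps.object_id)           AS table_name,
       i.name                              AS index_name,
       ps.index_level,
       ps.avg_page_space_used_in_percent,  -- page density
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id
 AND i.index_id  = ps.index_id
WHERE ps.page_count > 1000   -- assumption: ignore small indexes
ORDER BY ps.avg_page_space_used_in_percent;
```

Indexes sitting near the top of that list are where a rebuild would reclaim the most space.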

     

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

I would just shrink the data files and then rebuild the indexes, including the clustered indexes.
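A minimal sketch of that approach, with placeholder file, table, and target-size values. `DBCC SHRINKFILE` heavily fragments indexes, which is exactly why the rebuild comes afterwards; note the rebuild will itself grow the file again somewhat, so leave headroom in the target.

```sql
-- Shrink the data file to a target size (in MB; value is a placeholder).
DBCC SHRINKFILE (BigDB_Data, 400000);

-- Then rebuild the (now fragmented) indexes, clustered ones included.
-- Repeat per large table, or drive it from sys.tables in a cursor/loop.
ALTER INDEX ALL ON dbo.BigTable REBUILD;
```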

