Shrink database by only X GB a week

  • I just added new columnstore indexes to the database, which saved 2 TB of space on the indexes.

    I'd like to shrink the database by about 10 GB a week for a few weeks at a time.

    Does anyone have a script that checks the free space in MB, similar to the query below, but then also shrinks the database file if FreeSpaceMB > 1000 (1 GB)?

    SELECT DB_NAME() AS DbName,
           name AS FileName,
           size/128.0 AS CurrentSizeMB,
           size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS INT)/128.0 AS FreeSpaceMB
    FROM sys.database_files;
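
    A minimal sketch of that kind of conditional shrink is below. The logical file name, the 1,000 MB threshold, and the 1 GB chunk size are all placeholders to adjust; it shrinks one chunk per run so it can be scheduled weekly.

    -- Sketch only: shrink the data file by one 1 GB step, but only when it has
    -- more than 1,000 MB of free space.  N'MyDataFile' is a placeholder name.
    DECLARE @FileName      sysname = N'MyDataFile';  -- logical name from sys.database_files
    DECLARE @CurrentSizeMB int,
            @FreeSpaceMB   int,
            @TargetSizeMB  int;

    SELECT @CurrentSizeMB = size / 128,
           @FreeSpaceMB   = size / 128 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS INT) / 128
    FROM sys.database_files
    WHERE name = @FileName;

    IF @FreeSpaceMB > 1000
    BEGIN
        SET @TargetSizeMB = @CurrentSizeMB - 1024;   -- drop one 1 GB chunk this pass
        DBCC SHRINKFILE (@FileName, @TargetSizeMB);
    END
    ELSE
        PRINT 'Less than 1 GB free - nothing to shrink this time.';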

  • You realize that you have to rebuild your indexes when you do that, right? Only use SHRINKDATABASE if you need to get back space you're not going to use for the next six months or so. Otherwise you're just making extra work for yourself.

  • Have a look at my post at https://www.sqlservercentral.com/Forums/1866405/dbcc-shrinkfile?PageIndex=1
    Set @shrink_target to your desired value and it will do what you need.

  • frederico_fonseca - Friday, December 28, 2018 12:40 PM

    have a look at my post on https://www.sqlservercentral.com/Forums/1866405/dbcc-shrinkfile?PageIndex=1
    by setting @shrink_target to your desired value it will do what you need.

    The trouble is that, no matter the size of the chunks, DBCC SHRINKFILE still moves pages from the end of the data file to the first available opening nearest the beginning of the file.  That will cause index "inversion" (logical order is the reverse of the physical order), and that type of fragmentation is absolutely devastating to any code that needs to do read-aheads, because the read-aheads will be limited to single pages.

    Any non-"truncate only" shrink must be carefully planned with the full realization that you're going to invert many indexes (both clustered and non-clustered) and that serious index maintenance will be a part of all that (teaser: I am working on an unconventional method that may eliminate the rebuilds).

    Getting back to the problem at hand, if you're going to do incremental shrinks over time, make sure that you immediately follow the shrinks with a check of not just "logical fragmentation" but of "fragment size" as well.  If either one is out of whack, you need to fix the problem with some proper index maintenance.
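
    A check along those lines can be run with sys.dm_db_index_physical_stats; in the sketch below, the scan mode and the page_count filter are only placeholders.

    -- Sketch: after a shrink, look at both logical fragmentation and fragment size.
    SELECT OBJECT_NAME(ips.object_id)          AS TableName,
           i.name                              AS IndexName,
           ips.avg_fragmentation_in_percent,   -- "logical fragmentation"
           ips.fragment_count,
           ips.avg_fragment_size_in_pages,     -- small values cripple read-aheads
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id
    WHERE ips.page_count > 1000                -- ignore tiny indexes
    ORDER BY ips.avg_fragment_size_in_pages;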

    --Jeff Moden



  • Jeff Moden - Monday, December 31, 2018 7:12 AM

    {snip}

    I strongly advise against using DBCC SHRINKFILE in this way; it's much better to add data files/filegroups and shuffle the objects around to reclaim the space.
    😎

  • Eirikur Eiriksson - Monday, December 31, 2018 7:23 AM

    {snip}

    I strongly advise against using DBCC SHRINKFILE in this way; it's much better to add data files/filegroups and shuffle the objects around to reclaim the space.
    😎

    Agreed but, eventually, you have to do something to recover the space and a lot of people don't have the room prior to the shrink to add filegroups with new data files in them.

    I AM fortunate in that I've convinced the infrastructure folks to provide me my own fairly large "DBA-Only" drive where, prior to any index maintenance (as an example), I can rebuild the largest 1 to 4 indexes with the "CREATE INDEX with DROP_EXISTING" trick to temporarily move those indexes to that other drive.  Then I can do my version of index maintenance (I never use REORGANIZE anymore) without getting the file growth that occurs when you rebuild any index over 128 extents (8 MB).  When I'm done with the indexes on the PRIMARY filegroup, I rebuild the indexes I moved back onto PRIMARY and Bob's your uncle. 😀

    And, just to be advised, I do such index maintenance while I'm in the BULK_LOGGED recovery model.  That does two things... rebuilds are minimally logged in the BULK_LOGGED recovery model, which makes REBUILD a lot faster and a whole lot more lightweight than REORGANIZE, and you'll also get better use out of fill factors during REBUILDs than REORGANIZEs.

    And, yes... every system should have such a "DBA-Only" drive available on it.  It has seriously simplified a whole lot of things for me, especially when it comes to space recovery by rebuilding indexes.
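
    For anyone who wants to try the same pattern, a rough sketch is below.  The database, table, index, and filegroup names are made up; it assumes the clustered index already exists and that a [DBA_ONLY] filegroup with a data file on the spare drive has already been created.

    -- Sketch: relocate a big clustered index to the "DBA-Only" filegroup, do the
    -- maintenance, then move it back.  All names are placeholders.
    ALTER DATABASE [MyDb] SET RECOVERY BULK_LOGGED;  -- rebuilds become minimally logged

    CREATE CLUSTERED INDEX [CIX_BigTable]
        ON dbo.BigTable (Id)
        WITH (DROP_EXISTING = ON, SORT_IN_TEMPDB = ON)
        ON [DBA_ONLY];                               -- temporarily lives on the spare drive

    -- ...run the index maintenance for the rest of PRIMARY here...

    CREATE CLUSTERED INDEX [CIX_BigTable]
        ON dbo.BigTable (Id)
        WITH (DROP_EXISTING = ON, SORT_IN_TEMPDB = ON)
        ON [PRIMARY];                                -- move it back when done

    ALTER DATABASE [MyDb] SET RECOVERY FULL;         -- take a log backup afterwards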

    --Jeff Moden



  • It would be great if everyone had enough space and time to move tables from one filegroup to another in order to shrink files.  Few have, though.

    In some cases you are also required to move the objects back to the original FG, or risk losing support from the vendor of whatever software you are using, which adds to the time it takes to perform the operations.
    So while it's bad for some things, it is one of the options, and it is faster than doing a single shrink straight to the final target size.

    For those not aware of it: on DBs of significant size, rebuilding on a new FG, shrinking the original FG, and rebuilding again on the old FG can be significantly faster than just shrinking using the method I posted in the link above.
    I've done both, and I use the option in the link for non-prod DBs where databases are restored and partially cleared down from prod and performance is not an issue.

    Jeff - regarding the index inversion, would that be "fixed" afterwards with a full index rebuild?

  • frederico_fonseca - Monday, December 31, 2018 8:35 AM

    It would be great if everyone had enough space and time to move tables from one filegroup to another in order to shrink files.  Few have, though.

    {snip}

    Jeff - regarding the index inversion, would that be "fixed" afterwards with a full index rebuild?

    Totally agreed that the extra space is a "nice to have" that most don't have.  Like I said, I'm fortunate to have been able to convince folks where I work of the benefits of having a "DBA-only" drive.  It's paid for itself many times over in space savings within the individual databases, especially during rebuilds.  Hopefully, someone will read this and do their own analysis to come up with justification for such a thing.

    As for the index inversion, yes... that will be repaired by either a REBUILD or a REORGANIZE, although I seriously don't recommend using REORGANIZE for anything, even when logical fragmentation is between the "Best Practice" numbers of 10% and 30% average fragmentation (and I've found that's anything but a "Best Practice" for my databases).

    For index maintenance after a shrink, most people will naturally avoid REORGANIZE simply because the fragmentation caused by the index inversion will be a whole lot larger than the normal 30% threshold that people recommend for when to REBUILD the index.
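
    For completeness, the post-shrink fix is just a plain rebuild; the table name and fill factor below are placeholders.

    -- Sketch: rebuild (rather than reorganize) every index on a table after the shrink.
    ALTER INDEX ALL ON dbo.BigTable
        REBUILD WITH (FILLFACTOR = 90, SORT_IN_TEMPDB = ON);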

    --Jeff Moden



  • Jeff Moden - Monday, December 31, 2018 8:55 AM

    {snip}

    Thanks for the confirmation.  As I always do that after any shrink, it's OK from my side at least.

    As for this particular thread, and as was also recommended in the second post, it would be an option to use - not necessarily the fastest, but we can't tell without full details of the remaining DB, both in size and number of tables.

  • Jeff Moden - Monday, December 31, 2018 8:22 AM

    {snip}

    It is amazing what a few bucks can do; think of a USB-3 SSD, Stretch Database, etc.
    😎
    Obviously those are not always an option, but there are times when one has to think outside the "box".

  • Wow, thanks for the information.

    I strongly advise against using DBCC SHRINKFILE in this way; it's much better to add data files/filegroups and shuffle the objects around to reclaim the space.

    I haven't actually done this moving of data files around.

    If the database has 5 filegroups on drive X and you then move them to drive Y and restart SQL, how does this free up the space in your database?

  • TRACEY-320982 - Monday, December 31, 2018 9:58 AM

    {snip}

    No - moving the files to another drive does not free up space.

    What was being mentioned was creating new filegroups and rebuilding the indexes on those new FGs so that the tables and indexes move from one file to another, which then allows shrinking or dropping the old file (when empty).

    Small example below - it assumes a database named "test", and the data file location will need to be changed.
    USE [master]
    GO
    ALTER DATABASE [test] ADD FILEGROUP [demo]
    GO
    ALTER DATABASE [test] ADD FILE ( NAME = N'demo_file', FILENAME = N'C:\sql_server_data\MSSQL12.SQL2014\MSSQL\DATA\demo_file.ndf' , SIZE = 5120KB , FILEGROWTH = 1024KB ) TO FILEGROUP [demo]
    GO

    USE [test]
    if object_id('inprimary') is not null
    drop table inprimary;
    create table inprimary
    (id int
    ) on [PRIMARY]

    create clustered index pk_inprimary on inprimary
    (id
    )
    on [PRIMARY]

    -- rebuild the index on the demo filegroup so it moves from the default PRIMARY to demo
    -- this just means recreating the index on the new FG - either with DROP_EXISTING or with explicit drop and create statements

    create clustered index pk_inprimary on inprimary
    (id
    )
    with (drop_existing = on)
    on demo
    /*
    recreate all indexes on the new FG - once all are done, the original FG can be shrunk without any issues,
    and it can either be dropped (if not PRIMARY) or all data can be moved back to the original FG by recreating the indexes again on that FG.

    if tables are heaps they can be moved by creating a clustered index on the desired FG and dropping it afterwards.
    */
    -- clean up
    if object_id('inprimary') is not null
    drop table inprimary;
    GO
    ALTER DATABASE [test] REMOVE FILE [demo_file]
    GO
    ALTER DATABASE [test] REMOVE FILEGROUP [demo]
    GO
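
    Once the indexes have been rebuilt onto the new filegroup, the shrink of the original file might look like the sketch below; the logical file name 'test' and the target size are assumptions based on the example database above.

    USE [test];
    -- Either shrink to a specific target size in MB...
    DBCC SHRINKFILE (N'test', 100);
    -- ...or just release the unused space at the end of the file:
    -- DBCC SHRINKFILE (N'test', TRUNCATEONLY);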

  • Eirikur Eiriksson - Monday, December 31, 2018 9:53 AM

    {snip}

    It is amazing what a few bucks can do; think of a USB-3 SSD, Stretch Database, etc.
    😎
    Obviously those are not always an option, but there are times when one has to think outside the "box".

    As I remind people, "Before you can think outside the box, you must first realize... you're in a box". 😀

    Like you said, some of these things are not an option for some shops.  For example, the reasons why we're an "on-premises" shop (with the appropriate remote DR site) also keep us from entertaining ideas such as "stretch databases" (secretly thanking my lucky stars there).  For similar reasons, the idea of using awesome USB 3.1 technology such as Samsung's T3 SSD devices is outside the box that we're allowed to live in.  Very fortunately for me, the people with the purse-strings understand the need, and so my requests have been accommodated with virtually no resistance.

    For a lot of other shops though, the things you suggest would be very helpful in (as my cousin, who is an Air Force vet, would say) "keeping the bird in the air".  The addition of things like a USB-3/3.1 SSD would be just what the doctor ordered for such a thing as a "DBA-Only Drive".

    --Jeff Moden



  • TRACEY-320982 - Monday, December 31, 2018 9:58 AM

    {snip}

    Just to emphasize and support what Eirikur and Frederico have both stated, just moving files around does nothing.  The free-space trapped inside the data files will move with the file.  It's kind of like trying to remove bird-poop from your windshield using a greasy napkin.

    Having the other drives/filegroups/files just gives you a place to take all the necessary and sometimes complex steps to reduce the size of data and remove the filespace without causing the original (PRIMARY) file group to grow.  Just moving the files does nothing except take time and space.

    --Jeff Moden



  • Jeff Moden - Tuesday, January 1, 2019 9:26 AM

    As I remind people, "Before you can think outside the box, you must first realize... you're in a box". 😀

    I know how to spell the name of that box; it starts with SH and ends in IT.
    😎
