Where did our 30GBs go?

  • scarr030 - Monday, December 10, 2018 10:38 AM

    We have a bunch of databases (~130 DBs) that all have the exact same table structure.

    We had an operation run over the weekend that added a column to an audit log table in every DB if it didn't already exist, and also set that newly added column to 0 wherever it was NULL. These columns are not included in any indexes, as far as I can tell. Some of the DBs have 1,000,000+ rows in their audit log table. We seem to have lost ~30GB from this. However, after trying to release unused space from the data file (36.88GB reserved, 31.17GB used), none of the space is actually released. Reorganizing the indexes doesn't do anything (which isn't surprising to me, as this isn't an included column; maybe I'm thinking about that incorrectly).

    Any suggestions?

  • Jonathan AC Roberts - Monday, December 10, 2018 7:22 PM

    When you release unused space, it issues a shrink with TRUNCATEONLY. That will only release space if the free space is at the end of the file. As you have just added columns, the space at the end of the file will probably be in use, so TRUNCATEONLY won't do anything.
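    For reference, releasing unused space amounts to something like this (the logical file name here is hypothetical):

    -- Releases only free space at the physical end of the file;
    -- no pages are moved, so interior free space stays allocated.
    DBCC SHRINKFILE (N'YourDb_Data', TRUNCATEONLY);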

  • scarr030 - Tuesday, December 11, 2018 6:08 AM

    Is there any way to verify that was actually the issue? The column added is numeric(3,0), and as of this morning we have 275,476,273 rows with that column on our instance. If I assume each value takes the maximum storage space (19 bytes? clearly that's not the case here, but I'm using it as a reference: 17 bytes + 2 bytes, per https://docs.microsoft.com/en-us/sql/t-sql/data-types/decimal-and-numeric-transact-sql?view=sql-server-2017), that comes out to only ~5.2GB. I'm clearly wrong, right?

    Edit: apologies if the above is basically gibberish. Metadata/storage etc. has never been my strong suit and I'm still learning, so I probably have some misconceptions.
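    (For what it's worth, the raw column cost is even smaller than that estimate: per the linked docs, numeric with precision 1-9 is stored in 5 bytes, which is easy to confirm. So the data itself only accounts for ~1.4GB, and the rest of the growth has to come from somewhere else, as explained below.)

    -- numeric(3,0) has precision <= 9, so each value is stored in 5 bytes
    SELECT DATALENGTH(CAST(0 AS numeric(3,0))) AS bytes_per_value;  -- 5
    -- 275,476,273 rows x 5 bytes ~= 1.4GB of raw column data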

  • ScottPletcher - Tuesday, December 11, 2018 10:27 AM

    For a given key, the new column could effectively end up costing almost 8K bytes.

    SQL stores data in 8K pages (never more), with ~8,060 bytes available for data.  If the data on a given page exceeds that size, SQL moves roughly half the rows to a new page (this is called a "page split").  That means another 8K is allocated to the table.  It also means both pages are now just over half full.

    Also be aware that removing a column does not immediately free up the space.  And when you alter a column to a different data type, such as int to bigint, the row needs the full size of the new data type -- in that case 8 bytes -- because SQL allocates all new space for the new column; it does not overlay the existing column data with the longer value.
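    A quick way to see where the space is actually sitting is sp_spaceused, which breaks reserved space down into data, index, and unused (the table name is taken from the DDL posted later in this thread):

    -- rows, reserved, data, index_size, unused for one table
    EXEC sp_spaceused N'dbo.AUDIT_TRAIL';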

    SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".

  • scarr030 - Tuesday, December 11, 2018 10:49 AM

    Okay, that makes sense. I'm just having trouble reconciling that fact in my mind; it's hard for me to believe that adding a column would take up 25GB worth of pages/storage. Is there any way to account for this space, or even to calculate how much will be taken before adding columns like that in the future?

    Thanks for your help!

  • ScottPletcher - Tuesday, December 11, 2018 11:15 AM

    You could.  You'd query sys.dm_db_index_physical_stats for index 1 (the clustered index) with 'DETAILED' as the last parameter.  That would give you the existing average free space per page, the average row size, and the number of rows per page.  From those you could do a good rough calc of whether you had enough free space to add the column without splitting a lot of pages.
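    Something along these lines (table name again taken from the DDL later in the thread):

    SELECT index_id, index_level, page_count, record_count,
           avg_page_space_used_in_percent,   -- page density
           avg_record_size_in_bytes,
           avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(
             DB_ID(), OBJECT_ID(N'dbo.AUDIT_TRAIL'), 1, NULL, 'DETAILED');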


  • Jeff Moden - Tuesday, December 11, 2018 6:23 PM

    +1000
    That's the ticket.  Since the column was added and it was forced to contain new data, there were likely a huge number of page splits, which will seriously lower the page density (avg pct of page fullness).  In order to recover that space, you'll need to rebuild the Clustered Indexes to squeeze the "air" out of the pages.
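    A minimal sketch of that rebuild, using the index and table names from the DDL later in the thread:

    -- REBUILD rewrites the index at the fill factor and squeezes
    -- out the space left behind by the page splits
    ALTER INDEX PK_ID ON dbo.AUDIT_TRAIL REBUILD;
    -- or rebuild every index on the table:
    -- ALTER INDEX ALL ON dbo.AUDIT_TRAIL REBUILD;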

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • scarr030 - Monday, December 17, 2018 6:42 AM

    Okay, so I went ahead and rebuilt all the clustered indexes over the weekend. I tried to truncate the data file again, but it's still allocating 5 more gigs than what's actually used. This is on a 60GB DB. What else am I missing here?

  • Jonathan AC Roberts - Monday, December 17, 2018 6:45 AM

    You will need to do a normal shrink operation if you need to make the file smaller. Alternatively, you can create a new data file and empty the old one by migrating the data into it.
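    Roughly like this (the logical file name is hypothetical, and target sizes are in MB):

    -- A normal shrink moves pages toward the front of the file,
    -- then releases the freed space at the end
    DBCC SHRINKFILE (N'YourDb_Data', 61440);   -- target ~60GB

    -- or migrate everything to another file in the same filegroup:
    -- DBCC SHRINKFILE (N'YourDb_Data', EMPTYFILE);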

  • scarr030 - Monday, December 17, 2018 7:10 AM

    Well, maybe I'm misunderstanding what you mean. If the DB is only actually 60GB, why does it need to reserve 65GB of disk space for the .mdf file? FWIW, I did just do some more looking: its initial size is set to the same value as its reserved size, so that's probably why. Does that value increase on its own? Is it something I should touch?

    Thanks again, everyone.
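    (The "initial size" shown in SSMS is really just the file's current size; it grows on its own whenever the file fills up, according to the autogrowth setting. A quick way to check both, run in the database in question:)

    -- size is stored in 8KB pages; growth is pages or a percent
    SELECT name, physical_name,
           size * 8 / 1024 AS size_mb,
           CASE WHEN is_percent_growth = 1
                THEN CAST(growth AS varchar(10)) + '%'
                ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
           END AS growth_setting
    FROM sys.database_files;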

  • Jeff Moden - Monday, December 17, 2018 8:00 AM

    Post the DDL for the table.  Then post the results of a sys.dm_db_index_physical_stats run on the table.

    Heh... and stop shrinking stuff until we know what's going on.  The shrink is just trashing things.


  • scarr030 - Monday, December 17, 2018

    See below for DDL. And no more shrinks!


    SET ANSI_NULLS ON
    GO

    SET QUOTED_IDENTIFIER ON
    GO

    SET ANSI_PADDING ON
    GO

    CREATE TABLE [dbo].[AUDIT_TRAIL](
        [ID] [numeric](8, 0) IDENTITY(1,1) NOT NULL,
        [MENU_SRNO] [numeric](8, 0) NULL,
        [USER_ID] [numeric](8, 0) NULL,
        [ACTION] [numeric](8, 0) NULL,
        [OBJECT_ID] [numeric](8, 0) NULL,
        [DATETIME] [datetime] NULL,
        [USER_TYPE] [char](1) NULL,
        [SUB_TYPE] [char](2) NULL,
        [COMMENT] [varchar](7000) NULL,
        [MOD_USER] [numeric](8, 0) NULL,
        [MOD_TIMESTAMP] [datetime] NULL,
        [IP_ADDRESS] [varchar](30) NULL,
        [PT_ID] [numeric](8, 0) NULL,
        [BOOL_FAILURE] [numeric](1, 0) NOT NULL,
        [EXTRA_PARAM] [varchar](50) NULL,
        [ISO_DATETIME] [varchar](30) NULL,
        [HASH] [varchar](100) NULL,
        [BOOL_MISMATCH] [numeric](1, 0) NULL,
        [SUPPORT_USER_NAME] [varchar](255) NULL,
        [LOGIN_FROM] [numeric](3, 0) NULL,
    CONSTRAINT [PK_ID] PRIMARY KEY CLUSTERED
    (
        [ID] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]

    GO

    SET ANSI_PADDING OFF
    GO

    ALTER TABLE [dbo].[AUDIT_TRAIL] ADD DEFAULT (0) FOR [BOOL_FAILURE]
    GO

    ALTER TABLE [dbo].[AUDIT_TRAIL] ADD DEFAULT ((0)) FOR [BOOL_MISMATCH]
    GO

    ALTER TABLE [dbo].[AUDIT_TRAIL] ADD DEFAULT ((0)) FOR [LOGIN_FROM]
    GO

  • Jeff Moden - Monday, December 17, 2018 4:52 PM

    Here's what I think is the troublemaker for you...
        [COMMENT] [varchar](7000) NULL,

    Large comments like that can really put the screws to two things...

    The first is page density, simply because the data is likely to be stored IN ROW.  If you have a lot of comments over 4K bytes, how many rows per page do you think you have?  The answer is probably 1 row per page.  My recommendation would be to change that bugger to VARCHAR(MAX), run sp_tableoption with the "large value types out of row" option set to 1, and then do an "in-place" update (SET COMMENT = COMMENT) of the COMMENT column on every row to force the existing data to go out of row.  Don't forget to rebuild the Clustered Index after that to free up the space and seriously increase the page density of that index.  Do the same for any Non-Clustered indexes that INCLUDE the COMMENT column.
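    A sketch of those steps (on a table this size you'd want to batch the UPDATE):

    -- 1) Widen the column to a LOB type
    ALTER TABLE dbo.AUDIT_TRAIL ALTER COLUMN [COMMENT] varchar(max) NULL;

    -- 2) Force (max) values out of row regardless of their length
    EXEC sp_tableoption N'dbo.AUDIT_TRAIL', 'large value types out of row', 1;

    -- 3) "In-place" update pushes the existing values out of row
    UPDATE dbo.AUDIT_TRAIL SET [COMMENT] = [COMMENT];

    -- 4) Rebuild to reclaim the freed in-row space
    ALTER INDEX PK_ID ON dbo.AUDIT_TRAIL REBUILD;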

    The second thing is that it's a target for "ExpAnsive Updates", especially as they go from NULL to any other value.  Such things can and do cause massive bad page fragmentation during an UPDATE.  If the COMMENT column is only ever inserted into, then you won't have the "ExpAnsive" Update problem with that column, although you may still have that problem with the smaller VARCHAR columns in your table.

    Regardless of whether or not the COMMENT column suffers from the  "ExpAnsive" Update, it's still causing wicked low page density and that's going to affect the performance of ALL of the queries that end up using the Clustered Index.  The same holds true for HEAPs.

    And doing a REORGANIZE after adding a column will prove nearly fruitless in many cases.  Yes, it tries to condense pages, but it won't rebuild them if there's no more room on a page, and a whole lot of pages could still end up filled to less than the FILL FACTOR.  For something like a column add with a forced update (like you did), you could end up with a shedload of pages that are only 5-50% full.  You should do a REBUILD on the Clustered Index instead.  Surprisingly, you may end up with fewer pages than you would with REORGANIZE, because of all the space you're able to recover from pages filled to less than the FILL FACTOR.

    Last but not least, if you end up with a SORT for either REORGANIZE or REBUILD (and you usually will), the SORT can cause some growth in the MDF file that will appear as free space when the index maintenance is complete.  You can use the SORT_IN_TEMPDB=ON option with a REBUILD to prevent most of that.  REORGANIZE won't cause nearly as much sorting but it takes a pretty nasty toll on the Transaction Log File and has no such option as SORT_IN_TEMPDB.

    How badly can REORGANIZE hit the TLOG file?  I did a REORGANIZE on a 146GB Clustered Index that reported only 12% fragmentation.  The fragmentation only appeared at the logical end of the index; the rest of the index was static and those pages were filled to 100%.  The REORGANIZE caused the TLOG file to grow from 37GB to about 246GB!!!  Although a lot of people say the duration isn't important, it took 1 hour and 21 minutes to complete, and disk I/O was hopping the entire time.  Even after that, there were still some page density problems in the formerly fragmented area of the index.  I did a REBUILD of the same index on a "clone" of the box in the BULK LOGGED recovery model: the whole REBUILD took only 12 minutes and there were no partially full pages left anywhere in the index.

    To summarize... I recommend that you change the VARCHAR(7000) COMMENT column to VARCHAR(MAX) and force existing values out of row as I outlined above.  REBUILD instead of REORGANIZING.  If you only have STANDARD EDITION and can't afford the downtime, consider not doing any index maintenance on the big stuff until you can.  For the small stuff, REBUILD will be very fast.  If you need to "let people in", consider doing a REBUILD and adding a WAITFOR DELAY of a minute or so before doing the next one.  If you do the REBUILDs in the BULK LOGGED Recovery Model, they will be minimally logged and will run about twice as fast as they would in the FULL Recovery Model.

    There is a bit of a warning that goes along with the BULK LOGGED model: if a TLOG backup contains even 1 byte of minimally logged work, you cannot do a point-in-time RESTORE to the middle of that TLOG backup.  You must either stop the RESTORE before getting to such a file or use the entire file.  Sounds spooky, but it isn't if you're aware of it and you take TLOG backups frequently.  The BULK LOGGED model also won't break the LSN chain in the TLOG files; use of the SIMPLE Recovery Model will, and that break is instantaneous.
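    The switch itself is a one-liner each way (database name hypothetical; take a log backup right after switching back):

    ALTER DATABASE YourDb SET RECOVERY BULK_LOGGED;

    -- minimally logged rebuild; SORT_IN_TEMPDB keeps the sort
    -- from growing the MDF
    ALTER INDEX PK_ID ON dbo.AUDIT_TRAIL
        REBUILD WITH (SORT_IN_TEMPDB = ON);

    ALTER DATABASE YourDb SET RECOVERY FULL;
    BACKUP LOG YourDb TO DISK = N'YourDb_log.trn';  -- resume the log chain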


  • Jonathan AC Roberts - Monday, December 17, 2018 7:37 PM

    Jeff Moden: "I recommend that you move the VARCHAR(7000) COMMENT column to VARCHAR(MAX) and force existing values out of row as I outlined above."

    I thought the values were only moved out of row if the length of the text exceeded 8,000 characters?

  • Jeff Moden

    That would normally be true if you took no special action.  Read the rest of the post you got that snippet from: you can force LOBs to ALWAYS go out of row, and my recommendation is that you absolutely should.

