Table Using up Much More Space When Transferring to New FileGroup

  • I have a database on SQL Server 2014 SP1 CU5 with an .mdf that is about 1.45 TB. One table accounts for 691 GB of that .mdf and has 1.7 billion rows. There is one PRIMARY filegroup, and the Fusion IO card is running out of space.

    I am currently running a script in which I dropped the PRIMARY KEY CLUSTERED constraint on the table and am recreating it on a new filegroup on a new SAN drive that is 1.2 TB in size. The script has been running for many hours and, for some reason, the table is consuming almost the entire 1.2 TB drive. Why is it using more space as I move it to a new filegroup?

    Below is what I obtained from the internet to get me the table sizes for all the tables I eventually wanted to move.

    SET NOCOUNT ON;

    --DECLARE VARIABLES
    DECLARE @max INT, @min INT, @table_name NVARCHAR(256), @table_schema NVARCHAR(256), @sql NVARCHAR(4000);

    --DECLARE TABLE VARIABLE
    DECLARE @table TABLE
    (
        id INT IDENTITY(1, 1) PRIMARY KEY,
        table_name NVARCHAR(256),
        table_schema NVARCHAR(256)
    );

    --CREATE TEMP TABLE THAT STORES INFORMATION FROM SP_SPACEUSED
    IF OBJECT_ID('tempdb..#results') IS NOT NULL
    BEGIN
        DROP TABLE #results;
    END;

    CREATE TABLE #results
    (
        [table_schema]   [NVARCHAR](256) NULL,
        [table_name]     [NVARCHAR](256) NULL,
        [table_rows]     [INT] NULL,
        [reserved_space] [NVARCHAR](55) NULL,
        [data_space]     [NVARCHAR](55) NULL,
        [index_space]    [NVARCHAR](55) NULL,
        [unused_space]   [NVARCHAR](55) NULL
    );

    --LOOP THROUGH STATISTICS FOR EACH TABLE
    INSERT INTO @table (table_schema, table_name)
    SELECT table_schema,
           table_name
    FROM information_schema.tables
    WHERE table_name LIKE '%[_]Archive%'; --INSERT TABLE NAMES TO MONITOR, e.g. ('dbo.MortgageLoanTerms_Archive') or ('HumanResources.Employee', 'Production.Product', 'Purchasing.Vendor')

    SELECT @min = 1,
           @max = (SELECT MAX(id) FROM @table);

    WHILE @min < @max + 1
    BEGIN
        SELECT @table_name = table_name,
               @table_schema = table_schema
        FROM @table
        WHERE id = @min;

        --DYNAMIC SQL
        SELECT @sql = 'EXEC sp_spaceused ''[' + @table_schema + '].[' + @table_name + ']''';

        --INSERT RESULTS FROM SP_SPACEUSED TO TEMP TABLE
        INSERT INTO #results (table_name, table_rows, reserved_space, data_space, index_space, unused_space)
        EXEC (@sql);

        --UPDATE SCHEMA NAME
        UPDATE #results
        SET table_schema = @table_schema
        WHERE table_name = @table_name;

        SELECT @min = @min + 1;
    END;

    --REMOVE "KB" FROM RESULTS FOR REPORTING (GRAPH) PURPOSES
    UPDATE #results SET data_space     = SUBSTRING(data_space, 1, LEN(data_space) - 3);
    UPDATE #results SET reserved_space = SUBSTRING(reserved_space, 1, LEN(reserved_space) - 3);
    UPDATE #results SET index_space    = SUBSTRING(index_space, 1, LEN(index_space) - 3);
    UPDATE #results SET unused_space   = SUBSTRING(unused_space, 1, LEN(unused_space) - 3);

    --INSERT RESULTS INTO TABLESIZEGROWTH
    --INSERT INTO TableSizeGrowth (table_schema, table_name, table_rows, reserved_space, data_space, index_space, unused_space)
    SELECT [table_schema],
           [table_name],
           [table_rows],
           CAST([reserved_space] AS FLOAT) / 1000 AS reserved_spaceMB,
           CAST([data_space] AS FLOAT) / 1000 AS data_spaceMB,
           CAST([index_space] AS FLOAT) / 1000 AS index_spaceMB,
           CAST([unused_space] AS FLOAT) / 1000 AS unused_spaceMB
    FROM #results
    ORDER BY data_spaceMB DESC;

    SELECT SUM(CAST([reserved_space] AS FLOAT) / 1000) AS reserved_spaceMB
    FROM #results;
    --ORDER BY reserved_spaceMB DESC

    DROP TABLE #results;

    I used the below to actually move the table.

    DROP TRIGGER [dbo].[MortgageLoanTerms_Archive_Trigger]; --There is an AFTER trigger on this table that writes UPDATEs or DELETEs to the _Archive table.
    GO

    --Create File and FileGroup
    USE MyDatabase;
    GO

    ALTER DATABASE [MyDatabase] ADD FILEGROUP ArchiveFG1;
    GO

    ALTER DATABASE [MyDatabase] ADD FILE
    (
        NAME = ArchiveData1,
        FILENAME = 'F:\Data\ArchiveFG1.ndf',
        SIZE = 700000 MB,      --Current size of this one table is about 693 GB; allocated space on the SAN drive is 1.2 TB
        FILEGROWTH = 1024 MB
    ) TO FILEGROUP ArchiveFG1;
    GO

    --Move the clustered index to the new filegroup, which moves the table since the clustered index is the table at the leaf level
    /****** Object: Index [PK_dbo.MortgageLoanTermsArchive] Script Date: 3/11/2016 10:45:47 AM ******/
    ALTER TABLE [dbo].[MortgageLoanTerms_Archive] DROP CONSTRAINT [PK_dbo.MortgageLoanTermsArchive];
    GO

    /****** Object: Index [PK_dbo.MortgageLoanTermsArchive] Script Date: 3/11/2016 10:45:47 AM ******/
    ALTER TABLE [dbo].[MortgageLoanTerms_Archive]
    ADD CONSTRAINT [PK_dbo.MortgageLoanTermsArchive] PRIMARY KEY CLUSTERED ([MortgageLoanTermsArchiveId] ASC)
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON ArchiveFG1;
    GO

    --Recreate the Trigger on MortgageLoanTerms
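
    For reference, a hedged alternative sketch (not part of the script that was actually run): CREATE INDEX with DROP_EXISTING can rebuild the PK's clustered index directly onto the new filegroup in a single statement, preserving the constraint and skipping the separate drop-to-heap step, though it still needs room for both the old and new copies while the new one builds.

    --Alternative sketch only: rebuild the existing PK clustered index straight onto the new filegroup.
    --DROP_EXISTING preserves the PRIMARY KEY constraint as long as the key definition is unchanged;
    --the old copy is not released until the new one is built.
    CREATE UNIQUE CLUSTERED INDEX [PK_dbo.MortgageLoanTermsArchive]
        ON [dbo].[MortgageLoanTerms_Archive] ([MortgageLoanTermsArchiveId] ASC)
        WITH (DROP_EXISTING = ON, SORT_IN_TEMPDB = ON) --SORT_IN_TEMPDB moves the sort work off the data drive, if tempdb has room
        ON ArchiveFG1;
    GO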

  • The first thing that comes to mind is fill factor, but that sounds like too big a difference in space used to be fill factor alone.

    What is the server level fill factor setting?

    EXEC sp_configure 'show advanced options', '1';
    RECONFIGURE WITH OVERRIDE;

    EXEC sp_configure 'fill factor (%)';
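
    To check the fill factor stored on the table's own indexes (0 means the server default), a minimal sketch using sys.indexes, with the table name taken from this thread:

    SELECT i.name AS index_name, i.type_desc, i.fill_factor --0 means "use the server default"
    FROM sys.indexes AS i
    WHERE i.object_id = OBJECT_ID(N'dbo.MortgageLoanTerms_Archive');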

  • Not sure I can help with the final outcome but here are a couple of problems that you're having...

    1. Dropping the clustered index isn't simple. It must recreate the table as a HEAP, and to do that for any clustered index over 128 extents (just 8 MB), it will first build the HEAP and then drop the clustered index. Since both of those structures contain all of the data, you'll see explosive growth in the mdf file, typically to twice the size of the original data (for the reason just explained) plus about 20% for various overhead. (A query you can use to watch where that space is landing is sketched after this list.)

    2. If you're doing the drop of the clustered index in the full recovery model, you'll also see similar explosive growth in your ldf files, as well, because it will all be fully logged.

    3. If your database is using the FULL Recovery Model, the building of the clustered index will also be fully logged.

    4. Because of the size of the table, you have little chance of success with any index maintenance plan.

    5. If the MortgageLoanTermsArchiveId column isn't an IDENTITY column, or the rows in the table aren't written just once (INSERT only, never any UPDATEs), you're going to need a seriously effective index maintenance plan for the clustered index to handle all of the wasted space created by massive page splitting.

    6. Moving more than a terabyte of data from one filegroup to another is neither a trivial nor a quick task, even if done with absolute perfection.
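
    A minimal sketch of such a monitoring query (the table name is taken from this thread; the query itself is not from the original posts):

    --Sketch: reserved space per index/heap and per filegroup for the table being moved.
    SELECT  ISNULL(i.name, 'HEAP')                   AS index_name,
            i.type_desc,
            fg.name                                  AS filegroup_name,
            SUM(ps.reserved_page_count) * 8 / 1024.0 AS reserved_mb
    FROM    sys.indexes AS i
    JOIN    sys.filegroups AS fg
            ON fg.data_space_id = i.data_space_id
    JOIN    sys.dm_db_partition_stats AS ps
            ON ps.object_id = i.object_id
           AND ps.index_id  = i.index_id
    WHERE   i.object_id = OBJECT_ID(N'dbo.MortgageLoanTerms_Archive')
    GROUP BY i.name, i.type_desc, fg.name
    ORDER BY reserved_mb DESC;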

    So, let's start from the beginning...

    1. Which edition of SQL Server are you using? Standard Edition or Enterprise Edition?

    2. What is the purpose of this table? From the sounds of it, it's an archive table but are the rows in the table static or can any row from any era in the table be updated at the drop of a hat? Or, if the rows are updated, is there a cutoff of how far back in the table that will occur?

    3. How long can this table be "offline" for this move?

    4. How long have you been given to complete this move?

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • CJ, the server-level fill factor is set to 100%. The config and run values are at 0, which is the same as 100%.

    name             minimum  maximum  config_value  run_value
    fill factor (%)  0        100      0             0

  • Replying to Jeff Moden (3/13/2016), quoted in full above.

    Jeff, it is SQL Server 2014 Standard Edition SP1 CU5. The purpose of the table is to hold updates or deletes that happen in the MortgageLoanTerms table. MortgageLoanTermsArchiveId is an IDENTITY column. The _Archive table is not directly updated; rather, it holds changed records from MortgageLoanTerms and serves as an audit trail of changes. The database is a log shipping primary, so we are in the full recovery model. I started seeing large trn files at about 0755 this morning. That one was 755 MB. Others after that have been 1 to 1.5 GB in size. I am down to 235 GB of free space now due to a t-log that has grown from 180 GB to 400 GB.

    In terms of time to be offline, MortgageLoanTerms and the corresponding _Archive table have been offline since about 8:30 pm last night. I need them to be back online by 6 am tomorrow morning. Due to other application maintenance, I could not start on Friday night.

    I had already searched in vain for a way to get SQL Server to tell me how far along this process was so I could know how close to completion we are. I thought about stopping the script that is moving this table, but I am not sure what condition that would leave the table in. I know that the whole thing will stop if the t-log fills up, though.

  • Have you got T-Log backups running very often? If so, you might want to crank up the frequency to try to keep the growth of the T-Log from taking over the whole disk. It won't help with the single transaction caused by the move you're doing, but it might help a bit. Not sure it will be enough, though.

    I'm also going to tell you what you don't want to hear, so don't kill the messenger. 😀 I don't believe this is going to be ready by the time you need it to be. The problem is that you're between a rock and a hard place. If it fails because the T-Log is overwhelmed, it'll likely do a rollback that will take a huge amount of time. If you cancel the job, it'll likely do a rollback that will take a huge amount of time, and then you'll also be saddled with a table that still has no clustered index. Because this is a WORM table for auditing, though, not having a clustered index for the next week or two isn't going to hurt much.

    I'd get some help right away and mount another drive so you can temporarily add another file for the ldf file. Right after that, I'd change the settings on the current T-Log file to make sure that it doesn't overwhelm the entire drive.
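
    A minimal sketch of those two steps, assuming the new drive mounts as G: and the logical log file name is MyDatabase_log (both are assumptions, not from the thread):

    --Sketch only: add a temporary second log file on the newly mounted drive,
    --then cap the original log so it can't consume the whole disk.
    USE [master];
    GO
    ALTER DATABASE [MyDatabase]
        ADD LOG FILE (NAME = TempLog1, FILENAME = 'G:\Logs\MyDatabase_TempLog1.ldf',
                      SIZE = 100000 MB, FILEGROWTH = 4096 MB);
    GO
    ALTER DATABASE [MyDatabase]
        MODIFY FILE (NAME = MyDatabase_log, MAXSIZE = 450000 MB);
    GO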

    After that, it's a wait and pray game.

    --Jeff Moden



  • About to add a second transaction log file. Transaction log backups already run every 5 minutes. It sounds like maybe I should consider killing it, but I wonder whether the rollback would finish in time. Wouldn't the rollback have to finish in order for the tables to be usable?

    It is drastic, but the other thing I pondered is restoring to just before I started this transaction.
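
    For reference, a point-in-time restore is a full restore plus the log chain finished with STOPAT; a minimal sketch with invented backup file names and an invented timestamp (and note it would also break the log shipping chain, so it really is a last resort):

    --Illustrative only; backup file names and the STOPAT time are invented.
    RESTORE DATABASE [MyDatabase] FROM DISK = N'X:\Backup\MyDatabase_Full.bak'
        WITH NORECOVERY, REPLACE;
    RESTORE LOG [MyDatabase] FROM DISK = N'X:\Backup\MyDatabase_log_2030.trn'
        WITH NORECOVERY;
    RESTORE LOG [MyDatabase] FROM DISK = N'X:\Backup\MyDatabase_log_2035.trn'
        WITH STOPAT = '2016-03-12 20:25:00', RECOVERY;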

  • Leaving current problems behind for a minute to plan for the future, this large, mostly static audit table can be effectively partitioned even in Standard Edition. Done correctly, it will greatly ease the time it takes for index maintenance and statistics updates, and it can virtually eliminate the time it takes to back up this table. I'll also state that a lot of people make the huge mistake of auditing INSERTs. Only UPDATEs and, sometimes, DELETEs need to be audited because, for INSERTs, the original table holds the original data until it's updated, and then it contains the latest data. The audit table should only contain rows as they were before a change occurred, which means the audit trigger should ONLY record from the DELETED logical table. Recording both the INSERTED and DELETED rows effectively doubles the size of most of the data and can be much, much worse for rows in the base table that are frequently updated.
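
    A minimal sketch of that trigger pattern (the trigger and column names below are invented for illustration; the real MortgageLoanTerms definition isn't shown in this thread):

    --Hypothetical example only: audit just the "before" image of changed rows.
    CREATE TRIGGER dbo.trg_MortgageLoanTerms_Audit
    ON dbo.MortgageLoanTerms
    AFTER UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;

        --DELETED holds the rows as they looked before the UPDATE/DELETE.
        --INSERTED is deliberately ignored so each change is stored only once.
        INSERT INTO dbo.MortgageLoanTerms_Archive (MortgageLoanTermsId, LoanAmount, InterestRate, DateChanged)
        SELECT d.MortgageLoanTermsId, d.LoanAmount, d.InterestRate, SYSDATETIME()
        FROM deleted AS d;
    END;
    GO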

    If the audit trigger(s) that feed this table have done what I think they've done, I'd safely predict that the table could actually be shrunk to a third or a quarter of its current size, and that should be considered prior to any partitioning attempts.

    Curious in that area... does the audit table have a "date of change" or similar date column?

    --Jeff Moden



  • One thing you can do to get an estimate of how far along a given process is: first determine the SPID of the session running the statement, then run the following code, replacing the "XX" with that SPID. For anyone viewing this tip, this only works for certain operations, but CREATE INDEX is one of them and that's what is running in this case.

    SELECT percent_complete FROM sys.dm_exec_requests WHERE session_id = XX;

    From there, take the total elapsed time so far in minutes, divide it by the percent_complete value, multiply that by (100 - percent_complete), and add that many minutes to the current time to get an estimate of when this drill will complete.
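
    That arithmetic can be folded into the same query; a minimal sketch (again replacing XX with the SPID being watched):

    --Sketch: linear extrapolation of the remaining time from sys.dm_exec_requests.
    SELECT  r.percent_complete,
            r.total_elapsed_time / 60000.0 AS elapsed_minutes,
            (r.total_elapsed_time / 60000.0) / NULLIF(r.percent_complete, 0)
                * (100 - r.percent_complete) AS est_minutes_remaining,
            DATEADD(MINUTE,
                    CAST((r.total_elapsed_time / 60000.0) / NULLIF(r.percent_complete, 0)
                         * (100 - r.percent_complete) AS INT),
                    GETDATE()) AS est_completion_time
    FROM    sys.dm_exec_requests AS r
    WHERE   r.session_id = XX; --XX = the SPID doing the work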

    --Jeff Moden



  • Replying to lmarkum (3/13/2016), quoted above.

    See the method for estimating the time above in my previous post before you do that drastic thing. 😉

    I could be wrong, and we'd need someone like Gail Shaw to identify whether or not the rollback would need to complete before the table is usable. Considering that you have Standard Edition, the index build is being done in an OFFLINE fashion, so my gut tells me that, yes, the rollback would need to complete before the table is released for use by others.

    I also don't know how fast the rollback would actually be. It could take only seconds in this case, because the data in the HEAP isn't actually deleted until the clustered index is available and viable. Someone like Gail would be able to answer that question off the top of their head. I'd have to do an actual test to find out on my own, or Google would need to be very kind to my question, indeed.

    --Jeff Moden



  • I had tried using that column from sys.dm_exec_requests and some math, but it kept telling me that the estimated completion time was the current moment. I learned a few months back that only a few operations will give you an estimate. From BOL, it is these. Looks like I am out of luck. After talking this through with a Dev, I opted to cancel the query. This should leave me with a HEAP table in my new filegroup. I'm OK with that for now. I can go back and have another 1 TB or so SAN drive added to put the T-log on, and then run the clustered index creation statement later. For now, I can shrink the log back down and recover the space taken by the T-log (a sketch follows the list below). I know this is not best practice, but I have to have the space back.

    ALTER INDEX REORGANIZE

    AUTO_SHRINK option with ALTER DATABASE

    BACKUP DATABASE

    DBCC CHECKDB

    DBCC CHECKFILEGROUP

    DBCC CHECKTABLE

    DBCC INDEXDEFRAG

    DBCC SHRINKDATABASE

    DBCC SHRINKFILE

    RECOVERY

    RESTORE DATABASE

    ROLLBACK

    TDE ENCRYPTION
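
    The shrink mentioned above is just a DBCC SHRINKFILE once a log backup has cleared the log; a minimal sketch (the logical log file name and target size are assumptions):

    --Sketch only: shrink the log file back down after a log backup has cleared it.
    DBCC SHRINKFILE (N'MyDatabase_log', 180000); --target size in MB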

  • Replying to lmarkum (3/13/2016), quoted above.

    How long did it take to cancel the query or is it still rolling back?

    Also, BOL 2008 says that INDEX BUILD is one of the things that will produce a percent complete in addition to the above. I wonder if they changed that in BOL 2014.

    --Jeff Moden



  • Also, since you opted to do a rollback, consider NOT trying to do the copy as a single monolithic table again, especially since it's an all-or-nothing evolution. Instead, consider building a partitioned view, using a crawler to copy the data, say, one month at a time. When you're done, you can just drop the original table.

    Done correctly, the older months can be set to READ_ONLY if in a separate filegroup (be careful not to make a whole bunch of freespace read-only in the process) and backed up one final time. You shouldn't have to do backups or index maintenance or statistics maintenance on anything but the current month of data.
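
    A minimal sketch of that kind of month-at-a-time crawler (the target table and the DateChanged column are assumptions; a real partitioned-view build would copy into one table per month rather than a single new table):

    --Illustrative only: copy the archive in month-sized batches so the log can be
    --backed up between batches instead of holding one monolithic transaction.
    DECLARE @From date = '2010-01-01',
            @To   date,
            @msg  varchar(50);

    WHILE @From < CAST(GETDATE() AS date)
    BEGIN
        SET @To = DATEADD(MONTH, 1, @From);

        INSERT INTO dbo.MortgageLoanTerms_Archive_New WITH (TABLOCK) --hypothetical target table on ArchiveFG1
        SELECT *
        FROM dbo.MortgageLoanTerms_Archive
        WHERE DateChanged >= @From --DateChanged is a hypothetical "date of change" column
          AND DateChanged <  @To;

        SET @msg = 'Copied month starting ' + CONVERT(varchar(10), @From, 120);
        RAISERROR(@msg, 0, 1) WITH NOWAIT; --progress message

        --Take (or wait for) a log backup here before the next batch.
        SET @From = @To;
    END;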

    --Jeff Moden


