Will table compression also cause the transaction log to be smaller?

  • Hey guys, I have a table insert that is eating up a lot of transaction log space. If I had both tables compressed, would that also cause the transaction log to be smaller? Thanks.

  • Maybe. If you compress the table (row or page), what goes into the tran log will be the row-compressed data (not page-compressed), but that's not the main point of compression.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (8/9/2011)


    Maybe. If you compress the table (row or page), what goes into the tran log will be the row-compressed data (not page-compressed), but that's not the main point of compression.

    I know it's not the main point of compression, but I am exploring my options here. Basically the transaction log is growing so big that the hard drive runs out of space and the insert doesn't complete. I am trying to avoid removing indexes. I was already considering compressing these tables, since the queries involving them are typically IO bound. So I was curious if that would alleviate my current issue with the transaction log as well.
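
    If you do go the compression route, a minimal sketch looks like the following (dbo.MyBigTable is a made-up name; page compression mostly helps data-file IO, since the log only ever carries at most row-compressed data):

        -- Estimate the savings before committing to a rebuild (hypothetical table name)
        EXEC sp_estimate_data_compression_savings
            @schema_name      = 'dbo',
            @object_name      = 'MyBigTable',
            @index_id         = NULL,
            @partition_number = NULL,
            @data_compression = 'PAGE';

        -- Enable page compression on the table
        ALTER TABLE dbo.MyBigTable
            REBUILD WITH (DATA_COMPRESSION = PAGE);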

  • I'd go with inserting the data in smaller batches and running tlog backups after x batches.
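
    A rough sketch of that pattern, with made-up table and column names (in full or bulk-logged recovery you would back up the log between batches; in simple recovery a checkpoint lets the inactive log be reused instead):

        -- Load dbo.Target from dbo.Source in 50,000-row batches (hypothetical names)
        DECLARE @BatchSize int = 50000;
        DECLARE @Rows int = 1;

        WHILE @Rows > 0
        BEGIN
            INSERT INTO dbo.Target (Id, Payload)
            SELECT TOP (@BatchSize) s.Id, s.Payload
            FROM dbo.Source AS s
            WHERE NOT EXISTS (SELECT 1 FROM dbo.Target AS t WHERE t.Id = s.Id)
            ORDER BY s.Id;

            SET @Rows = @@ROWCOUNT;

            -- Full/bulk-logged recovery: back up the log here, e.g.
            -- BACKUP LOG MyDb TO DISK = N'X:\Backups\MyDb.trn';
            -- Simple recovery: a checkpoint allows the inactive log to be reused
            CHECKPOINT;
        END;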

  • What is the recovery model of the database where you insert the rows? You can change it to bulk-logged before inserting and change it back once completed.
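
    A minimal sketch, assuming the database is named MyDb and normally runs in full recovery (both assumptions):

        -- Switch to bulk-logged so eligible bulk operations are minimally logged
        ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED;

        -- ... run the large insert here ...

        -- Switch back, then take a log backup so point-in-time recovery is possible again
        ALTER DATABASE MyDb SET RECOVERY FULL;
        BACKUP LOG MyDb TO DISK = N'X:\Backups\MyDb_after_load.trn';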

    Ryan
    //All our dreams can come true, if we have the courage to pursue them//

  • I am currently using the simple recovery model. I don't do tlog backups; this isn't a production server. I do full backups and differential backups. This suits the needs of the databases.

  • Insert in batches or use an insert method that can be minimally logged (and drop any nonclustered indexes before you start)
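
    For example, a sketch of a minimally logged load (made-up names; minimal logging needs simple or bulk-logged recovery plus a TABLOCK hint, and the exact rules differ depending on whether the target is an empty heap, an empty clustered table, or already has data):

        -- INSERT ... SELECT with TABLOCK into an empty target can be minimally logged
        INSERT INTO dbo.Target WITH (TABLOCK) (Id, Payload)
        SELECT Id, Payload
        FROM dbo.Source;

        -- SELECT ... INTO a new table is also eligible for minimal logging
        SELECT Id, Payload
        INTO dbo.Target_Staging
        FROM dbo.Source;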

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (8/9/2011)


    Insert in batches or use an insert method that can be minimally logged (and drop any nonclustered indexes before you start)

    Don't drop the nonclustered indexes - disable them instead, then after the data is loaded rebuild all indexes.

    I would recommend inserting in smaller batches instead - or use SSIS, which would give you the ability to control the batch size and commit size using the fast load option on an OLE DB destination.
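
    Something along these lines for the index handling (index and table names are placeholders):

        -- Disable just the nonclustered indexes before the load
        -- (disabling the clustered index would make the table inaccessible)
        ALTER INDEX IX_Target_Payload ON dbo.Target DISABLE;

        -- ... perform the insert ...

        -- Rebuilding re-enables the disabled indexes and refreshes their statistics
        ALTER INDEX ALL ON dbo.Target REBUILD;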

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • Jeffrey Williams-493691 (8/9/2011)


    GilaMonster (8/9/2011)


    Insert in batches or use an insert method that can be minimally logged (and drop any nonclustered indexes before you start)

    Don't drop the nonclustered indexes - disable them instead, then after the data is loaded rebuild all indexes.

    I would recommend inserting in smaller batches instead - or use SSIS, which would give you the ability to control the batch size and commit size using the fast load option on an OLE DB destination.

    +1, much safer down the road. That way if the index definition changes, the package / sp still works without any modification.

  • Can you post the sample query that's doing the insert?

    You said the query is running for a long time - did you check the wait_type to see what it is waiting for (e.g. checkpoint, pageiolatch)?

    Obviously, inserting in batches will work.

    Through wait types we can identify the specific issue and drive it from there.
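
    For instance, a quick look at what the running insert is waiting on (standard DMVs only):

        -- Current requests and what they are waiting for
        SELECT r.session_id,
               r.command,
               r.wait_type,
               r.wait_time,
               r.last_wait_type
        FROM sys.dm_exec_requests AS r
        WHERE r.session_id > 50;   -- skip most system sessions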

    --SQLFRNDZ

  • Waiting on checkpoint??

    Are you talking about log_reuse_wait? Which would be a little moot in simple recovery.

    Or task waits, where your copy of the activity monitor script could come in handy?
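
    If it's log_reuse_wait, this shows why the log can't currently be reused (MyDb is a placeholder):

        SELECT name, recovery_model_desc, log_reuse_wait_desc
        FROM sys.databases
        WHERE name = N'MyDb';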

  • Waiting on checkpoint??

    Occurs while the checkpoint task is waiting for the next checkpoint request. In this case we can issue a manual checkpoint so that any log space used by other committed transaction changes is freed up and made available for the long-running query.

    Are you talking about log_reuse_wait?

    No.

    I have faced a very similar issue before, where my database was in simple recovery, a lot of bulk inserts were going on, and the log kept growing.

    The log backup strategy is something that really needs looking into.

    Changing the query was not possible in my environment at the time, as the queries come from the application, so that was not an immediate solution in my case; I tried resolving it by looking at the wait types instead.

    Wait types are a really good resource for understanding the root causes.

    Hopefully this helps...
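
    A small sketch of that approach - check how full the log is, then force a checkpoint so the inactive portion can be reused (in simple recovery the log only truncates at checkpoint):

        -- How much of each database's log file is actually in use
        DBCC SQLPERF (LOGSPACE);

        -- Force a checkpoint in the current database so inactive log can be reused
        CHECKPOINT;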

    --SQLFRNDZ

  • Jeffrey Williams-493691 (8/9/2011)


    GilaMonster (8/9/2011)


    Insert in batches or use an insert method that can be minimally logged (and drop any nonclustered indexes before you start)

    Don't drop the nonclustered indexes - disable them instead, then after the data is loaded rebuild all indexes.

    I would recommend inserting in smaller batches instead - or use SSIS, which would give you the ability to control the batch size and commit size using the fast load option on an OLE DB destination.

    I ended up disabling the nonclustered indexes. This solved my log growth problem. I disable them, then just rebuild after the insert is done. Thanks for all the help guys.

  • coder_t2 (8/9/2011)


    Jeffrey Williams-493691 (8/9/2011)


    GilaMonster (8/9/2011)


    Insert in batches or use an insert method that can be minimally logged (and drop any nonclustered indexes before you start)

    Don't drop the nonclustered indexes - disable them instead, then after the data is loaded rebuild all indexes.

    I would recommend inserting in smaller batches instead - or use SSIS, which would give you the ability to control the batch size and commit size using the fast load option on an OLE DB destination.

    I ended up disabling the nonclustered indexes. This solved my log growth problem. I disable them, then just rebuild after the insert is done. Thanks for all the help guys.

    The extra benefit here is that you also take care of fragmentation and part of the stats updates.

    You may need to consider the clustered index and the rest of the stats too, just to make sure everything keeps running fine.
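
    For example (dbo.Target is a placeholder):

        -- ALTER INDEX ... REBUILD refreshes index statistics only;
        -- column statistics still need a separate update
        UPDATE STATISTICS dbo.Target WITH FULLSCAN;

        -- Or refresh statistics across the whole database
        EXEC sp_updatestats;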

  • @SQLFRNDZ (8/9/2011)


    I have faced a very similar issue before, where my database was in simple recovery, a lot of bulk inserts were going on, and the log kept growing.

    The log backup strategy is something that really needs looking into.

    Changing the query was not possible in my environment at the time, as the queries come from the application, so that was not an immediate solution in my case; I tried resolving it by looking at the wait types instead.

    Wait types have nothing to do with log growth. Waits occur when queries need a resource that is not available. A particular query (insert, update, delete) will use a particular amount of log space no matter whether it waits for resources or not. The amount of log space is purely a factor of how much data gets changed and in what manner.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
