August 9, 2011 at 8:13 am
Hey guys, I have a table insert that is eating up a lot of transaction log space. If I had both tables compressed, would that also cause the transaction log to be smaller? Thanks.
August 9, 2011 at 8:25 am
Maybe. If you compress the table (row and page), what goes into the tran log will be the row-compressed data (not the page-compressed form), but reducing log usage is not the main point of compression.
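For reference, enabling compression looks roughly like this; dbo.BigTable is just a placeholder name:

-- Sketch only: dbo.BigTable is a hypothetical table.
-- Page compression (which includes row compression); requires Enterprise edition.
ALTER TABLE dbo.BigTable REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Or row compression only:
ALTER TABLE dbo.BigTable REBUILD WITH (DATA_COMPRESSION = ROW);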
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
August 9, 2011 at 8:30 am
GilaMonster (8/9/2011)
Maybe. If you compress the table (row and page), what goes into the tran log will be the row-compressed data (not the page-compressed form), but reducing log usage is not the main point of compression.
I know it's not the main point of compression, but I am exploring my options here. Basically, the transaction log is growing so big that the hard drive runs out of space and the insert doesn't complete. I am trying to avoid removing indexes. I was already considering compressing these tables, since the queries involving them are typically IO-bound, so I was curious whether that would alleviate my current issue with the transaction log as well.
August 9, 2011 at 8:34 am
I'd go with inserting the data in smaller batches and running tlog backups after x batches.
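Something along these lines, just as a rough sketch (the table, column, database, and backup path names are all made up):

-- Sketch only: dbo.Target, dbo.Source, the Id key and MyDatabase are placeholders.
DECLARE @rows INT = 1;

WHILE @rows > 0
BEGIN
    -- Load the next batch of rows that haven't been copied yet
    INSERT INTO dbo.Target (Id, Col1, Col2)
    SELECT TOP (50000) s.Id, s.Col1, s.Col2
    FROM dbo.Source s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Target t WHERE t.Id = s.Id);

    SET @rows = @@ROWCOUNT;

    -- In full or bulk-logged recovery, a log backup after each batch frees log space for reuse
    BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase_log.trn';
END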
August 9, 2011 at 8:36 am
What is the recovery model of the database where you insert the rows? You can switch to bulk-logged before inserting and change it back once the load has completed.
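Something like this, as a sketch (MyDatabase is a placeholder):

-- Check the current recovery model
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'MyDatabase';

-- Switch to bulk-logged for the duration of the load, then back again
ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED;
-- ... run the insert ...
ALTER DATABASE MyDatabase SET RECOVERY FULL;
-- Take a log backup afterwards; point-in-time restore isn't possible across the bulk-logged interval.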
Ryan
//All our dreams can come true, if we have the courage to pursue them//
August 9, 2011 at 8:38 am
I am currently using the simple recovery model. I don't do tlog backups; this isn't a production server. I do full backups and differential backups, which suits the needs of these databases.
August 9, 2011 at 8:42 am
Insert in batches or use an insert method that can be minimally logged (and drop any nonclustered indexes before you start)
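As a rough illustration only (dbo.Target and dbo.Source are hypothetical names), a minimally logged load into a heap in the simple or bulk-logged recovery model could look something like this:

-- TABLOCK on the target allows minimal logging when loading a heap (no clustered index)
-- under the simple or bulk-logged recovery model.
INSERT INTO dbo.Target WITH (TABLOCK) (Id, Col1, Col2)
SELECT Id, Col1, Col2
FROM dbo.Source;

-- SELECT ... INTO a brand-new table is also minimally logged:
SELECT Id, Col1, Col2
INTO dbo.Target_Staging
FROM dbo.Source;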
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
August 9, 2011 at 2:50 pm
GilaMonster (8/9/2011)
Insert in batches or use an insert method that can be minimally logged (and drop any nonclustered indexes before you start)
Don't drop the nonclustered indexes - disable them instead, then after the data is loaded rebuild all indexes.
I would recommend inserting in smaller batches instead - or use SSIS, which gives you the ability to control the batch size and commit size using the fast load option on an OLE DB destination.
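For example (the index and table names below are hypothetical):

-- Disable each nonclustered index before the load (repeat per index, or script it from sys.indexes)
ALTER INDEX IX_Target_Col1 ON dbo.Target DISABLE;

-- ... load the data ...

-- Rebuilding all indexes afterwards also re-enables the disabled ones
ALTER INDEX ALL ON dbo.Target REBUILD;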
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
August 9, 2011 at 2:54 pm
Jeffrey Williams-493691 (8/9/2011)
GilaMonster (8/9/2011)
Insert in batches or use an insert method that can be minimally logged (and drop any nonclustered indexes before you start)
Don't drop the nonclustered indexes - disable them instead, then after the data is loaded rebuild all indexes.
I would recommend inserting in smaller batches instead - or use SSIS, which gives you the ability to control the batch size and commit size using the fast load option on an OLE DB destination.
+1, much safer down the road. That way if the index definition changes, the package / sp still works without any modification.
August 9, 2011 at 3:07 pm
Can you post a sample of the query that is doing the insert?
You said the query runs for a long time, so did you check the wait_type to see what it is waiting on? For example CHECKPOINT or PAGEIOLATCH.
Obviously, dividing the insert into batches will work.
Wait types can point to the specific issue, and you can work from there.
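For example, something along these lines (just a sketch) shows what each active request is currently waiting on:

-- Current wait for each active user request
SELECT r.session_id, r.status, r.command, r.wait_type, r.wait_time, r.last_wait_type
FROM sys.dm_exec_requests r
WHERE r.session_id > 50;  -- skip most system sessions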
August 9, 2011 at 3:12 pm
Waiting on checkpoint??
Are you talking about log_reuse_wait? Which would be a little moot in simple recovery.
Or task waits, where your copy of the activity monitor script could come in handy?
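For what it's worth, log_reuse_wait is easy enough to check (MyDatabase is a placeholder name):

-- Why the log can't currently be reused (CHECKPOINT, ACTIVE_TRANSACTION, etc.)
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyDatabase';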
August 9, 2011 at 3:39 pm
Waiting on checkpoint??
The CHECKPOINT wait occurs while the checkpoint task is waiting for the next checkpoint request. In this case we can issue a manual checkpoint so that any log space used by other committed transactions becomes available again for the long-running query.
Are you talking about log_reuse_wait?
No.
I have faced a very similar issue before, where my database was in simple recovery, lots of bulk inserts were going on, and the log kept growing.
The log backup strategy is something that really needs to be looked into.
Changing the queries was not possible in my environment at the time, since they come from the application, so that was not an immediate solution in my case. Instead I tried to resolve it by looking at wait types.
Wait types are a really good resource for understanding root causes.
Hopefully this helps...
August 9, 2011 at 3:45 pm
Jeffrey Williams-493691 (8/9/2011)
GilaMonster (8/9/2011)
Insert in batches or use an insert method that can be minimally logged (and drop any nonclustered indexes before you start)
Don't drop the nonclustered indexes - disable them instead, then after the data is loaded rebuild all indexes.
I would recommend inserting in smaller batches instead - or use SSIS, which gives you the ability to control the batch size and commit size using the fast load option on an OLE DB destination.
I ended up disabling the nonclustered indexes, and this solved my log growth problem. I disable them, then just rebuild them after the insert is done. Thanks for all the help, guys.
August 9, 2011 at 3:50 pm
coder_t2 (8/9/2011)
Jeffrey Williams-493691 (8/9/2011)
GilaMonster (8/9/2011)
Insert in batches or use an insert method that can be minimally logged (and drop any nonclustered indexes before you start)
Don't drop the nonclustered indexes - disable them instead, then after the data is loaded rebuild all indexes.
I would recommend inserting in smaller batches instead - or use SSIS, which gives you the ability to control the batch size and commit size using the fast load option on an OLE DB destination.
I ended up disabling the nonclustered indexes, and this solved my log growth problem. I disable them, then just rebuild them after the insert is done. Thanks for all the help, guys.
The extra benefit here is that you also take care of fragmentation and part of the stats updates.
You may need to consider the clustered index and the rest of the stats too, just to make sure everything keeps running fine.
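For instance (again with hypothetical names), after the load:

-- Rebuild everything, including the clustered index, to deal with fragmentation;
-- index statistics are updated with a full scan as part of the rebuild.
ALTER INDEX ALL ON dbo.Target REBUILD;

-- Update the remaining column statistics not covered by the index rebuilds
UPDATE STATISTICS dbo.Target WITH FULLSCAN;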
August 9, 2011 at 3:58 pm
@SQLFRNDZ (8/9/2011)
I have faced a very similar issue before, where my database was in simple recovery, lots of bulk inserts were going on, and the log kept growing. The log backup strategy is something that really needs to be looked into.
Changing the queries was not possible in my environment at the time, since they come from the application, so that was not an immediate solution in my case. Instead I tried to resolve it by looking at wait types.
Wait types have nothing to do with log growth. Waits occur when queries need a resource that is not available. A particular query (insert, update, delete) will use a particular amount of log space no matter whether it waits for resources or not. The amount of log space is purely a factor of how much data gets changed and in what manner.
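If you want to see how much log a given operation actually generates, something like this (a sketch, run from the session doing the insert) shows it directly:

-- Log records and log bytes generated by the current session's open transaction
SELECT st.session_id,
       dt.database_transaction_log_record_count,
       dt.database_transaction_log_bytes_used
FROM sys.dm_tran_session_transactions st
JOIN sys.dm_tran_database_transactions dt
    ON dt.transaction_id = st.transaction_id
WHERE st.session_id = @@SPID;

-- Or look at overall log usage per database
DBCC SQLPERF(LOGSPACE);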
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability