July 17, 2013 at 4:47 am
Hi All,
I'm trying to understand more about minimal logging for the INSERT... SELECT operation in SQL Server.
I have a database in the simple recovery model and a clustered table that's about 20GB in size, and I want to create an exact duplicate of that table. The table does have an IDENTITY(1,1) column, but that is not the clustering key of the table; I mention this because I don't know whether it will make a difference. As such, the insert statement has to be preceded with SET IDENTITY_INSERT ON for the table.
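For reference, the statement I'm running is along these lines (table and column names here are just placeholders, not the real DDL):

SET IDENTITY_INSERT dbo.TargetTable ON;

INSERT INTO dbo.TargetTable WITH (TABLOCK)
    (Id, ClusterKey, SomeColumn)
SELECT Id, ClusterKey, SomeColumn
FROM dbo.SourceTable;

SET IDENTITY_INSERT dbo.TargetTable OFF;

(With IDENTITY_INSERT on, the column list has to be specified explicitly.)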
My transaction log for that database is pre-sized to 20GB, with 16 VLFs showing when I run DBCC LOGINFO. Autogrowth is set to grow in 5GB chunks with no upper restriction.
If I run the insert with TABLOCK, it still seems to be fully logged: my t-log fills up and starts autogrowing. I'm not using an ORDER BY clause on my SELECT, however; would this affect the operation? Both tables are clustered on the same key.
I've also tried using trace flag 610, which seemed to stop the log from growing at first, but then it started to grow in tiny increments and generated a fresh set of VLFs for each increment, resulting in more than 5,000 being created.
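For what it's worth, I was turning the trace flag on instance-wide with something like:

DBCC TRACEON (610, -1);
-- run the INSERT ... SELECT here
DBCC TRACEOFF (610, -1);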
If DDL is needed, I'm happy to post it, but I'm not sure whether it will add anything in this instance, so I haven't prepared it yet.
Cheers,
Matthew
Follow me on twitter @EvoDBA | Check out my blog Natural Selection DBA
July 17, 2013 at 6:00 am
Hi Matthew
This page gives a pretty good idea of the requirements for minimal logging, although it doesn't mention the identity column specification.
You can try using the fn_dblog function (SELECT * FROM ::fn_dblog(NULL, NULL)) to see whether minimal logging is taking place. From what I've seen, you should be getting LCX_PFS in the Context column, and values in the Log Record Length column that are quite small relative to the row length. If there is no minimal logging, the Log Record Length will be about the same as the length of a row and the Context will be LCX_CLUSTERED, for example.
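Something along these lines will give you a rough breakdown per context and operation (fn_dblog is undocumented, so treat this as a sketch and the output with some caution):

SELECT [Context],
       [Operation],
       COUNT(*)                 AS LogRecords,
       AVG([Log Record Length]) AS AvgLogRecordLength
FROM ::fn_dblog(NULL, NULL)
GROUP BY [Context], [Operation]
ORDER BY LogRecords DESC;

If the bulk of the records are allocation entries (LCX_PFS, LCX_GAM, LCX_IAM) rather than LCX_CLUSTERED inserts, you're getting minimal logging.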
To be honest though, what I usually do in these situations is batch up the rows to be inserted. If you have a clustered index on a numeric column, it's pretty straightforward to set up a WHILE loop to copy (for example) 100k rows per batch. In simple recovery this allows the log to be constantly overwritten and therefore not fill up. You can also use a PRINT statement or an insert into a logging table to keep track of how the process is going, which is useful.
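A minimal sketch of what I mean, assuming a numeric clustered key called Id and a 100k batch size (names are placeholders; in your case you'd also need to wrap the insert in SET IDENTITY_INSERT):

DECLARE @BatchSize INT = 100000;
DECLARE @CurrentId BIGINT, @MaxId BIGINT;

SELECT @CurrentId = MIN(Id), @MaxId = MAX(Id) FROM dbo.SourceTable;

WHILE @CurrentId <= @MaxId
BEGIN
    INSERT INTO dbo.TargetTable (Id, SomeColumn)
    SELECT Id, SomeColumn
    FROM dbo.SourceTable
    WHERE Id >= @CurrentId
      AND Id <  @CurrentId + @BatchSize;

    -- simple progress indicator
    PRINT 'Copied Ids from ' + CAST(@CurrentId AS VARCHAR(20))
        + ' to ' + CAST(@CurrentId + @BatchSize - 1 AS VARCHAR(20));

    SET @CurrentId = @CurrentId + @BatchSize;
END;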
Duncan
July 17, 2013 at 6:04 am
Can you verify the checklist here?
July 17, 2013 at 6:19 am
Yep, to confirm:
1) The table isn't replicated.
2) Table locking is specified in the INSERT ... SELECT statement.
3) The target table is empty. (It has a clustered index and no non-clustered indexes.)
4) The database is in the simple recovery model.
As far as I can tell, that should be everything?
I've had a look in fn_dblog and it's mostly filled with LCX_CLUSTERED rows for the context; I'll check the row size against the log record size in a moment.
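For that comparison I'm planning on something along these lines (placeholder names again rather than the real DDL):

-- average size of a row in the target table's clustered index
SELECT avg_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.TargetTable'), NULL, NULL, 'DETAILED')
WHERE index_level = 0;

-- average log record length for the clustered index inserts
SELECT AVG([Log Record Length]) AS AvgLogRecordLength
FROM ::fn_dblog(NULL, NULL)
WHERE [Context] = 'LCX_CLUSTERED'
  AND [Operation] = 'LOP_INSERT_ROWS';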
Follow me on twitter @EvoDBA | Check out my blog Natural Selection DBA
July 17, 2013 at 6:34 am
OK, so I recreated the table without the identity column and, hey presto, minimal logging is taking place.
So it would seem that the identity insert was causing the problem. I'll have a play with a few other scenarios; perhaps make the identity column the clustered index key and see how it performs.
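For that next test I'll probably just knock up something like this, with the identity column as the clustered key (names are placeholders rather than the real DDL):

CREATE TABLE dbo.TargetTable_Test
(
    Id         INT IDENTITY(1,1) NOT NULL,
    SomeColumn VARCHAR(100)      NULL,
    CONSTRAINT PK_TargetTable_Test PRIMARY KEY CLUSTERED (Id)
);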
Follow me on twitter @EvoDBA | Check out my blog Natural Selection DBA
July 17, 2013 at 7:18 am
Matthew Darwin (7/17/2013)
OK, so I recreated the table without the identity column, and hey presto, minimal logging taking place. So it would seem that the identity insert was causing the problem. I'll have a play with a few other scenarios, perhaps make that the clustered index key and see how it performs.
Thanks for the update. It might be worth you adding a comment to the MSDN page, or opening a Connect item to see if the documentation can be updated to reflect this. Unless it's documented somewhere else maybe.
Duncan