December 2, 2008 at 4:17 am
Hello all
I have a database in simple recovery mode with only one empty table: 5 int columns, one of them identity(1, 1).
I need to insert into this table about 100 000 000 records with the following script:
insert into dbSimple.dbo.t1
select t2.c2, t2.c3, t3.c4, t3.c5
from dbFull.dbo.t2 t2 join dbFull.dbo.t3 t3 on t2.t1 = t3.t1
where "some condition"
I was thinking that I could predict the log size according to this formula: 100,000,000 rows * 5 columns * 4 bytes = 2,000,000,000 bytes, about 1.86 GB; at the very least dbSimple.dbo.t1 itself will take up about 1.86 GB. But somehow during the operation the log took up 45 GB.
The question is WHY?
December 2, 2008 at 6:38 am
Tran log growth is hard to predict. As well as the data, there are log headers, internal allocation pages that need logging, etc. Plus, if there was other stuff also running at the same time, that would have been logged too.
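Rather than predicting, you can measure: DBCC SQLPERF(LOGSPACE) reports the log size and percent used for every database on the instance, so you can run it before, during and after the insert to see what's actually happening.

DBCC SQLPERF(LOGSPACE);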
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 2, 2008 at 6:46 am
The transaction log is used for recovery, so it will contain lots of other data rather than just the pure insert, such as who inserted it, when, the LSN, markers for begin and commit, and probably a whole lot of other stuff I don't know about.
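If you're curious, you can actually look at those records. fn_dblog is undocumented but widely used; something like this (just a sketch, run in the context of the database in question) shows the begin/commit markers and the per-record overhead:

SELECT TOP (20)
    [Current LSN],
    Operation,          -- e.g. LOP_BEGIN_XACT, LOP_INSERT_ROWS, LOP_COMMIT_XACT
    [Transaction ID],
    [Log Record Length] -- includes header overhead, not just the row data
FROM sys.fn_dblog(NULL, NULL);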
Growth would also depend on the growth factor you had specified. If it was the default 10%, then each time the log wanted to grow it would do so by 10% of its current size, i.e. a larger amount each time, so in your case that's over 4 GB per growth by the end.
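If you'd rather grow in fixed steps, you can change the increment. A sketch, assuming the log file's logical name is dbSimple_log (check the real name with SELECT name FROM dbSimple.sys.database_files):

ALTER DATABASE dbSimple
MODIFY FILE (NAME = dbSimple_log, FILEGROWTH = 512MB);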
The way to control this would have been to batch the insert up and take regular log backups, or put the db in simple recovery mode (followed by a full backup at the end of the insert).
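For example, something like this (just a sketch; it assumes t2.t1 is an increasing integer key you can range over, and you'd re-add the original where clause and pick a batch size that suits you):

DECLARE @batchStart int, @batchSize int, @maxKey int;
SET @batchStart = 0;
SET @batchSize = 1000000;
SELECT @maxKey = MAX(t1) FROM dbFull.dbo.t2;

WHILE @batchStart <= @maxKey
BEGIN
    INSERT INTO dbSimple.dbo.t1
    SELECT t2.c2, t2.c3, t3.c4, t3.c5
    FROM dbFull.dbo.t2 t2
    JOIN dbFull.dbo.t3 t3 ON t2.t1 = t3.t1
    WHERE t2.t1 >= @batchStart
      AND t2.t1 < @batchStart + @batchSize;

    -- simple recovery: a checkpoint lets committed log space be reused;
    -- in full recovery you'd take a log backup here instead
    CHECKPOINT;

    SET @batchStart = @batchStart + @batchSize;
END

Each batch commits on its own, so the log only ever needs to hold one batch's worth of records rather than all 100 million rows at once.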
---------------------------------------------------------------------
December 2, 2008 at 6:50 am
Thanks