January 9, 2008 at 10:01 am
Hi all,
I have a quick question. I have a database that is in simple recovery mode, and I have a large DTS package that imports specific data via Data Pump tasks from one server to another. My question is: I've noticed the log file started at 1 MB (the default) and grows to a large size, approximately 2 GB, as the process runs. It's not so much an issue; I'm just wondering why it happens. Is it because a small number of my tables have a clustered index?
Thanks in advance,
Nic
January 9, 2008 at 10:09 am
One other thing: I've just been doing some trial-and-error work. The very first step of the DTS package is a series of 10 TRUNCATE TABLE commands; with the log file at 1 MB prior to running this step, the log jumps to 10 MB upon its completion.
I've checked the usage of the log, and 5 MB of the 10 MB is still being used. I could understand it if, at the checkpoint, it released the 10 MB as free space but kept the physical file size.
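For anyone who wants to check this themselves, the log file size and the percentage currently in use can be read per database with the standard command:

    -- Reports log size (MB) and log space used (%) for every database
    DBCC SQLPERF(LOGSPACE)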
Confused!
January 9, 2008 at 10:14 am
The transaction log tracks what is being changed during a given operation. In simple recovery mode, once the operation/transaction is completed and gets committed, the space in the transaction log is truncated (meaning it's marked as free), but the space is not released back to the OS.
Just because it truncates the space afterwards doesn't mean the log didn't need that space while the transaction was "in flight".
The assumption is that the database will again in the future require that amount of space to do something, so there's no need to release it.
I'd suggest you set the transaction log's starting size at 2 GB or so (since it seems to grow to that size), so that it doesn't have to auto-grow to that size (which will create file fragmentation).
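A minimal sketch of pre-sizing the log, assuming a database named MyDB whose log file has the logical name MyDB_Log (substitute your own names):

    -- Grow the log file to 2 GB up front so it never has to auto-grow
    ALTER DATABASE MyDB
    MODIFY FILE (NAME = MyDB_Log, SIZE = 2GB)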
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part... unless you're my manager... or a director and above... or a really loud-spoken end-user... All right - what was my emergency again?
January 9, 2008 at 10:16 am
That makes sense; many thanks for your help.
January 10, 2008 at 2:21 am
In addition to this, if you would like to keep your T-log size smaller, i.e. less than 2 GB, then while the DTS package is transferring the data, commit your transaction after every few rows of the import.
This will help you minimize the size of the open transaction, and the T-log will truncate as it's committed frequently.
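A minimal sketch of that idea in plain T-SQL, assuming hypothetical tables dbo.Source and dbo.Target that share an integer key column Id, with Target holding only rows already copied:

    -- Copy rows in batches of 10,000, committing each batch so the
    -- log can be truncated at checkpoints (simple recovery mode)
    DECLARE @LastId INT
    SET @LastId = 0

    WHILE 1 = 1
    BEGIN
        BEGIN TRANSACTION

        INSERT INTO dbo.Target (Id, Payload)
        SELECT TOP 10000 Id, Payload
        FROM dbo.Source
        WHERE Id > @LastId
        ORDER BY Id

        IF @@ROWCOUNT = 0
        BEGIN
            COMMIT TRANSACTION
            BREAK
        END

        -- Advance the watermark to the highest Id copied so far
        SELECT @LastId = MAX(Id) FROM dbo.Target

        COMMIT TRANSACTION
    END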
---------------------------------------------------
"Thare are only 10 types of people in the world:
Those who understand binary, and those who don't."
January 11, 2008 at 2:31 pm
In the Transform Data Task, under Options, you can set "Insert batch size" to keep your transaction log from growing out of control on large inserts.
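For what it's worth, the T-SQL bulk-load analogue of that setting is the BATCHSIZE option of BULK INSERT: each batch is committed as its own transaction, so the log can truncate between batches in simple recovery mode. The file path and table name below are made up for illustration:

    -- Hypothetical example: load a flat file in 10,000-row batches
    BULK INSERT dbo.Target
    FROM 'C:\data\export.txt'
    WITH (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR = '\n',
        BATCHSIZE = 10000   -- commit after every 10,000 rows
    )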