Transaction Log growing out of control!!

  • We are running a process on a database that is moving and manipulating tens of millions of rows of data. I let it run by itself last night on a 30 GB drive (20 GB data and, after growth, 10 GB log) and the transaction log grew so large that it ran out of room to grow. I truncated the log and ran dbcc sqlperf(logspace), and it says I am only using 5% of the log after truncating. I started the process again, checked the log space, and it's already up to 12% used. The backup log with truncate_only command isn't shrinking the percentage reported by the dbcc command. It's going to run out of space again unless I can figure out some way to clear out that log. Any suggestions would be greatly appreciated. Thanks!
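
    For reference, this is roughly what I have been running (YourDB stands in for the real database name):

    dbcc sqlperf(logspace)                  -- reports the percent of log space used
    backup log YourDB with truncate_only    -- marks committed log records as reusable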

  • Did you try breaking your update up into smaller parts?

    I also have similar jobs that load and afterwards remove millions of rows.

    Here is how I am working:

    declare @rowcount int

    set @rowcount = 500000
    set rowcount 500000    -- cap every statement at 500,000 rows

    while @rowcount = 500000
    begin
        update YourTable
        set YourColumn = YourValue
        where YourCondition    -- must become false for updated rows, or the loop repeats them
        set @rowcount = @@rowcount    -- rows the update actually touched
        backup log YourDB with truncate_only
    end

    set rowcount 0    -- remove the cap

    It only makes sense if your recovery model is simple.

    It takes time, but it won't blow up your log too much.



    Bye
    Gabor

  • Isn't there some way to clear out this log even while the process is running? I would hate to lose all these hours of running time.

  • No, not while it is running. Because of the possibility of a rollback, SQL Server must keep all the log records it needs to undo the open transaction.
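
    One thing you can do while it runs is look at what is pinning the log: DBCC OPENTRAN reports the oldest active transaction (YourDB is a placeholder):

    dbcc opentran('YourDB')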

  • Shouldn't it truncate the log on checkpoint if I have it set to the simple recovery model?

  • The checkpoint will not occur until the operation completes or rolls back. So it will truncate, but only when the work is done.
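
    To double-check which recovery model the database is actually using (DATABASEPROPERTYEX is the SQL Server 2000 way to query it; YourDB is a placeholder):

    select databasepropertyex('YourDB', 'Recovery')    -- returns SIMPLE, FULL or BULK_LOGGED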

  • Antares is right.

    You can only truncate the committed part of the work from the log.

    If you run out of space you will lose your work anyhow, because SQL Server will roll back.

    And there is no "non-logged operation".

    There are some small techniques to speed up an update, but they won't boost your work too much.

    These are (a sketch of the first two follows the list):

    -disallow row locks and page locks on your tables, so the update takes a single table lock instead of escalating through millions of row locks

    -drop all the indexes and triggers you don't need for the update (you can recreate them after the work)

    -work by parts (as described above)

    -if you can manage the DDL, be aware of the size of your varchar data. If an update changes the size of a varchar value, the update becomes a delete followed by an insert. (If the string values are not too huge, char is always faster than varchar, and varchar is always faster than nvarchar.)

    -if you are modifying the entire table, it can be faster to copy all your data, with the modifications applied, into a new worktable, then drop your original table and rename the worktable (plus recreate all the indexes, relations, triggers...)
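
    Here is a minimal sketch of the first two techniques (YourTable, YourIndex and YourColumn are placeholder names; sp_indexoption is the SQL Server 2000 way to set the lock options, so check it against your version):

    -- force table-level locking by disabling row and page locks
    exec sp_indexoption 'YourTable', 'AllowRowLocks', FALSE
    exec sp_indexoption 'YourTable', 'AllowPageLocks', FALSE

    -- drop an index the update does not need...
    drop index YourTable.YourIndex

    -- ...run the update here, then recreate the index afterwards
    create index YourIndex on YourTable (YourColumn)

    -- restore the lock options when you are done
    exec sp_indexoption 'YourTable', 'AllowRowLocks', TRUE
    exec sp_indexoption 'YourTable', 'AllowPageLocks', TRUE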



    Bye
    Gabor

  • Just a small addition to nyulg's last point:

    -------------------------------------------------

    -if you are modifying the entire table, it can be faster to copy all your data, with the modifications applied, into a new worktable, then drop your original table and rename the worktable (plus recreate all the indexes, relations, triggers...)

    -------------------------------------------------

    If you adopt this approach, your log will continue to grow because the insert statements are logged. However, if you create the worktable with 'SELECT INTO', the log will not grow much, as 'SELECT INTO' is a minimally logged operation under the simple (or bulk-logged) recovery model. It is also significantly quicker.
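
    A minimal sketch of that approach (all object and column names are placeholders):

    -- copy the data, applying the modification on the way out
    select YourKey,
           YourNewValue as YourColumn,    -- the modified column
           OtherColumn
    into   YourWorkTable
    from   YourTable

    -- swap the worktable in for the original
    drop table YourTable
    exec sp_rename 'YourWorkTable', 'YourTable'

    -- recreate the indexes, relations and triggers here
    create index YourIndex on YourTable (YourKey)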

  • Thanks for all the helpful info. The minimally logged operations might be the best way to go. Does anyone have any good articles on other minimally logged operations?
