Help: What to do when transaction log file is full

  •  Hi,

    I am absolutely new to DBA tasks and I really need help from you all.

    Last week, after I used the DTS Wizard to copy a huge table as a backup copy, I used Query Analyzer to truncate the old table and insert new records into it.  It ran for a long time and came back with an error message saying "the log file is full, free up some space...".  I checked the database.  It is set to use the simple recovery model with a static size, and it is backed up with the complete backup (no log) option.  The reason for this is that the database is more of a data mart than a transactional database.

    The settings are as follows:

    Data file: space allocated: 100,000 MB.

    Log file: space allocated: 3,000 MB; automatic growth: checked; file growth: 1,000 MB; maximum size: restricted file growth, 3,100 MB.

    I have talked with a few people.  Some suggested I change the log to grow automatically, or auto-grow by 10%, while others strongly urged me not to set it to grow automatically.  One person I talked to shrinks the log regularly.

    I would appreciate any input on the best practice for managing log files.  Thank you,

    Ping Li

  • If you want to limit the growth of your transaction log when moving large amounts of data, why not use "bcp" (bulk copy) and its batch-size switches (see below; a sample command follows the switch list):

    -b batch_size

    -h "ROWS_PER_BATCH = bb"

    Check out the SQL Server Books Online for more details.
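
    For example (purely a sketch; the server, database, table, and data file names below are placeholders, not from this thread), committing every 10,000 rows as its own batch lets SQL Server clear the log between batches under the simple recovery model:

        bcp MyMart.dbo.BigTable in C:\load\bigtable.dat -c -b 10000 -S MYSERVER -T

    (-c loads character data, -b sets the batch size, -S names the server, -T uses a trusted connection.  Note that -b should not be combined with the "ROWS_PER_BATCH" hint; use one or the other.)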

    Have a good day,

    Norene Malaney

  • BCP and the BULK INSERT method are good ways to keep transaction log usage to a minimum, since they can be minimally logged.

    http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_ba-bz_4fec.asp
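
    As a rough illustration (the table name, file path, delimiters, and batch size are made-up placeholders), a BULK INSERT with a BATCHSIZE commits the load in chunks so the log can be cleared between batches:

        BULK INSERT dbo.BigTable
        FROM 'C:\load\bigtable.dat'
        WITH (
            FIELDTERMINATOR = ',',   -- column delimiter in the data file
            ROWTERMINATOR = '\n',    -- row delimiter
            BATCHSIZE = 10000,       -- commit every 10,000 rows as its own transaction
            TABLOCK                  -- table lock helps the load qualify for minimal logging
        );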


  • back to the log files:

    You should set autogrowth to yes, with a growth increment of 30 or 50 MB, and a maximum size just under the available disk space (see the T-SQL sketch after the reasons below).

    Reasons:

    If the transaction log is full, users can't change any data in the database.

    If the growth increment is too big, SQL Server may stall while the file grows because of the disk activity.  If you use a percentage of the current log size, the growth chunks can get too big over time as the log keeps growing.

    If space runs out and you have to react to a full log, you should at least have a little space left on the disk...
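
    A sketch of how those settings could look in T-SQL (the database and logical log file names are placeholders; check yours with sp_helpfile and size the numbers to your disk):

        USE master;
        GO
        -- Grow the log in fixed 50 MB steps, capped just below the free disk space
        ALTER DATABASE MyDataMart
        MODIFY FILE (
            NAME = MyDataMart_log,   -- logical name of the log file
            FILEGROWTH = 50MB,
            MAXSIZE = 20000MB
        );
        GO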

    Best regards
    karl

  • Couple of additional suggestions:

    (1)  You will probably want to set the database to "bulk logged" recovery for the duration of your data load and then return it to simple.  If you're DTS savvy, you could add a SQL Task before and after your transformation to switch to "bulk logged" and then back to simple (a sketch of the T-SQL is after this list).

    (2)  Use BCP or BULK INSERT, or save the package created by the DTS Wizard and set the batch size on the transformation prior to executing it.

    (3)  You should also test to determine the right batch size for your hardware.  Finding the "right" batch size can make a huge difference in how quickly your data loads, the size of your transaction logs, etc.  Batch size can be set too low, too high, or "just right".  I recently had a table load where finding the "just right" batch size made a difference of about 2 minutes per 1 million rows loaded (a cumulative saving of more than 7 hours).
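
    As a rough sketch of suggestion (1) (the database name is a placeholder), the before and after SQL Tasks could simply run:

        -- Before the load: allow minimally logged bulk operations
        ALTER DATABASE MyDataMart SET RECOVERY BULK_LOGGED;
        GO

        -- ... run the bulk load (bcp / BULK INSERT / DTS transformation) here ...

        -- After the load: return to simple recovery
        ALTER DATABASE MyDataMart SET RECOVERY SIMPLE;
        GO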

    Joe
