transaction log full error

  • Hi,

I am coming across this error each time I try to run the UPDATE statement:

"The transaction log for database 'mydatabase' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases."

The mydatabase_log file is set to autogrow By 10%, Unrestricted growth, and the recovery model has been set to SIMPLE. I don't know what I should do to clean up the log file. Any help?

  • Hi,

This means the transaction log is full and is waiting on something before its space can be reused.

    run this statement:

    Select log_reuse_wait from sys.databases where name='mydatabase'

Post the value of the log_reuse_wait column:

0 = Nothing, 1 = Checkpoint, 2 = Log backup, 3 = Active backup or restore, 4 = Active transaction, 5 = Database mirroring, 6 = Replication, 7 = Database snapshot creation, 8 = Log scan, 9 = Other (transient)
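A handy variant: sys.databases also has a log_reuse_wait_desc column that returns the reason as text, so you don't have to decode the number yourself:

SELECT log_reuse_wait, log_reuse_wait_desc FROM sys.databases WHERE name = 'mydatabase'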

If the column shows that it is waiting for a log backup, then you need to do a log backup. Before that, change the recovery model to Full, take a full backup, and then take a transaction log backup; this will free some space in the log file.
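A minimal sketch of that sequence, with placeholder backup paths (adjust them for your server):

USE master
GO
ALTER DATABASE mydatabase SET RECOVERY FULL
GO
-- a full backup must exist before a log backup is possible (placeholder path)
BACKUP DATABASE mydatabase TO DISK = 'D:\Backups\mydatabase_full.bak'
GO
-- the log backup marks the inactive portion of the log as reusable (placeholder path)
BACKUP LOG mydatabase TO DISK = 'D:\Backups\mydatabase_log.trn'
GO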

And why are you using the Simple recovery model? What are your retention policies for backup and restore?

Also, it is always better to specify the log file growth in MB rather than as a percentage.

  • Hi P_DBA,

I know how to switch to the Full recovery model, but how would I do a log backup? I had a similar issue before and found out online that changing the recovery model to Simple would fix it. Now, when this error showed up today, the database was already in the Simple recovery model. I work part-time at this company writing web applications in .NET, which is also why I don't know SQL Server 2005 in much depth. This time I was working with 4 million rows, and whenever I ran the update statement I got that log full error. Could you tell me in depth what I need to do once I set the recovery model to Full? Also, does the log file being full indicate that the server is running out of space? Please let me know.

    Thanks for your help!

  • run this statement:

    Select log_reuse_wait from sys.databases where name='mydatabase'

Post the value of the log_reuse_wait column:

0 = Nothing, 1 = Checkpoint, 2 = Log backup, 3 = Active backup or restore, 4 = Active transaction, 5 = Database mirroring, 6 = Replication, 7 = Database snapshot creation, 8 = Log scan, 9 = Other (transient)

    Execute this statement and see what value is returned.

You can use:

DBCC SHRINKFILE('logfilename')

to shrink the log file as a once-off, specifying the target file size.

Please refer to this post, which describes your situation:

    http://www.sqlservercentral.com/Forums/Topic530467-146-1.aspx

The following proactive measures can be taken to prevent the log file from growing enormously:

- If you do not need point-in-time recovery, you can set the recovery model to Simple and rely on restoring from the most recent full backup only.

- Set the size of the log file to a large value to avoid automatic expansion of the transaction log.

- Configure the automatic expansion of the transaction log in MB instead of as a percentage.

- You can switch the recovery model to Bulk-Logged if you perform bulk inserts, because those transactions will be minimally logged, and switch back to the Full recovery model once the operation has finished.

- Keep your transactions small.

- Back up the transaction log regularly to remove the inactive transactions from the transaction log.

- You can run BACKUP LOG <your database name> WITH TRUNCATE_ONLY and then shrink the log file, but running this command breaks the log chain, which is why it was removed in SQL Server 2008.

- Instead, you can set the recovery model to Simple and run the command below to shrink the file.

DBCC SHRINKFILE('logfilename')
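For example, a once-off shrink to a target size in MB (assuming the logical log file name is mydatabase_log; the 512 MB target is just an illustrative number):

USE mydatabase
GO
-- shrink the log file down to roughly 512 MB
DBCC SHRINKFILE ('mydatabase_log', 512)
GO

Remember that the shrink can only release space that is not currently in use by active log records.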


  • I ran this and I am coming up with 0.

    Select log_reuse_wait from sys.databases where name='mydatabase'

This is what I tried in order to truncate the log file:

    USE mydatabase

    GO

    DBCC Shrinkfile ('mydatabase_log', 1)

    BACKUP LOG mydatabase WITH Truncate_Only

    DBCC Shrinkfile ('mydatabase_log', 1)

    When I run this, I get this error.

    Could not locate file 'mydatabase_log' for database 'mydatabase' in sys.database_files. The file either does not exist, or was dropped.

I right-clicked on mydatabase, clicked on Files, and there is no log file listed for this database. I also changed the recovery model to FULL and ran the statements again, but I still get the same error.

Our C: drive on this server is running out of space. When I try to clean up files, for example deleting Test.mdf and Test.ldf, I get an error message saying they are in use and cannot be deleted.

    Any help please?

Why are you shrinking the file twice? Just back up your log first and then shrink the file, like this:

    USE mydatabase

    GO

BACKUP LOG mydatabase TO DISK = '.....path'

    GO

then try to shrink the log file:

    DBCC Shrinkfile ('mydatabase_log', 1)
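One note on the "Could not locate file" error above: the logical file name is often not '<database>_log', so it is worth looking it up first. A minimal sketch, run in the context of the database in question:

USE mydatabase
GO
-- list the database's files; the row with type_desc = 'LOG' is the transaction log
SELECT name AS logical_name, physical_name, type_desc, size FROM sys.database_files
GO
-- then shrink using the logical name returned above, e.g.
-- DBCC SHRINKFILE ('<logical log name from the query>', 1)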

Since we were running out of space on the C: drive, we got rid of some .mdf and .ldf files using Unlocker. So I went into SQL Server 2005 and deleted mydatabase. I created a new database called mydb. When I right-click on it to import the data from a different server using the Import Wizard, it starts pulling the data, but just as it is about to finish, it pops up an error. There are 4 million rows to be imported, so it gets close to the end and then the error appears. Did we screw anything up here?

What is the error you are getting? Post it here.

  • When I right click on the mydb database:

    Files->

    Logical Name : mydb

    File Type: Data

    FileGroup: PRIMARY

    Initial Size(MB): 100

    Autogrowth: By 1 MB, unrestricted growth

    Path: to the C drive

    FileName: mydb.mdf

When I go to change the size to, say, 10984 MB, I get an error message saying there is not enough space on the disk.

This time, I don't know how, but I was able to get the data imported into the new database mydb. Right now I am in the process of running the stored procedures to update the database. It's still executing, but I will be in touch with you once I get any error.

  • Hi P_DBA,

I was able to run everything fine this time. The import was successful, and all the update statements after that went fine. I don't know if deleting the files made that happen, but I didn't get the transaction log full error message. A special thanks to you for helping me out here. Thanks!

  • Hi,

It's my pleasure.

But you need to take some proactive measures so the transaction log does not fill up again.

The common reasons the log fills up are:

    Due to Uncommitted transactions

Run DBCC OPENTRAN to check for any open transactions.
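For example, against the database from this thread:

DBCC OPENTRAN ('mydatabase')

It reports the oldest active transaction in that database, if there is one, along with its SPID and start time.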

    Due to Index Operations

Running DBCC DBREINDEX, CREATE INDEX, BULK INSERT, or SELECT INTO with the database in the Full recovery model will cause the log file to grow enormously, since in the Full recovery model all transactions are fully logged.

If you are doing any bulk imports or large transactions, it is advisable to switch the recovery model to Bulk-Logged for the duration of the operation.

    Due to Replication

If the log reader agent, which is responsible for moving transactions from the publisher to the distributor and marking them as replicated afterwards, stops functioning for some reason, then transactions begin to accumulate in the publisher database and will not be freed until the log reader agent copies them to the distributor.

We can determine why the log space is not being reused and why the log is not getting truncated using the log_reuse_wait and log_reuse_wait_desc columns of the sys.databases view.

If log_reuse_wait is 0, nothing is keeping the transaction log from reusing the existing space.

If replication is the cause of the log file growth, then in an emergency the backlog can be cleared by executing:

EXEC sp_repldone @xactid = NULL, @xactsegno = NULL, @numtrans = 0, @time = 0, @reset = 1

The following proactive measures can be taken to prevent the log file from growing enormously:

- If you do not need point-in-time recovery, you can set the recovery model to Simple and rely on restoring from the most recent full backup only.

- Set the size of the log file to a large value to avoid automatic expansion of the transaction log.

- Configure the automatic expansion of the transaction log in MB instead of as a percentage.

- You can switch the recovery model to Bulk-Logged if you perform bulk inserts, because those transactions will be minimally logged, and switch back to the Full recovery model once the operation has finished.

- Keep your transactions small.

- Back up the transaction log regularly to remove the inactive transactions from the transaction log.

- You can run BACKUP LOG <your database name> WITH TRUNCATE_ONLY and then shrink the log file, but running this command breaks the log chain, which is why it was removed in SQL Server 2008.

- Instead, you can set the recovery model to Simple and run the command below to shrink the file.

DBCC SHRINKFILE('logfilename')

  • After getting these errors, I will definitely dig into your solutions and cautions so the errors don't happen in the future.

Here is my scenario: I have a database with only one table. That table is deleted monthly and re-imported with 4.5 million rows. Right now the recovery model is set to FULL. After I re-import the data, I have to run some update statements to update it; some updates can take 2-3 hours. Once I am done updating, I leave it alone until I get the data next month, then go in again, delete the data, and re-import it. Is it true that one database can have a maximum size of 8 GB in SQL Server 2005? When I delete the old data, is the log file not cleared? Or before I re-import the data, do I need to shrink the log file every time by running the command below?

DBCC SHRINKFILE ('filename')

    Thanks!

  • Hi,

No, the transaction log does not get cleared when you delete the data. Basically, it logs the transactions you performed, so you need to truncate the transaction log yourself. If the log file has grown too large, then you need to back up the transaction log frequently, which removes the inactive transactions from the log, or you can use the truncate-only option.

Also, if you are importing 4.5 million rows of data, it is advisable to change the recovery model to Bulk-Logged and back to Full when the loading is done.
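A rough sketch of that monthly pattern (database name from this thread; the backup path is just a placeholder):

USE master
GO
-- minimally log the bulk import
ALTER DATABASE mydb SET RECOVERY BULK_LOGGED
GO
-- ... run the monthly delete / re-import / update statements here ...
GO
-- switch back once the load is finished
ALTER DATABASE mydb SET RECOVERY FULL
GO
-- take a log backup so the log chain stays intact and the log space can be reused (placeholder path)
BACKUP LOG mydb TO DISK = 'D:\Backups\mydb_log.trn'
GO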
