September 24, 2009 at 11:14 pm
Hello All,
To give a brief description of the problem: I configured a job on SQL Server 2005 to run periodically and delete old records from a huge table that has more than 40 million rows. The job always fails with the following error:
*****************************************************
The transaction log for database 'TPDB' is full. To find out why space
in the log cannot be reused, see the log_reuse_wait_desc column in
sys.databases [SQLSTATE 42000]
*****************************************************
apparently due to the long-running delete operation.
The SQL statement causing the problem is reproduced below:
DELETE MD
FROM GSMMessageDetails MD
    INNER JOIN GSMMessageHdr MH ON MD.MessageId = MH.MessageId
    INNER JOIN TraceFile TF ON MH.TraceFileId = TF.TraceFileId
WHERE DATEDIFF(day, TF.StartTime, GETDATE()) > @Time
Here are the options that I tried, all in vain:
a. Configured the transaction log for unrestricted growth.
b. Increased the size of tempdb.
c. Removed the explicit transaction around the delete operation.
d. Set the recovery model of the database to SIMPLE instead of FULL.
I have not found a solution yet. Is there a way to get around this problem? Any suggestions would be appreciated.
Thanks,
Phani
September 25, 2009 at 1:45 am
Setting the recovery model to SIMPLE isn't going to help here. Regardless of the recovery model, a single statement has to be atomic (it works completely or not at all), so SQL Server is going to need space to log all of the deleted rows in case it has to roll back the statement.
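As an aside, the error message already points at where to look; a quick query against sys.databases (using the database name from your error) shows what is currently preventing log space from being reused:

-- Why can't the log space be reused right now?
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'TPDB'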
A couple of options.
Split the delete down into smaller chunks. Lynn Pettis wrote an article a few days ago on this subject, which may help: http://www.sqlservercentral.com/articles/T-SQL/67898/
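Something along these lines, using the tables from your post (the batch size and the 90-day cutoff are placeholders for you to adjust):

-- Batched delete: each pass removes at most @BatchSize rows, so the log
-- only has to hold one batch at a time. In SIMPLE recovery the space is
-- reused after each checkpoint; in FULL, take log backups between batches.
DECLARE @Time int, @BatchSize int
SET @Time = 90          -- assumed retention period in days
SET @BatchSize = 10000  -- adjust to what your log can comfortably hold

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize) MD
    FROM GSMMessageDetails MD
        INNER JOIN GSMMessageHdr MH ON MD.MessageId = MH.MessageId
        INNER JOIN TraceFile TF ON MH.TraceFileId = TF.TraceFileId
    WHERE DATEDIFF(day, TF.StartTime, GETDATE()) > @Time

    IF @@ROWCOUNT = 0 BREAK   -- nothing older than the cutoff is left
END

If TraceFile.StartTime is indexed, rewriting the filter as TF.StartTime < DATEADD(day, -@Time, GETDATE()) lets SQL Server seek on that index instead of scanning the whole table on every pass.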
It is sometimes quicker to write all the rows you want to keep to another table, then delete the original and rename the new one. This obviously isn't an option if you don't have exclusive access to the table.
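Roughly like this; GSMMessageDetails_keep is just a name I made up, and you would need to script out and recreate the indexes, constraints and permissions afterwards, since SELECT ... INTO copies data only:

DECLARE @Time int
SET @Time = 90   -- assumed retention period in days

-- Copy only the rows to keep, drop the original, then rename the copy.
-- Note: the inner joins mean orphaned detail rows (no matching header
-- or trace file) are not kept.
SELECT MD.*
INTO GSMMessageDetails_keep
FROM GSMMessageDetails MD
    INNER JOIN GSMMessageHdr MH ON MD.MessageId = MH.MessageId
    INNER JOIN TraceFile TF ON MH.TraceFileId = TF.TraceFileId
WHERE DATEDIFF(day, TF.StartTime, GETDATE()) <= @Time

DROP TABLE GSMMessageDetails
EXEC sp_rename 'GSMMessageDetails_keep', 'GSMMessageDetails'

SELECT ... INTO is minimally logged under the SIMPLE and BULK_LOGGED recovery models, which is why this can be much lighter on the log than deleting in place.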