March 8, 2010 at 12:43 pm
Hi,
I am using an ALTER statement to change a column's width, and I got the exception
"The transaction log for database 'XXX' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases". The table has 8 million rows.
I changed the recovery model to simple and set the transaction log's max size to unlimited, and then it worked. However, my transaction log grew to almost 3 GB.
Can anybody please tell me how to avoid transaction log growth when using an ALTER statement, and why the transaction log grows so much with ALTER statements?
Thanks,
EK
March 8, 2010 at 1:27 pm
Run DUMP TRANSACTION [db name] WITH NO_LOG and shrink it. The log will still grow as large as the biggest transaction that uses it, so be careful of transactions that perform large data deletes or modifications before the commit.
March 8, 2010 at 1:48 pm
Ack!
DUMP TRANSACTION is deprecated in SQL 2005 and removed in SQL 2008. BACKUP LOG WITH NO_LOG is also deprecated in 2005 and gone in SQL 2008. Besides, the recovery model is simple, so the log gets auto-truncated on checkpoint anyway.
Shrinking the log is only something to consider after an unusual operation (like this may have been) that will not be repeated. It's something that should not be done regularly, and the log should only be shrunk down to the size that it needs to be for regular operations.
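A one-off shrink after an unusual operation like this could look as follows. This is only a sketch: 'XXX' and the logical log file name 'XXX_log' are assumptions (check sys.database_files for the real name), and the 512 MB target is illustrative, not a recommendation.

```sql
USE [XXX];
GO
-- Find the logical name and current size of the log file first
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;

-- Shrink the log back to a size that covers regular operations
DBCC SHRINKFILE ('XXX_log', 512);
```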
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 8, 2010 at 1:50 pm
The ALTER statement is fully logged, so the before and after values for each of those rows will be logged so that the alter can be undone or redone if necessary.
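As the error message itself suggests, you can check why log space could not be reused ('XXX' stands in for the real database name):

```sql
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'XXX';
```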
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 9, 2010 at 12:00 am
If the ALTER statement results in a huge number of physical changes (not always the case) which must be logged (always), it can sometimes be faster to bulk export the data (in the new format, using CONVERT), drop the table, import the data into the new structure, and then re-create indexes and stuff. It depends, though. Note that things like FOREIGN KEY constraints can complicate matters.
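A sketch of that rebuild approach, done inside the database with SELECT ... INTO rather than a bulk export and re-import; all table and column names here are illustrative, not from the original post:

```sql
-- The new table gets the wider column via CONVERT; under the SIMPLE or
-- BULK_LOGGED recovery model, SELECT INTO is minimally logged.
SELECT id,
       CONVERT(varchar(200), description) AS description
INTO   dbo.MyTable_new
FROM   dbo.MyTable;

DROP TABLE dbo.MyTable;
EXEC sp_rename 'dbo.MyTable_new', 'MyTable';

-- Re-create indexes, constraints, and any FOREIGN KEYs afterwards.
```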
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi