May 5, 2015 at 9:30 am
Hi guys,
I'm curious about how SQL Server handles a full transaction log. I'm populating a temp table with a lot of rows (roughly 70M), and when I run dbcc sqlperf(logspace)
it reports the tempdb log as 99% full. Obviously that's not ideal, but my question is: what happens then? It doesn't throw an error, so is it still working? Does it have to do some sort of disk/memory swapping of the transaction log? Is occasionally filling up the transaction log OK, or does it have dire consequences for other operations as well?
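For reference, this is roughly the shape of what I'm running, followed by the space check (the temp table and source names here are just placeholders, not my actual schema):

-- Illustrative only: the real table and source are different
CREATE TABLE #work (id int NOT NULL, payload varchar(100) NULL);

INSERT INTO #work (id, payload)
SELECT id, payload
FROM dbo.SourceTable;      -- roughly 70M rows in my case

-- Check transaction log usage per database
DBCC SQLPERF(LOGSPACE);    -- tempdb shows ~99% used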
May 5, 2015 at 9:33 am
SQL will try to grow the log file. If it can't, any data modification that runs will get error 9002 and fail.
You've probably got a tiny autogrow set, so the log is growing, but maybe only 1MB at a time, which means SQL is growing the log again and again in tiny fragments.
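If you want to check, something along these lines will show the current growth setting and let you pre-size the log with a sensible fixed increment (this assumes tempdb's log file still has the default logical name templog, and the sizes are just examples; pick values that suit your workload):

-- Current size and autogrowth setting for the tempdb log file
SELECT name,
       size * 8 / 1024 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' %'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS autogrow
FROM sys.master_files
WHERE database_id = DB_ID('tempdb')
  AND type_desc = 'LOG';

-- Pre-size the log and use a fixed growth increment rather than a tiny one
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 8192MB, FILEGROWTH = 512MB);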
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 5, 2015 at 9:53 am
When all inserts, updates and deletes return an error and the data being transmitted gets lost, most people consider that pretty dire.
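If anyone wants to see what that looks like from the calling code's side, here's a rough sketch (the table is made up; 9002 is the log-full error Gail mentioned):

BEGIN TRY
    INSERT INTO dbo.Orders (OrderId, Amount)   -- illustrative table
    VALUES (1, 10.00);
END TRY
BEGIN CATCH
    -- 9002 = the transaction log for the database is full
    SELECT ERROR_NUMBER() AS error_number,
           ERROR_MESSAGE() AS error_message;
    -- The statement is rolled back; the data isn't saved unless the
    -- application retries once the log has free space again.
END CATCH;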
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning