February 16, 2012 at 5:18 am
First the scenario: I manage a third-party SQL Server application with a monthly processing cycle. The DB is fairly static except during the main monthly update, so I have set it to the simple recovery model. It is however fairly sizeable at 125 GB and gets bigger every month as more historical data accumulates. The log file is on its own drive letter with a maximum space of 99.9 GB.
Twice now the log file has expanded to fill all but 9.9 MB of the available space! I cut the log back to about 50 GB after the first occurrence to get a rest from the disk-nearly-full messages, but was fully aware that this was only a temporary fix of the symptoms.
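For what it's worth, this is roughly what I've been running to see how full the log really is and why SQL Server won't reuse it (the database name is a stand-in for ours):

```sql
-- Log size and percentage of each database's log currently in use
DBCC SQLPERF(LOGSPACE);

-- What, if anything, is preventing log reuse right now
-- (e.g. NOTHING, CHECKPOINT, ACTIVE_TRANSACTION)
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyMonthlyDB';  -- stand-in name
```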
Having read Gail Shaw’s article, am I correct in deducing that if the log file disk had been more than about 0.1% smaller, the whole thing would have fallen over rather horribly?
I strongly suspect that I really ought to be pushing our IT people for more log file space! Their initial response has been that they would be quite happy to truncate the log each time they did the backups to “stop the size becoming a problem”, but according to the article this would appear to be misguided.
Looking forward to your comments.
Mark Dalley
February 16, 2012 at 6:11 am
If the amount of log space needed for the processing is 100gb, then I'd give it that much space and leave it alone. If you only have 100gb... yeah, I'd look at getting a bigger drive.
Rather than attempt to "manage" the size of the file, look to the transactions that are being done and see if they can be done in a more efficient manner. A 100gb log on a 150gb database is a flag that something isn't quite right. I would expect a much smaller log file under most circumstances.
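One place to start looking: while the monthly run is in flight, the transaction DMVs will show which sessions are generating the most log. A rough sketch (database name is a placeholder):

```sql
-- Sessions with open transactions in the target DB,
-- ordered by how much log each transaction has generated
SELECT s.session_id,
       t.database_transaction_log_bytes_used,
       t.database_transaction_log_bytes_reserved
FROM sys.dm_tran_database_transactions AS t
JOIN sys.dm_tran_session_transactions AS s
     ON s.transaction_id = t.transaction_id
WHERE t.database_id = DB_ID('MyMonthlyDB')   -- placeholder
ORDER BY t.database_transaction_log_bytes_used DESC;
```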
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
February 16, 2012 at 6:13 am
Hi Mark ,
What is your database archiving strategy? I am asking this because I faced similar problems and our infrastructure guys are still taking their sweet time allocating additional storage for the logs. By archiving your non-operational data (data older than 3 months) you should reduce your DB size, which should reduce your log file.
Regards
Jody
February 16, 2012 at 6:17 am
Yes, you should get more space. That or look at why the month end imports are using that much log space and see if there's anything that can be done (break updates/deletes into batches and run checkpoints between them, break into smaller transactions, etc)
Truncating the log because it's getting too large is like complaining that the 100 litre rain-barrel is getting filled too fast and replacing it with a 50 litre one.
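A rough sketch of the batching idea (table name, batch size and filter are made up for illustration; assumes the simple recovery model, where a checkpoint lets the log space from each committed batch be reused):

```sql
-- Delete in batches instead of one huge transaction,
-- so the log only ever needs to hold one batch's worth of changes.
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (50000) FROM dbo.MonthlyHistory          -- hypothetical table
    WHERE ProcessDate < DATEADD(month, -36, GETDATE()); -- hypothetical filter
    SET @rows = @@ROWCOUNT;
    CHECKPOINT;  -- simple recovery: marks the committed log records reusable
END;
```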
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 16, 2012 at 6:17 am
Jody.WP (2/16/2012)
Hi Mark, What is your database archiving strategy? I am asking this because I faced similar problems and our infrastructure guys are still taking their sweet time allocating additional storage for the logs. By archiving your non-operational data (data older than 3 months) you should reduce your DB size, which should reduce your log file.
Regards
Jody
Just trying to understand: how would reducing data size reduce log size? The log is about the number, volume, and size of transactions. There isn't a direct correlation between data size & log size. I managed a system that was about 800gb of data with a log about 10gb in size. Reducing the data would never have reduced the log because it was about the transactions, not the data.
And please, not throwing rocks, just trying to understand what you mean.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
February 16, 2012 at 6:18 am
Jody.WP (2/16/2012)
By archiving your non-operational data (data older than 3 months) you should reduce your DB size, which should reduce your log file.
That would only be true if the monthly update changed the non-operational data. Data that sits in the database untouched and unchanged isn't going to cause more log usage from inserts, updates or deletes.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 16, 2012 at 6:32 am
Hi Folks ,
Gail, you're spot on. I was talking about the case where all the data is being affected.
February 16, 2012 at 9:22 am
Rather than attempt to "manage" the size of the file, look to the transactions that are being done and see if they can be done in a more efficient manner. A 100gb log on a 150gb database is a flag that something isn't quite right. I would expect a much smaller log file under most circumstances.
The application is a 3rd party affair but their development people are accessible and amenable to a certain extent.
Being an accidental DBA of the more elementary kind, what is a good place to start in finding out which queries or sp's are producing these massively bulgy transactions? I can make a list of implicated sp's and I already know which ones are taking the most time to execute, but are there other warning signs I should look for? Hmm, probably updates involving messy looking joins and/or wide selections and/or large numbers of records...anything else?
Many thanks
Mark Dalley