Log File is full

  • Yup, the backup will clear the transactions (the used part of the log) from the log, but it won't decrease the size of the ldf file. Have a read in BOL on how backups and transaction log backups work.

    To decrease the size of the device you need to shrink the ldf file (which I personally don't recommend if it's only going to grow again).
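
    If you do decide to shrink it anyway, it's roughly this - MyDB, the logical log file name and the backup path are only placeholders, so check sysfiles for your real names first:

        -- Find the logical name of the log file (the 'name' column)
        USE MyDB
        SELECT name FROM sysfiles

        -- Clear the active portion with a log backup, then shrink the ldf to a target size in MB
        BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log.trn'
        DBCC SHRINKFILE (MyDB_Log, 500)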

    So what was the problem again, now we've established you've not got a DBA and you're making backups correctly?

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • I've looked in my application and have put in all of the commits where I think they need to go. I'm still getting the 'log file is full' message. I'm going to try a trace. Anything specific to look for here?

  • Commits won't do anything - if you were running with uncommitted transactions then your app wouldn't work anyway. So what is actually running that causes the growth? Are you doing data loads, re-indexing? As an extreme move you could always put the database into simple recovery mode and see if you get the same problem.
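
    For what it's worth, the switch is a one-liner on SQL 2000 - MyDB is a placeholder, and remember that simple recovery breaks the log backup chain, so treat it purely as a test:

        -- Check the current recovery model
        SELECT DATABASEPROPERTYEX('MyDB', 'Recovery')

        -- Switch to simple recovery as a test (log backups stop being possible)
        ALTER DATABASE MyDB SET RECOVERY SIMPLE

        -- Switch back afterwards and take a full backup to restart the log backup chain
        ALTER DATABASE MyDB SET RECOVERY FULL
        BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB_full.bak'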

    I do remember having a similar problem - I used Lumigent's Log Explorer to solve the problem, took about 30 mins. A developer had miscoded an update so that the entire table was updated every time there was an update; as the table grew during the day, and was pretty large anyway, the logs hit several GB within a few hours.

    If your app hasn't always done that then I'd check for changes in code.

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • Here are my latest findings:

    I 'think' I was able to have the trace on when the log file filled up again. However, I'm not really sure what to look for or what is causing this problem. Can someone please help?

    I'm running Microsoft SQL Server  2000 - 8.00.818 (Intel X86)   May 31 2003 16:08:15   Copyright (c) 1988-2003 Microsoft Corporation  Enterprise Edition on Windows NT 5.0 (Build 2195: Service Pack 4)

    We have lots of apps going against the DB (front-end, batch processes, etc). However, the acting 'DBA' says we have many more DBs that are much larger, with the same size log file and just as many transactions, and the problem does not happen with them.

    I don't know if this helps, but it always seems to be the same person that encounters the log file full error. When I look at the trace file I can see the 'rollback' next to her ID - I'm assuming that's when she got the error. So, I figured I would look at the last insert or update that happened before the error. But I don't know if it was the transaction that caused the error or just the one that 'put it over the top' as far as filling up. This is a big problem for our users as it happens about twice a day.

  • Sorry, I'm on course at the moment so replies are going to be occasional.

    What did you trace? What I think would help, since you're getting this often, is to have a job set up to run every 5 min or so, and log the results of DBCC SQLPERF(LOGSPACE) and DBCC OPENTRAN to tables. With that, you'll be able to see if the tran log growth is smooth, or if there's a large jump somewhere.

    You can then check back over those tables to see how fast the tran log grows and spot any long-running transactions.
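
    A rough sketch of what I mean - the table and column names are just my own suggestions, and the INSERT would go in a job step scheduled every 5 minutes:

        -- Table to hold the snapshots
        CREATE TABLE LogSpaceHistory (
            CaptureTime   datetime      DEFAULT GETDATE(),
            DatabaseName  sysname,
            LogSizeMB     decimal(15,2),
            LogSpaceUsed  decimal(5,2),
            Status        int
        )

        -- Job step: snapshot log usage for every database on the server
        INSERT INTO LogSpaceHistory (DatabaseName, LogSizeMB, LogSpaceUsed, Status)
            EXEC ('DBCC SQLPERF(LOGSPACE)')

        -- And check for long-running open transactions in the problem database
        DBCC OPENTRAN ('MyDB')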

    The transaction that threw the error would just be the one that encountered the log full, not the cause of the growth. If you do still suspect it's something she's doing, run SQL Profiler and filter by her username. Catch the SQL:BatchCompleted event (under T-SQL) and RPC:Completed (under Stored Procedures).

    If you're not afraid of lots of data to wade through, run profiler against the server without filters (and above events) and you'll be able to see exactly what happened.

    One thing you can do that may solve the symptoms is to ask the 'DBA' to make the log backups more frequent.

    I agree with Colin, there's something odd running to fill 1 GB of log in an hour or so.

    With the batch processing, is it BCP? DTS? How many rows?

    You said it's twice a day. Same times every day?

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • First of all, I want to sincerely thank everyone for their input. This is truly perplexing to me and causing great grief and I appreciate the help.

    > What did you trace?

    I just set up a trace file within SQL Profiler to look at all of the transactions. Here's something that seemed interesting to me - I don't know if it means anything though. I started the trace file at 2007-01-09 13:49:19.497 and I ended it at 2007-01-09 14:01:59.680. When I look at the results of the file in the SQL Profiler window, at the bottom it says "Rows 40626". Seems like a lot to me in just a short period of time.

    > What I think would help, since you're getting this often, is to have a job set up to run every 5 min or so, and log the results of DBCC SQLPERF and DBCC OPENTRAN to tables. With that, you'll be able to see if the tran log growth is smooth, or if there's a large jump somewhere.

    OK, I'll have to figure out how to do this.

    > If you do still suspect it's something she's doing, run SQL Profiler and filter by her username. Catch the SQL:BatchCompleted event (under T-SQL) and RPC:Completed (under Stored Procedures).

    OK, I'll have to figure out how to do this too. Maybe I'll get a DBA job after all of this - LOL

    > If you're not afraid of lots of data to wade through, run profiler against the server without filters (and above events) and you'll be able to see exactly what happened.

    Didn't the trace do just this?

    > One thing you can do that may solve the symptoms is to ask the 'DBA' to make the log backups more frequent.

    What's a recommended time frame? They said the last time we got the error that it filled up in 20 minutes. Is it 'bad' to have it run every 15 minutes?

    > With the batch processing, is it BCP? DTS? How many rows?

    I'm not sure what you mean here. I meant we have some import routines that we populate the DB from that are from files that are FTP'd to us.

    > You said it's twice a day. Same times every day?

    Mostly in the mornings and after lunch.

    Would anyone here recommend contacting Microsoft and going through their paid support? If I sent them my trace file, could they tell what the problem is?

    > Sorry, I'm on course at the moment so replies are going to be occasional.

    BTW - what does 'on course' mean?

  • When you set up the trace, was it with the default options? If so, 40,000 rows isn't that much in 10 min. On some of my servers I can generate a couple hundred thousand in half an hour with just some of the default events.

    Also, unless you filtered, that would have been commands for all databases on the server.

    Some of my log backups run every 10 minutes, but if there's something wrong, this won't help.

    > I meant we have some import routines that we populate the DB from that are from files that are FTP'd to us.

    How do you import them?

    > Mostly in the mornings and after lunch

    Are those the most active times of the day? If so, it may be that you're doing enough work to fill the log, though it does seem strange.

    I wouldn't suggest calling MS at the moment, as it'll probably cost a lot. They'll probably also want a lot more than just a trace file.

    If you're willing to send the trace to a stranger, I'll pm you my mail address and I can take a look through it for you. (pm = private message)

    If you are, set up a profiler trace with the following events: (look in Books Online for details on how to do this)

    RPC:Completed (under Stored Procedures)

    SQL:Batch Completed (under T-SQL)

    SQLTransaction (Under transactions)

    and filter for the database that you're worried about (filter on database name and database ID).

    Run for about 15 min during the time the problem usually occurs.
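
    If it's easier, the same thing can be scripted as a server-side trace instead of running the Profiler GUI - this is only a sketch from memory, so treat the file path and database name as placeholders and double-check the event/column IDs against the sp_trace_setevent topic in BOL:

        DECLARE @traceid int, @maxsize bigint, @on bit, @dbid int
        SET @maxsize = 100                 -- MB per rollover file
        SET @on = 1
        SELECT @dbid = DB_ID(N'MyDB')      -- the database you're worried about

        -- Option 2 = roll over to a new file when @maxsize is reached
        EXEC sp_trace_create @traceid OUTPUT, 2, N'D:\Traces\logfull', @maxsize

        -- Events: 10 = RPC:Completed, 12 = SQL:BatchCompleted, 50 = SQLTransaction
        -- Columns: 1 = TextData, 3 = DatabaseID, 11 = LoginName, 13 = Duration, 14 = StartTime
        EXEC sp_trace_setevent @traceid, 10, 1, @on
        EXEC sp_trace_setevent @traceid, 10, 3, @on
        EXEC sp_trace_setevent @traceid, 10, 11, @on
        EXEC sp_trace_setevent @traceid, 10, 13, @on
        EXEC sp_trace_setevent @traceid, 10, 14, @on
        EXEC sp_trace_setevent @traceid, 12, 1, @on
        EXEC sp_trace_setevent @traceid, 12, 3, @on
        EXEC sp_trace_setevent @traceid, 12, 11, @on
        EXEC sp_trace_setevent @traceid, 12, 13, @on
        EXEC sp_trace_setevent @traceid, 12, 14, @on
        EXEC sp_trace_setevent @traceid, 50, 1, @on
        EXEC sp_trace_setevent @traceid, 50, 3, @on
        EXEC sp_trace_setevent @traceid, 50, 11, @on
        EXEC sp_trace_setevent @traceid, 50, 14, @on

        -- Filter to just that database (logical operator 0 = AND, comparison 0 = equal)
        EXEC sp_trace_setfilter @traceid, 3, 0, 0, @dbid

        -- Start the trace; set status 0 to stop it and 2 to close it when you're done
        EXEC sp_trace_setstatus @traceid, 1

        -- Afterwards, load the file into a table to query it
        -- SELECT * INTO TraceResults FROM ::fn_trace_gettable(N'D:\Traces\logfull.trc', default)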

    > BTW - what does 'on course' mean?

    ??? I'm out of the office attending a training course this week. Hence little access to the computer.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • OK, here are my findings. I simply believe that the space allocated for the log is not big enough. I don't think it is anything that the users are doing to cause this problem. For example, at 15 minutes and 45 minutes past the hour, we have a process that imports a bunch of order files. I can look at the output of DBCC SQLPERF(LOGSPACE) and see that at 14 minutes past the hour the log file is maybe 10% used and at 15 past it jumps to 30% used. We also import orders again at 45 past and it will jump way up again. When I look at the logs, I can see the users' inserts but they are just a few at a time. So, our 'DBA' gave us a couple hundred more MB. FYI, it looks like the log backup is run at 53 minutes past the hour.

  • Rog,

    Seems like you're on the right track here with the advice from Colin and Gila.

    First, I'd tell the acting DBAs that database size has nothing to do with log size. A 2TB data warehouse could get by with a 10MB log if there weren't many updates. On the other hand, a 20MB active web database that increments a counter every time a page is hit could easily require a 2GB log. They're just not related. Database size is related to data size; log size is related to update activity in the database.

    If 1GB is filling up, can you increase the log size to 2GB and see if that helps? You want to determine the high water mark between backups and set the log there.

    You can also do more frequent log backups as an alternative. If there is not one large transaction filling things up, that can help reduce the high water mark.
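
    For reference, both of those are one-liners - MyDB and MyDB_Log stand in for your database name and the log file's logical name from sysfiles:

        -- Grow the log out to the expected high-water mark, e.g. 2GB
        ALTER DATABASE MyDB
        MODIFY FILE (NAME = MyDB_Log, SIZE = 2000MB)

        -- The statement a more frequent log backup job would run, e.g. every 30 minutes
        BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log.trn'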

  • Is there a specific formula that you guys use? The 'DBA's' have told me things like XYZ DB is much bigger than your DB and its log file is much smaller. And I told them that doesn't matter because log size is related to transactions.

    Anyway, how do I determine the proper size to make the log?

    How do I determine the proper times to do backups?

    One of the problems I am dealing with here is that "All of the other db's don't have all of these special circumstances (when logs are backed up etc), and they run just fine - so, it MUST be something the users are doing."

    However, here's one more thing puzzling me. I don't understand how all of the 'processes' could just fill the log file up to, say, 99% and then, as soon as my user tries to update, she gets the 'log file full' message. It seems highly likely that at least sometimes one of the 'processes' would hit the log file full error and fail. But I would never know about this - only the DBAs would and, of course, they've said they've never had this problem.

  • http://www.sql-server-performance.com/ac_filegroup_performance.asp

    Set a reasonable size for the transaction log.

    The general rule of thumb for setting the transaction log size is to set it to 20-25 percent of the database size. The smaller the size of your database, the greater the relative size of the transaction log should be, and vice versa. For example, if the estimated database size is 10MB, you can set the size of the transaction log to 4-5MB, but if the estimated database size is over 500MB, then 50MB may be enough for the transaction log.

    MohammedU
    Microsoft SQL Server MVP

    > The 'DBA's' have told me things like XYZ DB is much bigger than your DB and the log file is much smaller

    > so, it MUST be something the users are doing

    > only the DBA's and of course, they've said they've never had this problem

    No offence intended, but it sounds like your DBAs are a little on the clueless side.

    The size of the database is mostly irrelevant when determining the tran log size. It's the activity in the DB that's the issue. Mohammed's rule of thumb is just that, a rule of thumb.

    Since you're doing frequent batch imports, the log will get used quite heavily.

    See if you can get access to the SQL error log. That will tell you exactly how often the transaction log full error occurs, whether it's a user or a background process. It could be that the DBAs aren't noticing it, and maybe that the other users (if any) getting it don't report the error.

    From your discoveries, it does indeed sound like the tran log just isn't big enough. Instead of growing it by a couple hundred megs, see if you can get the DBAs to change the tran log backup schedule to every 30 min rather than every hour.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • "From your discoveries, it does indeed sound like the tran log just isn't big enough. Instead of growing it by a couple hundred megs, see if you can get the DBAs to change the tran log backup schedule to every 30 min rather than every hour."

    Remember what you said earlier about our 'DBA's'? I already asked that and they said, "All of our other DB's run just fine and now you're asking for something special just for your DB". I think you understand what I'm dealing with here.

    "See if you can get access to the SQL error log." Can you tell me how to do this? Will it have a history?

    Thanks again

  • This is from memory, because I don't have a SQL 2000 server here.

    In Enterprise Manager, expand out the server, select Management (I think), expand that out, and you'll see an entry for the error log, SQL log, or something similar. The log is text and will contain entries since the last restart of SQL Server. There should be at least 8 history logs as well. There will probably be a lot of info in them - database backups, log backups, errors, etc. for all the databases on the server. Could be a fun read.
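
    If it's quicker, the same thing can be pulled from Query Analyzer with the (undocumented, but long-standing) xp_readerrorlog procedure:

        -- Current error log
        EXEC master..xp_readerrorlog

        -- Archived logs: 1 = most recent archive, 2 = the one before that, and so on
        EXEC master..xp_readerrorlog 1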

    > Remember what you said earlier about our 'DBA's'? I already asked that and they said, "All of our other DB's run just fine and now you're asking for something special just for your DB". I think you understand what I'm dealing with here.

    Go to their manager (or your manager) and say that you've identified the cause of the problem, but the DBAs refuse to implement it. Might work, depending on what the management there is like.

    Their comment about 'all the other DBs' really tells me they don't know enough. All databases are different. You can't look at 5 databases and say exactly how a 6th is going to behave.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • I don't think much of your DBAs - are they actually SQL Server DBAs or just assuming the job title? A DBA should be helping you solve the problem; that's what we do!

    There's absolutely no formula that relates log size to anything, because it all comes down to the work that's done by the database.

    You could have a look and see what the cumulative size of your tran log backups is for the day, and the individual sizes - I'll sort of explain. I have a database which is OLTP but has some reporting functionality which requires a number of data loads within a window of a few hours every day. I also have an index rebuild job once a week. Most tlog backups (hourly) are under 100MB, but I can see that my biggest goes over 2GB for a data load on a Sunday - for the other days of the week the data loads generate tlog files no greater than around 600MB.

    So I know the minimum size of the log device needs to be at least 2GB to avoid growths, and I also know that generally I don't exceed more than 2GB of log backups for the day. I have to allow some extra capacity, etc. Now, at other places I've worked the logs have hit 500MB every 5 mins, so we had backups that often - it's all a matter of scale. The database that produced those logs was a quarter of the size of another that only produced around 250MB a day, but each was equally busy (interconnected databases).

    Have a look at your backup sizes and scale your log accordingly, and find your "DBA" a suitable training course - good luck.
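
    Those backup sizes are sitting in msdb, so a query along these lines (MyDB is a placeholder for your database name) will show the individual sizes and the daily totals:

        -- Individual log backup sizes for the last week
        SELECT backup_start_date,
               backup_size / 1048576.0 AS backup_size_mb
        FROM   msdb.dbo.backupset
        WHERE  database_name = 'MyDB'
          AND  type = 'L'
          AND  backup_start_date > DATEADD(dd, -7, GETDATE())
        ORDER BY backup_start_date

        -- Daily totals
        SELECT CONVERT(char(10), backup_start_date, 121) AS backup_day,
               SUM(backup_size) / 1048576.0 AS total_mb
        FROM   msdb.dbo.backupset
        WHERE  database_name = 'MyDB'
          AND  type = 'L'
        GROUP BY CONVERT(char(10), backup_start_date, 121)
        ORDER BY backup_day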

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/
