Database logs full? But with lots of free hard disk space?

  • Hi All,

    Apologies if this covers old ground but I am really stuck!

    I have a database build job that can run for up to 12 hours. The job uses a lot of temp tables and deals with a large amount of data. The database being built is a STAR schema and is set up for simple recovery and unlimited transaction log growth. Recently more hard drive capacity was added to the box and I now have 130GB of free space on the drive that the database builds on. My problem is that the SQL build job will fail and report that the STAR log is full and that I should truncate the log, however there seems to be plenty of room for the log to grow into! Has anyone come across this before? I've posted this in the DBA section as I think this is more of a DBA type question than a development one. Any ideas are hugely appreciated!

    Jon.

  • How have you allocated the database log file? Make sure it is set to autogrowth with no file size restrictions, and also check the tempdb database to make sure that isn't running out of space.
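
    A quick way to check those settings is a sketch like the following (the database name STAR comes from the post above; `sys.database_files` assumes SQL Server 2005 or later):

    ```sql
    -- Size, growth increment, and cap for the STAR log file.
    -- growth is in 8 KB pages unless is_percent_growth = 1;
    -- max_size = -1 means unrestricted growth.
    SELECT name, size, growth, is_percent_growth, max_size
    FROM STAR.sys.database_files
    WHERE type_desc = 'LOG';
    ```

    If `max_size` is anything other than -1, the log has a file restriction even though autogrowth is on.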

  • Jon, even though your database is in simple recovery mode, it still uses the transaction log while it is building your database.

    Does the build job have more data than before? One suggestion: commit the data in batches rather than as one transaction.

    Also double-check your database scripts and settings to make sure your log file growth is not restricted.
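
    The batching suggestion could look something like this sketch (table and column names here are hypothetical, not from the original job):

    ```sql
    -- Copy rows in small batches so each transaction stays small and
    -- simple recovery can reuse the log space between batches.
    DECLARE @rows INT;
    SET @rows = 1;
    WHILE @rows > 0
    BEGIN
        BEGIN TRANSACTION;

        INSERT INTO dbo.FactSales_Target (SaleID, Amount)
        SELECT TOP (10000) s.SaleID, s.Amount
        FROM dbo.FactSales_Staging s
        WHERE NOT EXISTS (SELECT 1 FROM dbo.FactSales_Target t
                          WHERE t.SaleID = s.SaleID);

        SET @rows = @@ROWCOUNT;
        COMMIT TRANSACTION;
    END
    ```

    Each committed batch only needs log space for 10,000 rows rather than for the whole load at once.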

  • Hi guys,

    Thanks for the responses.

    Basically I have now resorted to allocating 10GB of log space to the database STAR using Enterprise Manager. The tempdb is sitting at around 60GB of usage. In answer to the second question, yes, the job is now running a lot more data than it used to; this is the reason for the upgrade. The log is set to unrestricted growth and in the options autoshrink is on, etc. I will check the scripts and see if the SQL itself limits the log size!

    Jon

  • I had a similar problem one time where we were doing a migration on an OLTP database. The DB was only 20GB but my transaction log ballooned out to over 80GB.

    What you may have to do is try breaking your job up into pieces.

    Commit any transactions, put in a GO statement, etc., wherever possible. We ended up bringing our transaction log growth down to about 40GB fairly easily.
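
    As a sketch of that idea (table names are examples only): GO ends a batch, and as long as you have not opened an explicit transaction, each statement runs in its own implicit transaction, so the log only needs room for one step at a time.

    ```sql
    -- Each INSERT commits on its own; simple recovery can then
    -- reuse the log space before the next batch starts.
    INSERT INTO dbo.DimCustomer_Target SELECT * FROM dbo.DimCustomer_Staging;
    GO
    INSERT INTO dbo.DimProduct_Target SELECT * FROM dbo.DimProduct_Staging;
    GO
    ```

    Note that GO by itself does not commit an open transaction; the gain comes from not wrapping the whole job in one BEGIN TRANSACTION.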

    Kenneth Fisher
    I was once offered a wizard's hat but it got in the way of my dunce cap.
    For better, quicker answers on T-SQL questions, click on the following: http://www.sqlservercentral.com/articles/Best+Practices/61537/
    For better answers on performance questions, click on the following: http://www.sqlservercentral.com/articles/SQLServerCentral/66909/
    Link to my blog: www.SQLStudies.com

  • Well I would suggest turning off the autoshrink option as the first step.

    Secondly, how is the log growth set up?

    If it is a percentage then that could be a problem. I would suggest setting it to a fixed size, probably 1 GB, or a size depending on how quickly the log grows.
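
    Both of those changes can be scripted, roughly as below (the logical file name STAR_log is an assumption; check the actual name in the database's file properties):

    ```sql
    -- Step 1: stop autoshrink so the log doesn't shrink and regrow all job.
    ALTER DATABASE STAR SET AUTO_SHRINK OFF;

    -- Step 2: use a fixed growth increment instead of a percentage.
    ALTER DATABASE STAR
    MODIFY FILE (NAME = STAR_log, FILEGROWTH = 1GB);
    ```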

  • Excellent advice, thanks everybody. I've kicked off the job again and it will now take about 10 hours to get to the point where it was failing. Fingers crossed. I've created a set of log files on another drive with more capacity and added those to the transaction log of each DB involved in the job. Hopefully this will work as a backup if my extra GO statements don't work. Thanks for all your help, I may be back tomorrow 🙂
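
    For reference, adding a second log file on another drive looks roughly like this (the drive letter, path, file name, and sizes are examples only, not the actual setup described above):

    ```sql
    -- Add an extra log file on a drive with more free space.
    -- SQL Server will start using it when the first log file fills.
    ALTER DATABASE STAR
    ADD LOG FILE (NAME = STAR_log2,
                  FILENAME = 'E:\SQLLogs\STAR_log2.ldf',
                  SIZE = 10GB,
                  FILEGROWTH = 1GB);
    ```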

  • One more thing: if you are using tempdb during this process, make sure your application data, log, and tempdb are on separate drives too. Good luck.

  • With TempDb sitting at 60GB I'd take a good look and make sure that TempDB has enough room to grow as well - SQL 2005 makes a lot more use of TempDB than SQL 2000 and if you're using temp tables, etc. it's very likely that it's TempDb that's running out of space.
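
    Two quick ways to check that, sketched below (`sys.dm_db_file_space_usage` assumes SQL Server 2005 or later; `DBCC SQLPERF(LOGSPACE)` works on 2000 as well):

    ```sql
    -- Log size and percent used for every database, including tempdb.
    DBCC SQLPERF(LOGSPACE);

    -- Free space remaining in tempdb's data files, in MB.
    SELECT SUM(unallocated_extent_page_count) * 8 / 1024 AS free_mb
    FROM tempdb.sys.dm_db_file_space_usage;
    ```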

  • When SQL wants to grow the log, it will make one attempt to do so. If it cannot get enough extra space in that one attempt it will give the 'log full' message.

    SQL wants to keep enough log space available to roll back all uncommitted work, so if you load 10 GB in 1 transaction SQL may want a log sized at (say) 25 GB to be able to cope with a possible rollback. This can mean that when SQL wants to grow the log, it may want a large amount of extra space. If your growth increment is less than the space SQL wants the log to grow, then you will get the 'log full' message even though you may have many GB of spare disk space. This is documented somewhere in BOL...

    You need to estimate how much total log space your process will need, and set a suitable growth increment for that total. e.g. if you know you will need about 50 GB log space, then a suitable growth increment may be 1 GB or 5 GB, but a growth increment of 1 MB may lead to 'log full' messages.
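
    Following that advice, you can pre-grow the log before the job starts so no large mid-job growth is ever needed (a sketch; the 50 GB figure and the logical file name STAR_log are illustrative):

    ```sql
    -- Size the log up front to the estimated total the job needs,
    -- with a sensible fixed increment as headroom.
    ALTER DATABASE STAR
    MODIFY FILE (NAME = STAR_log, SIZE = 50GB, FILEGROWTH = 1GB);
    ```

    Growing the log once, before the load, is also much faster than many small growths during it.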

    Also, beware of using a % growth increment. When your log gets to be big, then the amount of space you ask for as a % also gets big. If you set a 50% growth increment, then on a 50 GB log it will grow by 25 GB. SQL is very unlikely to need a 50% increase in log space in a single growth request so you are asking for more than is needed, and if you only have 24 GB spare disk space the growth will fail causing your application to fail.

    Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.

    When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara

  • Make sure you have 2005 SP2, as we ran into a problem that was fixed there: when we changed the log file growth to, say, 1 GB, a known bug instead interpreted the stored number (the number of pages to expand by) as a percentage. This meant a growth of something like 125,000%, which of course will almost certainly exceed even the capacity of the largest disk arrays.

    Have Fun!

    Ron

