Very Large Transaction Logs

  • I’m having a problem with SQL 2000 that hopefully someone out there has some experience with.


    We use SQL 2000 primarily as a table back end for our Access front ends.  That’s to say that all the code for our queries, macros, and modules is in Access.  We will be using stored procedures and views within SQL soon, but that’s a part of the migration plan we have not reached yet.


    We use ArcServe 2000 to back up our SQL server every night, backing up both the log file and the database.  Currently, we have 2 databases with sizes as follows:  (1) Log = 20 GB and Database = 11 GB; (2) Log = 14 GB and Database = 340 MB.  We have had successful backups for quite some time now.


    I did a check with the “dbcc loginfo” command and found that I’m getting a proper checkpoint for the log file during the backup.  After a backup, only 1 of the VLFs has a status of “2”, indicating, I’m told, that it is the only active VLF in the log file.  Although I have “Auto Shrink” enabled, I had to manually truncate and shrink each of the log files.  After this process, the log files are back down to their default size of 1 MB.  At this point, I’m keeping a closer eye on their size.
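    For reference, the check described above amounts to this, run in the context of the database in question (the database name here is hypothetical):

        USE MyDatabase
        GO
        -- Returns one row per virtual log file (VLF).
        -- Status = 2 marks an active VLF; Status = 0 marks one that is
        -- free for reuse after the log is truncated.
        DBCC LOGINFO
        GO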


    My questions are these:  (1) Are we experiencing normal log size as compared to the database size?  The database with the 14 GB log and 340 MB database seems to be a bit out of whack.  (2) The checkpoint process seems to be taking place properly.  Why doesn’t the log file shrink automatically when only 1 of the VLFs is active?  (3) My first thought is that our front-end application is the main culprit.  Is this so?


    Any help will be appreciated.


    Scott Bass

  • How does ArcServe 2000 back up the transaction log? Does it back up and truncate the log like the SQL Server command "backup log ......"? I am wondering whether it just backs up the log but doesn't truncate it after the backup.

  • We have a DB that is 3GB but the log file grew to 57GB.

    So it is normal for log files to grow, if they are not maintained.

    Auto shrink can only reclaim log space that has already been truncated; with the database set to "Simple" recovery in the options tab, the log is truncated at every checkpoint, which is why auto shrink appears to work only in that mode.

    Set up a scheduled job to back up the logs on a regular basis (i.e. every 2-4 hrs); the log should be truncated each time the backup runs (see the sketch at the end of this post).

    All transaction logging is handled by SQL Server, not MS Access.


    Hope this helps some.
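    A minimal sketch of that recurring log backup, with a hypothetical database name and backup path; in practice you would schedule it as a SQL Server Agent job:

        -- Normal transaction log backup, run e.g. every 2-4 hours.
        -- Backing up the log also truncates its inactive portion,
        -- which keeps the physical file from growing unchecked.
        BACKUP LOG MyDatabase
            TO DISK = 'D:\MSSQL\Backup\MyDatabase_log.bak'  -- path is hypothetical
            WITH NOINIT  -- append to the existing backup set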

  • I'm sure ArcServe does NOT take care of the transaction logs the way SQL Server maintenance jobs do.  I agree with Quynh.  That's about the only way I can think of.

  • Thank you all for the input so far.

    You're all right about the transaction logs and ArcServe.  I went into the ArcServe event log for the backups and found that, unlike SQL, only the database is backed up.  While it does perform a proper checkpoint, the log is not truncated and the actual transaction log device is not backed up.

    I'll put that in our current maintenance plan; that should take care of keeping the log size down.

    What do you think about the actual size of the log?  Is there anything we can do on this end, from a front-end application design standpoint, that will minimize the log size, or is that something we will just have to live with and manage by backing up and truncating the log on SQL?

    Thanks again for your input, I appreciate it!

  • Remember that the transaction log captures all the changes made to the data before they are committed to the database.  This provides point-in-time recoverability in case of server failure.  Take a look at the amount of data the application is changing and create a log file that is large enough to handle that activity between backups.  For example, if your application writes 1,000 rows of data to the database per hour and you back up the transaction log once an hour, then you need a log file big enough to hold those transactions and any other activity SQL is performing at the time.
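    One way to measure that activity is to sample the log's usage between backups; DBCC SQLPERF reports the current size and percentage used of every database's log:

        -- Reports log size (MB) and percent used for every database.
        -- Sampling this just before each log backup shows how much log
        -- space one backup interval actually generates.
        DBCC SQLPERF (LOGSPACE)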

    I find that letting the log file autogrow creates an unacceptable level of fragmentation.  The log file starts at 1 MB, grows by some small percentage, and pretty soon you have thousands of VLFs in a 20 MB log file.  If you use ALTER DATABASE ... MODIFY FILE to set the size appropriately for the usage, then SQL will create the largest VLFs it can to hold transaction log data and will handle checkpointing more efficiently.
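    A minimal sketch of that pre-sizing step; the database name, logical log file name, and size are hypothetical and should come from your own activity measurements:

        -- Set the log to its working size in one step instead of relying
        -- on autogrow, so SQL Server lays down a few large VLFs rather
        -- than thousands of tiny ones.
        ALTER DATABASE MyDatabase
        MODIFY FILE (NAME = MyDatabase_Log, SIZE = 500MB)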

  • "Is there anything we can do on this end, from a front end application design, that will minimize the log size or is that something we will just have to live with and manage with the backing up and truncating of the log on SQL?" Something that you may want to consider adding to either a "maintenance plan" or just a recurring job is to schedule a normal transaction log backup ever x hour (you fill in the time frame). Note, the normal transaction log backup versus a backup log truncate only command. Or if your disaster recovery requirements a such that you do not need a "point in time" restore, consider (gulp) setting your database to simple recovery and let SQL Server truncate the log upon evey checkpoint. If you have turned on 'auto shrink' then I assume that the transaction level on the SQL Server is relatively low ... so setting the server in an trunc. on checkpoint mode should not adversly affect the performance of the database ... only your ability to restore to a particular point in time in the event of a SQL Server crash. Your restore will only bring you back to the point of the last restore. Hope that this helps ....

  • All the comments were very helpful, thanks to everyone!

  • If you have trouble shrinking, try this. http://www.sqlservercentral.com/scripts/contributions/26.asp
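    For reference, the manual version of that shrink on SQL Server 2000, with a hypothetical database name, logical log file name, and target size:

        USE MyDatabase
        GO
        -- Truncate the inactive portion of the log without backing it up.
        -- This breaks the log backup chain, so take a full backup afterwards.
        BACKUP LOG MyDatabase WITH TRUNCATE_ONLY
        GO
        -- Shrink the physical log file down to a target size in MB.
        DBCC SHRINKFILE (MyDatabase_Log, 100)
        GO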

