Transaction log growing beyond 3GB and rising

  • We have two Navision 4.0 / SQL 2000 databases (Production and Development).

    The Production DB has the Full Recovery model.

    (Data files 100MB + 1.2GB, log 3GB fixed)

    I've tried many different settings for the log file, but within minutes it grows beyond 3GB again.

    I set an alert (at 75% of log size) which triggers the following script:

    BACKUP LOG INFOPROD

    TO DISK = N'D:\MSSQL\BACKUP\INFOPROD_tran.bak'

    WITH NOINIT , NOUNLOAD , NAME = N'INFOPROD backup', STATS = 10, NOFORMAT

    All is working, but within minutes the log is back to 3GB

    (and I'm wondering why 3GB isn't enough??)

    There aren't many transactions: ±100 orders a day and 40 employees who mainly do searches.

    We use the disaster recovery script from Brian Knight (thanks for this great piece of information!) to schedule a nightly FULL backup and two DIFF backups during work hours (12:00-16:00).

    What is the solution for this problem?

    Changing the recovery model to Simple is an option;

    the maximum acceptable data loss is 15 minutes tops.

    Help from the experts is welcome!
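
    As a first diagnostic step, it may be worth checking how full the log really is and whether an open transaction is pinning it. A minimal sketch, assuming SQL Server 2000 and the database name INFOPROD from the script above:

    -- Percentage of each database's log file currently in use
    DBCC SQLPERF(LOGSPACE)

    -- Oldest active transaction in INFOPROD, if any; a long-running
    -- transaction prevents the log from being truncated by log backups
    DBCC OPENTRAN ('INFOPROD')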

  • Hi

    You should regularly schedule a t-log backup with the full recovery model. That should help.

    JP
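
    For example, a log-backup statement like the one below could run from a SQL Agent job every 15 minutes, which matches the stated 15-minute data-loss limit. This is only a sketch; the path and file-naming convention are assumptions, not anything from the original setup:

    DECLARE @file nvarchar(260)

    -- Build a date/time-stamped file name: INFOPROD_tran_YYYYMMDD_HHMM.bak
    SET @file = N'D:\MSSQL\BACKUP\INFOPROD_tran_'
              + CONVERT(nvarchar(8), GETDATE(), 112) + N'_'
              + REPLACE(CONVERT(nvarchar(5), GETDATE(), 108), ':', '') + N'.bak'

    BACKUP LOG INFOPROD
    TO DISK = @file
    WITH INIT, NOUNLOAD, NAME = N'INFOPROD log backup', STATS = 10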

  • What else is being done on the database? Are you doing daily reindexing? If so, that will cause your tlog to really grow huge (I believe the recommendation is 1.2 times the size of the database).

    -SQLBill
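
    For context, a nightly maintenance job typically runs something like the statement below. Under the FULL recovery model a rebuild is fully logged, which is why the log can balloon to roughly the size of the indexes being rebuilt. The table name is a placeholder, not one from the Navision schema:

    -- Rebuild all indexes on one table with a 90% fill factor (SQL 2000 syntax);
    -- 'MyLargeTable' is a hypothetical name
    DBCC DBREINDEX ('MyLargeTable', '', 90)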

  • Also, how do you enter those orders? Are they done all at once (Bulk insert)?

    -SQLBill

  • @SQLBill

    The orders are entered manually (just sales orders)

    over the course of the day. Some orders are completed in a single entry, others are changed at the back office,

    but these transactions are very likely causing the problem.

    Reindexing is done manually (not during work hours).

    I tried setting up a trace file, but I don't know exactly which events to trace without getting huge files full of entries that don't point to the real log hogs.

    I'm agreeing with JP de Jong about the tlog. I re-read your post and noticed you never back up the tlog. Do this at least once a day (I suggest doing it at least once after each differential). As I understand it, when in FULL recovery mode, the log space will not be reused unless the log is being backed up. In Simple mode, checkpoints truncate the log and the tlog is controlled that way.

    See if that helps.

    -SQLBill
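
    One point worth separating out: backing up the log marks its space as reusable (truncation), but it does not shrink the physical 3GB file. If the goal is to reclaim disk space as well, something like the sketch below would be needed. The logical log file name here is an assumption; sp_helpfile shows the real one:

    -- Back up (and thereby truncate) the log first
    BACKUP LOG INFOPROD TO DISK = N'D:\MSSQL\BACKUP\INFOPROD_tran.bak' WITH NOINIT

    -- Then shrink the physical log file to roughly 500 MB
    -- (check the logical file name with: EXEC sp_helpfile)
    DBCC SHRINKFILE (INFOPROD_Log, 500)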

    Correct me if I'm wrong, but isn't the script doing exactly that job?

    Because that's what the script should do:

    BACKUP LOG INFOPROD

    TO DISK = N'D:\MSSQL\BACKUP\INFOPROD_tran.bak'

    WITH NOINIT , NOUNLOAD , NAME = N'INFOPROD backup', STATS = 10, NOFORMAT


    If I'm wrong, what would the correct T-SQL then be?
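
    For what it's worth, the script shown above is valid T-SQL for a log backup; one way to verify that the alert actually runs it as often as expected is to query msdb's backup history (just a sketch, valid on SQL 2000):

    -- Most recent transaction log backups recorded for INFOPROD ('L' = log)
    SELECT TOP 20
           backup_start_date,
           backup_finish_date,
           backup_size
    FROM   msdb.dbo.backupset
    WHERE  database_name = 'INFOPROD'
      AND  type = 'L'
    ORDER BY backup_finish_date DESC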

    Hi,

    If there are a lot of updates and deletes in your database, obviously the tlog will grow to a very large size.

    As a solution, you can back up the transaction log every 5-10 minutes. Moreover, check whether any background jobs are running, like reindexing, bcp, etc.

    regards,

    Vinod S.R (DBA)

    HTC Global Services
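
    A quick way to see which SQL Agent jobs exist and when they last ran, assuming the maintenance work is scheduled through SQL Agent (a sketch against msdb):

    -- Summary of all jobs, including last run date/time and outcome
    EXEC msdb.dbo.sp_help_job

    -- Or inspect the raw job step history directly
    SELECT   j.name, h.step_id, h.run_date, h.run_time, h.run_status
    FROM     msdb.dbo.sysjobs j
             JOIN msdb.dbo.sysjobhistory h ON h.job_id = j.job_id
    ORDER BY h.run_date DESC, h.run_time DESC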

  • I wouldn't do tlog backups every 5-10 minutes; that can be overkill. But do them more frequently. I do mine every four hours and my tlog never grows very large EXCEPT when I do a DBCC INDEXDEFRAG (or reindex).

    -SQLBill
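
    For reference, DBCC INDEXDEFRAG defragments an index in many small transactions, so taking log backups while it runs keeps the log from growing the way a full reindex does. A sketch with placeholder table and index names:

    -- Defragment one index of one table (SQL 2000 syntax);
    -- MyLargeTable and PK_MyLargeTable are hypothetical names
    DBCC INDEXDEFRAG (INFOPROD, MyLargeTable, PK_MyLargeTable)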

  • That's what concerns me too. A tlog backup every 5-10 minutes sometimes isn't even enough, not to mention the server load and I/O it costs.

    At the moment I'm trying a different approach.

    Recovery model Simple, a FULL backup daily, and 4 differentials every 2 hours (12:00 - 14:00 - 16:00 - 18:00).

    The cost of max. 2 hours of data loss is perhaps more acceptable because of the faster 'back to live' situation.

    Restoring tlogs of 3-6GB would take much more time to get the DB live again (even if you have the opportunity to restore them in time).

    Thanks anyhow for all your knowledge sharing!

    Colin
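
    In case it is useful to anyone reading later, the switch Colin describes amounts to something like the following sketch (the times and file name are only examples):

    -- Switch INFOPROD to the Simple recovery model (SQL 2000 syntax)
    ALTER DATABASE INFOPROD SET RECOVERY SIMPLE

    -- Differential backup, scheduled at 12:00 / 14:00 / 16:00 / 18:00
    BACKUP DATABASE INFOPROD
    TO DISK = N'D:\MSSQL\BACKUP\INFOPROD_diff.bak'
    WITH DIFFERENTIAL, NOINIT, NAME = N'INFOPROD differential backup'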

  • I wonder if there is actually that much log activity, or if some process is attempting to maintain the log at a fixed size.

    Try:

    BACKUP LOG INFOPROD TO DISK = N'D:\MSSQL\BACKUP\INFOPROD_tran_test1.bak' WITH INIT, SKIP

    ...then an hour later:

    BACKUP LOG INFOPROD TO DISK = N'D:\MSSQL\BACKUP\INFOPROD_tran_test2.bak' WITH INIT, SKIP

    The size of the _test2.bak file should be a rough measure of the amount of actual write activity going on, assuming the hour-long sample was representative of the usual workload. If the _test2 file size is not a substantial fraction of the db size, I would suspect that some monitoring process is attempting to keep the log size at a 3GB minimum. Otherwise, if a large amount of write activity seems out of proportion to the actual amount of data change, I would look for either defrag statements, or perhaps some idiotic code, like something that updates every row it reads, even if it hasn't changed.

    In Enterprise Manager, try un-checking Automatically Grow File in the Transaction Log tab, and also un-checking Auto shrink in the Options tab. Wait a while and see if either of these settings has changed magically. If so, it is a pretty good indication that some process is trying to manage the log size automatically.
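
    The same settings can also be inspected and changed from T-SQL, which makes it easier to spot if something flips them back. A sketch, again assuming the database is INFOPROD:

    -- 1 = auto shrink is on, 0 = off
    SELECT DATABASEPROPERTYEX('INFOPROD', 'IsAutoShrink')

    -- Per-file growth settings; growth = 0 means autogrow is disabled
    SELECT name, size, growth, status FROM INFOPROD.dbo.sysfiles

    -- Turn auto shrink off explicitly
    ALTER DATABASE INFOPROD SET AUTO_SHRINK OFF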

    You can use profiler to look at activity; the default trace events include SQL Batch Completed, which gives a good indication of what SQL statements are being received by the server. Maybe that would yield some clues.
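
    The same data can be captured without the Profiler GUI by scripting a server-side trace. The sketch below records SQL:BatchCompleted events (statement text, SPID, duration, writes) to a file; the path is an assumption, and the .trc extension is added automatically:

    DECLARE @TraceID int, @maxfilesize bigint, @on bit
    SET @maxfilesize = 100   -- MB per rollover file
    SET @on = 1

    -- Option 2 = file rollover when the size limit is reached
    EXEC sp_trace_create @TraceID OUTPUT, 2, N'D:\MSSQL\TRACE\log_hunt', @maxfilesize

    -- Event 12 = SQL:BatchCompleted; columns: 1 TextData, 12 SPID, 13 Duration, 17 Writes
    EXEC sp_trace_setevent @TraceID, 12, 1,  @on
    EXEC sp_trace_setevent @TraceID, 12, 12, @on
    EXEC sp_trace_setevent @TraceID, 12, 13, @on
    EXEC sp_trace_setevent @TraceID, 12, 17, @on

    EXEC sp_trace_setstatus @TraceID, 1   -- start the trace
    SELECT @TraceID                       -- keep this id to stop/close it later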
