Transaction Logs - Limited Size and Optimizations

  • Hi,

    We have SQL Server 2000 set up at various sites across the world, but everything gets backed up and sent to a central site once a week - via tape for an offsite backup. We are looking at ways of doing transaction log backups, but the log files tend to be too large to send across our 512K network link. Now, assuming that we cannot do anything about the network (for the moment anyway), is there a way to dump out transaction logs at a particular size, rather than every 10 minutes, say? I believe this is possible in Oracle, but I'm not sure whether that is the case in SQL Server.

    Another problem we are having: we shut down SQL Server Agent, perform an integrity check, do optimizations, then do a complete backup. When we restart SQL Server Agent we get a massive transaction log. I don't know whether this is because of the optimizations or because of not truncating the logs. If I should be truncating logs, when should this be done? Any advice would be appreciated.

    Thanks in advance.

  • First Issue:

     

    -- run this before and after to see what happens
    select cast(name as varchar(20)) as Name,
           cast(size as varchar(10)) as Size,
           fileid
    from sysfiles

    declare @TooBig int

    -- Set variable to current size
    -- Be sure to verify the fileid of the log file
    set @TooBig = (select size from sysfiles where fileid = 2)

    -- See the threshold limit for the log file
    -- The number is set to the max size you would want
    -- (sysfiles.size is reported in 8 KB pages, so 4500 pages is roughly 35 MB)
    if @TooBig >= 4500
    begin
        BACKUP LOG northwinds TO northwinds_log_device -- or create a revolving file
        BACKUP LOG northwinds WITH TRUNCATE_ONLY
        DBCC SHRINKFILE (northwinds_Log, TRUNCATEONLY)
    end

    Second Issue:

    Just set up a maint. plan for your system and user DBs.  It can do all this for you and take care of housekeeping.  Let SQL do the work.

     

  • Thanks for that. I tried running the script but the log did not grow and no data was dumped. When I removed the lines

    BACKUP LOG  northwinds  WITH  TRUNCATE_ONLY

    DBCC SHRINKFILE (northwinds_Log,truncateonly)

    it seemed to work.

    As for the maint. plan - I am using a maintenance plan and this is what is happening: the optimizations seem to fail, but we have figured out that we need separate shutdown/restart scripts to turn off certain services so maintenance is done in single-user mode. But I don't know why the log is so big.

  • BACKUP LOG northwinds TO northwinds_log_device-- or create a revolving file

    BACKUP LOG  northwinds  WITH  TRUNCATE_ONLY

    DBCC SHRINKFILE (northwinds_Log,truncateonly)

     

    What's the point of this line?

       BACKUP LOG northwinds WITH TRUNCATE_ONLY

    Doesn't the previous BACKUP LOG statement truncate the transaction log as part of the backup?

    I hope you're not running that against a production database!

    Transaction logs keep a record of all changes in the database.

    BACKUP LOG WITH TRUNCATE_ONLY

    DELETES THE DATA IN THE TRANSACTION LOG FILE

    DBCC SHRINKFILE (logfile,TRUNCATEONLY)

    Frees up the unused space in the logfile, and returns it to the OS. But doesn't delete data.

    As you said, backup with truncate_only is not necessary after a backup log, because once you back up the log, the backed-up portion is marked as ready for overwrite.
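
    Putting that together, a minimal sketch of the sequence without the redundant step (reusing the northwinds database and device names from the earlier post; adjust for your own):

    -- Back up the log; this already marks the backed-up portion as reusable,
    -- so a separate TRUNCATE_ONLY step is not needed afterwards.
    BACKUP LOG northwinds TO northwinds_log_device
    -- Optionally release unused space in the physical log file back to the OS.
    -- This removes free space only; it does not delete any logged data.
    DBCC SHRINKFILE (northwinds_Log, TRUNCATEONLY)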

    If you are ending up with massive transaction logs after optimisations, it's probably because of your optimisations. If you are doing indexing optimisations, the index changes are all processed through your log files, and they get dumped out when you do a backup log.

    Probably you don't need to do those optimisations so often, and you probably don't need to do your integrity checks so often (unless you have dodgy server hardware or aren't using RAID). In my experience, SQL Server hums along just fine without the need to do these things very often at all.


    Julian Kuiters
    juliankuiters.id.au

  • Thanks for that information.

    I don't think I made it clear that the transaction log was huge after not only the optimizations and integrity checks, but also a full complete backup. We actually do use a good RAID system at the major sites, which are backed up every 2 hours. But we require a transaction log solution to implement an offsite backup.

    I think this is what should happen now, from what I now understand about transaction logs:

    1) At the weekly full backup stage we should do the following (a rough T-SQL sketch of steps c) through e) follows the list):

        a) Integrity Checking

        b) Optimizations

        c) Truncate Log (New step from your information)

        d) Shrink Database (New step from your information)

        e) Perform Complete SQL backup

        f) Stop SQL Server (required for some sites for complete tape backup - because of the hardware, but not all sites)

        g) Do complete backup tape

        h) Start SQL Server again (required for some sites for complete tape backup - because of the hardware, but not all sites)

        i) Restart the transaction log backups (every 10 minutes for the rest of the week until the next weekly full backup)

        j) Have our custom written application zip the log files (as they appear in their folder) and send them across the network to the new location
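
    A rough T-SQL sketch of steps c) through e) (the database and device names here are just placeholders, not our real ones):

    -- c) Truncate the log (discards log records that have not been backed up)
    BACKUP LOG northwinds WITH TRUNCATE_ONLY
    -- d) Shrink the database files to release unused space back to the OS
    DBCC SHRINKDATABASE (northwinds)
    -- e) Complete backup; this also restarts the log backup chain
    --    that the truncate in step c) breaks
    BACKUP DATABASE northwinds TO northwinds_backup_device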

    The only problem would be when the transaction log backups fail for one reason or another, or the logs are oversized within the 10-minute timeframe.

    My goal in this email was to set up jobs that run every 2-5 minutes and output transaction log backups every 10 minutes OR every time the log reaches a certain size.
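
    A rough sketch of what the size-based job might look like, using the msdb job procedures (the database/device names and the 4500-page threshold are just the examples from earlier in the thread):

    -- Create a SQL Server Agent job that runs every 5 minutes and backs up
    -- the transaction log whenever it has grown past the threshold.
    EXEC msdb.dbo.sp_add_job @job_name = 'northwinds - log backup by size'

    EXEC msdb.dbo.sp_add_jobstep
        @job_name = 'northwinds - log backup by size',
        @step_name = 'Backup log if too big',
        @database_name = 'northwinds',
        @command = 'IF (SELECT size FROM sysfiles WHERE fileid = 2) >= 4500
                        BACKUP LOG northwinds TO northwinds_log_device'

    -- Daily schedule, repeating every 5 minutes (freq_subday_type 4 = minutes)
    EXEC msdb.dbo.sp_add_jobschedule
        @job_name = 'northwinds - log backup by size',
        @name = 'Every 5 minutes',
        @freq_type = 4, @freq_interval = 1,
        @freq_subday_type = 4, @freq_subday_interval = 5

    -- Target the local server so the job actually runs
    EXEC msdb.dbo.sp_add_jobserver @job_name = 'northwinds - log backup by size'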

    Read all the docs on the DBCC optimisations. They detail the log usage for most of the commands. Anywhere data is moved or changed, SQL Server uses the transaction log.

    You could run your optimisations before the backup. You could back up, set the recovery model to SIMPLE, optimise, back up again, and set recovery back to FULL. But basically SQL Server is performing as it's supposed to.
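
    A minimal sketch of that switch, assuming the same placeholder database and device names as earlier in the thread:

    -- Capture the tail of the log before changing recovery models
    BACKUP LOG northwinds TO northwinds_log_device
    -- In SIMPLE recovery the log is truncated at checkpoints, so the
    -- optimisation activity does not pile up for the next log backup
    ALTER DATABASE northwinds SET RECOVERY SIMPLE
    -- ... run the index optimisations here (e.g. DBCC DBREINDEX) ...
    ALTER DATABASE northwinds SET RECOVERY FULL
    -- A full backup is needed to restart the log backup chain
    BACKUP DATABASE northwinds TO northwinds_backup_device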

    Do you really need to run the optimisations?

    And have you tested recovery? If you are moving the backups offsite (from the SQL Server location to your office), you'd need to copy them back before you can restore. That adds time to how long it will take to recover in an emergency. I'd probably recommend an on-site tape backup, rotated and sent off site.


    Julian Kuiters
    juliankuiters.id.au

  • Thanks,

    I'll have a look at DBCC as well. Some of the sites are pretty big and do require optimizations. Note that these will only be run once a week.

    As for the tape backup - we already do that. But our contract says that we must provide the capability to restore data that is at most 10 minutes old at the time of a crash, and this is why we require offsite backups.

    Please note we already do both a local and an offsite backup. It's just that the offsite backup is, at worst, a week behind. The offsite backup is our last line of defence for when a site completely goes.
