Which recovery model is recommended for I/O performance?

  • I have a crawler system that writes to my SQL database and generates huge transaction log backups (20 GB on average) every hour. Recently we have been having performance problems related to our disks, so I was wondering whether changing the recovery model to simple would improve I/O performance.

    Thanks

  • Regardless of recovery model, all updates, inserts, and deletes are written first to the transaction log. Which recovery model you use should be based on how much data loss you can afford should you have a failure.

    If you don't need point-in-time recovery of your database, then switching to the simple recovery model makes sense. If you need point-in-time recovery, then you need to be sure that you are running full backups, differential backups (if needed or recommended) and transaction log backups in such a manner as to meet your Service Level Agreements should you have a failure requiring you to restore your production database(s).
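
    For illustration, a minimal T-SQL sketch of both options (the database name and backup paths here are placeholders, not anything from your setup):

        -- If point-in-time restore is not required, switch to simple recovery
        ALTER DATABASE CrawlerDB SET RECOVERY SIMPLE;

        -- Otherwise stay in full recovery and keep the backup chain intact
        ALTER DATABASE CrawlerDB SET RECOVERY FULL;
        BACKUP DATABASE CrawlerDB TO DISK = 'X:\Backups\CrawlerDB_full.bak';   -- e.g. nightly
        BACKUP DATABASE CrawlerDB TO DISK = 'X:\Backups\CrawlerDB_diff.bak'
            WITH DIFFERENTIAL;                                                 -- e.g. every few hours
        BACKUP LOG CrawlerDB TO DISK = 'X:\Backups\CrawlerDB_log.trn';         -- e.g. hourly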

  • I delete the transaction log backups after 2 hours because we don't have the storage capacity.

    So the recovery model has nothing to do with performance? I thought that when the log backup occurs (in my case every hour, generating 20 GB .trn files), the server/database gets busy and the process of creating the backup slows down the other processes.

    In other words, I don't care about losing transactions; I really need better I/O performance. Thanks!

  • Recovery model has nothing to do with performance. It has to do with recoverability of the database.

    If you don't care about losing transactions, set the DB to simple recovery. That means that in the case of a disaster, the best you will be able to do is restore to the last full backup. Are you sure that's acceptable?

    Fixing IO performance problems is usually achieved by one of the following:

    - Optimising queries

    - Tuning indexes

    - Adding additional drives and splitting the database across those drives (a rough sketch of this follows the list)
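
    A minimal sketch of that last option, assuming a second drive mounted as E: and invented file/filegroup names:

        -- Add a filegroup on a separate drive so heavily written objects can be moved there
        ALTER DATABASE CrawlerDB ADD FILEGROUP CrawlerData2;
        ALTER DATABASE CrawlerDB ADD FILE
        (
            NAME = CrawlerData2_1,
            FILENAME = 'E:\SQLData\CrawlerData2_1.ndf',
            SIZE = 10GB,
            FILEGROWTH = 1GB
        ) TO FILEGROUP CrawlerData2;

        -- New or rebuilt indexes can then be placed on the new filegroup:
        -- CREATE INDEX IX_CrawlLog_CrawlDate ON dbo.CrawlLog (CrawlDate) ON CrawlerData2;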

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • I had an anti-virus database that was in a similar situation. It scanned tens of thousands of hosts a few times an hour and ended up generating tremendous amounts of data. We actually exceeded the size of an int in the identity column every few months.

    However, this was a point-in-time database and the scans would repopulate the data, so we switched to simple mode and it ran more smoothly. We didn't see any real performance gain, but the log files no longer needed to be managed.

  • Thanks for your responses.

    Since I delete the transaction log backup files every 2 hours (because of lack of storage capacity), I can't restore to a point in time anyway, so I changed the recovery model to simple. Now, does that mean that, since I am no longer backing up the log file, it will retain the transactions until the next checkpoint? If so, how often do checkpoints run? Can my log file still grow too much? Thanks.

  • Simple mode sounds good for you.

    The transactions stay in the log file until the next checkpoint, which typically runs every minute or two; the exact frequency depends on load and on the server's 'recovery interval' setting.

    The log file can still grow if a transaction stays open for a long time, because the log cannot be truncated past the oldest active transaction.
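
    If you want to keep an eye on it, these standard commands work in any database (the database name below is just a placeholder):

        -- How full is each database's log right now?
        DBCC SQLPERF(LOGSPACE);

        -- What is preventing log truncation? (CHECKPOINT, ACTIVE_TRANSACTION, LOG_BACKUP, ...)
        SELECT name, recovery_model_desc, log_reuse_wait_desc
        FROM sys.databases
        WHERE name = 'CrawlerDB';

        -- Issue a checkpoint manually to see the log space get reused
        CHECKPOINT;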

  • I changed my database recovery model to simple and I see better I/O performance and memory utilization. Some of my processes are 50% faster.

  • Antero Ruiz (6/4/2009)

    I changed my database recovery model to simple and I see better I/O performance and memory utilization. Some of my processes are 50% faster.

    One question I have: are your ldf and mdf files on the same disk(s)?
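
    You can check with something along these lines (sys.master_files lists the physical location of every data and log file on the instance):

        -- type_desc: ROWS = data file (.mdf/.ndf), LOG = log file (.ldf)
        SELECT DB_NAME(database_id) AS database_name,
               type_desc,
               physical_name
        FROM sys.master_files
        ORDER BY database_id, type_desc;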

  • For completeness:

    The recovery model can affect performance, particularly if the transaction log file(s) are set to auto-grow from a small size and in small increments. The frequent file growths are obviously problematic (log files must be physically zeroed - instant file initialization does not apply to them) and can also result in a very large number of virtual log files, which can be just as bad.
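
    Two quick checks/fixes for that, with placeholder file names and sizes:

        -- One row per virtual log file; thousands of rows means the log has fragmented badly
        DBCC LOGINFO;   -- run in the context of the database in question

        -- Pre-size the log and use a sensible growth increment instead of tiny auto-grows
        ALTER DATABASE CrawlerDB
        MODIFY FILE (NAME = CrawlerDB_log, SIZE = 20GB, FILEGROWTH = 2GB);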

    Another obvious case where the recovery model directly affects performance is with bulk loads and index rebuilds - these can be bulk-logged if the recovery model is other than 'full'.

    Often, a process which generates a huge amount of log can be converted to one or more bulk loads, possibly with the use of SSIS to extract and transform the data beforehand. It is something to consider for the future - the primary advantages being more efficient utilization of server resources and better recovery options (remember to back up the log before and after switching to bulk-logged recovery).
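
    As a rough outline only (database, table and file names are invented), a minimally logged load in bulk-logged recovery looks something like this:

        -- Back up the log before opening the bulk-logged window
        BACKUP LOG CrawlerDB TO DISK = 'X:\Backups\CrawlerDB_log_before.trn';
        ALTER DATABASE CrawlerDB SET RECOVERY BULK_LOGGED;

        -- TABLOCK (against a heap or empty table) lets the insert be minimally logged
        BULK INSERT dbo.CrawlStaging
        FROM 'E:\Feeds\crawl_batch.dat'
        WITH (TABLOCK);

        -- Switch back and close the window with another log backup
        ALTER DATABASE CrawlerDB SET RECOVERY FULL;
        BACKUP LOG CrawlerDB TO DISK = 'X:\Backups\CrawlerDB_log_after.trn';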

    Paul
