Questions about best practices for T-LOG Backups

  • I have an Intranet database running on SQL 2005 Std. in our datacenter. We are running a full backup nightly, with transaction log backups beginning 30 minutes later, running at 15 minute intervals, and finally stopping 15 mins prior to the next full backup. I am working on one of our dev databases testing the restore process. I set it up the same way, however I have it backing the t-log up to a Backup Device rather than directly to the file system. In reality it's just sending the file out to a share on a SAN. Under that method the t-log backups all accumulate within a single file. I had it back up the t-log several times, writing to the same file without a problem. Then I restored the database to a new location and applied those transaction logs. Everything went great.
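    Roughly, the commands behind that setup look like this (the device name and paths here are just placeholders for the real ones):

        -- one-time: create a backup device that points at the share on the SAN
        EXEC sp_addumpdevice 'disk', 'IntranetLogDevice', '\\SAN01\SQLBackups\Intranet_log.bak';

        -- nightly full backup straight to a file
        BACKUP DATABASE [Intranet] TO DISK = '\\SAN01\SQLBackups\Intranet_full.bak' WITH INIT;

        -- every 15 minutes: append the next log backup to the same device
        BACKUP LOG [Intranet] TO [IntranetLogDevice] WITH NOINIT;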

    So my first question is..

    I have always seen T-LOGS backed up to a new file each time, so you would end up with a directory filled with log files, usually with some distinguishing information in the file name like a date/time or the LSN it began or ended with.

    Is there anything wrong with my approach of using a single file (backup device) and just appending the log backups to it?

    Second question deals with maintaining the log files..

    When you set the expiration period to 1 day on the t-log backup and you're backing up to a device, is it going to remove each piece of the log as it expires, or will the file keep growing?

    One thought I had was to just execute a shell command after the full backup runs successfully to delete the old log.bak file, so that when the t-log job starts again it would start with a clean one.
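    Something along these lines, purely as a sketch (assumes xp_cmdshell is enabled and the path is the right one):

        -- after a successful full backup, delete the old appended log file so the device starts clean
        EXEC xp_cmdshell 'del \\SAN01\SQLBackups\Intranet_log.bak';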

    Same question for writing out to a new file during each t-log backup. I need a better understanding of how the expired logs will be deleted so they don't accumulate.

  • There is absolutely no harm in backing up to the same file. To read the contents of the file, do a RESTORE HEADERONLY FROM [backupdevice].

    It will list all the backup sets in the file, when they were taken, the LSNs, etc.
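    For example (substitute your own device name):

        -- list every backup set stored in the device: type, finish date, first/last LSN, position
        RESTORE HEADERONLY FROM [IntranetLogDevice];

        -- restore a specific set out of the device by its position number
        RESTORE LOG [Intranet] FROM [IntranetLogDevice] WITH FILE = 3, NORECOVERY;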

    Second, instead of deleting the files, what we do in our full backup procedure is also run a BACKUP LOG to [backupdevice] WITH INIT immediately after our full backups. This will clear out the file and start you back on file #1, clearing out the previous day's backups.

    This approach works great for us and will probably fit your needs.
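    In T-SQL terms, the nightly sequence we run is roughly this (names and paths are placeholders):

        -- 1am: full backup of the database
        BACKUP DATABASE [Intranet] TO DISK = '\\SAN01\SQLBackups\Intranet_full.bak' WITH INIT;

        -- immediately after: the first log backup of the day, WITH INIT wipes the device clean
        BACKUP LOG [Intranet] TO [IntranetLogDevice] WITH INIT;

        -- the rest of the day: every log backup appends to the device
        BACKUP LOG [Intranet] TO [IntranetLogDevice] WITH NOINIT;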

  • There is USUALLY no problem in overwriting FULL backups, unless something happens to that backup. If you lose the full backup and/or have a corrupt full backup, you will not be able to restore the database.

    Whichever route you choose, you should create a maintenance plan that will create your backups, and then create a "clean up task" to remove any unwanted files.

    Note: You will have to create separate full and tran log maintenance plans, as you can only have one schedule per maintenance plan.

  • The only problem I have run into with this method is if a backup fails for whatever reason midway through - for example, if the disk runs out of space or a network error occurs - the last backup in the file becomes "corrupt" because it is incomplete. This does not cause any problems for backups that were already taken to the file - you can still restore them as normal - but any further backups you try to append to the file will fail because the header is not valid. This exposes you to more potential data loss until you correct the issue.

    Granted, last time I encountered this problem was in SQL 2000, and I haven't tested it in SQL 2005 so they may have changed how it reacts in this situation.
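    If you suspect the last set appended to the device is incomplete, one way to sanity-check it before appending anything else is (the position number is just an example):

        -- list the sets in the device; the suspect one is normally the last Position reported
        RESTORE HEADERONLY FROM [IntranetLogDevice];

        -- verify a specific set without actually restoring it
        RESTORE VERIFYONLY FROM [IntranetLogDevice] WITH FILE = 5;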

  • Thanks all for the scoop. I ended up going with this approach.

    I am a big SSIS guy, so I put my toys to work to do the following (although I am not actually using the SSIS DB Maint. tools to perform any of it - everything is T-SQL scripts and SSIS components).

    Each night it does the following..

    1.) Stop the T-log backup jobs for any databases that are currently running them.

    2.) Query sys.databases on the target server for a collection of all non-system databases currently online.

    3.) It then uses a Foreach Loop container to back up each database to disk (a plain T-SQL sketch of the equivalent loop follows this list). The destination paths for each SQL Server are stored in a separate database, which the package queries to determine what path to write the backups to based on the @@SERVERNAME of the server the package is targeted at. The beauty of this step is that you don't have to change your backup jobs each time you add, remove, or take a database offline, since it looks for all online databases each time.

    4.) I went a step further and implemented custom logging and error handling, so I will be notified if any backup fails. The DTS logs are great but a pain to read, so I put in more readable logs that I can view through Reporting Services. The process will of course continue past the error, back up the remaining databases, and notify me of any subsequent failures.

    5.) Based on your feedback here, I configured the T-log job so that, in the event of a failure, it automatically falls back and performs an unscheduled full backup of that particular db, copies the corrupt t-log backup to an archive folder, and immediately restarts the t-log backups with a fresh file.
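    For anyone curious, the core of steps 2 and 3 boils down to something like this in plain T-SQL (the path is illustrative; the real package looks it up from the configuration database based on @@SERVERNAME):

        -- grab every online, non-system database (same filter the package uses)
        DECLARE @db sysname, @sql nvarchar(max);

        DECLARE db_cur CURSOR FOR
            SELECT name
            FROM sys.databases
            WHERE state_desc = 'ONLINE'
              AND database_id > 4;          -- skip master, tempdb, model, msdb

        OPEN db_cur;
        FETCH NEXT FROM db_cur INTO @db;

        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- destination path is illustrative; the package builds it per server
            SET @sql = N'BACKUP DATABASE ' + QUOTENAME(@db)
                     + N' TO DISK = ''\\SAN01\SQLBackups\' + @db + N'_full.bak'' WITH INIT;';
            EXEC (@sql);
            FETCH NEXT FROM db_cur INTO @db;
        END

        CLOSE db_cur;
        DEALLOCATE db_cur;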

    I'm planning to put in an initial step that will log any databases, aside from the system dbs, that are in an offline state, so we have visibility when something is being skipped. The best part about this whole package is that by just changing the target server and entering the destination path in the remote database, we can point it at any of our SQL Servers to automate backups. Our Systems Architect, who is also our senior DBA, wants to see this new method working in DEV for at least 90 days before we look at implementing it, so we are going to run the whole process nightly in our development env. to ensure stability. All testing so far has gone great!

    Thanks again all!

  • I would be very concerned about this approach, since you no longer have any way of restoring your system to a point in time prior to your latest full backup.

    Take this scenario:

    Monday (morning):

    Full Backup at 1am

    Transaction Log Backup with init

    Transaction Log backups throughout the day (noinit)

    Tuesday (morning):

    Full Backup at 1am

    Transaction Log Backup with init

    ...

    Tuesday morning at 8am - a problem is identified: a user ran a process that accidentally deleted several thousand rows. The user states this was done last night at 10pm.

    Your job: restore the system back to the state it was in just prior to this operation.

    Question: where are the transaction log backups that you are going to restore from? Oops, they were in that backup device that you initialized at 1am.

    I certainly hope you have backed up that file (to another file, preferably on another server and ideally to tape) prior to initializing it. If not, I hope your users are very understanding that they have just lost a full day's worth of work that is going to have to be redone (if possible). And hopefully it does not cost them a lot of money (for your sake).
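    For what it's worth, the point-in-time restore you would need in that scenario looks roughly like this, and it only works if Monday's log backups still exist somewhere (file positions and the STOPAT time are purely illustrative):

        -- start from Monday's 1am full backup
        RESTORE DATABASE [Intranet] FROM DISK = '\\SAN01\SQLBackups\Intranet_full.bak'
            WITH NORECOVERY, REPLACE;

        -- roll forward through Monday's log backups, one set at a time...
        RESTORE LOG [Intranet] FROM [IntranetLogDevice] WITH FILE = 1, NORECOVERY;

        -- ...until the set that spans 10pm, then stop just before the accidental delete
        RESTORE LOG [Intranet] FROM [IntranetLogDevice]
            WITH FILE = 42, STOPAT = '2008-06-02 21:55:00', RECOVERY;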

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • Very good point. The location we are writing both backups to is a SAN LUN. It's configured to snapshot once an hour right now, but the plan is to dial that down to 15 minutes for this particular volume. The snapshots are maintained for 30 days on the NetApp and written to tape once a week. So we have the ability to go back and get the previous .bak file that had been stepped on.

    We are looking at using the NetApp's ability to snapshot the databases through its own software, which talks to the SQL Server API. However, we will want to see this process work many, many, many, many times before we hand the reins over to it. We did a demo early in the week on a dev server that had it configured, and it did work successfully.

