Database backup strategy

  • Hi

    I am looking for guidance on setting up and configuring backups for several SQL Server databases. I am a developer and don’t have much experience in SQL Server administration, or much time to experiment on a development server.

    Currently I have about 8 databases in a production environment running SQL Server 2005. All databases are currently configured for the SIMPLE recovery model. The following scheduled script runs regularly and overwrites the backups saved on the same server. The SQL Server directory on the server is backed up to tape every night.

    BACKUP DATABASE [Database] TO DISK = N'Database' WITH NOFORMAT, INIT, NAME = N'Database', SKIP, NOREWIND, NOUNLOAD, STATS = 10

    GO

    DECLARE @backupSetId as int

    SELECT @backupSetId = position from msdb..backupset where database_name=N'Database' and backup_set_id=(select max(backup_set_id) from msdb..backupset where database_name=N'Database')

    if @backupSetId is null begin raiserror(N'Verify failed. Backup information for database ''Database'' not found.', 16, 1) end

    RESTORE VERIFYONLY FROM DISK = N'Path' WITH FILE = @backupSetId, NOUNLOAD, NOREWIND

    GO

    From reading around, I think the above arrangement is inadequate and that we have been fortunate so far not to have had an emergency requiring a restore.

    I’m looking for a solid model/strategy that I can implement quickly now and then tailor as necessary as I become more familiar with the available tools.

    Can someone provide a model to get me started?

    1. I am thinking of initially changing all databases to FULL recovery and then performing a manual full backup. Can anyone clarify the settings I should be using here, and how I should automate this process?

    2. At this point, can I configure SQL Server to back up to two locations (the server SQL Server is installed on and a network file server)? Is this a useful arrangement that provides greater redundancy?

    3. Some of the databases have large existing transaction logs. I want to send the log backups to a file server for storage. How should this fit in with the above processes? Is transaction log shipping the proper way to do this? At what point in the process should I truncate the transaction logs and shrink the log files to release unused space? (I have sketched below what I have in mind.)

    4. After all the above processes have been implemented, I think I should have reliable backups in two locations so that I could, if necessary, carry out a point-in-time recovery of the databases. If I wanted to recover the data, I would need to make the shipped transaction log backups available to SQL Server's restore process. Additionally, each database transaction log should stay a reasonable size because the logs are regularly being truncated. Is that likely to be the case?
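
    To make question 3 concrete, is something like this the right idea as a one-off cleanup once regular log backups are running? (The logical log file name below is a placeholder; I would look it up in sys.database_files first.)

    -- Back up the log to the file server (in FULL recovery this is what truncates the inactive log),
    BACKUP LOG [Database]
    TO DISK = N'\\FileServer\SQLBackups\Database_log.trn'
    WITH INIT, STATS = 10
    GO

    -- then, as a one-off, shrink the physical log file back to a sensible size (1024 MB here).
    DBCC SHRINKFILE (N'Database_log', 1024)
    GO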

    Thanks

  • It all depends on data volatility and business requirements.

    Two examples.

    #1 - If you have a production database that gets updates once a day, then setting that database to the SIMPLE recovery model and taking a backup after the update process will suffice.
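
    For example, something along these lines scheduled right after the daily update job (database name and path are placeholders):

    -- SIMPLE recovery model: no log backups, so one full backup after the daily load covers the day's changes.
    BACKUP DATABASE [SalesDB]
    TO DISK = N'D:\Backups\SalesDB_full.bak'
    WITH INIT, CHECKSUM, STATS = 10
    GO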

    #2 - If you have a production database that supports an OLTP application getting updates (short for update/insert/delete) 24 hours a day, then setting it to the SIMPLE recovery model is really a pretty bad idea. In this particular case I would set the database to the FULL recovery model, take a full backup once a day and transaction log backups - maybe - every 15 minutes.
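
    A rough sketch, with placeholder names, scheduled through SQL Agent jobs:

    -- Run once: switch the database to the FULL recovery model.
    ALTER DATABASE [OrdersDB] SET RECOVERY FULL
    GO

    -- Daily SQL Agent job: full backup.
    BACKUP DATABASE [OrdersDB]
    TO DISK = N'D:\Backups\OrdersDB_full.bak'
    WITH INIT, CHECKSUM, STATS = 10
    GO

    -- Separate SQL Agent job every 15 minutes: transaction log backup.
    -- In practice, give each log backup a date/time-stamped file name.
    BACKUP LOG [OrdersDB]
    TO DISK = N'D:\Backups\OrdersDB_log.trn'
    WITH NOINIT, CHECKSUM
    GO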

    As previously said, it all depends on data volatility and business requirements.

    _____________________________________
    Pablo (Paul) Berzukov

    Author of Understanding Database Administration available at Amazon and other bookstores.

    Disclaimer: Advice is provided to the best of my knowledge, but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions in a non-production test environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
    Since you're a developer and you have access to, and decide, the backup plan for the production databases, I wonder if they are really that important.

    The simple recovery model means you will essentially be creating snapshots of the database with each full backup. It could help you revert to any day when a backup was taken.

    If the db experiences an issue and you need to perform a recovery, I think the best thing would be to continue using the simple recovery model and explain the implications to management.

    The other options require proper maintenance and actual plans in place for how to perform backups and restores, which is best done by a DBA.

    If you want to play that role, I suggest you practice on lower environments before playing with production.

    Jayanth Kurup

  • Looks like you are on your way to being an "Accidental DBA." If these databases aren't too big, you may want to consider using maintenance plans at this point. A good resource would be this book, Brad's Sure Guide to SQL Server Maintenance Plans.

    If you're considering full recovery, please read through this - Managing Transaction Logs

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
    If you can survive losing almost an entire day of your data, then your backup strategy is fine. If not, I would change the databases from the simple to the full recovery model and include a transaction log backup every 5-15 minutes. This way you can recover almost everything by restoring the database backup and then restoring the transaction logs. You can also consider taking some differential backups between two full database backups. You would also have to keep your backups on different disks or tape.
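
    For example, the restore sequence would be roughly this (file names are placeholders):

    -- Restore the most recent full backup, leaving the database able to accept further restores.
    RESTORE DATABASE [Database]
    FROM DISK = N'D:\Backups\Database_full.bak'
    WITH NORECOVERY

    -- Optionally restore the most recent differential backup.
    RESTORE DATABASE [Database]
    FROM DISK = N'D:\Backups\Database_diff.bak'
    WITH NORECOVERY

    -- Restore every log backup taken since then, in order; the last one brings the database online.
    RESTORE LOG [Database]
    FROM DISK = N'D:\Backups\Database_log.trn'
    WITH RECOVERY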

  • hexagonshield-odd2 (3/9/2012)


    Hi

    2. At this point, can I configure SQL Server to back up to two locations (the server SQL Server is installed on and a network file server)? Is this a useful arrangement that provides greater redundancy?

    3. Some of the databases have large existing transaction logs. I want to send the log backups to a file server for storage. How should this fit in with the above processes? Is transaction log shipping the proper way to do this? At what point in the process should I truncate the transaction logs and shrink the log files to release unused space?

    4. After all the above processes have been implemented, I think I should have reliable backups in two locations so that I could, if necessary, carry out a point-in-time recovery of the databases. If I wanted to recover the data, I would need to make the shipped transaction log backups available to SQL Server's restore process. Additionally, each database transaction log should stay a reasonable size because the logs are regularly being truncated. Is that likely to be the case?

    Thanks

    First, a very good call on taking the initiative to investigate your backups, and congratulations on the desire to improve them.

    Second, pay attention to the previous replies and the links they gave.

    In addition:

    2. If you back up to the same server, at least back up to different physical drives, if your backup is supposed to help protect you from a drive/RAIDset failure. For automating, there are maintenance plans, Ola Hallengren's maintenance scripts, and others available. I prefer a slight modification of Ola's work, personally. (There is a mirrored-backup sketch at the end of this post for writing to two destinations at once.)

    2. Many organizations use SQL backups to disk, and then the normal organizational disk/tape backups of the SQL backups (Bacula, Veritas, Backup Exec, Arcserve, Commvault, etc.), which should (in most cases) or must (in some regulatory instances) be encrypted, and get sent offsite.

    3. Study transaction logs first, starting with the full recovery model link above, and perhaps SQL in the Wild's Recovery Model and Transaction Logs and Kimberly Tripp's Transaction Log VLFs - too many or too few.

    4. Your transaction log backups will vary in size; when you do large actions (such as some forms of index maintenance), they can grow quite a bit! However, part of your maintenance should be to clean up backups more than N time units old.

    5. Restore those backups to a new database name, or on different hardware! Do point-in-time restores! See what works, and what doesn't! See what's difficult or annoying... and automate it! Test your bandwidth! Test restores as if an error happened at an inconvenient point in time... for instance, if you truncate the old backup file and then overwrite it, do a restore as if the server failed after the truncation and before the overwrite finished (extra credit: assume the tape backup/copy-to-network job had also failed the previous time or two, as the server was dying).
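
    For example, a point-in-time restore to a copy of the database might look roughly like this (names, paths, and the STOPAT time are made up; check the logical file names with RESTORE FILELISTONLY first):

    -- Restore the full backup under a new database name, relocating the data and log files.
    RESTORE DATABASE [Database_Test]
    FROM DISK = N'D:\Backups\Database_full.bak'
    WITH MOVE N'Database' TO N'D:\Data\Database_Test.mdf',
         MOVE N'Database_log' TO N'D:\Logs\Database_Test.ldf',
         NORECOVERY, STATS = 10

    -- Roll the log backups forward in order, stopping just before the failure.
    RESTORE LOG [Database_Test]
    FROM DISK = N'D:\Backups\Database_log.trn'
    WITH STOPAT = N'2012-03-09 14:30:00', RECOVERY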

    Backups in and of themselves are a worthless waste of time. Correct restores within the required (not necessarily desired) amount of time can be critical to saving a business.
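
    On the two-locations question: if your edition allows it (mirrored backup sets are, as far as I recall, an Enterprise Edition feature in SQL Server 2005), one BACKUP statement can write to both destinations at once; the database name and paths below are examples only. Otherwise, back up to one drive and copy the file to the network share afterwards.

    -- Enterprise Edition only: write the same backup to two destinations in one statement.
    -- MIRROR TO requires a fresh media set, hence WITH FORMAT.
    BACKUP DATABASE [Database]
    TO DISK = N'D:\Backups\Database_full.bak'
    MIRROR TO DISK = N'\\FileServer\SQLBackups\Database_full.bak'
    WITH FORMAT, CHECKSUM, STATS = 10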

  • Dear All

    Thank you for taking the time to provide these ideas and solutions.

    Right on schedule, the RAID controller failed on one of our database servers last Thursday, and I am now rebuilding the instance from scratch. I look forward to revisiting this thread soon and asking for more practical advice, now that I've got a clean slate to work with!
