Backup space

  • I am trying to estimate backup space based on the dev server, since I need to know how much will be needed in production. However, we won't have production levels of change in the diffs or logs since it's DEV. Is there a way to estimate that, or should I just go with some buffer space?

  • I think it depends on your workloads and environment.  For example, if you have a data mart that gets re-loaded nightly using the truncate->reload methodology, and you are doing hourly transaction log backups and nightly full backups, the transaction log backup right after that reload will be just as big as your full, if not bigger.  Now, if your loads are all transactional, your log will be much smaller than your database.  How much smaller really depends on how frequently you do the backup and how much data changes.
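    If the server already has any backup history, you don't have to guess at this part.  One way to check (a sketch, assuming SQL Server and that backup history hasn't been purged from msdb) is to compare the full backup size against the largest log backup per database:

      -- Compare full backup size to the largest log backup over the last
      -- week, using SQL Server's backup history in msdb (sizes are bytes).
      SELECT  bs.database_name,
              MAX(CASE WHEN bs.type = 'D' THEN bs.backup_size END) / 1048576.0 AS full_mb,
              MAX(CASE WHEN bs.type = 'L' THEN bs.backup_size END) / 1048576.0 AS largest_log_mb
      FROM    msdb.dbo.backupset AS bs
      WHERE   bs.backup_start_date >= DATEADD(DAY, -7, GETDATE())
      GROUP BY bs.database_name
      ORDER BY bs.database_name;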

    If you can estimate how much data is changing and know how often you are doing log and differential backups, it isn't too bad to estimate how much space you will need.  For example, if you have data changing at 1 MB per minute on average, and you do log backups every hour, you will need 60 MB on average for each log backup.  If you do full backups daily, you will need space for 23 logs plus the full.  Now, chances are your data isn't changing at exactly 1 MB per minute; it could range from 0 KB per minute during slow periods to 10 MB per minute at peak times (or more or less).  Estimating these values depends on your business.
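    To make that concrete, here is the same arithmetic as a small T-SQL sketch.  The change rate, log interval, and full backup size are placeholder assumptions you would swap for your own measurements.

      -- Back-of-the-envelope space estimate using the example numbers above.
      -- All three inputs are assumptions; replace them with measured values.
      DECLARE @change_mb_per_min decimal(10,2) = 1.0;    -- average data change rate
      DECLARE @log_interval_min  int           = 60;     -- log backup every hour
      DECLARE @full_mb           decimal(12,2) = 50000;  -- assumed 50 GB full backup

      DECLARE @mb_per_log   decimal(12,2) = @change_mb_per_min * @log_interval_min;  -- 60 MB
      DECLARE @logs_per_day int           = (24 * 60) / @log_interval_min - 1;       -- 23 logs

      SELECT  @mb_per_log                            AS mb_per_log_backup,
              @logs_per_day                          AS log_backups_per_day,
              @full_mb + @logs_per_day * @mb_per_log AS mb_per_retention_day;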

    My approach - get a lot of disk space for your backups.  How much depends on your RTO and RPO, retention periods, and data throughput.  At my workplace, some systems I estimate 10 MB per log while others I estimate 100 MB per log.  It depends on how busy the server is.  Do an estimate based on how much data you expect to see changing, then double (or triple) that number so you have wiggle room.

    Your backup sizes are VERY unlikely to be consistent as time goes on.  A likely trend is that your full will get larger over time (new data goes in, old data is rarely removed), your differentials will grow depending on how long it has been since the last full, and your log backups will remain fairly consistent unless something causes spikes or valleys.  Spikes may occur if the system suddenly gets used more (say you have 1 employee entering data most days, but once per quarter 20 people dump data in at once - you will see a spike that day), and valleys occur during off hours (if your company operates 9-5, any backup outside that window will likely be smaller unless automated processes are still dumping data in).

    For a brand new, empty database (with tables and functions and such, but no actual data), I like to allocate 100GB for the backup disk space so I have a LOT of wiggle room.  Then after the system has been used for a week, I re-evaluate it and do a better estimate.  Then repeat at 2 weeks and 4 weeks.  Our retention period on disk is 2 weeks, so after that period passes, I shouldn't see things growing that much.  Hopefully there is only a few KB/MB difference between the 2 week and 4 week windows and I can request a smaller disk and migrate the data.  Or I may need to request a larger disk.  Where I work, it is VERY unlikely that we will go from an empty database to 100+GB in a 4 week period, so this is a safe approach for my workplace.  Yours may be different and 100GB may not be enough to get you through a day.
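    For the re-evaluation step, one option (again a sketch, assuming SQL Server with backup history still in msdb) is to total up what was actually written per week and compare it against the initial allocation:

      -- Total backup volume per week for the last 4 weeks, broken down by
      -- backup type (D = full, I = differential, L = log).
      SELECT  DATEPART(WEEK, bs.backup_start_date)  AS week_number,
              bs.type                               AS backup_type,
              COUNT(*)                              AS backup_count,
              SUM(bs.backup_size) / 1073741824.0    AS total_gb
      FROM    msdb.dbo.backupset AS bs
      WHERE   bs.backup_start_date >= DATEADD(WEEK, -4, GETDATE())
      GROUP BY DATEPART(WEEK, bs.backup_start_date), bs.type
      ORDER BY week_number, backup_type;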

    The above is all just my opinion on what you should do. 
    As with all advice you find on a random internet forum - you shouldn't blindly follow it.  Always try it on a test server to see if there are negative side effects before making changes to a live system!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.

  • I start with how many full backup files I want to keep local to the system - then what process will be utilized and how that will affect the storage.  For example, if I am performing a daily full and frequent transaction logs and need to keep 2 copies local, then I will size the drive at least 4x the compressed backup size.

    If the system is an OLTP system with a high volume of transactions, I then add space for the total amount of transaction logs to be kept.
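    One way to read that 4x rule (a sketch with placeholder sizes, not my actual numbers): two retained fulls, room for the incoming full before the cleanup job removes the oldest, plus a day or more of transaction logs and some headroom.

      -- Rough sizing for a daily full with 2 copies kept local.
      -- Both inputs are assumptions; use your own compressed sizes.
      DECLARE @full_gb decimal(12,2) = 100;  -- compressed daily full
      DECLARE @logs_gb decimal(12,2) = 30;   -- one day of log backups

      SELECT  3 * @full_gb + @logs_gb AS minimum_gb,      -- 2 retained + 1 incoming + logs
              4 * @full_gb            AS recommended_gb;  -- the 4x rule of thumb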

    Now, if we are performing a weekly full - daily differentials - and frequent log backups, then I review the frequency of index rebuilds and how much data will be rebuilt each day.  In a lot of cases, the differentials will quickly get to the size of the full - so we need even more space available to handle the increased requirements.
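    Sizing that scenario pessimistically (a sketch with placeholder sizes; the assumption that differentials approach the size of the full follows from the index rebuild point above):

      -- Weekly full + daily differentials + daily logs, 2 cycles kept local.
      DECLARE @full_gb      decimal(12,2) = 500;  -- compressed weekly full (assumed)
      DECLARE @diff_max_gb  decimal(12,2) = 500;  -- worst case: diff grows to full size
      DECLARE @daily_log_gb decimal(12,2) = 20;   -- all log backups for one day (assumed)
      DECLARE @cycles       int           = 2;    -- weekly cycles retained locally

      SELECT (@full_gb + 6 * @diff_max_gb + 7 * @daily_log_gb) * @cycles AS required_gb;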

    Just for an example - the backup drive on my large system is sized at 4TB.  The compressed backup files are currently 564GB and grow about 1GB per day.  That provides over 2TB of space to perform the backups before the cleanup process removes the older files - and we only keep 2 files available locally (we copy the files to offline storage each day).
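    With those numbers you can also sanity-check the runway - how long before growth alone eats the free space if nothing else changes:

      -- Days of headroom using the figures above (4 TB drive, two 564 GB
      -- copies kept local, roughly 1 GB/day growth).  Both retained copies
      -- grow, so free space shrinks at @copies * growth per day.
      DECLARE @drive_gb          decimal(12,2) = 4096;
      DECLARE @backup_gb         decimal(12,2) = 564;
      DECLARE @copies            int           = 2;
      DECLARE @growth_gb_per_day decimal(12,2) = 1;

      SELECT (@drive_gb - @copies * @backup_gb) / (@copies * @growth_gb_per_day) AS days_of_headroom;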

    I also like to keep extra space available for upgrades - when we upgrade, I can copy the pre-upgrade backups to another folder just in case we have to roll back.  If several days pass over a weekend before we decide to roll back, I need those pre-upgrade backups to still be available.


    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

