Backup is one of the major tasks for DBAs, and there are many different options and approaches each company can take to meet its own requirements (RPO and RTO). However, many of them face the same question: where can I keep my backup files so that they are easy to access, stored in a location separate from the current data center, and cost-optimized?
Traditionally, for most companies, the database backup is performed to a separate disk on the server, which is then backed up to another media server (and output to tape, or backed up directly to tape) on a daily basis via another tool (NetBackup, Backup Exec, etc.), after which a third-party company collects the tapes and sends them to an offsite location for long-term storage. The cost of keeping this infrastructure includes the separate media, the tape library, the admin overhead (human resources) to swap tapes in and out, the license cost of the backup tool, and the third-party company's fee to retain those tapes. Not to mention that when you need to restore a database whose backup is located offsite, there may be an extra cost if it is needed urgently, and the time required to recall the tape, load it into the media server, and read the data off it adds up. This is a very standard process for many of us, and it might have been running fine in your company for years. But have you ever wondered whether there is a better way?
That is where cloud backup comes in. Starting from SQL Server 2012, there is a new option to back up directly to Azure storage using the backup to URL statement. In SQL Server 2014, Microsoft took it a step further by integrating Azure backup into Management Studio. For earlier versions of SQL Server, Microsoft released a tool called SQL Server Backup to Microsoft Azure Tool, which encrypts and compresses your SQL backups and stores them in Azure. You can find more details here: https://blogs.technet.microsoft.com/dataplatforminsider/2014/07/24/get-started-backing-up-to-the-cloud-with-sql-server-backup-to-microsoft-azure-tool/. This simplifies the backup process: you can skip the extra backup tool, the tape library, and the third-party company keeping your tapes. Not to mention that when you need to perform a restore, you no longer have to wait for the tape to come back and be loaded onto the media server. You still need to wait for the backup file to download from Azure, and depending on the size of the backup that might take a while, but overall the process becomes much simpler, with fewer parties involved, which means more control and fewer things that can break. This does not come for free, however: you still need to pay for Azure storage, and the deduplication ratio might not be as good.
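As an illustration, here is a minimal sketch of a backup to URL on SQL Server 2012 SP1 CU2 or 2014, using a credential based on the storage account access key. The storage account, container, credential name and database name below are placeholders, not values from this post:

-- Create a credential holding the storage account name and access key (placeholders)
CREATE CREDENTIAL AzureBackupCredential
WITH IDENTITY = 'mystorageaccount',            -- Azure storage account name
     SECRET   = '<storage account access key>';
GO

-- Back up the database directly to a blob in that storage account
BACKUP DATABASE AdventureWorks2014
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/AdventureWorks2014_Full.bak'
WITH CREDENTIAL = 'AzureBackupCredential',
     COMPRESSION,
     STATS = 10;
GO

The restore side works the same way in reverse (RESTORE DATABASE ... FROM URL with the same credential), which is where the download time mentioned above comes into play.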
One more thing to note: if you back up to URL directly, you cannot keep a copy locally, which means all your backups are in Azure (SQL Server 2016 might change this). This could be a showstopper if you need to restore your database to another environment (UAT or Dev) on a regular basis. There is a workaround, where you perform a copy-only backup and keep that locally, but it does mean you will have two backup jobs scheduled. Is there any other way to address this issue? Yes, there is! In the following post I will discuss other options for cloud backup.
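For reference, the copy-only workaround mentioned above is simply a second job running something like the following (the database name and local path are placeholders); a copy-only backup does not affect the backup chain of the regular backups going to Azure:

-- Extra local backup for refreshing UAT/Dev; COPY_ONLY keeps it out of the backup chain
BACKUP DATABASE AdventureWorks2014
TO DISK = 'D:\Backups\AdventureWorks2014_CopyOnly.bak'
WITH COPY_ONLY,
     COMPRESSION,
     STATS = 10;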