November 14, 2012 at 8:06 am
This is more of a poll, but I'd like to get others' opinions. What are the recommended and preferred methods of SQL Server backups, considering all technologies:
- Native SQL Server backup to tape
- Native SQL Server backup to disk
- SAN backups (specifically for my purposes - NetApp Snapshots / FlexClones).
Go!
November 14, 2012 at 8:19 am
We do backups to local disk; those backup files are then picked up by the SAN backup, shipped to the virtual tape array via NetBackup, and replicated to the DR server, which is in turn backed up by the DR SAN backup and the DR virtual tape array.
So we have multiple copies at multiple points, any one of which we can use to get the backup back.
Data recoverability is a big issue for us: every hour of outage costs the company millions of pounds in productivity and lost revenue, so the quicker we can get things back after a failure at any point in the infrastructure, the better.
November 14, 2012 at 8:26 am
It depends.
On DB size, backup windows, technology available, restore time window, etc.
I wouldn't back up to tape, though. Not directly, at least.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 14, 2012 at 9:18 am
@Gila: I can provide my options, but I was looking for some general feedback first. Obviously I'd like to rule out options and not focus on them if they simply do not apply. To elaborate, here are additional details:
Let me also add that our databases are currently on SQL Server 2005-2008. We also have SharePoint 2010 databases that will be folded into the mix, so their backup is slightly different. In addition, I want to keep the backup strategy as consistent as possible across all servers and platforms.
November 14, 2012 at 9:43 am
Disk is my vote, perhaps striped to multiple physical locations for speed. Then copy over to tape/DR.
The SAN stuff can work, but you'd better check it carefully. Not all SAN systems are transactionally aware, and you might end up with a backup/copy of a database that won't start.
Either way, you still probably need log backups, unless you plan on losing data between the backups/snaps.
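To illustrate the point about log backups (database name and paths here are hypothetical, for illustration only), the usual pattern is a periodic full backup plus frequent log backups, so potential data loss is bounded by the log backup interval:

```sql
-- Hypothetical database name and paths, for illustration only.
-- Requires the database to be in the FULL (or BULK_LOGGED) recovery model.

-- Periodic full backup:
BACKUP DATABASE [Sales]
TO DISK = N'E:\Backups\Sales_full.bak'
WITH CHECKSUM, STATS = 10;

-- Frequent log backups between fulls; potential data loss is
-- roughly the interval between these:
BACKUP LOG [Sales]
TO DISK = N'E:\Backups\Sales_log.trn'
WITH CHECKSUM, STATS = 10;
```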
November 19, 2012 at 11:48 am
Thanks for your input, everyone. You've helped me confirm my initial direction is valid: back up to disk, then let our infrastructure group back up the files to tape.
Currently our databases are backed up to tape directly, and there are all sorts of issues. It currently takes 9+ hours to do a full backup of all our databases (just under 1 TB), and that's only production. My new proposal is to go straight to disk, plus a few additional optimizations (to name a few):
- Use compression where necessary
- Use multiple backup files (up to 1 per CPU)
- Separate backup traffic from the network traffic
I see no reason why we cannot back up 1 TB of data to disk in 30 minutes, tbh. I based most of this on a case study of fast and reliable backup and restore of a VLDB over the network.
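One way to check progress toward a target like that is to read the backup history SQL Server already records in msdb. A query along these lines (against the standard msdb.dbo.backupset catalog; note that compressed backup size is only recorded from SQL Server 2008 onwards, so it is omitted here) gives duration and throughput per backup:

```sql
-- Duration and throughput of recent full backups, from msdb history.
SELECT  bs.database_name,
        bs.backup_start_date,
        bs.backup_finish_date,
        DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) AS duration_s,
        bs.backup_size / 1048576.0 AS size_mb,
        bs.backup_size / 1048576.0
          / NULLIF(DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date), 0)
          AS mb_per_sec
FROM msdb.dbo.backupset AS bs
WHERE bs.type = 'D'          -- 'D' = full database backup
ORDER BY bs.backup_finish_date DESC;
```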
November 19, 2012 at 1:02 pm
tafountain (11/19/2012)
Use multiple backup files (up to 1 per CPU)
I wouldn't necessarily recommend this.
If you're going to stripe your backups, stripe based on how many IO paths you have (if you have 2 backup drives that are separate physical drives or separate IO channels, then stripe to 2 files).
Backup is an IO-bound operation, not a CPU-bound one. If you stripe to multiple files on the same drive, you probably won't see much of a gain. Only add multiple backup destinations if you have multiple separate IO paths (so 32 cores with 32 destinations on 1 LUN isn't a good idea; 32 cores with 32 destinations across 16-32 LUNs, or 32 cores with 16 destinations across 16 LUNs, may work).
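A minimal sketch of striping across two separate IO paths; the database name and drive letters are assumptions for illustration:

```sql
-- E: and F: assumed to be separate physical drives / IO channels.
BACKUP DATABASE [Sales]
TO  DISK = N'E:\Backups\Sales_1.bak',
    DISK = N'F:\Backups\Sales_2.bak'
WITH COMPRESSION,   -- backup compression: SQL Server 2008 Enterprise onwards
     CHECKSUM, STATS = 10;

-- Note: a restore needs every file in the stripe set:
-- RESTORE DATABASE [Sales]
-- FROM DISK = N'E:\Backups\Sales_1.bak', DISK = N'F:\Backups\Sales_2.bak';
```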
Separate backup traffic from the network traffic
Backing up to disk or backing up to a network location?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 20, 2012 at 1:03 pm
Just to add my scenario, I have multiple SQL Servers (approx. 8), with a few databases on each, not big, largest db is 25 GB. (One Oracle DB is 115 GB.)
For SQL Server, I back up straight to disk that is on-server (preferably a non-data disk, but that can't always be helped). I wrote a .net program that takes a list of directories and compares each with the throw-over directory; any files not already in the throw-over area get copied. This runs every hour. The throw-over area goes to a Data Domain dedupe machine 1/2 mile away. You could use a NAS if needed; it won't dedupe, but it is a LOT cheaper.
I try to keep 2 days' worth of full and tran log backups on the server, with (by policy) 2 weeks' worth on the Data Domain. (I also wrote a program that goes into each directory and deletes files over X days old, using either the Windows create date or the SQL Server file timestamp date; it can set a different retention length for each directory.) This keeps any needed restores on disk for 2 days. Anything needed from longer ago than that can just be copied back over to the server from the Data Domain.
November 20, 2012 at 1:21 pm
GilaMonster (11/19/2012)
Only add multiple backup destinations if you have multiple separate IO paths (so 32 cores, 32 destinations on 1 LUN isn't a good idea. 32 cores, 32 destinations across 16-32 LUNs or 32 cores, 16 destinations across 16 LUNs may work)
Also, the multiple drives should have the same I/O capability/performance 😉
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
November 20, 2012 at 2:10 pm
Perry Whittle (11/20/2012)
GilaMonster (11/19/2012)
Only add multiple backup destinations if you have multiple separate IO paths (so 32 cores, 32 destinations on 1 LUN isn't a good idea. 32 cores, 32 destinations across 16-32 LUNs or 32 cores, 16 destinations across 16 LUNs may work)
also, the multiple drives should have the same I/O capability/performance 😉
That should fall into the category of 'common sense'.... not that common sense is all that common. 🙂
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 20, 2012 at 2:45 pm
GilaMonster (11/20/2012)
That should fall into the category of 'common sense'.... not that common sense is all that common. 🙂
Yes, you would think so, but it's actually not uncommon 😀
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
November 26, 2012 at 10:54 am
@Gila - your comments on the I/O paths are definitely noted. On the CPU side I was referring to how SQL Server manages the multiple backup files - my understanding is there is a limit of 1 file per CPU.
November 26, 2012 at 11:01 am
FWIW - here is the case study I am referring to.
November 26, 2012 at 11:11 am
tafountain (11/26/2012)
On the CPU side I was referring to how SQL Server manages the multiple backup files - my understanding is there is a limit of 1 file per CPU.
No limit. If you want to back up to 20 files on a server with 4 CPU cores, go right ahead. It'll work. Whether it'll be the optimal setup or not is another matter.
I'm familiar with the case study; I've read it a number of times.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 26, 2012 at 12:32 pm
@Gila - I'm looking for optimal 🙂 - sorry I wasn't clear on that.