August 24, 2012 at 12:28 pm
I've told my IT Admin many times that we don't/can't do SQL DB backups to network shares or anything but a local drive (tape drive or hard drive). Apparently this is a real source of frustration, because today I got an email from him with a link to a hack on how you can back up your SQL DB to a network share. I was just curious what your thoughts are on this. Is this a hack (it works, but still not a good idea since it is an attempt to circumvent a safeguard), or is it OK to do these days because the restriction is outdated and no longer necessary?
Thanks
Kindest Regards,
Just say No to Facebook!
August 24, 2012 at 12:37 pm
YSLGuru (8/24/2012)
Thanks
Yes, you can do backups to UNC locations. The problem I have seen doing them is all it takes is a little network hiccup and the backup fails. SQL Server is not very forgiving when it comes to network issues while writing a backup to a remote resource. This is why you normally hear the "backup local, move to remote" mantra used many times.
I have yet to work in an organization that had a network solid enough to ensure that backups to a UNC would always work, but I am also not saying that there aren't networks out there that do meet this requirement.
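To make the "backup local, move to remote" approach concrete, here is a minimal sketch, assuming a database named MyDB, a local folder D:\SQLBackup, and a share \\FileServer\SQLBackups (all placeholder names), and that xp_cmdshell is enabled; in practice the copy step is often a separate Agent job step or a robocopy/PowerShell script instead:
-- Back up locally first, so only local disk I/O has to succeed during the backup itself.
BACKUP DATABASE [MyDB]
TO DISK = N'D:\SQLBackup\MyDB_Full.bak'
WITH INIT, CHECKSUM, STATS = 10;
-- Then copy the finished file to the share; a network hiccup here only delays the copy,
-- it does not invalidate the backup. /R and /W control robocopy's retry behaviour.
EXEC xp_cmdshell 'robocopy D:\SQLBackup \\FileServer\SQLBackups MyDB_Full.bak /R:3 /W:10';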
August 24, 2012 at 1:35 pm
Lynn Pettis (8/24/2012)
YSLGuru (8/24/2012)
I've told my IT Admin many times that we don't/can't do SQL DB backups to network shares or anything but a local drive (tape drive or hard drive). Apparently this is a real source of frustration, because today I got an email from him with a link to a hack on how you can back up your SQL DB to a network share. I was just curious what your thoughts are on this. Is this a hack (it works, but still not a good idea since it is an attempt to circumvent a safeguard), or is it OK to do these days because the restriction is outdated and no longer necessary?
Thanks
Yes, you can do backups to UNC locations. The problem I have seen doing them is all it takes is a little network hiccup and the backup fails. SQL Server is not very forgiving when it comes to network issues while writing a backup to a remote resource. This is why you normally hear the "backup local, move to remote" mantra used many times.
I have yet to work in an organization that had a network solid enough to ensure that backups to a UNC would always work, but I am also not saying that there aren't networks out there that do meet this requirement.
So then this is still a workaround/hack hybrid; something you can do, but you are circumventing a safety measure that is in place for a very good reason, yes?
Thanks Lynn
Kindest Regards,
Just say No to Facebook!
August 24, 2012 at 2:44 pm
Not saying it is a hack; of course, I haven't checked the link/article you mention. I am speaking from experience. I have backed up to a file share, and I have had backups fail due to network issues while doing so. It is one of the reasons I have always pushed to have sufficient space on my servers to complete backups locally, even if I then needed to move them to a central location to be backed up to tape.
I have also done restores from UNCs (file shares). They tended to be slower than copying the file to a local directory and then restoring, but I never had a restore fail over the network. Now that networks are getting faster, doing the restore over the network is getting better, but I still like backing up locally and then moving rather than backing up over the network.
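For completeness, restoring straight from a UNC path looks roughly like this; MyDB and \\FileServer\SQLBackups are placeholder names, and the SQL Server service account needs read permission on the share:
-- Restore directly over the network; usually slower than a local restore, but it avoids
-- needing enough local disk to stage the .bak file first.
RESTORE DATABASE [MyDB]
FROM DISK = N'\\FileServer\SQLBackups\MyDB_Full.bak'
WITH STATS = 10;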
August 27, 2012 at 11:38 am
Lynn Pettis (8/24/2012)
Not saying it is a hack; of course, I haven't checked the link/article you mention. I am speaking from experience. I have backed up to a file share, and I have had backups fail due to network issues while doing so. It is one of the reasons I have always pushed to have sufficient space on my servers to complete backups locally, even if I then needed to move them to a central location to be backed up to tape.
I have also done restores from UNCs (file shares). They tended to be slower than copying the file to a local directory and then restoring, but I never had a restore fail over the network. Now that networks are getting faster, doing the restore over the network is getting better, but I still like backing up locally and then moving rather than backing up over the network.
Me too. I'd rather not risk several hours only to find at the very end that the backup is bad because something went wrong near the end. Thanks
Kindest Regards,
Just say No to Facebook!
August 27, 2012 at 12:08 pm
I agree with all the posts that I would prefer to back up to a directly attached disk and then move the backup to a network share for archiving. However, even after sharing all the reasons and comments from this forum and others, I am always forced to back up to network shares. We have large clustered servers that host several databases, making it cost prohibitive to add enough attached storage to hold our backups. There are network hiccups from time to time, but our monitoring processes look for databases that have not been backed up within SLA timeframes and run a backup if one is not found. We rarely have trouble with the backups to network shares, even though I am not fond of the overall concept.
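A check along those lines might look something like the query below, using msdb's backup history; the 24-hour window is just an illustrative SLA, not anything from the post:
-- Databases whose most recent full backup is older than a (hypothetical) 24-hour SLA.
SELECT d.name,
       MAX(b.backup_finish_date) AS last_full_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
       ON b.database_name = d.name
      AND b.type = 'D'                 -- 'D' = full database backup
WHERE d.name <> 'tempdb'
GROUP BY d.name
HAVING MAX(b.backup_finish_date) < DATEADD(HOUR, -24, GETDATE())
    OR MAX(b.backup_finish_date) IS NULL;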
August 27, 2012 at 12:12 pm
andersg98's post makes me think you could do a proof of concept;
create a new job that does a COPY_ONLY backup with verification to the network share;
set it to run 100 times or so, and report the % of failures for the backup after that;
if the failure rate is not zero, for me it's not an option.
then you could argue for more hard drive space a lot more easily.
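As a rough illustration of that proof of concept, the loop below repeats a COPY_ONLY backup to the share and counts how often it fails; MyDB and \\FileServer\SQLBackups are placeholders, and because some backup errors can abort the whole batch, running each attempt as its own Agent job step is more robust than a single loop:
DECLARE @i INT = 1, @failures INT = 0;
WHILE @i <= 100
BEGIN
    BEGIN TRY
        -- COPY_ONLY so the test does not disturb the normal backup chain.
        BACKUP DATABASE [MyDB]
        TO DISK = N'\\FileServer\SQLBackups\MyDB_poc.bak'
        WITH COPY_ONLY, INIT, CHECKSUM;
        -- Verify that the file that landed on the share is readable.
        RESTORE VERIFYONLY
        FROM DISK = N'\\FileServer\SQLBackups\MyDB_poc.bak';
    END TRY
    BEGIN CATCH
        SET @failures += 1;
    END CATCH;
    SET @i += 1;
END;
SELECT @failures AS failed_attempts, @failures / 100.0 AS failure_rate;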
Lowell
August 27, 2012 at 12:36 pm
At a former job I managed backups for over 300 SQL Servers, and all of them were backed up to UNC locations on file servers dedicated to SQL Server backups. We did daily full backups and transaction log backups every 15 minutes for all production databases (4000+). We only had occasional backup failures, and they were mostly transaction log backups that ran OK on the next 15-minute cycle.
As long as the file servers and SQL Server have good network bandwidth, and the disk arrays on the file servers have enough speed to support the backups, you should be OK. Don't make the mistake of thinking that you can skimp on network and disk speed.
Backups to a UNC location are usually a little slower, but we were backing up some databases that were over 1 TB in size with no problem. I recommend that you use backup compression whenever possible to speed up the backups.
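For example, a compressed full backup straight to a UNC path (placeholder names) can look like this; compression cuts the number of bytes crossing the wire, which is usually the bottleneck when backing up over the network:
BACKUP DATABASE [MyDB]
TO DISK = N'\\BackupFileServer\SQLBackups\MyDB_Full.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;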
August 27, 2012 at 1:38 pm
Michael Valentine Jones (8/27/2012)
At a former job I managed backups for over 300 SQL Servers, and all of them were backed up to UNC locations on file servers dedicated to SQL Server backups. We did daily full backups and transaction log backups every 15 minutes for all production databases (4000+). We only had occasional backup failures, and they were mostly transaction log backups that ran OK on the next 15-minute cycle.
As long as the file servers and SQL Server have good network bandwidth, and the disk arrays on the file servers have enough speed to support the backups, you should be OK. Don't make the mistake of thinking that you can skimp on network and disk speed.
Backups to a UNC location are usually a little slower, but we were backing up some databases that were over 1 TB in size with no problem. I recommend that you use backup compression whenever possible to speed up the backups.
At a previous site - we also backed up to UNC with little or no issues. Yes, it was a bit slower - but still within the maintenance windows for all of the servers involved.
One thing we did was create a separate backup network. We added additional NICs to each system and routed traffic by IP address over the backup network. This helped quite a bit because we were no longer competing with the public network.
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
August 27, 2012 at 1:43 pm
I've been on both extremes. I used to work for a major oil company, and we backed up hundreds of SQL Servers across a WAN from Oklahoma City, OK to Calgary, Canada. It was very slow, but it worked. Now I work for an equally large company, and we won't stand up a SQL Server unless it has a big enough local partition. That is the way I believe it should be. Just my opinion.
September 5, 2012 at 10:06 am
Thanks to all who replied.
What we've found is that we get our best bang for the time by performing a native backup locally, then moving it across the LAN to the new DB server and doing a native DB restore, as opposed to using Microsoft's DPM 2012 (Data Protection Manager 2012). The IT Admin is not sure why DPM takes so much longer to do a DB restore than it takes to move a backed-up copy of the DB between servers and then restore it, but it does. DPM takes almost twice as long as native SQL backup and restore.
DPM, however, does have 2 key advantages that make it the preferred recovery method for normal day-to-day use (as opposed to a one-time relocation of the DB between servers), and they are:
(1) Space - DPM does not require that a backed-up copy of the DB be placed on the SQL Server locally before restoring the DB.
(2) Real-Time Recovery Availability - DPM is always ready to restore the DB, whereas the native process requires that we get a backup first, move it, restore it, and then dump the copy of the .bak file.
Thanks to all who replied.
Kindest Regards,
Just say No to Facebook!
September 6, 2012 at 5:32 am
I agree with tim.cloud that you should have a backup drive attached to your database system.
I normally set up the following when creating a new SQL Server:
C: for OS ~ 60GB (nothing else on this than primary and shared SQL stuff)
D: always CD or DVD drive
E: Datafiles (ask or check initial size, annual growth and usage before deciding size and RAID)
F: Logfiles (I choose 1.5 times the size of the calculated Datafiles, at least RAID 5)
G: backup (I choose at least 2 times the size of calculated Datafiles)
Consideration has to be given to instances, Analysis Services, and such when deciding how many drives one should have.
On a heavily loaded system there is also sometimes a need for a dedicated TEMPDB drive.
September 6, 2012 at 9:50 am
EXEC xp_cmdshell 'net use v: \\ServerName\foldername'
Here we are assigning V: as the drive letter for \\ServerName\foldername
-----------------------------------
BACKUP DATABASE [XYZ] TO
DISK = N'V:\xyz.bak'
WITH STATS = 10, FORMAT
September 7, 2012 at 3:32 am
Sqlism (9/6/2012)
EXEC xp_cmdshell 'net use v: \\ServerName\foldername'
Here we are assigning V: as the drive letter for \\ServerName\foldername
-----------------------------------
BACKUP DATABASE [XYZ] TO
DISK = N'V:\xyz.bak'
WITH STATS = 10, FORMAT
What's the point of mapping the drive letter? SQL will back up to the UNC path quite happily provided the permissions at both ends are set up properly (and if they're not properly configured, mapping the drive isn't going to help anyway). Mapping the UNC path as a drive letter doesn't make the network more reliable or faster in any way, so it's still not the best idea unless you're dealing with relatively small databases (a few gigabytes, maybe).
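In other words, given the right permissions you can skip the mapped drive entirely and back up straight to the share (same placeholder server and folder names as in the post above):
-- Back up directly to the UNC path; no 'net use' mapping needed.
BACKUP DATABASE [XYZ]
TO DISK = N'\\ServerName\foldername\xyz.bak'
WITH STATS = 10, FORMAT;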