February 1, 2018 at 4:06 am
Can anyone point me towards an article on the ideal way to back up a server with multiple databases totalling about 8TB? I've just calculated, based on the current set-up, that a full backup would take over a day, which is not a good solution for a system that is expected to be 24/7.
It's set up as a three-node Always On cluster: two nodes with automatic failover and the third in a different DC for DR. The databases are SharePoint databases. Ideally we would still want the ability to do point-in-time restores in case of data corruption etc., but that's not a huge requirement. The servers are VMs.
What does anyone else use in this situation? Would it still be SQL backups, or would we be looking at a different solution, such as something at the VM disk level?
Thanks
February 5, 2018 at 5:07 am
I've managed to halve the time the backups take by using striped backups with 10 different files. I have 5 mount points on the backup server, so spreading the backups evenly across the disks seemed logical (and 10 files was faster than 5!).
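For anyone who wants the syntax, a striped backup is just a list of DISK targets on one BACKUP command. A trimmed-down sketch - the database name and paths are placeholders for my actual content DBs and mount points, and I'm showing 4 files across 2 mount points rather than the full 10:

-- Striped full backup: SQL Server writes to all listed files in parallel
BACKUP DATABASE [WSS_Content_01]
TO   DISK = N'G:\Backups\WSS_Content_01_1.bak',
     DISK = N'G:\Backups\WSS_Content_01_2.bak',
     DISK = N'H:\Backups\WSS_Content_01_3.bak',
     DISK = N'H:\Backups\WSS_Content_01_4.bak'
WITH COMPRESSION, STATS = 10;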
I read somewhere that adding extra NICs to the server may reduce the time further. Does anyone have any experience of this, or any other suggestions to reduce backup time for large amounts of data?
February 5, 2018 at 7:38 am
That depends on how you connect the extra NIC. I had an extra NIC with a direct cable connection to the backup server and the results were amazing.
I've been using striped backups with 64 files, via the stored procedure below.
USE [master]
GO
/****** Object: StoredProcedure [dbo].[sp_fast_backup] Script Date: 07/10/2013 12:56:10 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- Builds a striped COPY_ONLY backup command across up to 64 files and
-- optionally runs it. Scripted as ALTER; use CREATE the first time you deploy it.
ALTER PROCEDURE [dbo].[sp_fast_backup]
(
    @dbname varchar(8000) = 'Database_Name', -- database to back up
    @fno    INT           = 64,              -- number of stripe files (1-64)
    @run    INT           = 0,               -- 0 = print the command only, 1 = execute it
    @path   varchar(8000) = NULL             -- backup target folder; defaults below
)
AS
SET NOCOUNT ON
DECLARE
    @finame VARCHAR(8000),
    @dsk    VARCHAR(8000),
    @ts     VARCHAR(8000),
    @cmd    VARCHAR(8000)
BEGIN
    PRINT ' '
    PRINT '-- USAGE : sp_fast_backup <databasename>, <no of files>, <run=1/0>, <path>'
    PRINT ' '

    -- Clamp the stripe count to the 1-64 range SQL Server supports
    IF @fno IS NULL OR @fno < 1
        SET @fno = 64
    IF @fno > 64
        SET @fno = 64
    IF @dbname IS NULL
        SET @dbname = 'Database_Name'
    IF @path IS NULL
        SET @path = '\\10.12.12.25\sql_backups\' + CAST(REPLACE(@@servername, '\', '_') AS VARCHAR) + '\'

    -- File name stem: database name plus a yyyymmdd_hhmmss timestamp
    SET @finame = REPLACE(@dbname, ' ', '_')
    SELECT @ts = '_' + CONVERT(VARCHAR, GETDATE(), 112) + '_' + REPLACE(CONVERT(VARCHAR, GETDATE(), 108), ':', '') + '_FILENO_'

    PRINT 'Backup Database [' + @dbname + '] to '
    SET @cmd = 'Backup Database [' + @dbname + '] to '

    -- Append one DISK = clause per stripe, counting down from @fno to 2
    WHILE 1 < @fno
    BEGIN
        SET @dsk = 'disk=' + CHAR(39) + @path + @finame + @ts + CAST(@fno AS VARCHAR) + '.full' + CHAR(39) + ','
        PRINT @dsk
        SET @cmd = @cmd + @dsk
        SET @fno = @fno - 1
    END

    -- Last stripe (file 1) gets no trailing comma, then the backup options
    SET @dsk = 'disk=' + CHAR(39) + @path + @finame + @ts + CAST(@fno AS VARCHAR) + '.full' + CHAR(39)
    SET @cmd = @cmd + @dsk + ' with copy_only,compression'
    PRINT @dsk + ' with copy_only,compression'

    IF @run = 1
        EXEC (@cmd)
END
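Note that the proc defaults to @run = 0, which only prints the generated command, so you can sanity-check the file list before anything runs. A sample call (the database name is a placeholder):

-- Print the generated 64-file striped backup command without running it
EXEC [dbo].[sp_fast_backup] @dbname = 'MyContentDB', @fno = 64, @run = 0;
-- Happy with the printed command? Run it for real
EXEC [dbo].[sp_fast_backup] @dbname = 'MyContentDB', @fno = 64, @run = 1;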
February 5, 2018 at 12:24 pm
It's not the extra NICs that will help - it's the number of HBAs you have and how they are configured. Ideally you have enough paths to the storage that you can dedicate database access (both read and write) to one set of paths - and dedicate another set of paths to the backup storage.
Since these are virtual machines, the path to the storage is controlled at the VM host level - and you are not going to be able to do anything at that level unless these servers have a dedicated VM host, and probably not even then... it would depend on how your VM environment is configured.
Striping backups is the next best alternative, along with compressed backups. Any of the third-party utilities (LiteSpeed, Redgate's SQL Backup, etc.) will stripe in the background for you and perform compression at the same time. I use LiteSpeed and am able to back up a 10TB database in about 2.5 hours - but that is on physical hardware with lots of CPUs and memory, backing up to an enterprise SAN with lots of SSD drives.
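If you stay with native backups, the BACKUP command's I/O tuning options are also worth testing alongside striping and compression. A sketch with placeholder names and paths - treat the values as starting points to test against your own storage, not recommendations:

-- Striped, compressed backup with explicit I/O tuning
BACKUP DATABASE [BigDB]
TO   DISK = N'X:\Backups\BigDB_1.bak',
     DISK = N'Y:\Backups\BigDB_2.bak'
WITH COMPRESSION,
     BUFFERCOUNT = 64,          -- more I/O buffers in flight
     MAXTRANSFERSIZE = 4194304, -- 4MB per transfer, the maximum
     STATS = 5;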
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
February 8, 2018 at 8:22 pm
Agreed that simply increasing the number of stripes already improves backup duration.
One would think the stripes have to be on separate disks, but they don't necessarily. There seems to be a sweet spot for the number of backup stripes on the same disk before it saturates.
We have a 20TB DB, we use 8 stripes, and that has been acceptable since 2012. Adding backup stripes will definitely shorten your backup duration.
February 8, 2018 at 10:30 pm
If your DB is that big, I think you need to think about redesigning the file/filegroup layout (or splitting it into multiple relatively small DBs). Then you can back up just one file/filegroup per day. If some huge tables are read-only, move them to a separate filegroup and back that up once.
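A sketch of what that looks like - database, filegroup, and path names are placeholders:

-- Back up one filegroup at a time instead of the whole database
BACKUP DATABASE [BigDB]
   FILEGROUP = N'FG_Archive'
TO DISK = N'X:\Backups\BigDB_FG_Archive.bak'
WITH COMPRESSION;

-- Or, once the read-only filegroups have been backed up, cover just the active part
BACKUP DATABASE [BigDB] READ_WRITE_FILEGROUPS
TO DISK = N'X:\Backups\BigDB_RW.bak'
WITH COMPRESSION;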
GASQL.com - Focus on Database and Cloud
February 9, 2018 at 12:43 am
It's not one database, so filegroups wouldn't work - it's a SharePoint farm with each content DB at the best-practice limit of 200GB, and I'd rather have everything backed up all at once. We changed some of the VM settings and brought the backup time down further; the setting changed was the queue depth:
http://longwhiteclouds.com/2016/05/24/performance-issues-due-to-virtual-scsi-device-queue-depths/
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\pvscsi\Parameters\Device]
"DriverParameter"="RequestRingPages=32,MaxQueueDepth=254"
Then, on top of that, striping to 4 disks with 8 files (the disks are on multiple LUNs) took an 11-minute backup I had at the beginning of the week for 87GB down to 3 minutes.
Now all I need to do is get the time for CHECKDB down; I might post something separate for that.
February 9, 2018 at 2:44 am
BU69 - Friday, February 9, 2018 12:43 AM
I'd rather have everything backed up all at once.
I think you're probably beyond that point, to be honest. So long as all your databases are in the full recovery model, the time of individual backups isn't going to affect your ability to recover.
Now all I need to do is get the time for CHECKDB down
I would restore your backups one at a time on a test server and do the checks there. Not only are you taking some load off the production server, but you're also testing that your backups are good.
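Something along these lines on the test server - database, logical file, and path names are placeholders:

-- Restore the latest full backup over any previous copy on the test server
RESTORE DATABASE [WSS_Content_01]
FROM DISK = N'\\backupserver\sql_backups\WSS_Content_01.bak'
WITH MOVE N'WSS_Content_01'     TO N'T:\Data\WSS_Content_01.mdf',
     MOVE N'WSS_Content_01_log' TO N'T:\Log\WSS_Content_01.ldf',
     REPLACE, STATS = 10;

-- Then run the integrity check here instead of on production
DBCC CHECKDB (N'WSS_Content_01') WITH NO_INFOMSGS, ALL_ERRORMSGS;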
John
February 9, 2018 at 3:18 am
John Mitchell-245523 - Friday, February 9, 2018 2:44 AM
BU69 - Friday, February 9, 2018 12:43 AM
I'd rather have everything backed up all at once.
I think you're probably beyond that point, to be honest. So long as all your databases are in the full recovery model, the time of individual backups isn't going to affect your ability to recover.
Now all I need to do is get the time for CHECKDB down
I would restore your backups one at a time on a test server and do the checks there. Not only are you taking some load off the production server, but you're also testing that your backups are good.
John
Would this not fall foul of licensing requirements though if they're Production databases?
February 9, 2018 at 3:23 am
Beatrix Kiddo - Friday, February 9, 2018 3:18 AM
John Mitchell-245523 - Friday, February 9, 2018 2:44 AM
BU69 - Friday, February 9, 2018 12:43 AM
I'd rather have everything backed up all at once.
I think you're probably beyond that point, to be honest. So long as all your databases are in the full recovery model, the time of individual backups isn't going to affect your ability to recover.
Now all I need to do is get the time for CHECKDB down
I would restore your backups one at a time on a test server and do the checks there. Not only are you taking some load off the production server, but you're also testing that your backups are good.
John
Would this not fall foul of licensing requirements though if they're Production databases?
No, I don't think so. It's what you use them for, not what data they contain. As soon as you restore them on a separate server for integrity testing (only), they're no longer production databases.
John
February 9, 2018 at 6:16 am
John Mitchell-245523 - Friday, February 9, 2018 2:44 AM
I would restore your backups one at a time on a test server and do the checks there. Not only are you taking some load off the production server, but you're also testing that your backups are good.
I think this might be the only way. I have a restore routine set up for other databases in the estate, so adding these into that and adding CHECKDB to it should not be a huge issue.
For the licensing, is the rule that as long as the databases have no user connections they can be considered test?
February 9, 2018 at 6:23 am
I think that's probably as good a definition as any. I'm not sure precisely how Microsoft defines it.
John
February 9, 2018 at 8:02 am
John Mitchell-245523 - Friday, February 9, 2018 3:23 AM
No, I don't think so. It's what you use them for, not what data they contain. As soon as you restore them on a separate server for integrity testing (only), they're no longer production databases.
John
I was told by someone at Microsoft that running DBCC CHECKDB on production databases requires licensing, because it counts as production work (strictly, a "production offload process"). Googling suggests the same thing. Here's an example of a post that covers it:
https://www.red-gate.com/hub/product-learning/sql-clone/sql-clone-quick-tip-offloading-dbcc-checks
What are your options in this situation? The most common response is to offload the integrity checks to a restored database backup, on a secondary, production-licensed server. However, if you already have SQL Clone set up, for database provisioning, then an alternative might be to offload DBCC checks, and possibly other administrative jobs, to a clone database on a server licensed as per the production server.
Another link here: https://dbatools.io/dedicated-server/.
February 9, 2018 at 8:06 am
Goodness! You live and learn. Thanks, Beatrix.
John