February 2, 2018 at 9:26 am
I have been asked to schedule backups of our DBs in 2 separate locations: the first set of backups on the physical server where our DBs live, and a second set of copies in AWS. This is the script I use for backups:
USE [CommonDB]
GO
/****** Object: StoredProcedure [dbo].[Full_backup_databases] Script Date: 2/2/2018 10:19:18 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[Full_backup_databases]
    @diff_only tinyint = 0,       /* 0=no, 1=yes */
    @full_only tinyint = 0,       /* 0=no, 1=yes */
    @copy_only tinyint = 0,       /* 0=no, 1=yes */
    @folder nvarchar(2048) = null
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;
    DECLARE @name SYSNAME
    -- Insensitive to ensure we get the entire list (see http://stackoverflow.com/questions/4048147/inconsistent-cursor-results-when-looping-over-databases)
    DECLARE dbcursor INSENSITIVE CURSOR FOR
        SELECT name
        FROM sys.databases
        WHERE state = 0 -- online
          AND name NOT IN ('tempdb','THINK','RBCS','FarmReach_Test_06-30-2017','ReportServerTempDB')
          AND source_database_id IS NULL
    OPEN dbcursor
    FETCH NEXT FROM dbcursor INTO @name
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- pass @folder through so a caller-supplied folder actually reaches backup_database
        EXEC backup_database @dbName=@name, @folder=@folder, @diff_only=@diff_only, @full_only=@full_only, @copy_only=@copy_only
        FETCH NEXT FROM dbcursor INTO @name
    END;
    CLOSE dbcursor
    DEALLOCATE dbcursor
END
The above SP executes this SP:
USE [CommonDB]
GO
/****** Object: StoredProcedure [dbo].[backup_database] Script Date: 2/2/2018 10:20:04 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[backup_database]
    -- Add the parameters for the stored procedure here
    @dbName sysname = null,
    @folder nvarchar(2048) = null,
    @diff_only tinyint = 0, /* 0=no, 1=yes */
    @full_only tinyint = 0, /* 0=no, 1=yes */
    @copy_only tinyint = 0  /* 0=no, 1=yes */
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;
    declare @path nvarchar(2048)
    declare @now datetime
    declare @sql nvarchar(max)
    set @now = getdate()
    if @folder is null
    begin
        exec master.dbo.xp_instance_regread
            N'HKEY_LOCAL_MACHINE',
            N'Software\Microsoft\Microsoft SQL Server\MSSQLServer',
            N'BackupDirectory',
            @folder output;
    end;
    set @path = @folder+'\'+QUOTENAME(@dbName)+'\'
        +RIGHT('0000'+CONVERT(nvarchar,DATEPART(yyyy,@now)),4)
        +RIGHT('00'+CONVERT(nvarchar,DATEPART(ww,@now)),2)
    exec master.sys.xp_create_subdir @path
    set @sql = 'BACKUP DATABASE '+QUOTENAME(@dbName)+' TO DISK='''+@path+'\'
        +QUOTENAME(@dbName)+'_'+CONVERT(nvarchar, @now, 112)+'_'
        +RIGHT('0000'+CONVERT(nvarchar, DATEPART(hh,@now)),2)
        +RIGHT('00'+CONVERT(nvarchar, DATEPART(mi, @now)),2)
        +RIGHT('00'+CONVERT(nvarchar, DATEPART(ss,@now)),2)
        +CASE WHEN @diff_only = 1 THEN N'.dif' ELSE N'.bak' END
        +''' with stats=10'
    -- Can't do a differential on master db
    if @diff_only = 1 and UPPER(@dbName) <> N'MASTER' AND @full_only = 0
    begin
        set @sql = @sql + ', DIFFERENTIAL'
    end;
    if @copy_only = 1
    begin
        set @sql = @sql + ', copy_only'
    end;
    exec(@sql)
END
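For reference, the wrapper is then run with a single call; the defaults give full backups of every online database, and @diff_only = 1 switches to differentials:
EXEC [CommonDB].[dbo].[Full_backup_databases];                 -- full backups
EXEC [CommonDB].[dbo].[Full_backup_databases] @diff_only = 1;  -- differentials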
There are 28 DBs on that server, so I was thinking of either scheduling a job that copies the backups taken within the last 24 hours and moves them to AWS, or changing my code so the backups are written to 2 separate locations. I would like to know whether there are performance issues with the 1st method (I am guessing yes, since it moves all the data over the wire), or whether the 2nd method works best. If so, what sort of changes should I make to my script?
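For the 1st method, I was picturing the job step that figures out what to copy as something like this (just a sketch; the actual copy to AWS would be a separate step):
-- list backup files finished in the last 24 hours, from msdb backup history
SELECT bmf.physical_device_name, bs.database_name, bs.backup_finish_date
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
    ON bmf.media_set_id = bs.media_set_id
WHERE bs.backup_finish_date >= DATEADD(HOUR, -24, GETDATE());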
February 2, 2018 at 10:09 am
You can use the following. No matter what you do, it is going to take time to transfer the file over the wire. You may want to use WITH COMPRESSION, in addition to splitting into multiple backup files. In my experience, splitting into multiple files improves backup performance (though it is more difficult to administer when it comes to restore), but in your case it might not help; you can try it with one database backup first.
BACKUP DATABASE AdventureWorks
TO DISK = 'C:\Backup\SingleFile\AdventureWorks.bak'
MIRROR TO DISK = 'C:\Backup\MirrorFile\AdventureWorks.bak'
WITH FORMAT
GO
If this command is being run for the first time, it is mandatory to use the WITH FORMAT clause, but for subsequent runs it is not required. WITH FORMAT reinitializes the backup media.
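For example, a compressed backup striped across two files would look roughly like this (the database name and folders are just placeholders, and backup compression has to be supported by your edition/version):
BACKUP DATABASE AdventureWorks
TO DISK = 'C:\Backup\Stripe1\AdventureWorks_1.bak',
   DISK = 'C:\Backup\Stripe2\AdventureWorks_2.bak'
WITH COMPRESSION, STATS = 10
GO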
February 2, 2018 at 10:27 am
goher2000 - Friday, February 2, 2018 10:09 AM
You can use the following. No matter what you do, it is going to take time to transfer the file over the wire. You may want to use WITH COMPRESSION, in addition to splitting into multiple backup files. In my experience, splitting into multiple files improves backup performance (though it is more difficult to administer when it comes to restore), but in your case it might not help; you can try it with one database backup first.
BACKUP DATABASE AdventureWorks
TO DISK = 'C:\Backup\SingleFile\AdventureWorks.bak'
MIRROR TO DISK = 'C:\Backup\MirrorFile\AdventureWorks.bak'
WITH FORMAT
GO
If this command is being run for the first time, it is mandatory to use the WITH FORMAT clause, but for subsequent runs it is not required. WITH FORMAT reinitializes the backup media.
I really like using mirrored backups, but it also requires Enterprise Edition.
Sue
February 2, 2018 at 11:24 am
All our SQL Servers are running Standard Edition, so I am guessing the mirrored backup option is out?
February 2, 2018 at 12:14 pm
How reliable and fast is your connection to AWS? It seems like that could be a liability in directly storing backups there. To make sure backups run fast, I'd store them locally (preferably on a different disk than your live database files) and then copy them to AWS afterwards, as you suggested in your first option. Storing the local backups somewhere different than your live database files is key though; I worked somewhere once where a client came to us to help them restore their database after they had disk corruption, only to find out the backup file was corrupt too because it was stored on the same disk. <facepalm> :blink:
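Something along these lines as a follow-up Agent job step could handle the copy (just a sketch: both paths are placeholders, xp_cmdshell has to be enabled, and a CmdExec or PowerShell step would work just as well):
-- after the backup job finishes, push files written in the last day to the offsite location
-- /S copies subfolders, /MAXAGE:1 skips files older than one day
EXEC master.dbo.xp_cmdshell 'robocopy "D:\SQLBackups" "\\offsite-or-aws-gateway\sqlbackups" *.bak *.dif /S /MAXAGE:1';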
February 2, 2018 at 2:24 pm
Chris Harshman - Friday, February 2, 2018 12:14 PM
How reliable and fast is your connection to AWS? It seems like that could be a liability in directly storing backups there. To make sure backups run fast, I'd store them locally (preferably on a different disk than your live database files) and then copy them to AWS afterwards, as you suggested in your first option. Storing the local backups somewhere different than your live database files is key though; I worked somewhere once where a client came to us to help them restore their database after they had disk corruption, only to find out the backup file was corrupt too because it was stored on the same disk. <facepalm> :blink:
Believe it or not, that's how it was set up before I took over that system: data files, log files, and backup files were all on the same disk. The connection to AWS is somewhat reliable, but restoring from a backup is as slow as a snail or a turtle. I restored a DB once that was 14.6 GB in size, and it took me over an hour and a half. So they want the backups on the local server for faster recovery, but they also want them in AWS in case the whole server goes south.
February 2, 2018 at 2:32 pm
It definitely makes sense to have some sort of offsite copy of your backups; it will be useful for disaster recovery scenarios.