September 17, 2012 at 10:59 pm
Hello,
Apart from backing up to multiple locations and compressing the backup file, is there any other way to speed up the backup process? Without using third-party tools, of course.
Thanks in advance.
Smith.
September 18, 2012 at 5:30 am
Hardware and compression are the two best things that I know of for making backups run faster. There just aren't any knobs to twist or switches to throw that make a big difference.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
September 18, 2012 at 12:11 pm
One thing I have heard from a well-known MVP is that if you reorganize your indexes and then take a backup, it might save some time and space.
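Something like this, I think (the table and database names are just placeholders):

ALTER INDEX ALL ON dbo.Orders REORGANIZE;

BACKUP DATABASE MyDatabase
TO DISK = 'D:\Backups\MyDatabase.bak';

Worth testing on your own system before relying on it, though.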
September 19, 2012 at 9:41 pm
Something came to mind this morning.
Suppose a transaction was started before the backup process begins - just one transaction, for example.
The backup process backs up the pages... the transaction is still open...
But before the backup completes, that transaction commits or rolls back.
So the backup will also include the part of the log needed to roll forward or roll back the transaction when the database is restored.
My question is: isn't backing up the log an overhead here? If I disconnect all the users before the backup starts, ensuring there are no open transactions that may or may not commit before the backup process ends, would it be faster? (I know I can't just disconnect all the users, of course! 🙂)
Thanks.
September 20, 2012 at 12:54 am
Given that the bulk of the time taken to do a backup is I/O, the things to do to speed up the backup are based around reducing that - use whatever built-in compression you have within SQL Server to compress the data files and the backup files being created.
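For example, assuming an edition that supports backup compression (the database name and path here are just placeholders):

BACKUP DATABASE MyDatabase
TO DISK = 'D:\Backups\MyDatabase.bak'
WITH COMPRESSION, STATS = 10;

STATS = 10 just reports progress every 10 percent so you can watch how it runs.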
The other thing you can do is make sure that your database isn't cluttered up with data that's not needed - look into various archiving strategies for really old, untouched data that you only keep for compliance or audit purposes, for example. Archive it off into a separate database, or keep an "old stuff" filegroup that doesn't get backed up as often as the "current stuff" filegroup. Hmm. I'm going to have to think about that one...
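Something along these lines, with hypothetical filegroup names:

-- Frequent backups of the active data only
BACKUP DATABASE MyDatabase
FILEGROUP = 'CurrentStuff'
TO DISK = 'D:\Backups\MyDatabase_Current.bak'
WITH COMPRESSION;

-- Occasional backups of the archive filegroup
BACKUP DATABASE MyDatabase
FILEGROUP = 'OldStuff'
TO DISK = 'D:\Backups\MyDatabase_Old.bak'
WITH COMPRESSION;

You'd need to think through the restore sequence carefully before adopting this, of course.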
Thomas Rushton
blog: https://thelonedba.wordpress.com
September 20, 2012 at 3:51 am
Joy Smith San (9/19/2012)
Something came to mind this morning. Suppose a transaction was started before the backup process begins - just one transaction, for example.
The backup process backs up the pages... the transaction is still open...
But before the backup completes, that transaction commits or rolls back.
So the backup will also include the part of the log needed to roll forward or roll back the transaction when the database is restored.
My question is: isn't backing up the log an overhead here? If I disconnect all the users before the backup starts, ensuring there are no open transactions that may or may not commit before the backup process ends, would it be faster? (I know I can't just disconnect all the users, of course! 🙂)
Thanks.
Yeah, if you give the backup process less work to do handling rollback/rollforward, then it'll be faster. But I wouldn't suggest disconnecting all users is a viable tuning method.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
September 20, 2012 at 3:58 am
Joy Smith San (9/19/2012)
If I disconnect all the users before the backup starts, ensuring there are no open transactions that may or may not commit before the backup process ends, would it be faster? (I know I can't just disconnect all the users, of course! 🙂)
One more thing: if a user is working with some data at the time, won't that be affected?
How about a weekly full backup on Sunday night and differential backups hourly (or at some other interval)? I hope this will definitely help with large DBs.
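As a rough sketch (the database name and paths are illustrative only):

-- Sunday night: full backup
BACKUP DATABASE MyDatabase
TO DISK = 'D:\Backups\MyDatabase_Full.bak';

-- Every hour: differential backup, containing only changes since the last full
BACKUP DATABASE MyDatabase
TO DISK = 'D:\Backups\MyDatabase_Diff.bak'
WITH DIFFERENTIAL;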
Regards
Durai Nagarajan
September 20, 2012 at 9:50 am
Users working with data don't impact the data-reading portion of the backup. They can impact the log, as noted above, but that mostly depends on the amount of logging done.
As far as the timing, that can impact the backup since I/O and CPU resources are shared among all processes. If the backup runs at the same time as something else doing work on the server, there is an impact, but it's never been that significant on most of my servers unless heavy compression is in use.
The ways you speed up backups are compression (needs more CPU, reduces I/O) and backing up to faster or multiple devices.
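For instance, striping one backup across two drives while compressing it (the paths here are illustrative):

BACKUP DATABASE MyDatabase
TO DISK = 'D:\Backups\MyDatabase_1.bak',
   DISK = 'E:\Backups\MyDatabase_2.bak'
WITH COMPRESSION;

Each file receives roughly half the data, so two spindles share the write load.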
September 21, 2012 at 8:38 am
For minor possible changes, you can look at adding the options:
BLOCKSIZE = 65536
BUFFERCOUNT = <something appropriate - maybe try 12 for 4GB RAM or any 32-bit, 128 for 64GB RAM>
MAXTRANSFERSIZE = 4194304
Other than that and the advice above, adding more disks to the source or target drives and/or checking for filesystem-level fragmentation on the source or the target are options.
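Putting those options together in one command (the database name, path, and the BUFFERCOUNT value are illustrative and should be tested against your own hardware):

BACKUP DATABASE MyDatabase
TO DISK = 'D:\Backups\MyDatabase.bak'
WITH BLOCKSIZE = 65536,
     BUFFERCOUNT = 128,
     MAXTRANSFERSIZE = 4194304;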