Backup/Restore optimization using MAXTRANSFERSIZE, BUFFERCOUNT, BLOCKSIZE

  • I'm testing parameters for the BACKUP and RESTORE commands to speed up a monthly database refresh process. The database is currently about 1TB, and the backup and restore each take roughly 4 hours.

    Testing various combinations of MAXTRANSFERSIZE, BUFFERCOUNT, and BLOCKSIZE on a 25GB database, I'm seeing a 40-50% improvement in backup times and about 15-20% in restores. But once I go above roughly BUFFERCOUNT = 500, I start getting out-of-memory errors.

    My question(s):
    Since I'm testing with a 25GB database (I don't have the time, space, or a spare server to repeatedly test a 1TB backup/restore), does database size affect the upper limit at which you start seeing out-of-memory errors?

    When hardcoding these parameters, how do you ensure that you never get out-of-memory errors? Or is that always a risk?
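
    For context, my understanding is that the backup buffers are allocated outside the buffer pool and total roughly BUFFERCOUNT * MAXTRANSFERSIZE, which would explain the OOM. A quick back-of-the-envelope check (the 4 MB transfer size below is an assumption, not my actual setting):

        -- Assumption: backup buffer memory ~= BUFFERCOUNT * MAXTRANSFERSIZE
        DECLARE @BufferCount   int = 500,  -- roughly where I start seeing OOM
                @MaxTransferMB int = 4;    -- MAXTRANSFERSIZE = 4194304 bytes (the 4 MB max)
        SELECT @BufferCount * @MaxTransferMB AS ApproxBufferMemoryMB;  -- ~2000 MB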

    Basically, my plan is to find the breaking point (OOM) with these parameters, back off a level or two from there, and then do a single test against the 1TB database during a normal backup and restore.
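
    For reference, the shape of the command I'm testing looks like this (the path and the exact values here are placeholders, not my final numbers):

        BACKUP DATABASE [MyDb]
        TO DISK = N'\\fileshare\backups\MyDb.bak'
        WITH COMPRESSION,
             MAXTRANSFERSIZE = 4194304,  -- 4 MB per transfer (default is 1 MB; 4 MB is the max)
             BUFFERCOUNT = 400,          -- ~400 * 4 MB = ~1600 MB of buffer memory
             BLOCKSIZE = 65536,          -- 64 KB physical block size
             STATS = 10;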

    Attached are my test results if you're interested.

    Thanks for any input!

  • 1 - How many files are you backing up to?
    2 - Are you using backup compression?
    3 - How many cores on the server?
    4 - How much server memory?

    Can you back up to SAN and then do a SAN copy and/or LUN transfer to the other server? That is normally faster than backing up to, or restoring from, a remote server.

    1 and 2 are the ones I've seen have the most impact on backup and restore times for medium-to-large databases.
    Based on the standard servers my clients use and the size of their databases, we normally default to 4 to 8 files, always with compression on.

    If backing up to a network share I do play with BUFFERCOUNT and MAXTRANSFERSIZE, but I don't have those values at hand.
    For critical cases where speed is a must and volumes are quite high, I have used some of the techniques mentioned in this document: http://download.microsoft.com/download/d/9/4/d948f981-926e-40fa-a026-5bfcf076d9b9/Technical%20Case%20Study-Backup%20VLDB%20Over%20Network_Final.docx
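
    As a sketch, a striped backup along those lines would look something like this (file names are placeholders):

        BACKUP DATABASE [MyDb]
        TO DISK = N'\\share\bak\MyDb_1.bak',
           DISK = N'\\share\bak\MyDb_2.bak',
           DISK = N'\\share\bak\MyDb_3.bak',
           DISK = N'\\share\bak\MyDb_4.bak'
        WITH COMPRESSION, STATS = 10;

    Keep in mind a restore from a striped set has to list every file in the FROM clause.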

  • frederico_fonseca - Friday, July 20, 2018 11:00 AM

    We're backing up to a network fileshare (on a VM with SAN disks).
    I'm testing with only 1 backup file at the moment while varying the 3 parameters. Backups with 4 files give about a 7% improvement, and 8 files about 20%, but restores are slower: 150% and 100% slower, respectively.
    Backup compression = yes.
    8 cores on the test server and 12 on the prod server.
    Server memory 20GB / SQL memory 8GB on test; server memory 144GB / SQL memory 80GB on prod (both test and prod servers have other SQL instances installed).
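
    For completeness, the restore side takes the same three options; a sketch of the restore I'm testing (path is a placeholder):

        RESTORE DATABASE [MyDb]
        FROM DISK = N'\\fileshare\backups\MyDb.bak'
        WITH REPLACE,
             MAXTRANSFERSIZE = 4194304,
             BUFFERCOUNT = 400,
             BLOCKSIZE = 65536,
             STATS = 10;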
