October 30, 2015 at 7:30 am
Jeff Moden (10/29/2015)
Jayanth_Kurup (10/29/2015)
http://blogs.msdn.com/b/sqlcat/archive/2008/03/02/backup-more-than-1gb-per-second-using-sql2008-backup-compression.aspx
Interesting read.
What's "interesting" is that it would appear (I may have misread it) that they're putting the backups on the same system as the database and that would not be a good thing I don't care how fast it is.
What if they're just staging the backups to DAS and then copying them?
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
October 31, 2015 at 7:13 pm
Kristen-173977 (10/30/2015)
Jeff Moden (10/29/2015)
I've got a major table that contains call recordings, divvied up like that (almost 400 GB), and I have one filegroup and file per month. Once a month has gone by, I do a final backup of that month and set the filegroup to READONLY. As a bit of a sidebar, the auditors LOVE that....
Done right, it's not a trivial task. It takes careful planning to avoid storing a whole lot of blank disk space in read-only files. You DO have to make a plan and it DOES have a lot of caveats, but it's worth every minute of research and planning.
Any documentation on that Jeff? We don't really (I think?!!) have big enough tables to warrant that approach, but I would certainly appreciate seeing a "worked example" of some sort 🙂
Absolutely. Please see the following "attachment" link. The PPT runs you through it all, and the code (numbered in the correct order) builds an example 4-million-row table and demonstrates how to partition it, including the monster trick of doing a final index rebuild with no free space left in the filegroup, so you're not saving empty space in a READ-ONLY filegroup.
http://www.sqlservercentral.com/Forums/Attachment17585.aspx
It's a two hour presentation that I give now and again.
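A rough sketch of the monthly READ-ONLY filegroup pattern being described, for anyone who can't grab the attachment. The database, filegroup, file names, and paths here are made up; the real sizing, partition scheme, and the final index rebuild that squeezes out free space are covered in the attached PPT and scripts.

-- Minimal sketch, assuming a hypothetical CallRecordings database and one
-- filegroup/file per month. Names, paths, and sizes are placeholders.

-- 1) Add a filegroup and file for the month that just closed.
ALTER DATABASE CallRecordings ADD FILEGROUP FG_2015_09;
ALTER DATABASE CallRecordings
    ADD FILE (NAME = N'CallRecordings_2015_09',
              FILENAME = N'D:\Data\CallRecordings_2015_09.ndf',
              SIZE = 10GB, FILEGROWTH = 1GB)
    TO FILEGROUP FG_2015_09;

-- 2) Once the month's data is loaded and the final index rebuild has left
--    little free space in the file, take one last backup of that filegroup...
BACKUP DATABASE CallRecordings
    FILEGROUP = N'FG_2015_09'
    TO DISK = N'E:\Backup\CallRecordings_FG_2015_09_final.bak'
    WITH COMPRESSION, CHECKSUM;

-- 3) ...then mark the filegroup read-only so it never needs backing up again.
ALTER DATABASE CallRecordings MODIFY FILEGROUP FG_2015_09 READ_ONLY;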
--Jeff Moden
Change is inevitable... Change for the better is not.
November 1, 2015 at 9:40 pm
This article describes many options for speeding up backups; maybe it will help:
http://henkvandervalk.com/how-to-increase-sql-database-full-backup-speed-using-compression-and-solid-state-disks
November 3, 2015 at 2:01 am
I use these settings:
SKIP, NOREWIND, NOUNLOAD, COMPRESSION
BUFFERCOUNT = 2200
BLOCKSIZE = 65536
MAXTRANSFERSIZE = 2097152
My 8-hour backup (1 TB compressed) is now done in 1 hour 20 minutes, at about 800 MB/s.
Check memory usage before setting these parameters, and raise them in steps. A full example command is sketched below.
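A sketch of what a full backup command with those options might look like. The database name, backup paths, and the two-file stripe are assumptions; the buffer memory used is roughly BUFFERCOUNT * MAXTRANSFERSIZE, which is why stepping the values up while watching memory matters.

-- Hypothetical database and paths; adjust stripe count, BUFFERCOUNT, and
-- MAXTRANSFERSIZE to what your memory and I/O subsystem can sustain.
BACKUP DATABASE BigDB
    TO DISK = N'G:\Backup\BigDB_Full_1.bak',
       DISK = N'H:\Backup\BigDB_Full_2.bak'
    WITH COMPRESSION, CHECKSUM,
         SKIP, NOREWIND, NOUNLOAD,
         BUFFERCOUNT = 2200,        -- ~4.4 GB of backup buffers at 2 MB each
         BLOCKSIZE = 65536,
         MAXTRANSFERSIZE = 2097152, -- 2 MB per transfer
         STATS = 5;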