April 19, 2012 at 2:31 pm
I've been reading a number of articles, blog posts and forum discussions on tuning backups by backing up to the 'NUL' device. Since NUL is a construct of DOS/Windows (the "bit bucket", equivalent to /dev/null on Linux), data you send to it goes nowhere and is never written anywhere.
So if you are trying to tune a backup and see what the theoretical throughput is, what exactly are you measuring? It seems to me this measures how much data can be read from disk, pumped into memory and pumped out to nowhere.
How far through the system do we get before we reach "nowhere"? Right up to the point of writing out to disk? And should we then correlate this with whether the destination disk can handle the throughput, using a tool such as SQLIO or CrystalDiskMark?
April 19, 2012 at 3:06 pm
The only thing you're cutting out is the write to disk. SQL hands the full backup buffer to the OS and says 'write this to the specified file please' and the OS discards it.
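To make that concrete, here's a minimal sketch of a backup to the NUL device in T-SQL (the database name AdventureWorks is a placeholder):

```sql
-- Back up to the NUL device: SQL Server reads the data and fills its
-- backup buffers as normal, but the OS discards every write, so no
-- backup file is ever produced. Never use this as a real backup.
BACKUP DATABASE AdventureWorks
TO DISK = 'NUL'
WITH COPY_ONLY;  -- COPY_ONLY so the test doesn't reset the differential base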
There's a good coverage of tuning backups here: http://sqlcat.com/sqlcat/b/technicalnotes/archive/2008/04/21/tuning-the-performance-of-backup-compression-in-sql-server-2008.aspx
It's not specific to compression, despite the title.
In short, if you're backing up to NUL you're testing either the I/O throughput of the source device (if fiddling with BUFFERCOUNT and MAXTRANSFERSIZE has no effect), the CPU (if you're CPU-bound during the backup), or the limitations of the buffer sizes (if fiddling with BUFFERCOUNT and MAXTRANSFERSIZE does have an effect).
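That test could be sketched like this (the database name and the buffer values are illustrative, not recommendations):

```sql
-- Throughput test only: time a NUL backup while varying the buffer knobs.
-- If changing BUFFERCOUNT / MAXTRANSFERSIZE changes the elapsed time, the
-- buffers were the bottleneck; if not, you're bound by the source disks
-- (or by CPU, e.g. when backup compression is on).
BACKUP DATABASE AdventureWorks
TO DISK = 'NUL'
WITH COPY_ONLY,
     BUFFERCOUNT = 50,            -- number of I/O buffers
     MAXTRANSFERSIZE = 4194304;   -- 4 MB per transfer (the maximum allowed)
```

Re-run with different values and compare the elapsed times reported in the message output.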
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
April 19, 2012 at 4:33 pm
Thanks Gail, that was actually the first article I read on the subject :-). Re-reading it I was able to take away a bit more. And thanks for the explanation - seems to have confirmed my understanding.