January 22, 2004 at 1:04 am
Hello all,

I'd like to set up a SQL 2000 server with separate disk devices for the data and the log files. What is the best choice for the allocation unit size when formatting the devices?

Thanks,
Peter
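For reference, this is roughly the layout I have in mind; a minimal sketch, assuming E: and F: are the two separate devices (database name, drive letters, paths and sizes are just placeholders):

-- Data file on one physical device, log file on another.
-- E: and F: stand in for the two separate disk devices.
CREATE DATABASE MyAppDB
ON PRIMARY
(
    NAME = MyAppDB_Data,
    FILENAME = 'E:\SQLData\MyAppDB_Data.mdf',
    SIZE = 500MB,
    FILEGROWTH = 100MB
)
LOG ON
(
    NAME = MyAppDB_Log,
    FILENAME = 'F:\SQLLogs\MyAppDB_Log.ldf',
    SIZE = 200MB,
    FILEGROWTH = 50MB
)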
January 22, 2004 at 11:39 pm
Hi
When SQL Server reads data from a data file on disk, it does so in 64 KB blocks, the size of an extent. So setting the allocation unit (cluster) size for your NTFS partition to 64 KB saves a lot of disk I/O: one extent read maps to exactly one cluster read. The default NTFS cluster size is 4 KB regardless of the partition size. That's fine for file servers, because a cluster can only belong to one file, so a small 1 or 2 KB file doesn't waste much space. But for SQL data files, 64 KB is the best choice.

Also, when databases are backed up, an extent is the smallest unit read, so if any data has changed on an extent, the whole extent is backed up. With a 64 KB cluster size, each extent to be backed up results in only one I/O.

Another reason for a 64 KB cluster size is fragmentation. When SQL Server allocates a new extent for a table, that extent sits on exactly one cluster, so it can't be split across fragments at the file-system level. Good for performance.
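If you want to see how contiguously a table's extents are actually laid out, DBCC SHOWCONTIG will tell you. A quick sketch; the table name here is just an example:

-- Report fragmentation statistics for a table (SQL 2000 syntax).
DBCC SHOWCONTIG ('Orders')
-- In the output, look at "Extents Scanned", "Avg. Pages per Extent" and
-- "Extent Scan Fragmentation": low extent fragmentation means the extents
-- mostly follow one another on disk.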
As far as the log file goes, I've had a look around and can't find any guidelines. Remember that in databases with mostly writes, the log file is written to serially, so neither lots of spindles nor, for that matter, the cluster size will make much difference. I'd say your best bet for a log file partition is a RAID 1 array with a 32 or 64 KB cluster size.
My 2 cents worth.
David
January 23, 2004 at 2:33 am
I've always set data drives to 8 KB for OLTP databases, to match the page size for random reads etc. Log and backup drives I've always set to 64 KB. It means most defrag tools won't work, of course.

I've never been able to test the effect of changing block size; I'd dearly love to know if it really has any measurable effect!
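If anyone fancies testing it, I imagine something along these lines, run against the same large table on two differently formatted volumes, would do the trick. A rough sketch only; the table name is made up, and DBCC DROPCLEANBUFFERS belongs on a test box, not production:

-- Time a cold-cache scan on each volume and compare.
DBCC DROPCLEANBUFFERS        -- flush the buffer cache so reads come from disk (test servers only!)
SET STATISTICS TIME ON
SET STATISTICS IO ON
SELECT COUNT(*) FROM BigTestTable   -- hypothetical table, large enough to dominate caching effects
SET STATISTICS IO OFF
SET STATISTICS TIME OFF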
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
January 23, 2004 at 8:50 am
Use 64K for the file system cluster size. That helps performance even with OLTP: although the basic disk I/O granularity is indeed 8K pages, read-aheads work in 64K extents.
--Jonathan
January 23, 2004 at 9:00 am
I've often wondered about that. If I read a series of non-contiguous 8K pages, does SQL Server actually bring in just those pages, or does it use read-ahead to bring in whole extents? And a follow-up on read-ahead: does it assume contiguous storage for the rows? Or is this a whole new question?
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
January 23, 2004 at 9:12 am
Read-ahead is used when the pages should be contiguous (e.g. index and table scans). Pages outside the read-ahead range will not be cached, even if they sit in the extents that were read. Note that the file system will fetch a 64K cluster when that's the cluster size, but SQL Server will only cache the 8K page it requested. Since it doesn't take the file and disk system much more time to transfer 64K than 8K, the performance gained on read-aheads (and other sequential operations) by setting the cluster size to 64K more than offsets the overhead of reading 64K rather than 8K for random I/O.
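You can watch the read-ahead mechanism at work with SET STATISTICS IO, which reports read-ahead reads separately from ordinary physical reads. The query here is just an illustration; substitute any scan-heavy query of your own:

SET STATISTICS IO ON
SELECT * FROM Orders WHERE OrderDate >= '20030101'   -- illustrative range scan
SET STATISTICS IO OFF
-- The messages output shows "physical reads" (single-page fetches) and
-- "read-ahead reads" (pages brought in ahead of the scan) as separate counts.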
--Jonathan