August 24, 2015 at 3:59 am
Do I need to add data-file defragmentation to my regular database maintenance, or does SQL Server avoid file fragmentation by reserving contiguous data space?
A: I'm not sure that Microsoft has an "official" defragmentation recommendation for SQL Server data files, but I believe that OS-level defragmentation can often improve SQL Server performance. SQL Server uses standard Windows I/O APIs to manage its physical I/O; SQL Server doesn't have direct control over the exact physical placement of bytes on disk when Windows creates and expands SQL Server disk files. Imagine that you're creating a new database—which, of course, means that you're creating a new physical file. Windows will likely attempt to use contiguous disk space, if it's available, to create the data file. However, Windows can't create the file in contiguous space if the file is too large to fit into any of the existing blocks of contiguous disk space. In that case, the SQL Server data file won't be contiguous, and SQL Server can't do anything about it.
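As a rough sketch of what "giving Windows a chance at contiguous space" looks like (the database name, file paths, and sizes below are purely illustrative), pre-sizing the data and log files at creation time lets Windows try to find one large contiguous block up front:

CREATE DATABASE SalesDB
ON PRIMARY
(
    NAME = SalesDB_data,
    FILENAME = 'D:\SQLData\SalesDB.mdf',      -- hypothetical path
    SIZE = 20GB,                              -- pre-size to the expected data volume
    FILEGROWTH = 2GB                          -- grow in large chunks, not tiny increments
)
LOG ON
(
    NAME = SalesDB_log,
    FILENAME = 'E:\SQLLogs\SalesDB_log.ldf',  -- hypothetical path
    SIZE = 4GB,
    FILEGROWTH = 1GB
);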
Let's assume that the data file can fit in a contiguous block of disk space at creation time. Physical fragmentation could still occur if the SQL Server data file is set to autogrow and Windows can't satisfy those additional requests for space with contiguous space. Note that none of the defragmentation techniques in SQL Server (e.g., DBCC INDEXDEFRAG) address the problem of physical fragmentation within the OS file system. Likewise, DBCC SHOWCONTIG doesn't report the fragmentation that happens when data files are spread over noncontiguous disk space in the file system. These commands reveal other types of fragmentation, within the SQL Server extents and rows of the physical data file. (A discussion of these commands is outside the scope of this Q&A.)

Will SQL Server performance improve if you defragment SQL Server's data files? It depends. Certain classes of queries—such as table scans and large-range scans that read data in large, contiguous blocks—might benefit from OS-level file defragmentation. However, online transaction processing (OLTP)-style queries that issue single INSERT, SELECT, UPDATE, and DELETE commands are unlikely to benefit, because short transactions that affect relatively small amounts of data rarely need to read large blocks of data in one operation.
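As a side note, on current versions of SQL Server the sys.dm_db_index_physical_stats DMV supersedes DBCC SHOWCONTIG for inspecting that index-level fragmentation inside the data file. A minimal sketch, with dbo.Orders as a placeholder table name:

-- Index fragmentation inside the data file (not OS-level file fragmentation)
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       i.name                            AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id  = ips.index_id;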
August 24, 2015 at 4:04 am
Err, what's the question here?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
August 24, 2015 at 6:25 am
In over 20 years working with SQL Server, I've only seen the OS files defragmented in a handful of cases. Most of the time, if you build your database files large enough and grow them in large enough chunks, you'll have extremely minimal fragmentation at the OS level (see the example below). Only in extreme cases, where a database grew to hundreds of GB a few MB at a time, have I seen the need for OS-level defragmentation. Add to that the fact that in most cases you can't afford the downtime, because you'll have to take SQL Server offline in order to do this defrag. Then, finally, most enterprise-level systems aren't sitting on a single disk, but rather on a disk array where you've got distributed storage anyway. In short, people just don't defrag at the OS level.
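A rough sketch of that pre-sizing in practice (the database name, file name, and sizes are purely illustrative):

-- Pre-size the data file and set a large, fixed growth increment
-- so autogrow fires rarely and in big chunks
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_data, SIZE = 200GB, FILEGROWTH = 4GB);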
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning