June 14, 2004 at 11:57 am
Does anyone do regular operating-system disk defrags on their SQL Servers? Good idea, bad idea? Do you find it helps performance at all?
Anything I've found so far seems to say there's no reason not to do it (not exactly a resounding yes), but I'm getting a little pushback from our NT admins, and I'm just trying to get a feel for what other folks are doing.
Thanks
June 14, 2004 at 12:02 pm
We run it on all non-DB-related drives. Whether it makes a difference I'm dubious about; I've not seen any definitive stats to show me that it does. Bearing in mind that, server-wise, there are few changes to anything but the SQL data files, I probably wouldn't worry about it (unless of course you are hosting more than SQL and working with flat files).
June 14, 2004 at 12:08 pm
I was considering running it on the DB drive because a defrag analysis report shows at least one of the databases is in 27 fragments. I'm supposing that having it in one contiguous space might help performance, but I really haven't been able to come up with any definitive information on whether it's of any value or not.
June 14, 2004 at 12:52 pm
I wouldn't recommend using a defrag on data files. You won't be able to without stopping SQL anyhow.
One option to resolve the disk-level fragmentation of the files would be to back up and drop all the databases, then reformat the drive and restore the databases; this would push them into contiguous space.
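A rough T-SQL sketch of that backup-and-restore approach; the database name, logical file names, and paths here are all placeholders you'd swap for your own:

```sql
-- Back up the database to a drive that is NOT being reformatted
BACKUP DATABASE MyDB TO DISK = 'E:\Backups\MyDB.bak' WITH INIT

-- Drop it, then reformat the data drive outside SQL Server
DROP DATABASE MyDB

-- Restore; WITH MOVE lays the files back down in contiguous space
RESTORE DATABASE MyDB FROM DISK = 'E:\Backups\MyDB.bak'
WITH MOVE 'MyDB_Data' TO 'D:\Data\MyDB.mdf',
     MOVE 'MyDB_Log'  TO 'D:\Data\MyDB_log.ldf'
```

Obviously verify the backup before dropping anything.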
June 14, 2004 at 1:54 pm
You should use DBCC for defragging:
DBCC INDEXDEFRAG: In SQL Server 2000, Microsoft introduced DBCC INDEXDEFRAG to help reduce logical index fragmentation. When this command runs, it reduces fragmentation without locking the table, allowing other users to access the table while the defragmentation process is running. Unfortunately, this command doesn't do a great job of logical defragmentation.
The only way to truly reduce logical fragmentation is to rebuild your table's indexes. While this will reduce all fragmentation, unfortunately it will lock the table, preventing users from accessing it during this process. This means that you will need to find a time when this will not present a problem to your users.
Of course, if you are unable to find a time to rebuild your indexes, then running DBCC INDEXDEFRAG is better than doing nothing.
Example:
DBCC INDEXDEFRAG (Database_Name, Table_Name, Index_Name)
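The full rebuild mentioned above can be done with DBCC DBREINDEX in SQL Server 2000; the database and table names below are placeholders:

```sql
-- Rebuild ALL indexes on a table; this takes locks, so schedule it
-- during a quiet window. '' = all indexes, 0 = keep existing fill factor
DBCC DBREINDEX ('MyDatabase.dbo.MyTable', '', 0)
```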
All the best,
Dale
Author: An Introduction to SQL Server 2005 Management Studio
June 14, 2004 at 3:27 pm
Well, OS defragging helps, if nothing else, to consolidate some free space into bigger chunks. I have a 50 GB drive with about 20 GB free; the largest contiguous free space is maybe a couple of GB.
As stated, though, you would need to shut down SQL Server, or at least take the DB offline, to do an OS defrag.
The DBCC defrags will put the data in better logical order. But think it through: if the DB has 3 allocations, but they sit physically 1-3-2 on the drive, the index defrag will load the data into allocations 1-2-3 in that order. It doesn't comprehend physical allocation positions.
I'm dating myself here, but consider the head movement as you read through the DB.
Again, I'm dating myself. In VSAM (a mainframe access method), everything is stored based on an RBA (Relative Byte Address) from the beginning of the file. So the access method says it wants some number of bytes starting at a given RBA, and it is the OS's responsibility to resolve that, using its equivalent of the FAT, to decide which allocation contains that RBA, etc.
I've done enough digging (not a lot, but enough) into SQL data files to make an educated guess that they work very much the same.
Also, as the drive gets fragmented, more activity occurs maintaining the FAT (what's it called on NTFS? Alzheimer's is setting in). Hard drives don't fail like they used to, so it's software messing up the drive that you have to worry about. If that FAT gets hosed... OH!!!
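For what it's worth, the logical ordering those DBCC defrags act on can be measured with DBCC SHOWCONTIG on a 2000-era server; the table name here is a placeholder:

```sql
-- Reports scan density and logical fragmentation for one table;
-- WITH FAST skips the slower page-density calculations
DBCC SHOWCONTIG ('MyTable') WITH FAST
```

None of those numbers say anything about where the file's fragments physically sit on the disk, which is the point above.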
KlK
June 14, 2004 at 4:57 pm
From the reading I've done today, I believe that DBCC DBREINDEX defrags the logical database structure but does nothing about the physical makeup of the file(s).
I have the window to shut SQL down; this is an M-F app, so after a full backup Friday night, I could theoretically shut SQL down for the weekend on this particular server. So that isn't an issue for me in this case (not true with my other SQL Servers).
My feeling is that having the physical file in order, in one contiguous space, would make more sense than having it broken into 27 pieces scattered across my drives. My hesitancy stems from my years as an NT administrator: I could swear I had read something about not "needing" to defrag NTFS drives (though I don't think that's actually true), and conversely I can't find a solid reference saying "yes," this is a good thing to do on your SQL Servers.
If someone could point to a Microsoft article that said "do it" or "don't do it," I'd feel better, but I haven't found one. I have run across a couple of articles (not Microsoft) that essentially read, "it can't hurt..." which isn't exactly the resounding yes I was looking for!
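If you just want to see how badly a single data file is fragmented without running a full defrag, one option is the free Sysinternals Contig tool (the path below is hypothetical):

```shell
REM -a analyzes fragmentation of the named file without changing it;
REM it reports the number of fragments the file occupies on disk
contig -a "D:\MSSQL\Data\MyDB.mdf"
```

Contig can also defragment a single file in place, but like any defrag of an open data file, SQL would need to be stopped first.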
June 14, 2004 at 5:19 pm
http://www.winnetmag.com/SQLServer/Article/ArticleID/15742/15742.html
Hope that helps.
All the best,
Dale
Author: An Introduction to SQL Server 2005 Management Studio
June 14, 2004 at 10:49 pm
How about trying it during your maintenance window and letting us know if it helped any? (If it helps noticeably, I would be surprised, but it would be good to know.)
June 15, 2004 at 8:20 am
It depends on your version of SQL Server, and on how and when you configured your databases. Older versions of SQL Server required you to preallocate physical disk space. While you could extend the database to a point, it was best to anticipate the maximum size and allocate it up front. If this was done on a "clean" drive, your physical database would not benefit from a file defrag, since it was originally laid out in an efficient manner on the hard drive. Recent versions of SQL Server provide automatic file growth, in which case the database expands when needed using whatever space is available at that time. If the available free space is fragmented... guess what... your new extensions to the database will be fragmented and won't run efficiently. So here, defragging the drives holding the SQL databases and indexes would be of benefit IF there are other non-database files on the drives.
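You can see whether a database is relying on autogrow, and pre-grow it in one contiguous step, with something like the following sketch (database and logical file names are placeholders):

```sql
-- sysfiles (SQL Server 2000-era catalog) shows each file's current
-- size and growth increment for the current database
SELECT name, size, growth, status FROM sysfiles

-- Grow the data file once, up front, instead of letting many small
-- autogrow increments scatter extensions across the drive
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_Data, SIZE = 10240MB)
```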
Generally, we dedicate drives to our SQL Server databases and logs and preallocate space when we build the database, so for us file defragging serves no purpose. (We certainly run DBCC defrags to keep SQL Server's index pages in optimal shape, but as previously pointed out, that is something different from physical disk defrags.)
June 15, 2004 at 8:41 am
I seem to remember one of the Brians posting something to the effect that physical file defrag was of limited use if your database was on a RAID 5 array, because the data is striped in any case.
In theory, if you set the database to auto-grow, then you risk physical file fragmentation, but realistically your database should not be expanding its physical files all the time. The size of your database should be under your control.
It is possible for the log files to become fragmented, but as these are written serially, it shouldn't be much of a problem.
June 15, 2004 at 9:00 am
We monitor disk queue length on the database and log file drives. On those servers where disk queue length seemed to be getting bad, we ran a disk defrag on the database drive during the day, while SQL Server was running with lots of activity. We actually ran it a few times. While running it with SQL Server online did not completely eliminate file fragmentation, each pass reduced the number of file fragments, and we also noticed that the disk queueing numbers decreased.
On another server, we also did what someone mentioned above: stopped SQL Server, copied the mdf files to a different location, formatted the drive, and then moved them back. That is the surest way to eliminate file fragmentation. We noticed that disk queue length numbers improved on that machine as well.
I will say, though, that if disk queue length numbers are not a concern, then there is probably no need to do it. But it certainly would not hurt anything if you can stand the outage time.
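If you want to watch that counter outside of Perfmon, the typeperf command that ships with Windows can log it from the command line; the counter path is the standard one, but sample interval is just an example:

```shell
REM Sample the average disk queue length across all physical disks
REM every 15 seconds until interrupted (Ctrl+C)
typeperf "\PhysicalDisk(_Total)\Avg. Disk Queue Length" -si 15
```

Substitute a specific instance such as `\PhysicalDisk(1 D:)` to watch just the database drive.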
Cheers
June 15, 2004 at 9:13 am
Check out this previous post. It goes in the same direction as this one, and Steve Jones jumped in. You might want to read it:
http://www.sqlservercentral.com/forums/shwmessage.aspx?forumid=9&messageid=101769
All the best,
Dale
Author: An Introduction to SQL Server 2005 Management Studio
June 18, 2004 at 6:00 am
I used Diskkeeper 8 to defragment a heavily fragmented drive (a 40 GB mirrored volume) that housed applications, data, and SQL Server. I stopped the running services, including SQL Server, replaced a bad CPU fan (the reason I was called out there -- BSOD), installed a trial version of Diskkeeper 8, and then ran the defrag. I had to let the defrag run several times over a 4-hour period to finally clear up the vast majority of the fragmentation. I then restarted the server, let all the processes fire up, and had no issues with SQL or any other program.
I have read the articles that say yes and some that say no. If you really want to be sure, install sql to a test box, add some of your db's and then test defragmentation.
A side note: their backup time with Veritas was cut by 30% after the defrag.
BillH
June 18, 2004 at 7:22 am
Good input, BillH, except:
When she moves the databases to a test server, they will be defragmented as a result of the copy. So she won't see any benefit from defragging the test environment, because the files will already be contiguous by the time she runs Diskkeeper.