February 23, 2004 at 9:24 am
I am considering running Diskeeper in the background on an NT4 server that hosts a SQL Server instance. How will this affect the databases? There are maintenance plan jobs in place for optimisation and integrity checks. I was wondering if anyone had opinions on the cumulative effect of reorganising the data/indexes in those jobs as well as defragmenting the files that contain the data. I have a suspicion it will either be a glorious triumph or it will end up on its knees begging for mercy. Any thoughts?
Thanks in anticipation...
(BTW, in case you hadn't guessed, I'm a newbie)
February 23, 2004 at 9:37 am
I've tended not to defrag the folders where SQL data is located. My suspicion is that the defrag itself would slow down I/O while it runs, but it shouldn't otherwise affect SQL Server unless you are in a very heavy OLTP environment.
I'd do these manually.
February 24, 2004 at 3:08 am
Most defrag tools will not touch files that are in use. This means that if SQL Server is running, no database files will get defragmented. You will need to stop SQL before a defrag of your database files will do anything.
If your databases have undergone frequent resizing, then they will be fragmented, and an NT defrag will help SQL performance. Databases can get fragmented:
a) If they have grown from a small size to a large size, but with small growth increments
b) If you have frequently shrunk files to reclaim space
c) The disk already contained fragmented files, so your databases got fragmented when they were originally created
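Cause (a) can be mitigated by pre-sizing the data file and giving it a fixed growth increment instead of many small growths. A minimal T-SQL sketch, assuming a hypothetical database name and logical file name:

```sql
-- Pre-size the data file and set a 500 MB fixed growth increment
-- so the file grows in a few large chunks rather than many small ones.
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Data, SIZE = 10240MB, FILEGROWTH = 500MB);
```

The exact size and increment depend on your expected data volume; the point is to avoid the small default growth steps that scatter fragments across the disk.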
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
February 24, 2004 at 5:30 am
The contiguousness of the data file itself can speed things up dramatically or not much at all. The higher the fragmentation, the longer it takes the OS to traverse the pieces, and that hurts SQL Server itself. So doing it will be a benefit.
I suggest testing Diskeeper's settings before deciding which works best. On high-use systems you will want to limit the run to once a day or week, as this activity generally increases disk queue length for both reads and writes. However, the "Smart scheduling" feature may work fine on your system, as long as it only touches SCSI drives (IDE drives lose out badly with this feature, and so do some SCSI drives). I believe there is a demo copy available, and I suggest testing before purchase.
February 24, 2004 at 9:43 am
Defragmenting drives has only positive effects on SQL Server. The more fragmentation there is, the more work it takes to read data.
The best setting is to let Diskeeper run continually for several hours during off-peak hours, several times a week.
It is best if you can do this in conjunction with a backup and maintenance plan.
February 24, 2004 at 10:35 am
Also keep in mind that defragmenting the hard drive doesn't necessarily defragment your tables. You can have fragmented tables inside the .mdf files. DBCC SHOWCONTIG can help you here.
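For example, a quick sketch of checking internal fragmentation for one table (the table name here is hypothetical; DBCC SHOWCONTIG was the tool of this era, later superseded by sys.dm_db_index_physical_stats):

```sql
-- Report logical scan fragmentation, extent switches and scan density
-- for the clustered index of one table.
DBCC SHOWCONTIG ('dbo.Orders') WITH TABLERESULTS;
```

High "Logical Scan Fragmentation" or low "Scan Density" in the output indicates the table itself needs reorganising, which no file-level defrag will fix.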
February 24, 2004 at 11:44 am
As Ed mentioned, you need to stop SQL, or at least take the DB offline to defrag the mdf / ldf.
There is a program, contig.exe, on ??? that will tell you the number of fragments an mdf/ldf is in, although it too will not report on a file that is in use.
Two cautions with defragging:
Had an issue a couple of weeks ago with the weekly reboot (don't ask, I hate it). Either Diskeeper or VScan got hold of the MDF before SQL Server started, and the DB wouldn't open for 4 hours! Fortunately it was off hours, so I let it be.
Have also had issues with Diskeeper defragging the backup files and dragging response time down to a crawl. It takes a while to defrag a 30 GB backup file. The disk layout was part of the problem, but again, don't ask.
KlK
February 25, 2004 at 2:31 am
Thanks for all your info - much appreciated.
Mark
June 26, 2004 at 11:15 am
Hi Guys,
Is there a way to defrag ldf and mdf files using the normal windows defrag?
I tried it a few times and it keeps ignoring these files.
Thanks for all your help.
SK
June 27, 2004 at 8:16 am
You have to stop SQL Server, or close the database (the auto close option), and keep everyone out while the defrag occurs. Personally, the best way is to stop SQL Server during the run so all files are treated at the same time.
June 27, 2004 at 10:32 am
Thanks for the reply.
I did stop SQL Server and SQL Manager, but it still keeps ignoring the files.
Any ideas?
Thanks.
June 28, 2004 at 2:30 am
My experience of Windows Defrag is that it needs the same amount of free space as the largest fragmented file if that file is to be defragmented.
You get a warning if you have less than 15% free space on the drive, but this does not guarantee that everything can be defragged even if more than 15% free space exists.
If you have a 200GB drive (or LUN) with a 50GB file, an 80GB file and 70GB of free space (i.e. 35% free space), the 50GB file can be defragged, but not the 80GB file.
One way round this problem is to use multiple physical files in your database filegroup(s). Keep each file to, say, 5GB, and you only need 5GB of unfragmented free space to defrag the whole volume, regardless of how big it is.
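A minimal sketch of that approach, with hypothetical database, file and path names, adding a second 5GB data file to the default filegroup:

```sql
-- Add an extra 5GB data file so that no single file ever needs
-- more than 5GB of contiguous free space to be defragmented.
ALTER DATABASE MyDatabase
ADD FILE (
    NAME = MyDatabase_Data2,
    FILENAME = 'E:\Data\MyDatabase_Data2.ndf',
    SIZE = 5120MB,
    FILEGROWTH = 512MB
);
```

SQL Server will stripe new allocations proportionally across the files in the filegroup, so the cap on individual file size comes at no real cost.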
Being a dinosaur at heart, this type of file size management for defrag purposes is something that mainframe DBAs learnt about in the 1980s. It is good to see the Windows world catching up... 🙂
June 28, 2004 at 6:21 am
If it's not what Ed describes, then the files may still be locked. Try rebooting with SQL Server disabled and then see if the defrag works. Again, that's only if your situation doesn't fit Ed's statement.