June 1, 2008 at 3:03 pm
Hi,
I am being told by the IBM folks supporting our Windows servers that, as the SQL DBA, I should take care of physical disk performance problems caused by file fragmentation by scheduling Diskeeper runs every day. My question is: can I let Diskeeper defragment database disks even while the SQL Server service (sqlservr) is running? Isn't there a chance -
- that sqlservr has a lock on the mdf/ldf files, so that Diskeeper can't defragment these files at all? Or
- that Diskeeper somehow gains access to these files for moving them around and causes some SQL processes to fail?
Or am I a little ill-informed here, and Diskeeper is smart enough to take care of all these resource contention issues and still do its job?
I am looking for your advice! Thanks!
June 2, 2008 at 11:35 am
I would imagine that the mdf files would be locked and in use as long as the SQL Server service is running.
Here's a snippet from Diskeeper's FAQ page, which isn't exactly clear. Perhaps contacting their support would be a better path to a complete answer...
Defragmenting Exchange and SQL Server database files
Diskeeper is able to defragment Exchange and SQL database files. However, there are times when certain files are locked in such a way that Diskeeper is denied access to them. If this should happen with your database files, you can either try again later, or you can shut down the Exchange or SQL Server services and defragment.
-Luke.
June 2, 2008 at 12:40 pm
First of all, we need to understand whether we should do disk defragmentation at all. If it is a standalone server configuration, meaning your server does not use SAN storage, you may consider it.
Here is a link that may help.
http://www.las-solanas.com/storage_virtualization/san_volume_defragmentation.php
June 2, 2008 at 10:35 pm
What is Diskeeper?!?
Why not use shrink database?!?
June 2, 2008 at 11:07 pm
For database files (mdf, ndf, and ldf extensions), if the files are properly sized then there should be no need to perform disk defragmentation. As "SQL Oracle" indicated, defragmentation when using a SAN may not be a best practice and should be reviewed with the SAN administrators.
However, database defragmentation (index rebuilds) does need to be performed on a regular basis, with the frequency dependent on the maintenance window. I have some production SQL Servers where selective index defragmentation occurs on a nightly basis.
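As an illustration of that kind of nightly job (the database name, index names, and the 30% threshold below are my assumptions, not anything from this thread), fragmentation can be measured and acted on like this:

USE MyDatabase;  -- hypothetical database name
-- Sketch: find indexes worth rebuilding based on measured fragmentation.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30  -- commonly used rebuild threshold
  AND ips.index_id > 0;  -- skip heaps

-- For each index returned (names here are hypothetical):
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD;
-- For lighter fragmentation, REORGANIZE is the cheaper online option:
ALTER INDEX IX_MyIndex ON dbo.MyTable REORGANIZE;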
To avoid disk fragmentation, the partition should be dedicated to SQL Server, no other files should be on the partition, and the allocation size should be 64K, the size of a SQL Server extent.
See the article "Considerations for the "autogrow" and "autoshrink" settings in SQL Server" at http://support.microsoft.com/?id=315512.
Data files (mdf and ndf) should be sized to allow for future data growth, typically with enough free space for 3 months of growth plus sufficient size to allow defragmentation of the largest table with indices. Each database's log files should be pre-allocated for the largest needed size.
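For example, pre-sizing the files at creation time might look like this (all names, paths, and sizes below are illustrative assumptions, not recommendations for any particular workload):

-- Sketch: pre-sized data and log files with fixed growth increments.
CREATE DATABASE SalesDB
ON PRIMARY
( NAME = SalesDB_data,
  FILENAME = 'E:\SQLData\SalesDB.mdf',
  SIZE = 20GB,        -- current data + ~3 months of growth + rebuild headroom
  FILEGROWTH = 1GB )  -- fixed increment rather than a percentage
LOG ON
( NAME = SalesDB_log,
  FILENAME = 'F:\SQLLogs\SalesDB.ldf',
  SIZE = 5GB,         -- pre-allocated to the largest needed size
  FILEGROWTH = 512MB );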
Tempdb should be sized for the largest needed size. The default behavior of starting at an initial size of 2 MB and then autogrowing can cause waits while the growth is occurring, and it causes disk fragmentation, which does affect performance. For tempdb under SQL Server 2000 (but not later versions), the optimal number of tempdb data files depends on the degree of contention seen, but to start, the number of files should be equal to the number of cores (up to about 10).
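A minimal sketch of pre-sizing tempdb (the sizes and growth increments below are illustrative assumptions; tempdev and templog are the default logical file names):

-- Pre-size the existing tempdb files so they never need to autogrow
-- during normal operation. Sizes are assumptions.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 4GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 1GB, FILEGROWTH = 256MB);

-- On SQL Server 2000, extra data files (one per core, up to about 10, per
-- the advice above) would be added like this; the path is hypothetical:
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 4GB);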
Under normal operations, database files should not be shrunk. Note that shrinking will cause database fragmentation, so index rebuilds should be run after a file shrink. Note also that if tempdb is in use when a shrink is executed, this can cause consistency errors and the shrink operation may fail; the only available corrective action is to restart SQL Server.
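If a one-off shrink is unavoidable, the follow-up rebuild is the important part. A sketch (the database, logical file name, target size, and table name are all hypothetical):

USE SalesDB;  -- hypothetical database
DBCC SHRINKFILE (SalesDB_data, 15000);  -- target size in MB; value is an assumption

-- Shrinking shuffles pages and fragments indexes, so rebuild afterwards:
ALTER INDEX ALL ON dbo.MyLargeTable REBUILD;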
You should also review the disk allocation unit size, which should be the same size as a SQL Server extent of 64K and should not be the default of 4K.
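One way to check the current allocation unit size from a query window (this assumes xp_cmdshell is enabled, which it is not by default, and the drive letter is hypothetical):

-- "Bytes Per Cluster" in the output is the allocation unit size;
-- 65536 means the recommended 64K.
EXEC master.dbo.xp_cmdshell 'fsutil fsinfo ntfsinfo D:';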
Finally, for partitions that house database files, disk space monitoring should be turned off, because OS-level monitoring cannot see the free space within the database files (create a new database with an initial size that fills the partition: internally there will be almost 100% free space, but the OS will report that there is no free space).
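To see that internal free space, a general-purpose query like this one works (not from the post; run it in the database of interest):

-- Sketch: free space inside each database file, which OS-level disk
-- monitoring cannot see. size and SpaceUsed are in 8 KB pages, so /128 = MB.
SELECT name AS logical_name,
       size / 128 AS file_size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS space_used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_space_mb
FROM sys.database_files;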
If database backup files are on disk, the recommended allocation size is also 64K, as the minimum database size is 2 MB, resulting in a minimum backup file size of about 2 MB.
SQL = Scarcely Qualifies as a Language
June 4, 2008 at 3:46 am
Carl has given good advice.
Looking at just Diskeeper itself, it is safe to run it while SQL Server is running, but not necessarily wise.
As a simplified example, SQL Server decides it wants to read or write extent number 50 in a database file. To build the I/O request, it translates this into a request to NTFS to read or write 64K at offset 3200K from the start of the file. NTFS then looks up in the MFT what address that relates to on the disk volume as presented to Windows, and builds a command to the disk driver to read or write the relevant sectors. The disk driver takes this command, translates it to where the data actually exists (taking into account RAID or SAN setup, etc.) and passes the command to the physical disk drive for execution. (Some steps have been omitted, but the basics are about right.)
Diskeeper and similar defrag products work at the NTFS level. They are able to move data around the disk and update the MFT regardless of SQL Server or other application I/O activity against the file. Normal file locks seen by applications are not visible at the Diskeeper level. Diskeeper works on only a few KB at a time. NTFS has a serialisation process that locks the segments being moved by Diskeeper, and any SQL Server I/O to these segments is queued until the segments have been moved and the MFT updated. SQL Server I/O to other segments is not locked and can proceed as normal. When the NTFS serialisation lock is released, NTFS knows only the new location of the data and generates the correct command to the disk driver layer.
There are a few file operations that are likely to lock the complete file at the Diskeeper level. When a file is expanded or shrunk (or created or deleted), this is the NTFS equivalent of a schema change in SQL Server, and NTFS will set an NTFS lock to prevent Diskeeper moving any more data. The high-level NTFS lock will wait until the current move is complete, which means the SQL Server request to increase the database file size is waiting on NTFS, which in turn is waiting on a Diskeeper operation. (And your users are all waiting on SQL Server...) When the file operation has completed, NTFS removes the high-level lock and Diskeeper can proceed as normal. Also, SQL Server I/O that does not use the file being expanded or shrunk continues as normal while the NTFS high-level lock is in place. NTFS has its own rollback facilities, so that if Windows crashes while Diskeeper is moving data, NTFS will roll back the change when Windows is restarted. (Do not confuse SQL Server and NTFS locks - they are managed completely independently!)
Therefore, although it is safe to run Diskeeper while SQL Server is running, there may be 'lock contention' between Diskeeper and SQL Server operations, which means it may not be wise to run Diskeeper during peak times when SQL Server is busy. Many DBAs take the view that a production SQL Server should be stopped before running Diskeeper, but are not so worried about non-production systems. You have to choose what is right for your situation.
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
June 5, 2008 at 9:40 pm
Thank you very much all!!!