October 4, 2010 at 3:29 pm
Hi,
I've been operating with the preference to take SQL Server offline when defragmenting the disks on which SQL Server data files or log files reside, in order to avoid possible database corruption. I'm having a bit of difficulty finding articles to support my argument. Anyone have links or references in their repositories I can use to support my argument?
October 4, 2010 at 3:32 pm
No links, but we run Diskeeper with SQL up and running; no problems.
October 4, 2010 at 3:36 pm
Jon.Morisi (10/4/2010)
Hi, I've been operating with the preference to take SQL Server offline when defragmenting the disks on which SQL Server data files or log files reside, in order to avoid possible database corruption. I'm having a bit of difficulty finding articles to support my argument. Anyone have links or references in their repositories I can use to support my argument?
No links, and I haven't really seen corruption, but I have seen it take 10x (or more) longer if you don't take them down. The defragmenter can't move clusters that are locked or in use.
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA
October 5, 2010 at 6:32 am
The native defragmentation utility is inherently much slower than the advanced commercial utilities and also lacks their background auto-defrag features, so it's bound to take much longer. I don't think corruption should be an issue, but I'd guess that many of the fragmented files will be skipped and not defragged, which makes a defrag with the native utility less useful. Sorry, but I also don't have any links.
October 5, 2010 at 6:43 am
Jon.Morisi (10/4/2010)
I've been operating with the preference to take SQL Server offline when defragmenting the disks on which SQL Server data files or log files reside, in order to avoid possible database corruption. I'm having a bit of difficulty finding articles to support my argument. Anyone have links or references in their repositories I can use to support my argument?
My personal opinion is that it all depends on the state of the disk at the time SQL Server allocates space. Remember that SQL Server pre-allocates space, so if the disk has enough contiguous space, SQL Server will allocate a single chunk of space and use it forever - no further physical fragmentation will occur.
This is why space management should be considered a sensitive task and this is also why "database shrinking" is usually a bad idea.
Bottom line is... pre-allocate space as needed on a suitable disk drive, then let SQL Server work and just keep an eye on logical fragmentation.
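As a sketch of that pre-allocation idea (the database name, file names, paths, and sizes below are all hypothetical examples, not recommendations):

```sql
-- Pre-allocate generous initial sizes so SQL Server grabs contiguous
-- disk space up front; use a fixed growth increment, not a percentage.
-- All names, paths, and sizes here are example values only.
CREATE DATABASE Sales
ON PRIMARY
( NAME = Sales_Data,
  FILENAME = N'E:\Data\Sales.mdf',
  SIZE = 50GB,          -- sized for expected growth, not current data
  FILEGROWTH = 1GB )    -- fixed increment avoids many tiny extensions
LOG ON
( NAME = Sales_Log,
  FILENAME = N'F:\Log\Sales.ldf',
  SIZE = 10GB,
  FILEGROWTH = 1GB );
```

If the drive has enough contiguous free space when this runs, the files are laid down in large contiguous chunks and later growth events stay rare.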
Please check this TechNet note by Brian Moran: http://www.microsoft.com/technet/abouttn/flash/tips/tips_083104.mspx
_____________________________________
Pablo (Paul) Berzukov
Author of Understanding Database Administration available at Amazon and other bookstores.
Disclaimer: Advice is provided to the best of my knowledge but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions on a non-production test environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
October 5, 2010 at 7:34 am
Jon.Morisi (10/4/2010)
Hi, I've been operating with the preference to take SQL Server offline when defragmenting the disks on which SQL Server data files or log files reside, in order to avoid possible database corruption. I'm having a bit of difficulty finding articles to support my argument. Anyone have links or references in their repositories I can use to support my argument?
Jon,
Paul is right. Allocating the space that the database needs up front is the ideal. However, in my experience most of the DBAs I work with are not that proactive. Additionally, if I tell clients to create a 100 GB file that they are going to grow into, they still ask me how big the database really is and create the file much smaller.
So, if your disk is fragmented and you want to avoid loss of data, here is what I recommend:
1) Take the system that is writing the data offline.
2) Back up the databases on the server. Yes, everything (master, msdb, model, etc.), just in case. (tempdb can't be backed up; it is rebuilt at startup.)
3) Copy your backups to an alternate location.
4) Shut down SQL.
5) Run your defragmentation.
6) Bring SQL back online and verify data integrity (DBCC CHECKDB).
7) Extend your data file to avoid physical file fragmentation caused by growth.
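The T-SQL portions of those steps (2, 6, and 7) might look like the sketch below; database names, file names, and paths are placeholders, and steps 4-5 happen outside SQL Server:

```sql
-- Step 2: back up system and user databases (names/paths are examples)
BACKUP DATABASE master TO DISK = N'D:\Backup\master.bak' WITH INIT;
BACKUP DATABASE msdb   TO DISK = N'D:\Backup\msdb.bak' WITH INIT;
BACKUP DATABASE MyDB   TO DISK = N'D:\Backup\MyDB.bak' WITH INIT;

-- Step 6: after SQL Server is back online, verify integrity
DBCC CHECKDB (MyDB) WITH NO_INFOMSGS;

-- Step 7: pre-extend the data file so future growth does not
-- re-fragment the freshly defragmented disk
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_Data, SIZE = 100GB);
```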
One of the things I tell my clients is this: if you don't want your virus scanner locking your file while SQL is using it, why would you want SQL to use the file while you are defragmenting it? To avoid performance issues and corruption, stick to the rule that only one application should modify a file at a time.
:blink: **This is NOT an endorsement** :blink:
Diskeeper Corp posted this piece by writer Brad McGehee (of Red Gate fame):
http://www.diskeeper.com/defrag/dk-boost-sql-server.aspx
Regards, Irish
October 5, 2010 at 8:02 am
Sysinternals Contig.exe lets you defrag specific folders, file specs like *.mdf, or even specific files from the command line, so it's a little faster to script out the command for each of your mdf/ldf files and defrag them.
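For example (paths are hypothetical; run from an elevated prompt, ideally with SQL Server stopped so the files aren't locked):

```bat
:: Defragment data and log files with Sysinternals Contig
:: (-v = verbose output; wildcards are supported)
contig -v E:\Data\*.mdf
contig -v F:\Log\*.ldf
```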
Lowell
October 5, 2010 at 8:45 am
george sibbald (10/4/2010)
No links, but we run Diskeeper with SQL up and running; no problems.
That's how I ran it at one of my previous places... The invisible tasking in the newer versions of Diskeeper is brilliant; I've never had an issue with it taking processor time from SQL.
October 5, 2010 at 9:26 am
I appreciate the feedback.
October 5, 2010 at 2:54 pm
http://www.piriform.com/defraggler
It will allow single-file defrag too.