April 28, 2012 at 7:15 am
I have heard rumours from my peers that defragging the MDF will in some way logically fragment the indexes, necessitating another index rebuild.
In my head, the two operations work on two different layers, so they should be independent of each other; hence I disagree with these rumours.
As such, I am guessing that an MDF defrag after an index rebuild, or a rebuild after an MDF defrag, will leave the index in the same logical and physical state, with no need for further manipulation.
But a DBA I am not, and I hope you pros can help me gain clarity on this issue.
April 28, 2012 at 8:45 am
MadTester (4/28/2012)
I have heard rumours from my peers that defragging the MDF will in some way logically fragment the indexes, necessitating another index rebuild.
Err, no.
An MDF file defrag puts the pieces of the file back into one contiguous piece. NTFS neither knows nor cares what's in that file.
An index defrag puts the indexes within the file back into a nice order; it doesn't know or care where those indexes are on disk.
As such, I am guessing that an MDF defrag after an index rebuild, or a rebuild after an MDF defrag, will leave the index in the same logical and physical state, with no need for further manipulation.
File defrags aren't something often done, especially if the files are on a SAN. Also note that you almost certainly need to have the SQL service stopped to do such a defrag.
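To see the logical side for yourself from inside SQL Server, you can query the index physical stats DMV. A minimal sketch (the 10% filter is just an illustrative cutoff, and the table and index names in the rebuild are hypothetical):

-- List indexes in the current database by logical fragmentation.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Rebuild one index once you've decided it needs it
-- (dbo.SomeTable and IX_SomeIndex are hypothetical names).
ALTER INDEX IX_SomeIndex ON dbo.SomeTable REBUILD;

None of this knows or cares where the file's fragments sit on disk, which is the point above.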
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
April 28, 2012 at 10:30 am
GilaMonster (4/28/2012)
File defrags aren't something often done, especially if the files are on a SAN. Also note that you almost certainly need to have the SQL service stopped to do such a defrag.
Not a challenge here. I really want to know and haven't found (IMHO) a truly suitable answer for it. Since you're one of the folks in this world that I trust, let me ask you why people think that, just because something is on a SAN, they...
1. Don't have to worry about defragging files?
2. Don't have to worry about whether or not logical drives are actually on separate spindles for performance reasons?
--Jeff Moden
Change is inevitable... Change for the better is not.
April 28, 2012 at 10:56 am
Jeff Moden (4/28/2012)
GilaMonster (4/28/2012)
File defrags aren't something often done, especially if the files are on a SAN. Also note that you almost certainly need to have the SQL service stopped to do such a defrag.
Not a challenge here. I really want to know and haven't found (IMHO) a truly suitable answer for it. Since you're one of the folks in this world that I trust, let me ask you why people think that, just because something is on a SAN, they...
1. Don't have to worry about defragging files?
Because some SANs, especially the top ones, have a very different way of storing files to what's done on a single disk. On a single disk you want file sectors contiguous so that the head can be positioned to the start and then read to the end. What a SAN can do is store those sectors spread across multiple disks, so it's not a single disk head reading; it's a lot of them.
SANs can be defragged, but you cannot use standard Windows disk defrag tools on them; you must use tools that the SAN vendor supplies that are aware of that SAN's storage methodology.
2. Don't have to worry about whether or not logical drives are actually on separate spindles for performance reasons?
Again because good SANs don't necessarily store things the same way that single disks or local RAID arrays do. Good SANs can, for example, move files around dynamically according to load and usage. File getting really heavy random reads, move it to the SSDs. File getting few reads and no writes, move it to RAID 5, etc.
Bear in mind that a SAN is not just an array of disks connected to the network. There's so much caching and processing involved in them now.
Note that I'm not saying there's magic SAN dust that just makes them faster; I'm saying that they cannot be treated the same way you would treat direct-attached storage.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 2, 2012 at 4:24 pm
Glad I found this thread... question for you, Gail - you alluded to using a tool specifically from the manufacturer of a given SAN to perform defrags (if and when needed).
Are there any 3rd party tools that you know of which would negate the need for the manufacturer's specific tool? Sysinternals comes to mind (CONTIG for instance), but I think Microsoft bought them, and I am not sure if the tools are the same now.
Any thoughts from anyone on this?
May 2, 2012 at 4:48 pm
Rich Yarger (5/2/2012)
Are there any 3rd party tools that you know of which would negate the need for the manufacturer's specific tool?
Do you want to potentially lose all the data on the SAN, make things worse or have no effect at all?
SAN-specific tools would run on the SAN itself, not on the server that the SAN's LUNs are attached to. Like any low-level tool for a SAN, it will be vendor-specific and likely version-specific.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 2, 2012 at 5:48 pm
The reason I was asking is that the internal array from HP (which makes up 2.8 TB of space) apparently did not come with any tools to do any form of defragmentation. I am aware of options being made available through a RAID controller, but this comes from the IT group that set the system up (and I am not that familiar with the array controller).
Guess my next step is to research that, and see if perhaps there isn't something already on the controller itself.
May 3, 2012 at 2:59 am
Rich Yarger (5/2/2012)
The reason I was asking is that the internal array from HP (which makes up 2.8 TB of space) apparently did not come with any tools to do any form of defragmentation.
Speak to the people at HP; they can tell you what their SAN requires in the way of defrag, and they will have the tools.
In my experience (with EMC) the tools don't come with the SAN, they're extras, like all the SAN monitoring tools that the vendors have.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 3, 2012 at 10:54 am
Yes - that is what I found in my research yesterday as well. HP Insight Manager looks to be the software set that is needed; I just need to find it for the version of what we have. Thanks, Gail.
May 4, 2012 at 6:46 am
GilaMonster (4/28/2012)
File defrags aren't something often done, especially if the files are on a SAN. Also note that you almost certainly need to have the SQL service stopped to do such a defrag.
Why do we need to stop the services here?
-------Bhuvnesh----------
I work only to learn Sql Server...though my company pays me for getting their stuff done;-)
May 4, 2012 at 6:52 am
SQL services lock files
May 4, 2012 at 6:59 am
MadTester (5/4/2012)
SQL services lock files
That means there should not be any connection to (no process using) those files, but why does this not apply at the INDEX level?
-------Bhuvnesh----------
I work only to learn Sql Server...though my company pays me for getting their stuff done;-)
May 4, 2012 at 9:22 am
Bhuvnesh (5/4/2012)
MadTester (5/4/2012)
SQL services lock files
That means there should not be any connection to (no process using) those files, but why does this not apply at the INDEX level?
I'm not sure what you're asking here. An external process run from Windows doesn't know what's in the files. It just moves the large fragments from one spot to another.
SQL Server manages the entire file, so it locks the whole database for quick access. Taking locks on the files takes time and you can't have that when you're trying for quick access.
Also, SQL Server needs exclusive locks on the files. You cannot ensure data integrity if some other process can access the file.
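If you do need a file-level defrag against direct-attached storage, the usual approach is to make SQL Server let go of the files first. A minimal sketch, assuming a database named YourDB (a hypothetical name):

-- Take the database offline so SQL Server releases its exclusive
-- handles on the .mdf/.ldf files (open transactions are rolled back).
ALTER DATABASE YourDB SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- ... run the Windows file-level defrag tool against the files here ...

-- Bring the database back online once the defrag has finished.
ALTER DATABASE YourDB SET ONLINE;

Stopping the SQL Server service entirely, as noted earlier in the thread, achieves the same thing for every database on the instance.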