Index defrag doesn't seem to do anything

  • I just got an email from Idera pitching their free fragmentation analyzer, so I downloaded it and tried it out. Several tables report high levels of index fragmentation, 50%, 67%. But when I rebuild an index, some go all the way down to 0%, while others don't change at all. I've seen several articles stating that index fragmentation often isn't a performance issue, and that worrying about rebuilding indexes is usually pointless.

    That may be, and I certainly don't intend to run defrag scripts every night, but I will likely run them whenever I deploy a new version of the DB, since that involves copying over ALL the data from EVERY table in the old DB into EVERY table in the new DB. The order in which those records get imported is whatever SQL Server decides, based on my import queries, so some indexes will certainly be completely trashed. That seems like an appropriate occasion to do a complete rebuild of all indexes.
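    (To be concrete, by "rebuild" I mean a standard index rebuild; in T-SQL terms it amounts to something like the sketch below, with a made-up table name standing in for my real ones:)

    -- Rebuild every index on one table after the import finishes
    -- (dbo.SomeImportedTable is just a placeholder name).
    ALTER INDEX ALL ON dbo.SomeImportedTable REBUILD;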

    In any case, I'm curious why a rebuild doesn't defrag an index, even if a fragmented index isn't necessarily a problem. (It's not just Idera's tool, BTW, SSMS also reports the same unchanged level of fragmentation. Idera's tool just presents lots of information in a nicely formatted table.) Can anyone shed some light on the subject?

  • How many pages do you have on that table or tables?

    If I recall correctly, for any table below 1K pages a defrag will do nothing, so I would skip those. (A quick way to check the page counts is the query at the end of this post.)

    Also, I personally stick to a weekly or bi-weekly defrag job. More than that is overkill, in my opinion, unless you have daily imports or jobs that may mess up statistics.

    I use Ola's solution on all my servers. You should take a look:
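    As for checking the page counts, the query I had in mind is roughly this, using sys.dm_db_index_physical_stats (LIMITED mode is enough for these two columns and covers the whole current database):

    -- Page count and fragmentation for every index in the current database.
    SELECT  OBJECT_NAME(ips.object_id)           AS table_name,
            i.name                               AS index_name,
            ips.page_count,
            ips.avg_fragmentation_in_percent
    FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN    sys.indexes AS i
              ON i.object_id = ips.object_id
             AND i.index_id  = ips.index_id
    ORDER BY ips.page_count DESC;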

  • sql-lover (8/23/2013)


    How many pages do you have on that table or tables?

    If I recall correctly, for any table below 1K pages a defrag will do nothing, so I would skip those.

    Umm, 1K pages, like one thousand pages, or 1024 pages? Or did you mean 1KB total index size? (I'm NOT trying to be sarcastic here. I understand the principles of indexes well enough, but don't have that much practical experience with the care and feeding of same.) In any case, most of the unchanged ones are quite small. One is 56 KB, the others are 20 KB - 40 KB. But there is one, a clustered index of 192 KB, that is sitting at 23% and refuses to go lower. Again, I doubt it would cause any performance problems, but I don't understand why it won't change. Seems to me that an index rebuild should always clear fragmentation to 0%.

    Also, I personally stick to a weekly or bi-weekly defrag job. More than that is overkill, in my opinion, unless you have daily imports or jobs that may mess up statistics.

    Even that is a lot for me. This database is massively geared towards searches, and update/insert activity is minimal. I think that the one major rebuild during version upgrade is entirely adequate, then keep an occasional eye on things. I'm constantly tweaking this, and generally deploy a new version long before anything has seen enough activity to get seriously hashed.

    I use Ola's solution on all my servers. You should take a look:

    Thank you, that looks like a useful site. I have many sites devoted to SQL Server bookmarked, but this is a new one for me.

  • 192K, still quite small. An index seek on a table with so few rows really won't show much of a performance hit/improvement over a table scan/clustered index scan. I wouldn't worry about indexes smaller than a few MB unless you really see performance issues.

    ______________________________________________________________________________
    Never argue with an idiot; they'll drag you down to their level and beat you with experience

  • MyDoggieJessie (8/23/2013)


    192K, still quite small. An index seek on a table with so few rows really won't show much of a performance hit/improvement over a table scan/clustered index scan. I wouldn't worry about indexes smaller than a few MB unless you really see performance issues.

    As I wrote, several times, I understand this is not likely to be a major performance issue. I'm not worried; the performance of the database is entirely adequate. It's true that I'm always looking for ways to make things better, but this is not such a case. I'm trying to understand how and why something works, or doesn't work, not to generate a performance increase. In particular, I'm trying to understand why, after a complete rebuild, the index is still fragmented. Do you know anything about that?

  • Heap or index?

    How are they getting that fragmentation?

    Any shrinks running?

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Taken from a post elsewhere:

    A new table or index is usually allocated its first 8 pages from mixed, rather than uniform, extents. So it's possible for each of the first 8 pages to be allocated from a different mixed extent. A table or index consuming 8 pages could therefore have 8 fragments, one on each of 8 different mixed extents.

    The more widely used defrag scripts tend to exclude small tables because of this. IIRC, the cutoff is <500 pages in one or both of them. At these sizes there is very little benefit to defragmenting, and the fragmentation figures are potentially skewed by the mixed-extent allocations.
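    If you want to actually see where those first pages landed, DBCC IND will list every page an index owns (it's undocumented, so the usual caveats apply, and the names below are just placeholders):

    -- Every page belonging to index_id 1 (the clustered index) of the table.
    -- Scattered PagePID values among the first few rows are the mixed-extent pages.
    DBCC IND ('YourDatabase', 'YourSmallTable', 1);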

    Make sense?

    ______________________________________________________________________________
    Never argue with an idiot; they'll drag you down to their level and beat you with experience

  • The rough guideline is somewhere in the vicinity of 1000 pages (8MB data). It's not a hard number (defrag at 1001, ignore at 999), just a rough figure about where rebuilding an index makes sense (based on what fragmentation's effects are)

    There's never any certainty that a rebuild will result in 0% fragmentation; that should never be expected. About the only time you might see that is when rebuilding the index into a new filegroup, when there's nothing else in that filegroup and it's large enough for the entire index. If there's limited free space in the data file during the rebuild, it's quite likely the index will still have some fragmentation, especially if there's concurrent activity.

    SQL will try to lay a new index down contiguously, but, the same as when creating one, there's no guarantee that it absolutely will every time, and the way fragmentation is calculated means that smaller indexes are more likely to show a higher fragmentation % if rebuilt slightly out of order.
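    If you want to give a rebuild the best chance of coming out contiguous, sorting in tempdb keeps the sort runs out of the data file; something along these lines (index and table names are made up):

    -- SORT_IN_TEMPDB moves the intermediate sort out of the user database,
    -- which makes it more likely that the new index extents end up contiguous.
    ALTER INDEX IX_YourIndex ON dbo.YourTable
        REBUILD WITH (SORT_IN_TEMPDB = ON);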

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (8/23/2013)


    Hi Gail. I was hoping to meet you in person during SQL in the City last month, but I got involved in a scuffle with the passport control police at the Prague airport and wound up not making my flight. 🙁

    Heap or index?

    Some heaps, but those are mostly temporary working tables used during the process of copying the old database to the new version. It's not worth the trouble to index those, since they are generally written once, serially, read once, serially, and never used again. Some dregs of data remain in them now, while I'm still playing with it, but when I do a real conversion that then goes live, I run an additional procedure that completely deletes them, since my need for such tables in the next version will likely be different. The one I'm interested in is a primary key clustered index on a real data table. Here is the SQL for that table:

    CREATE TABLE [dbo].[LinkGenusSpeciesAuthor](
        [GenusAutoID] [int] NOT NULL,
        [SpeciesAutoID] [int] NOT NULL,
        [AuthorAutoID] [int] NOT NULL,
        CONSTRAINT [PK_LinkGenusSpeciesAuthor] PRIMARY KEY CLUSTERED
        (
            [GenusAutoID] ASC,
            [SpeciesAutoID] ASC,
            [AuthorAutoID] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
    ) ON [PRIMARY]

    It's 192KB, 23% fragmented, and nothing I have done changes that.

    How are they getting that fragmentation?

    I suppose during the import process. This table is not copied - it's part of a design change, and is newly generated from several other tables during the conversion import.

    Any shrinks running?

    None on a regular basis. I run DBCC CHECKDB(PaleoDataProPokusy) as the first step of the import, otherwise I leave it alone, thanks in large measure to your blog posts on the topic.

  • Heaps you can't defrag (well, there's alter table rebuild, but...), and fragmentation means something very different on a heap vs an index. The other one, it's honestly far too small to be bothered about. 193kB is a little over 3 extents; the first 8 pages are from mixed extents, and even if the other two are uniform, that's more than enough to account for 23% fragmentation.
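    For completeness, the heap rebuild is just this (table name made up), though at these sizes it isn't worth the bother:

    -- Rebuilds the heap (SQL 2008 and later); note that it rebuilds the table's
    -- nonclustered indexes as well, since the row locators change.
    ALTER TABLE dbo.YourHeap REBUILD;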

    Seriously, stop worrying about the tiny indexes. It's not worth rebuilding them; fragmentation on indexes that small is irrelevant and expected because of the first 8 pages.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (8/23/2013)


    The rough guideline is somewhere in the vicinity of 1000 pages (8MB data). It's not a hard number (defrag at 1001, ignore at 999), just a rough figure about where rebuilding an index makes sense (based on what fragmentation's effects are)

    Roughly 1000 pages, okay, and how much fragmentation? Any guidelines on that, or just one thing to try if performance is suffering? A couple of my indexes are close to 30MB, but they show 0% fragmentation, and performance on those tables is stellar.

    There's never any certainty that a rebuild will result in 0% fragmentation; that should never be expected.

    Ah, I expected that a rebuild would generate the index completely linearly, like writing from scratch on a blank sheet of paper. Guess I should have known better than to expect something with SQL Server to be that simple.

  • pdanes (8/23/2013)


    How are they getting that fragmentation?

    I suppose during the import process. This table is not copied - it's part of a design change, and is newly generated from several other tables during the conversion import.

    I meant how is the Idera tool calculating the fragmentation. I've seen some very creative (and wrong) methods of 'simplifying fragmentation' from various tools.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (8/23/2013)


    Heaps you can't defrag (well, there's alter table rebuild, but...), and fragmentation means something very different on a heap vs an index. The other one, it's honestly far too small to be bothered about. 193kB is a little over 3 extents; the first 8 pages are from mixed extents, and even if the other two are uniform, that's more than enough to account for 23% fragmentation.

    Seriously, stop worrying about the tiny indexes. It's not worth rebuilding them; fragmentation on indexes that small is irrelevant and expected because of the first 8 pages.

    Again, the heaps are just scratch paper. Write once, read once, and forget it. And no worries, I'm just trying to improve my understanding of functionality. Idera's frag display got me to poking around under the hood, and I got curious.

    Thanks for the explanations.

  • GilaMonster (8/23/2013)


    pdanes (8/23/2013)


    How are they getting that fragmentation?

    I suppose during the import process. This table is not copied - it's part of a design change, and is newly generated from several other tables during the conversion import.

    I meant how is the Idera tool calculating the fragmentation. I've seen some very creative (and wrong) methods of 'simplifying fragmentation' from various tools.

    Good God, I have no idea. Probably pulling it from the index properties, since the numbers agree with what SSMS displays in its UI, but you'd have to ask Idera's engineers to know for certain.
