REBUILD INDEX DOES NOT REMOVE FRAGMENTATION

  • I am using the following query to rebuild a particular index, but Logical Fragmentation is still 40%. I thought that after a rebuild it should be 0%. I don't know why it still shows 40%.

    ALTER INDEX I_BAR_FACT_PATIENT ON BAR_FACT
    REBUILD WITH (
        FILLFACTOR = 95,
        SORT_IN_TEMPDB = ON,
        STATISTICS_NORECOMPUTE = ON);

    Fragmentation output:

    DBCC SHOWCONTIG scanning 'BAR_FACT' table...

    Table: 'BAR_FACT' (1113979245); index ID: 7, database ID: 11

    LEAF level scan performed.

    - Pages Scanned................................: 5

    - Extents Scanned..............................: 2

    - Extent Switches..............................: 1

    - Avg. Pages per Extent........................: 2.5

    - Scan Density [Best Count:Actual Count].......: 50.00% [1:2]

    - Logical Scan Fragmentation ..................: 40.00%

    - Extent Scan Fragmentation ...................: 0.00%

    - Avg. Bytes Free per Page.....................: 368.0

    - Avg. Page Density (full).....................: 95.45%

    DBCC execution completed. If DBCC printed error messages, contact your system administrator.

  • If an index is below a certain size, defragmentation doesn't end up doing anything. From what I can see in the data you posted, the index is tiny, so fragmentation doesn't matter and won't get defragmented.
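    If you want to confirm how small the index is, here is a sketch (assuming SQL Server 2005 or later; the table and index names are taken from the original post) using sys.dm_db_index_physical_stats, which reports both page count and fragmentation:

    ```sql
    -- Page count and logical fragmentation for every index on BAR_FACT.
    -- Fragmentation on an index of only a handful of pages can safely be ignored.
    SELECT i.name AS index_name,
           ips.page_count,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(
             DB_ID(), OBJECT_ID('BAR_FACT'), NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id;
    ```

    A common rule of thumb is to skip rebuilds on anything under roughly 1,000 pages.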

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • - Pages Scanned................................: 5

    Too little 😉


    * Noel

  • I know it does not matter, but my question is: why does the index rebuild command not bring logical fragmentation down to 0%?

    Maybe it is related to a filegroup space issue?

  • If I remember correctly, if the index has fewer than 8 pages it uses mixed extents. My guess is that this is the reason it won't rebuild the index (rebuilding one index on a mixed extent would mean rebuilding at least one more index that shares the same extent). I have to admit that this is more speculation on my part than something I know for a fact. In any case, as others have already said, at that size it has no real effect.

    Adi

    --------------------------------------------------------------
    To know how to ask questions and increase the chances of getting answers:
    http://www.sqlservercentral.com/articles/Best+Practices/61537/

    For better answers on performance questions, click on the following...
    http://www.sqlservercentral.com/articles/SQLServerCentral/66909/

  • Hi, just a thought...

    http://msdn.microsoft.com/en-us/library/ms177571.aspx

    I recall a similar problem once where I was unable to attain 0% fragmentation on a rather large database; the MSDN link above was far too time consuming, and in a production environment attempting the MSDN solution would have been catastrophic.

    I've looked at the SQL you are trying, and it appears to be somewhat small, or missing some information. It IS possible to achieve 100% defrag on any object; the ways to go about it differ, however.

    My advice would be to start with a script that creates a new table (not a temp table), with indexes and constraints precisely the way you require them, and then bulk load the data into it. After performing another DBCC SHOWCONTIG you 'should' see the table the way you wish to have it. The previous table can then be renamed to anything, the new table renamed to the original name, and the old fragmented table dropped once you've checked and are happy with the results.

    Bear in mind that the reason you are seeing this, as previously pointed out, is that the table contains a small amount of data. It is not related to the file system; the amount of data is simply too small to be worth reorganizing.

    A book worth looking into: Sajal Dam's "SQL Server Query Performance Tuning Distilled"
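    The table-swap approach described above can be sketched roughly like this (an outline only; BAR_FACT_NEW and BAR_FACT_OLD are hypothetical names, and you would script your real indexes and constraints in step 2):

    ```sql
    -- 1. Create and bulk load a fresh copy of the table (not a temp table).
    SELECT * INTO BAR_FACT_NEW FROM BAR_FACT;

    -- 2. Recreate the indexes and constraints on BAR_FACT_NEW exactly as required.

    -- 3. Swap the tables by renaming.
    EXEC sp_rename 'BAR_FACT', 'BAR_FACT_OLD';
    EXEC sp_rename 'BAR_FACT_NEW', 'BAR_FACT';

    -- 4. Once DBCC SHOWCONTIG confirms you are happy with the result:
    -- DROP TABLE BAR_FACT_OLD;
    ```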

  • I agree with you guys. Actually, I was fixing an issue on SQL Server: a timeout error. All of a sudden all my processes were failing with timeout errors, so I checked the event log and found the following error.

    I/O Requests Taking Longer Than 15 Seconds To Complete

    So I thought maybe it was a fragmentation issue. I ran a query to check whether the indexes were fragmented and found that lots of tables are fragmented.
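    The kind of database-wide check described above can be sketched as follows (assuming SQL Server 2005 or later; the 30% threshold and 1,000-page floor are arbitrary but commonly used values):

    ```sql
    -- List fragmented indexes in the current database, skipping tiny ones.
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name                     AS index_name,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 30
      AND ips.page_count > 1000
    ORDER BY ips.avg_fragmentation_in_percent DESC;
    ```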

    Looks like even the fragmentation issue is not the cause of the timeout error.

    Any ideas?

  • I/O can take a long time for writes to tables that don't have clustered indexes.

    You might have a badly configured I/O subsystem; that is, you might have only one channel back to your disk drives, and the writes are being blocked by reads.

    Recently I was working on a system where the cache settings on the RAID controller were configured wrong, so my writes were being held up because they had only 10% of the cache (a mistake by the person who originally configured it).

    .. Mohit.

    ---

    Mohit K. Gupta, MCITP: Database Administrator (2005), My Blog, Twitter: @SQLCAN.
    Microsoft FTE - SQL Server PFE

    * Some time its the search that counts, not the finding...
    * I didn't think so, but if I was wrong, I was wrong. I'd rather do something, and make a mistake than be frightened and be doing nothing. :smooooth:

    How to ask for help .. Read Best Practices here.

  • Stalled I/O is certainly related to the disk subsystem.

    I've worked around this in the past by moving tempdb to another physical location; this is also a worthwhile performance consideration for production servers, if your T-SQL uses enough temp tables to justify the move. For example, RAID-0 is the fastest disk I/O there is, and tempdb holds only temporary data, so consider locating tempdb on a RAID-0 array and the master and production databases on RAID-10.

    First locate the TempDB:

    USE tempdb;
    GO
    EXEC sp_helpfile;
    GO

    Now that you know where the files are, where would you like tempdb to reside?

    USE master;
    GO
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = 'C:\tempdb2009.mdf');
    GO
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = 'C:\tempdb2009.ldf');
    GO

    Are your disks full? That would certainly come down to I/O, and all kinds of weird problems would hit you. Check whether you can free up some space!

    Worth a look - http://blogs.msdn.com/sqlserverstorageengine/archive/2006/06/21/642314.aspx
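    To check free space from within SQL Server, one option is the following (xp_fixeddrives is undocumented but long-standing; a sketch only):

    ```sql
    -- Free space in MB on each fixed drive, as seen by the SQL Server service.
    EXEC master..xp_fixeddrives;

    -- Allocated vs. used space within the current database.
    EXEC sp_spaceused;
    ```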

  • My tempdb is on a separate drive, the I drive. I have split tempdb into two files on two different drives, I and G, and both drives have enough space: the I drive has 78 GB free and the G drive has 70 GB free.

    So tempdb is not the issue here. Is it possible that something changed at the SAN level?

  • The I/O issue you are having is it with one table or multiple tables?

    ---
    Mohit K. Gupta, MCITP: Database Administrator (2005), My Blog, Twitter: @SQLCAN
    Microsoft FTE - SQL Server PFE

  • Sounds like the SAN is most likely to be the culprit. Is the SAN shared? Is it administered within your control?

    SAN is a 'storage area network', which is great for storing backups, but running production databases from such a complicated environment is bad practice. :w00t:

    You can easily check the state of your local physical disks by opening the disk properties and confirming that the physical drive platters are local to the machine on which the SQL Server instance is running. Unless you are 100% in control of the SAN environment and can control the settings from the fibre channel controller right through the path, it would almost certainly be a bad idea to host databases there, IMHO.

  • mrhassell (3/4/2009)


    SAN is a 'storage area network', which is great for storing backups, but running production databases from such a complicated environment is bad practice.

    It is? You have any white papers or articles to back that statement up?

    A badly-configured SAN can cripple DB performance, but so can a badly configured local disk.

    balbirsinghsodhi: What RAID level on the SAN? Are the disks dedicated to the LUNs or shared?

    Check the following perfmon counters (broken down per disk)

    Physical Disk: Avg sec/read

    Physical Disk: Avg sec/write

    Physical Disk: % idle time.
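    The same latencies can also be read from inside SQL Server (a sketch assuming SQL Server 2005 or later; the numbers are cumulative since the last instance restart, so sample twice and take the difference to see current behaviour):

    ```sql
    -- Average read/write latency in ms per database file since startup.
    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.physical_name,
           vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id
     AND mf.file_id     = vfs.file_id
    ORDER BY avg_read_ms DESC;
    ```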

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • To clarify my previous attempt to help.

    My point, in response to the original point raised, concerned using a 'shared' SAN in a mission-critical production environment, where configuration changes "may" be made without the DBA's knowledge.

    Consider the following devices and their typical I/O speeds and transfer rates:

    STORAGE DEVICE          Avg I/O per second (IOPS)   TRANSFER MEG/SEC
    TMS Solid State Disk    400,000                     1,600
    IEEE1394 (FireWire)     1,100                       400-800
    PC Solid State drive    7,000                       62
    Typical PC drive        150                         15-30

    Also consider the following interface types and speeds;

    Interface             Speed
    Serial                115 kb/s
    Parallel (standard)   115 kb/s
    Parallel (ECP/EPP)    3.0 Mb/s
    SCSI 5                320 Mb/s
    ATA 3.3               133 Mb/s
    USB 1.1               1.5 Mb/s
    USB 2.x               60 Mb/s
    IEEE1394(b)           50-400 Mb/s
    Original IDE          3.3-33 Mb/s
    SATA                  150 Mb/s
    PCI Express 3.3       1 GB/s
    Fibre Channel         2 Gb/s

    No doubt fibre channel is the fastest interface; 10Gb and 100Gb Ethernet are also available, but they still only transfer data to physical storage devices attached to PCI-X or SCSI controllers.

    LUNs are logical unit numbers; in RAID there are three abstraction layers (disk LUN, volume LUN and LUN stripes). The RAID abstraction layer, with stripes anywhere between 8 KB and 1 MB, is a series of combined slices in which fault tolerance is balanced against performance. If the configuration is altered (say from 32 KB stripes to 64 KB), then SQL Server pages and data (especially indexes) will almost certainly be adversely affected.

    Pinpointing the specific SAN bottleneck depends on many factors, including LUNs sharing I/O buffers and LUNs mapped incorrectly within the controller(s). It is not really a SQL Server issue so much as a system-architecture one.

    There is no tried and true formula that fits every customer's specific needs or usage. For this reason, if a SHARED SAN is in the picture, the point I was aiming to raise, without this exhaustive and yet still very limited elaboration, is that a series of dedicated RAID arrays (RAID-0 for tempdb and RAID-10 for data/logs) would be a safer bet than running the risk of configurations changing suddenly and causing sudden, noticeable performance degradation.

    The only safe use of a "shared" SAN I have personally encountered is for rapid off-load and reload of snapshots and backups. May the force be with you!

  • mrhassell (3/4/2009)


    SAN is a 'storage area network', which is great for storing backups, but running production databases from such a complicated environment is bad practice. :w00t:.

    I believe this is a misconception on your side. We, like many others here probably, run our production environments on SANs. The I/O these things can support is huge... it all boils down to the configuration, as Gail mentioned.

    _______________________________________________________________________
    For better assistance in answering your questions, click here.
