SQL Server does not use index without hint

  • ...the more apps hitting this table, the ID seems to be the best candidate to be used as a primary key...

    Setting a primary key is, by itself, a conceptual issue.

    A primary key is a non-nullable unique constraint.

    Making it the clustering index is another issue.

    By default, SQL Server will create the primary key as the clustering index if there isn't already a clustered index (clix) defined.

    A clustering index determines the physical organisation of your data.

    Read BOL (Books Online)! It has good info on this.

    To determine a clustering index for your case, you may want to launch Profiler and capture some load for a "representative" period of time.

    Then feed that trace to the Index Tuning Wizard and see what it comes up with.

    Investigate the results and pick your choice.
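
    For example, if that analysis points at the Dist column (which the slow query later in this thread filters on), the clustered index might look something like the sketch below. The index name is just a hypothetical illustration, not necessarily what the wizard will recommend:

    -- Hypothetical: cluster on Dist so all rows for one distributor are stored together,
    -- which removes the bookmark lookups for "WHERE Dist = ..." queries.
    -- Building a clustered index on a ~36M-row table takes time and workspace; test it first.
    CREATE CLUSTERED INDEX CIX_REP_table_Dist ON dbo.REP_table (Dist)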

    Johan

    Learn to play, play to learn !

    Don't drive faster than your guardian angel can fly...
    but keeping both feet on the ground won't get you anywhere :w00t:


  • Rem (5/5/2008)


    Hello GilaMonster,

    Here is the information:

    Row count, whole table: +/- 36,000,000

    Expected row count returned: 244,918

    CREATE TABLE [dbo].[REP_table] (
        [ID] [int] NOT NULL,
        [Qty] [int] NOT NULL,
        [Price] [money] NOT NULL,
        [FPrice] [money] NOT NULL,
        [Fline] [money] NOT NULL,
        [GNet] [money] NOT NULL,
        [Prod] [int] NOT NULL,
        [Dist] [varchar] (8) NOT NULL,
        [CE] [money] NOT NULL,
        [Date] [datetime] NOT NULL,
        [Acct] [varchar] (23) NOT NULL,
        [COT] [varchar] (3) NOT NULL,
        [Package] [int] NOT NULL,
        [Discount] [money] NOT NULL,
        [Route] [varchar] (12) NOT NULL
    )

    CREATE INDEX [IX_Dist] ON [dbo].[REP_table] ([Dist])

    I have dropped and recreated the index. I also updated the stats for all the indexes, including that one.

    select * from rep_table where dist='ppst' ***** very, very slow

    select * from rep_table with (index(IX_DIST)) where dist='ppst' **** much faster

    The query optimizer is a COST based one, not a TIME based one. Please get the cost for each query (show actual execution plan and put cursor over the left-most icon in the graphical display). Report both plan costs back to us. Bookmark lookups are VERY expensive in the optimizer's math, whereas sequential I/O for a table scan is much less expensive. Could it be that your table and/or database file is very fragmented and thus the supposedly fast sequential I/O is really taking much longer than it should?
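
    If it helps, a minimal text-based way to capture those costs (a sketch, not the only way): SET SHOWPLAN_ALL returns the estimated plan, including TotalSubtreeCost, without executing the query.

    SET SHOWPLAN_ALL ON
    GO
    -- Neither query actually runs; the estimated plan rows are returned instead
    SELECT * FROM dbo.REP_table WHERE Dist = 'ppst'
    GO
    SELECT * FROM dbo.REP_table WITH (INDEX(IX_Dist)) WHERE Dist = 'ppst'
    GO
    SET SHOWPLAN_ALL OFF
    GO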

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • Here are the plan costs for both examples:

    ************

    select * from rep_table with (index(ix_table_dist))
    where dist='TYERY'

    Bookmark Lookup
      Physical operation:           Bookmark Lookup
      Logical operation:            Bookmark Lookup
      Estimated row count:          244,918
      Estimated row size:           120
      Estimated I/O cost:           762
      Estimated CPU cost:           0.269
      Estimated number of executes: 1.0
      Estimated cost:               762.360171 (100%)
      Estimated subtree cost:       763
      Argument:                     BOOKMARK:([Bmk1000]), OBJECT:([PRD].[dbo].[REP_table]) WITH PREFETCH

    ************

    select * from rep_table
    where dist='TYERY'

    Scan rows from a table
      Physical operation:           Table Scan
      Logical operation:            Table Scan
      Estimated row count:          244,918
      Estimated row size:           120
      Estimated I/O cost:           382
      Estimated CPU cost:           9.70
      Estimated number of executes: 1.0
      Estimated cost:               391.358490 (98%)
      Estimated subtree cost:       391
      Argument:                     BOOKMARK:([PRD].[dbo].[rep_table]), WHERE:([rep_table].[dist]='TYERY')

    Thanks all for your help !

  • And thus you see why the optimizer chooses a table scan - almost 1/2 the cost of the bookmark lookups!

    I would look at table fragmentation, disk file fragmentation, amount of RAM on the box and getting faster I/O in general if you want to speed things up.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • OK, I understand now.

    How can I see if my table is fragmented, and how can I defrag it? What about disk file fragmentation, how can I defrag that?

    Thanks SQLGuru !

  • One more thing: this table is truncated and repopulated on a weekly basis. No inserts, deletes, or updates are done to the table after the load. How can the table be fragmented?

  • Rem (5/7/2008)


    OK, I understand now.

    How can I see if my table is fragmented, and how can I defrag it? What about disk file fragmentation, how can I defrag that?

    Thanks SQLGuru !

    Umm, pay me to give you some mentoring and a performance review?? πŸ˜€ I am betting this issue is just the tip of the iceberg.

    1) I recommend DiskKeeper for OS file defrag. Others use Perfect Disk, Symantec, the built-in defragmenter, etc. It is HIGHLY recommended that you turn SQL Server off during this evolution.

    2) See DBCC SHOWCONTIG in BOL. Note there is a maintenance script in BOL on that same page; it is a good start for index maintenance practices. If you have a heap table, then forwarding pointers are a concern also.
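
    For the table from earlier in this thread, a minimal sketch:

    -- Fragmentation report for the heap (index ID 0)
    DBCC SHOWCONTIG ('REP_table')
    -- The same report for the nonclustered index
    DBCC SHOWCONTIG ('REP_table', 'IX_Dist')
    -- Add WITH TABLERESULTS to get rows you can save into a work table
    DBCC SHOWCONTIG ('REP_table') WITH TABLERESULTS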

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • Rem (5/7/2008)


    One more thing: this table is truncated and repopulated on a weekly basis. No inserts, deletes, or updates are done to the table after the load. How can the table be fragmented?

    Another easy one. Let's assume you still have the default growth on the database (1MB). Our example has just 2 tables in the database, X and M. Over time you have filled and dropped these tables such that they are fragmented (no contiguous place to put them).

    XXXXMMMMXMXMXMMMMXXXXXXXMMMMXXXMMMMXMXMMMXMMMXXXMMMMMMMMXXXXXMXMXMXMXMXM

    Now you drop M and reload it. Where is it gonna get put? Back into the same empty holes just created by its drop, and thus it is fragmented to hell right off the bat. Obviously this is even worse when you have lots of tables/indexes in the database that see activity that makes them grow, especially if you have never done any internal defrag operations.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • The table doesn't seem to be too fragmented:

    DBCC SHOWCONTIG scanning 'REP_table' table...
    Table: 'REP_table' (672057480); index ID: 0, database ID: 10
    TABLE level scan performed.
    - Pages Scanned................................: 519723
    - Extents Scanned..............................: 64974
    - Extent Switches..............................: 64973
    - Avg. Pages per Extent........................: 8.0
    - Scan Density [Best Count:Actual Count].......: 99.99% [64966:64974]
    - Extent Scan Fragmentation ...................: 0.42%
    - Avg. Bytes Free per Page.....................: 319.8
    - Avg. Page Density (full).....................: 96.05%
    DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    The next step is to check the fragmentation at the OS level. I am currently running 'Disk Defragmenter' to see if the disks are OK.

    Is this tool good for the job? I am new here and I don't think we have anything else to defrag with.

    Thanks again !

  • I am betting you are down to simply having slow disk IO, or perhaps your query is being blocked sometimes. Did you try your query with NOLOCK hints? Can't see the original - perhaps you did this.
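
    For example, purely as a diagnostic (dirty reads, so not a fix), rerun the earlier query with the hint added and see whether the elapsed time changes:

    -- If this is suddenly fast, blocking rather than I/O is the likely culprit
    SELECT * FROM dbo.REP_table WITH (NOLOCK) WHERE Dist = 'ppst'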

    Oh, don't forget that RAM is 1000 times faster than disk, so make sure your server has all it can take (or at least more than the size of the database of concern, if that database is small).
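
    As a related check, a minimal sketch for confirming how much memory SQL Server is allowed to use:

    -- 'max server memory (MB)' is an advanced option, so expose those first
    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    -- run_value is the current cap in MB (the default is effectively unlimited)
    EXEC sp_configure 'max server memory (MB)'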

    You can use the Windows defragmenter if you like. Again, if you do a DEFRAG operation (as opposed to simply checking frag levels), you should shut SQL Server down first.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • TheSQLGuru (5/7/2008)


    Rem (5/7/2008)


    One more thing: this table is truncated and repopulated on a weekly basis. No inserts, deletes, or updates are done to the table after the load. How can the table be fragmented?

    Another easy one. Let's assume you still have the default growth on the database (1MB). Our example has just 2 tables in the database, X and M. Over time you have filled and dropped these tables such that they are fragmented (no contiguous place to put them).

    XXXXMMMMXMXMXMMMMXXXXXXXMMMMXXXMMMMXMXMMMXMMMXXXMMMMMMMMXXXXXMXMXMXMXMXM

    Now you drop M and reload it. Where is it gonna get put? Back into the same empty holes just created by its drop, and thus it is fragmented to hell right off the bat. Obviously this is even worse when you have lots of tables/indexes in the database that see activity that makes them grow, especially if you have never done any internal defrag operations.

    I don't want to hijack the thread but I have a question SQLGuru...

    In regards to drive fragmentation vs. database fragmentation.

    My co-worker and I have inherited a process in which the following occurs...

    Test System

    A 350 GB E:\ drive, which is separate from the OS install drive (C:\ OS, E:\ .mdf's, F:\ .ldf's)

    * Each week a copy of a 60 GB production db is refreshed for testing (it is always the same database)

    * From what we can tell the database hadn't been re-indexed for close to 5 years ;D

    * We re-indexed the database and the size went down to 45 GB

    Great, we thought... and we have started a plan for doing the same in production (complicated because of log shipping... etc.)

    However after reading your post above I started thinking about the actual drive fragmentation and just did a quick "analyze" from the Tools menu...

    The database that we just re-indexed shows a tremendous number of fragments vs. the other copies of the same database. (For example, the "analysis" gives you a "MOST FRAGMENTED FILES" list, and the database we re-indexed has 71 fragments, while the exact same copy (different name) that hasn't been re-indexed has 3.)

    Does re-indexing cause file-level fragmentation?

    Are the two operations counterproductive?

    Just to restate my question (in one of our test environments)

    1. took a production database and restored to test (part of a weekly process)

    2. multiple copies of the same database on the same server (same size, different names, different points in time, different testing uses)

    3. decided to re-index one of the copies to see the results

    4. re-index gained back over 10 GB of space from the database

    5. out of curiosity, ran the defragmenter "analysis" to see the fragmentation of the E:\ drive (based on this post)

    6. the most fragmented file is the one that was re-indexed (71 fragments vs. 2-4 for the same database under different names)

    I am curious whether the re-index causes file-level fragmentation...

    My current place of employment DOES NOT practice defragmenting the file system (OS) drives (C:\, E:\, F:\); however, some of our larger SQL Servers are on a multi-terabyte SAN, so I am not sure if defragmenting even helps in those circumstances.

    Being a bit obsessive-compulsive, I am always looking to improve on things, and this could be one area of performance we could entertain that currently isn't being addressed... but like I said, some of the larger production SQL Servers run off SAN drives, so I am not sure how fragmentation works in those cases.

    Hope this isn't too confusing of a post!

    Thanks!

    Lee

  • What tends to cause massive fragmentation on the disk level is repeated shrinks and grows of the data file, especially if there are multiple data files on the same disk all shrinking and growing.

    You want really, really bad disk fragmentation? Enable auto shrink and auto grow on all the database files.

    An index defrag won't fragment the disk on its own. All the defrag does is shuffle the index pages around inside the existing data files. That may cause a data file to grow. If it does, see paragraph above.

    If you're using a SAN, speak to your storage engineers/SAN vendor about defragmenting tools.
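
    For anyone following along, a minimal sketch of the two DBCC options for internal defragmentation, using the table and index from earlier in the thread as the example:

    -- Reorganises the leaf level of the index in place; can be stopped and restarted
    DBCC INDEXDEFRAG (PRD, REP_table, IX_Dist)
    -- Full rebuild; more thorough, but takes locks and may grow the data file
    DBCC DBREINDEX ('REP_table', 'IX_Dist')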

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (5/7/2008)


    What tends to cause massive fragmentation on the disk level is repeated shrinks and grows of the data file, especially if there are multiple data files on the same disk all shrinking and growing.

    You want really, really bad disk fragmentation? Enable auto shrink and auto grow on all the database files.

    An index defrag won't fragment the disk on its own. All the defrag does is shuffle the index pages around inside the existing data files. That may cause a data file to grow. If it does, see paragraph above.

    If you're using a SAN, speak to your storage engineers/SAN vendor about defragmenting tools.

    Hi Gail,

    I know the pitfalls of having databases on the auto shrink/grow cycles in regards to the fluctuation of the physical files...and how that cycle contributes to fragmentation of the actual drive...I guess I was looking for some confirmation that re-indexing doesn't contribute to it as well...

    None of the databases we monitor are set up like this, we generally only "grow" our databases to keep 20% or more free at all times and monitor them weekly.

    Since this is the first DBA-type job I have had, it is the only experience I can pull from. I was taught based on the way things are "normally" done here, but that doesn't always mean it is the best way, which is why I am exploring drive fragmentation a little further.

    I was curious what else can cause drive fragmentation in the event you don't have some of the obvious stuff enabled (auto grow/shrink)...

    As for SAN stuff, I can certainly pose the question to the infrastructure people to see what their opinion is...

    In this weekly process of ours it performs a DROP DB

    and then does a re-attach of the .mdf and .ldf files

    Does the constant DROP and re-attach process contribute to drive fragmentation?

    Thanks,

    Lee

    1) Drop and add database activity can indeed lead to OS file fragmentation, due to leaving holes in the file structure on disk.

    2) Index maintenance can lead to OS file fragmentation too. If the database adds space for newly allocated extents that indexes are being built on --> file growth --> OS file fragmentation.

    3) Best is to have autogrowth be a FALL-BACK mechanism, not the thing that really controls database size. I recommend to my clients that they size a database for 12-18 MONTHS of expected growth. Boom, one fell swoop and done. Re-evaluate regularly and be PROACTIVE. Also, growth increments should be appropriate: the default 1MB for data and 10% for logs is a REAL killer!!! I had a client for a perf review that had over 300 THOUSAND file fragments; we got nearly a 20% throughput gain simply from doing an OS file defrag. (See the sketch after this list.)

    4) You should consider hiring a mentor for a bit if you are new to the DBA role.
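
    As an illustration only (the logical file name and sizes are hypothetical; check yours with sp_helpfile), pre-sizing the data file and setting a fixed growth increment so autogrowth is just a fall-back might look like:

    -- Grow the data file once, up front, to cover 12-18 months of expected growth
    ALTER DATABASE PRD MODIFY FILE (NAME = 'PRD_Data', SIZE = 60000MB)
    -- Replace the tiny default increment with a fixed, sensible one
    ALTER DATABASE PRD MODIFY FILE (NAME = 'PRD_Data', FILEGROWTH = 500MB)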

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • Wow, I think my post launched an interesting discussion.

    We might be facing a lot of fragmentation; see the result of the analysis:

    Volume Data (D:)
        Volume size                 = 559 GB
        Cluster size                = 4 KB
        Used space                  = 423 GB
        Free space                  = 136 GB
        Percent free space          = 24 %

    Volume fragmentation
        Total fragmentation         = 48 %
        File fragmentation          = 96 %
        Free space fragmentation    = 0 %

    File fragmentation
        Total files                 = 38,573
        Average file size           = 13 MB
        Total fragmented files      = 4,316
        Total excess fragments      = 18,694
        Average fragments per file  = 1.48

    I will try to find a window to defrag our D: drive and keep you posted. Please continue the thread; this is a very interesting discussion.

    Thanks
