Tuning an expensive query

  • Hi all,

    I am tuning some expensive queries; one of them is:

    SELECT * FROM dbo.TableA

    WHERE [Date] >= '2012-01-01 00:00:00.000';

    TableA consists of 773,309 rows.

    Duration is 15 sec, logical reads: 20,900 (a sketch for capturing these figures follows this post).

    When I check the actual execution plan, it shows a Clustered Index Scan at 100%.

    I added a nonclustered index:

    CREATE NONCLUSTERED INDEX IX_TableA_Date

    ON dbo.TableA ([Date]);

    Result: no change, it still shows a Clustered Index Scan at 100%.

    I changed the nonclustered index by adding included columns (all columns from TableA):

    CREATE NONCLUSTERED INDEX IX_TableA_Date

    ON dbo.TableA ([Date])

    INCLUDE (A, B, C, D, E);

    Result: only 1 sec faster, logical reads: 19,324 (still very high), Index Seek at 100%.

    Could you please help me out? I'd really appreciate it.

    Cheers
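
    The duration and logical-read figures above can be reproduced in SSMS with SET STATISTICS IO and SET STATISTICS TIME; a minimal sketch, assuming the anonymised dbo.TableA table and Date column used in this post:

    -- Capture I/O and timing figures in the Messages tab for the query under test.
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    SELECT *
    FROM dbo.TableA
    WHERE [Date] >= '2012-01-01 00:00:00.000';

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;
    -- The output lists logical reads per table (e.g. the 20,900 quoted above)
    -- along with CPU and elapsed time for the statement.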

    You're not going to get any better than that. By the looks of the reads, you're fetching the majority of the table and all of the columns. As long as you're retrieving most of the table and all the columns, the query will take time (see the sketch after this post for the contrast with a narrower query).

    And as for the %, well, costs within a plan have to add up to 100%, so in a very simple plan an index seek can easily be 100%, because there's nothing else in the plan with any cost.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
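
    To make the contrast above concrete, a sketch using the thread's placeholder names (the index name IX_TableA_Date_A is illustrative): when a query needs only a column or two, a narrow covering index answers it with a fraction of the reads, whereas SELECT * over most of the table touches roughly the same number of pages whichever index is used.

    -- Narrow covering index for a query that needs only [Date] and A.
    CREATE NONCLUSTERED INDEX IX_TableA_Date_A
        ON dbo.TableA ([Date])
        INCLUDE (A);

    SELECT [Date], A
    FROM dbo.TableA
    WHERE [Date] >= '2012-01-01 00:00:00.000';
    -- This touches only the narrow index pages. SELECT * over ~770k qualifying
    -- rows has to return every column, so it reads most of the table's pages
    -- no matter which index the optimiser chooses.
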
  • GilaMonster (5/25/2015)


    You're not going to get any better than that. By the looks of the reads, you're fetching the majority portion of the table and all the columns. As long as you're retrieving the majority of the table and all the columns, the query will take time.

    And as for the %, well costs within a plan have to add to 100%, so in a very simple plan an index seek can easily be 100%, because there's nothing else in the plan with any cost.

    Thanks for the reply 🙂 appreciate it. I thought the same, to be honest... just wanted to hear other opinions about it.

  • Hey

    You might find the following helpful. It explains why returning even a small percentage of a given table can cause SQL Server to outright ignore a non-clustered index.

    This is almost certainly what's happening to you:

    https://rule30.wordpress.com/2014/03/30/sql-server-are-your-non-clustered-indexes-useless/

    You would likely find that if you modified your date to return a much smaller percentage of the table, you would reach a "switch" point where SQL Server decides that the non-clustered index is worthwhile (a sketch for probing that point follows this post).

    HTH

    S
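
    One way to probe that switch point on the table from this thread; a sketch assuming the dbo.TableA names and the IX_TableA_Date index from earlier posts, with arbitrary example dates:

    -- Let the optimiser choose; for a wide date range it will likely scan.
    SELECT * FROM dbo.TableA
    WHERE [Date] >= '2015-05-01';

    -- Force the nonclustered index and compare logical reads and duration.
    SELECT * FROM dbo.TableA WITH (INDEX(IX_TableA_Date))
    WHERE [Date] >= '2015-05-01';

    -- Narrow the range until the unhinted query flips to an index seek plus
    -- key lookups: that is the "switch" point the linked post describes.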

    Just a word of advice: that blog post's conclusion is wrong.

    SQL Server could still very well hold 2GB of table in memory – but it doesn't take a genius to realise that scanning 2GB of data to get a small percentage of rows is a different proposition when compared to scanning 26MB.

    It has nothing at all to do with the table being able to fit into memory, and it has nothing to do with the amount of data being scanned. The optimiser assumes that all data is on disk and will need to be loaded into memory during processing.

    The percentage at which the optimiser switches from an index seek + key lookups to a table scan has to do with the ratio of rows estimated to be returned to pages in the table, because key lookups are singleton seeks against the clustered index and hence do at least one read, usually more, per key lookup (a sketch after this post shows where both of those numbers can be read).

    Edit: Added the missing conclusion.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
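
    A sketch of where both sides of that ratio can be read, assuming the dbo.TableA placeholders from this thread:

    -- Pages in the table (heap or clustered index).
    SELECT SUM(in_row_data_page_count) AS data_pages
    FROM sys.dm_db_partition_stats
    WHERE object_id = OBJECT_ID('dbo.TableA')
      AND index_id IN (0, 1);

    -- Estimated rows for the Date predicate come from the statistics histogram.
    DBCC SHOW_STATISTICS ('dbo.TableA', IX_TableA_Date) WITH HISTOGRAM;

    -- Roughly: once the estimated row count becomes a meaningful fraction of the
    -- page count, one or more reads per key lookup cost more than a single scan
    -- of the table, so the optimiser scans.
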
    The question that I also have is where you are returning this information to, because no one will have the time to read even a thousand rows.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
