Which query is faster and why?

  • Hi,

    I wanted to know: given that both of these queries (with field_1 and field_2 as varchar) return the same records from table_A, which of them will be faster, and why?

    Query#1:

    select * from table_A where field_1 = '4'

    Query#2:

    SELECT *
    FROM table_A
    WHERE (CONVERT(datetime, Field_2)
    BETWEEN CONVERT(datetime, '20030701') AND CONVERT(datetime, '20030906'))

    Thanks

    The first will be slightly faster, assuming it uses no implicit conversions. An exact equality comparison will also be faster than a range (e.g. BETWEEN) operation.

    The first will be much faster if you have an index on the Field_1 column.

    Even if there is an index on Field_2, it will not be used because of the function used with the column in the predicate of Query #2.

    If you type both queries into Query Analyzer, you can use the "Show Execution Plan" option to see exactly how the times compare as a percentage of total time. It's not named Query Analyzer for nothing.
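
    For what it's worth, one way to let an index on Field_2 help is to keep the column bare in the predicate and apply any conversion to the constants instead. A rough sketch, assuming Field_2 always stores its dates as 'YYYYMMDD' text (which sorts correctly as varchar):

    -- Sargable rewrite of Query #2: no function wrapped around the column,
    -- so an index on Field_2 can be used for a range seek.
    -- Assumes Field_2 holds dates only as 'YYYYMMDD' varchar values.
    SELECT *
    FROM table_A
    WHERE Field_2 BETWEEN '20030701' AND '20030906'

    If the stored text is not in a consistent, sortable format, this rewrite is not safe and the CONVERT on the column (and the resulting scan) is hard to avoid.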

    --Jonathan

  • quote:


    The first will be slightly faster, assuming it uses no implicit conversions. An exact equality comparison will also be faster than a range (e.g. BETWEEN) operation.

    The first will be much faster if you have an index on the Field_1 column.

    Even if there is an index on Field_2, it will not be used because of the function used with the column in the predicate of Query #2.

    If you type both queries into Query Analyzer, you can use the "Show Execution Plan" option to see exactly how the times compare as a percentage of total time. It's not named Query Analyzer for nothing.

    --Jonathan


    Hi Jonathan,

    I also thought exactly the same at first, but I tested both queries with 40,000 records in table_A. I tried it without an index, indexing field_1 and field_2 separately, and indexing both at the same time, but the execution time displayed for both queries was exactly the same, to my surprise.

    Though the predicate in Query #2 is much more complex than in Query #1, why is their execution time the same?

    Thanks

  • quote:


    Hi Jonathan,

    I also thought exactly the same at first, but I tested both queries with 40,000 records in table_A. I tried it without an index, indexing field_1 and field_2 separately, and indexing both at the same time, but the execution time displayed for both queries was exactly the same, to my surprise.

    Though the predicate in Query #2 is much more complex than in Query #1, why is their execution time the same?

    Thanks


    The overhead from the scalar functions may be too small to see in your sample. Is there an implicit conversion in your first query? What is the data type of the Field_1 column?

    As for the index on Field_1 not helping performance, my only guess is that you used a non-clustered index and the value ('4') has very low selectivity (so the index was not chosen). If you learn to read the execution plans (which is not hard), you can easily discover the actual issues.
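
    If you'd rather read the plan as text than use the graphical version, something along these lines works in Query Analyzer (note that SET SHOWPLAN_TEXT has to be the only statement in its batch):

    -- Show the estimated plans as text instead of executing the queries.
    SET SHOWPLAN_TEXT ON
    GO
    SELECT * FROM table_A WHERE field_1 = '4'
    SELECT * FROM table_A
    WHERE (CONVERT(datetime, Field_2)
           BETWEEN CONVERT(datetime, '20030701') AND CONVERT(datetime, '20030906'))
    GO
    SET SHOWPLAN_TEXT OFF
    GO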

    --Jonathan

    The data type of both fields (field_1 and field_2) is varchar, and the index created was a clustered index.

    Execution Plan: Though it shows 2% on the "Select" in Query #2 and 0% on the "Select" in Query #1, they both take the same time.

  • quote:


    The data type of both fields (field_1 and field_2) is varchar, and the index created was a clustered index.

    Execution Plan: Though it shows 2% on the "Select" in Query #2 and 0% on the "Select" in Query #1, they both take the same time.


    I don't understand that last "sentence." What are the percentages to which you refer? How are you getting your timings?

    If you copy just the two queries to the QA window and then execute them (as a batch) using the "Show Execution Plan" option, check the Execution Plan window. There will be a line at the top of each plan giving the relative cost; the two numbers should add up to 100%...

    A good way to get absolute timings (which are less useful in this case) is to SET STATISTICS TIME ON before executing the batch. Be aware that other processes may skew any timings.
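
    For example, a batch along these lines (a sketch using the names from your post) prints parse/compile and execution times for each statement:

    -- Print CPU and elapsed times for each statement in the batch.
    SET STATISTICS TIME ON

    SELECT * FROM table_A WHERE field_1 = '4'

    SELECT * FROM table_A
    WHERE (CONVERT(datetime, Field_2)
           BETWEEN CONVERT(datetime, '20030701') AND CONVERT(datetime, '20030906'))

    SET STATISTICS TIME OFF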

    Why are you storing dates as varchars?

    --Jonathan

    In this particular situation we already have the dates stored in free-text format, as we are doing an extract from a Pervasive 2000i database.

    The execution plan does show the difference, but the time is probably approximately the same in this particular situation because of the number of rows.

    Thanks for your help.


  • quote:


    I also thought exactly the same at first, but I tested both queries with 40,000 records in table_A. I tried it without an index, indexing field_1 and field_2 separately, and indexing both at the same time, but the execution time displayed for both queries was exactly the same, to my surprise.


    I expected that to happen. As I have noticed, queries tend to run faster in subsequent executions; most of my queries used to take about 14 ms on the first execution and 0 ms on the second. I guess the execution time might not be a valid measurement of performance during trials. Please correct me if I'm wrong.



    Mohamad Ad-Deeb
    Senior Software Developer, Microtech
    3 Mossadak st. Dokki 12311
    Giza, Egypt.
    Tel. +20 2 336 9855 (Ext.112)
    Fax +20 2 749 8784
    http://www.microtech-eg.com

    If accurate timing is needed, you should use DBCC FREEPROCCACHE before running each one of them; otherwise you may be using a precompiled plan.

    just my 2 cents


    * Noel

  • ...along with DBCC DROPCLEANBUFFERS
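
    Put together, something like this before each timed run (a sketch for test servers only):

    -- Clear cached plans and clean data pages so both queries start cold.
    -- Do this only on a test server; it slows everything else on the box.
    DBCC FREEPROCCACHE
    DBCC DROPCLEANBUFFERS
    GO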

    Frank

    http://www.insidesql.de

    --
    Frank Kalis
    Microsoft SQL Server MVP
    Webmaster: http://www.insidesql.org/blogs
    My blog: http://www.insidesql.org/blogs/frankkalis/

  • quote:


    If accurate timing is needed, you should use DBCC FREEPROCCACHE before running each one of them; otherwise you may be using a precompiled plan.

    just my 2 cents


    As much of tuning consists of attempting to use a compiled plan (e.g. stored procedures, sp_executesql, etc.), one should be aware of both the parse/compile time and the execution time. SET STATISTICS TIME ON gives you both, but the parse and compile time will only be non-zero when something compiles, of course.

    If you're working with just query scripts as above, then the DBCC FREEPROCCACHE available in SQL Server 2000 can be helpful. Use this only in a testing environment, though, as the users will not appreciate it when the system bogs down while all queries are recompiled. sp_recompile is handy when testing stored procedures and triggers, or you can just rerun the creation script, changing CREATE to ALTER. For testing stored procedures, I usually just use

    
    
    EXEC <sp> <param list> WITH RECOMPILE
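
    As a concrete sketch (the procedure name and parameter below are made up, only to show the pattern), combined with the timing output:

    -- Hypothetical procedure and parameter, shown only to illustrate the pattern.
    SET STATISTICS TIME ON
    EXEC dbo.usp_GetOrders @CustomerId = 4 WITH RECOMPILE
    SET STATISTICS TIME OFF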

    --Jonathan
