Query performance - weird problem

  • I hope someone can help with this cos it's driving me bananas!

    I have a query (not hugely complex: it joins 5 tables), and when I run it in Management Studio it completes in about 2-3 seconds.

    However, running it from my application as a strongly-typed dataset in VB2005, it takes between 20 and 30 minutes.

    I'm cutting and pasting the same query text between the Dataset Designer and Management Studio; I'm using the same WHERE clause, and it's the same database.  Any ideas?

    TIA

    pg

     

  • Check your app.


    * Noel

  • Search also for "parameter sniffing" and solutions to it.
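
    "Parameter sniffing" means SQL Server compiles a plan for the first parameter values it sees and then reuses it for later, possibly very different, values. One common mitigation, sketched here against the query shape described later in this thread (column and view names taken from that post), is to force a fresh compile on every execution:

    ```sql
    -- Illustrative sketch: OPTION (RECOMPILE) makes SQL Server build a new
    -- plan on each execution using the actual parameter values, instead of
    -- reusing a plan compiled for unrepresentative "sniffed" values.
    SELECT field1, field2
    FROM vMyView
    WHERE mydatefield BETWEEN @startdate AND @enddate
    OPTION (RECOMPILE);
    ```

    This trades extra compile time per execution for a plan suited to each parameter range, which is usually a good trade for a query that otherwise runs for minutes.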

  • When you say "same database" do you also mean on the same server?

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • I believe when you say dataset you mean "suck all the data down to the client and convert it to XML", so the extra delay could be client-side processing.  If you use the same query in a datareader what kind of performance do you see?

  • If you're an admin, the best way is to run a simple trace.  Trace the Stored Proc start and stop events, and the TSQL Statement start and stop events.  If your app connects using your account, use a filter to restrict the results to just your actions.  This is by far the fastest way to figure out the difference.
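
    If running a Profiler trace isn't convenient, a rough equivalent on SQL Server 2005 and later is to query the dynamic management views while the slow query is running. A sketch, filtered to your own login as suggested above:

    ```sql
    -- Show what your own connections are currently executing and waiting on.
    SELECT r.session_id,
           r.status,
           r.wait_type,
           r.total_elapsed_time,  -- milliseconds
           t.text AS current_sql
    FROM sys.dm_exec_requests AS r
    JOIN sys.dm_exec_sessions AS s
      ON s.session_id = r.session_id
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE s.login_name = SUSER_SNAME();
    ```

    Run it from a second connection while the application query is executing; a large elapsed time with an active wait_type tells you the delay is on the SQL Server side rather than in the client.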

    However, just from your post, my first guess is different execution plans.  When you run the query from SQL Server Management Studio (SSMS), it may be matching it to a precompiled execution plan.  When you run it from your app, the driver you use may prevent SQL Server from matching the SQL to any existing execution plan.  Or worse, it may match your application's SQL to a poorly suited execution plan, which would be even slower than generating a new plan each time!

    There are two options: clear the plan cache, or encourage SQL Server to use the same plan for both.  DBCC FREEPROCCACHE will clear out the precompiled statements; try this as a first step.  However, the best approach is to use a stored procedure.  From both SSMS and your app you can call that stored proc, and assuming you pass in the same parameter values, SQL Server will use the same execution plan.
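
    A minimal sketch of that stored-procedure approach (the procedure name is illustrative; the column and view names come from the query posted later in this thread):

    ```sql
    -- Both SSMS and the app call the same proc with the same parameters,
    -- so they share a single cached execution plan.
    CREATE PROCEDURE dbo.usp_GetRowsByDateRange
        @startdate datetime,
        @enddate   datetime
    AS
    BEGIN
        SET NOCOUNT ON;
        SELECT field1, field2
        FROM vMyView
        WHERE mydatefield BETWEEN @startdate AND @enddate;
    END;
    GO

    -- First step suggested above: clear cached plans, then retest both callers.
    DBCC FREEPROCCACHE;
    ```

    Note that DBCC FREEPROCCACHE flushes the plan cache for the whole instance, so on a busy production server expect a brief burst of recompiles afterwards.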

  • 1. Check if there is blocking.

    2. Check if you are seeing any major waits for your application's SPID in SQL Server.

    3. Use Profiler and find whether all the delay occurred at the SQL Server end or the app end. (Hint: use the Duration column for the RPC:Completed or SQL:BatchCompleted events.)

    4. Check connection properties (the DBCC USEROPTIONS command, or the Audit Login/ExistingConnection events in a trace) and see if there are any differences.
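
    For step 4, run the following from both SSMS and an application connection and compare the two result sets line by line. One well-known difference to look for: SSMS sets ARITHABORT ON by default while many client drivers leave it OFF, and that single SET option difference is enough to give the two connections separate cached plans:

    ```sql
    -- Lists the current session's SET options
    -- (ANSI_NULLS, ARITHABORT, QUOTED_IDENTIFIER, isolation level, etc.).
    DBCC USEROPTIONS;
    ```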

    HTH


    -- Amit



    "There is no 'patch' for stupidity."



    Download the Updated SQL Server 2005 Books Online.

  • Just my 2 cents... dunno if it is, or not, but the query should NOT be embedded SQL... your app should be calling a stored procedure for this and returning just the result set.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Thanks everyone for the replies. Some useful suggestions in there for other occasions also.

    I've now managed to narrow it down a bit. Confession - there was one of those strangely invisible typos in the design-time connection string in VS, which meant that the server I was connecting to from the VS Query Designer was not the same one as from Management Studio.

    Having got that out of the way, I'm now working solely within Management Studio. So now I'm left with this puzzle, which seems to be related to differing execution plans:

    My query is of the form "SELECT field1, field2 . . . FROM vMyView WHERE mydatefield BETWEEN @startdate AND @enddate"

    Sometimes when I run this it executes in 3 seconds, sometimes in 25 minutes.  This is with the same parameters.  There's no blocking. In Activity Monitor there are 13 subthreads, with 9 showing waittype CXPACKET and the others PAGEIOLATCH_SH.

    If I change the WHERE clause to 'mydatefield = @startdate' then it executes in 3 secs every time.

    The predicted query plan for the 'BETWEEN' version appears to be doing a table scan of a very large table, which would account for the long execution time, but that table is not the one with the date field in it.  And anyway, surely an index seek would be just as useful for 'BETWEEN' as it is for '='?

  • Well, BETWEEN would require an index scan at least - seeks only get specific values.  Are your statistics up to date?  SQL Server might not know an index scan would be faster than a table scan.  And if it's not using the index on the table holding the dates, that might be the index it was using to join to your other large table.  That could cause a table scan.  Also, if your index isn't a clustered index, then you'd need to ensure it covers all the columns your view is returning.  Otherwise it often won't be used for index scans, because SQL Server figures it has to go to the table to get the columns anyways, so might as well use a table scan.
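
    An illustrative covering index for the shape of query in this thread (the table and index names are placeholders; the real view may join several tables, so the index belongs on whichever base table holds the date column):

    ```sql
    -- Key the index on the date column so the BETWEEN range can be satisfied
    -- by an index seek/range scan, and INCLUDE the selected columns so no
    -- lookup back to the base table is needed (INCLUDE requires SQL 2005+).
    CREATE NONCLUSTERED INDEX IX_MyTable_mydatefield
    ON dbo.MyTable (mydatefield)
    INCLUDE (field1, field2);
    ```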

    CXPACKET is always misleading, and anyways, your query shouldn't be waiting on itself.  PAGEIOLATCH_SH indicates that you're waiting on disk IO, holding a shared latch on a page.  If you have 3 of those, you're reading 3 pages' worth of data.  No way to know if that's good or bad without knowing the size of the rows in your tables, how many rows from each are coming back, and how your tables are clustered.  It doesn't seem to be too many, though.

    When running ad hoc queries, SQL Server recompiles every time.  Since you're using variables, it will sometimes guess differently than it should.  (I don't know why.)  Just for the fun of it, replace the variables in your BETWEEN clause with strings representing the datetime values.  If it comes back quickly, then you should be able to just put this query, with the variables, in a stored proc and get the same fast response.

    In short, I suggest updating your statistics, running the query with date constants instead of variables (as a test, not in production), and looking to see if your non-clustered indexes are covering.  If all else fails, you can always use hints to encourage the use of a specific index.
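
    Those suggestions, sketched with placeholder object names (substitute your real table, column, and index names):

    ```sql
    -- 1) Refresh statistics on the base table behind the date column.
    UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;

    -- 2) Diagnostic only, not for production: re-run with date literals
    --    in place of the variables and compare timings.
    SELECT field1, field2
    FROM vMyView
    WHERE mydatefield BETWEEN '20080101' AND '20080131';

    -- 3) Last resort: hint a specific index on the base table
    --    (the index name here is hypothetical).
    SELECT field1, field2
    FROM dbo.MyTable WITH (INDEX (IX_MyTable_mydatefield))
    WHERE mydatefield BETWEEN '20080101' AND '20080131';
    ```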
