Query times out from web app but runs fast from enterprise manager

  • This one has me stumped. I'm using ASP.NET 2.0, connecting to SQL Server 2005. I have a stored procedure that has started timing out frequently when run from the page it's in, but when I take the SQL from Profiler, or from my trace logs, and run it via enterprise manager, it always comes back within 1 second.

    I thought maybe it was running fine from EM because the data had been cached in RAM, but if I do things in reverse and run the sp with different parameters in EM, it runs quickly; then I run the same report from my web app and it times out. I can see the exact same SQL coming into Profiler. The IO and CPU for the one coming in from the web app are through the roof, and then it gets killed by my 60-second timeout. I can watch the query start, copy out the SQL, paste it into EM, and run it repeatedly while the one started from the web app is still spinning.

    Could there be something about the connection pool causing it?

    I have seen this happen with one other query. Both queries are monsters, so maybe it's happening all the time with lots of other small quick queries and I just don't notice because they still come back quickly.

  • That's definitely odd. Are you certain that it's SQL Server that's timing out, and not something else in between the page sending the query and getting the data back? It sounds like you are, but just to be sure.


  • I'm not sure how this formatting will work out, but here are the 2 queries, first the EM, then the web app. TextData is identical for both.

    name                ApplicationName                                 CPU    Reads     Writes  Duration  ClientProcessID  SPID  StartTime  EndTime
    SQL:BatchStarting   Microsoft SQL Server Management Studio - Query  NULL   NULL      NULL    NULL      1520             62    32:23.5    NULL
    SQL:StmtStarting    Microsoft SQL Server Management Studio - Query  NULL   NULL      NULL    NULL      1520             62    32:23.5    NULL
    SP:Starting         Microsoft SQL Server Management Studio - Query  NULL   NULL      NULL    NULL      1520             62    32:23.5    NULL
    SP:Completed        Microsoft SQL Server Management Studio - Query  NULL   NULL      NULL    412038    1520             62    32:23.5    32:23.9
    SQL:StmtCompleted   Microsoft SQL Server Management Studio - Query  422    38155     47      415467    1520             62    32:23.5    32:23.9
    SQL:BatchCompleted  Microsoft SQL Server Management Studio - Query  422    38155     47      415657    1520             62    32:23.5    32:23.9
    RPC:Starting        .Net SqlClient Data Provider                    NULL   NULL      NULL    NULL      3828             67    32:58.7    NULL
    SP:Starting         .Net SqlClient Data Provider                    NULL   NULL      NULL    NULL      3828             67    32:58.7    NULL
    SP:Completed        .Net SqlClient Data Provider                    NULL   NULL      NULL    47250298  3828             67    32:58.7    33:46.0
    RPC:Completed       .Net SqlClient Data Provider                    45188  36888964  61      47250807  3828             67    32:58.7    33:46.0

  • Does the stored procedure run just as slowly in the query window if you use the same SET options as your application's connection?

    You should be able to get the application's SET options from a Profiler trace that includes the Sessions:ExistingConnection event.
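
    A quick way to see the query window's side of that comparison is DBCC USEROPTIONS, which lists the SET options in effect for the current session:

    -- Lists the SET options active on the current session; compare the
    -- output against the text of the ExistingConnection event
    DBCC USEROPTIONS;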

  • Wow. That was it. Now the question is, why?

    I pasted these into enterprise manager and the query took forever.

    -- network protocol: TCP/IP
    set quoted_identifier on
    set arithabort off
    set numeric_roundabort off
    set ansi_warnings on
    set ansi_padding on
    set ansi_nulls on
    set concat_null_yields_null on
    set cursor_close_on_commit off
    set implicit_transactions off
    set language us_english
    set dateformat mdy
    set datefirst 7
    set transaction isolation level read committed

    The only thing that sticks out to me is "read committed". But that gets overridden within the SP. I have this in the first few lines:

    set transaction isolation level read uncommitted;
    set nocount on;
    set xact_abort on;

    But even then I don't think the isolation level should matter, since the tables that are being queried are built at night, and get no updates during the day.

  • In the original query window you were probably getting a different execution plan. When you set those options you likely got the same plan as the client app, which points toward a parameter sniffing problem.

    Have a look at this blog entry on the subject for some ideas on fixing that.

    http://blogs.msdn.com/khen1234/archive/2005/06/02/424228.aspx

  • It was "set arithabort off". I haven't figured out why yet. That's the last thing I would have considered.

    I just assumed that the defaults sent in by .NET matched the defaults I got when using Enterprise Manager. I never thought to actually check.
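
    The likely mechanism behind that, briefly: the SET options in effect when a batch compiles are part of the plan cache lookup key, so the SSMS connection and the .NET connection each compile and cache their own plan for the same procedure. A sketch of how to see the separate cache entries (the procedure name in the filter is a hypothetical stand-in):

    -- Each distinct set_options value gets its own cached plan
    -- for the same procedure text
    SELECT st.text, pa.value AS set_options, cp.usecounts
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
    WHERE pa.attribute = 'set_options'
      AND st.text LIKE '%GetCompanyReport%';  -- hypothetical procedure name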

  • ericzbeard (8/11/2008)


    It was "set arithabort off". I haven't figured out why yet. That's the last thing I would have considered.

    I just assumed that the defaults sent in by .NET matched the defaults I got when using Enterprise Manager. I never thought to actually check.

    I wouldn't look too closely at changing the connection options; they only affected which cached execution plan you got when the stored procedure was run. So changing an option would just be a temporary fix.

    There are a few ways to address a parameter sniffing issue: using WITH RECOMPILE, replacing parameters with local variables, or using plan guides.
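
    For example, a minimal sketch of the first two options, with hypothetical object names standing in for the real report procedure:

    -- Hypothetical table standing in for the real report data
    CREATE TABLE dbo.ReportData (CompanyID int NOT NULL, Amount money NOT NULL);
    GO

    -- Option 1: WITH RECOMPILE compiles a fresh plan on every call, so one
    -- parameter value's plan is never reused for a very different one
    CREATE PROCEDURE dbo.GetCompanyReport_Recompile
        @CompanyID int
    WITH RECOMPILE
    AS
    BEGIN
        SELECT CompanyID, SUM(Amount) AS Total
        FROM dbo.ReportData
        WHERE CompanyID = @CompanyID
        GROUP BY CompanyID;
    END
    GO

    -- Option 2: copying the parameter into a local variable defeats sniffing;
    -- the optimizer uses average-density estimates instead of the first value seen
    CREATE PROCEDURE dbo.GetCompanyReport_LocalVar
        @CompanyID int
    AS
    BEGIN
        DECLARE @LocalCompanyID int;
        SET @LocalCompanyID = @CompanyID;

        SELECT CompanyID, SUM(Amount) AS Total
        FROM dbo.ReportData
        WHERE CompanyID = @LocalCompanyID
        GROUP BY CompanyID;
    END
    GO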

    Reference:

    SQL Server 2005 Practical Troubleshooting - The Database Engine by Ken Henderson

  • Wouldn't I see that the plan was different when I was running it? I tried a few times and looked at the plan (with "Include Actual Execution Plan" checked) and saw no differences. I even saved the plans as XML and diffed the files; nothing but the arithabort option changed.

    The biggest chunk of this plan, according to Management Studio (I need to stop calling it Enterprise Manager), is 70% for an index seek on a table with around 15 million rows.

    The proc takes 4 parameters, and the one that usually changes is a company id, of which there are hundreds, so the canonical example of parameter sniffing, where you have something like "country='usa'", doesn't seem to apply. I would be surprised if SQL Server ever thought a scan would be faster than a seek.

    A changed query plan is the only thing that makes sense for a query that goes from 90 seconds down to 2, but unless I'm missing something, the plan didn't change.

    I might try WITH RECOMPILE, but I worry about the cost of building that complex plan every time the query runs. I'll need to see exactly how much time it adds to execution.
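
    Worth noting here: when the SET options differ, the web app's plan lives in a separate cache entry, so running the query in Management Studio with "Include Actual Execution Plan" only ever shows the plan compiled under the SSMS defaults. A sketch of pulling the application's cached plan directly for a diff (hypothetical procedure name):

    -- Every cached plan for the procedure; the web app and SSMS appear as
    -- separate rows when their set_options differ, and the query_plan XML
    -- can be saved and diffed
    SELECT cp.usecounts, pa.value AS set_options, qp.query_plan
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
    CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
    WHERE pa.attribute = 'set_options'
      AND qp.objectid = OBJECT_ID('dbo.GetCompanyReport');  -- hypothetical name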

  • Are you sure it's not being caused by read committed vs. read uncommitted? I don't recommend its use, because you can get some really bad data, but read uncommitted will run faster, quite a lot faster.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • There's no question that it was arithabort. I took each SET option and ran it alone with the query after disconnecting and reconnecting.
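
    The test pattern, for reference (hypothetical procedure name and parameter; each test starts on a fresh connection so the previous option doesn't carry over):

    -- Fresh connection with SSMS defaults, then flip exactly one option per run
    SET ARITHABORT OFF;
    EXEC dbo.GetCompanyReport @CompanyID = 42;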
