Help with slow access on one SP

  • Hi all,

    I have a SP that, when I run it in SSMS, runs in less than a second. I can do this all day long. I then run it from a C# .NET app and it times out. This SP is one of many that are run thousands of times a day, and it is the only one that times out. It is not a complicated SP: it joins three tables and returns 7 rows.

    I am interested in finding out how to track down why SQL Server 2008 R2 has problems returning the information for this SP only, and also how to make sure that the problem is indeed inside SQL Server.

    One thing to note: when I run DMV queries to see what that SP is doing, they do not show that the SP has been run. They show lots of other SPs, but not this one. This is the DMV query I am running.

    select DB_NAME(st.dbid) DBName,
           st.objectid SP_Num,
           OBJECT_NAME(st.objectid, st.dbid) StoredProcedure,
           max(cp.usecounts) Execution_count,
           sum(qs.total_worker_time) total_cpu_time,
           sum(qs.total_worker_time) / (max(cp.usecounts) * 1.0) avg_cpu_time,
           ROUND(sum(qs.total_worker_time) / (max(cp.usecounts) * 1000.0), 4) avg_cpu_time_in_Sec
    from sys.dm_exec_cached_plans cp
    join sys.dm_exec_query_stats qs on cp.plan_handle = qs.plan_handle
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
    where cp.objtype = 'proc' and DB_NAME(st.dbid) = 'Database_Name_Goes_here'
    group by DB_NAME(st.dbid), OBJECT_NAME(st.objectid, st.dbid), st.objectid
    order by StoredProcedure
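    One caveat with that query: it only sees plans that are still in the plan cache, so a proc whose plan was evicted (or that keeps timing out before completing) may simply not show up. On SQL Server 2008 and later there is also `sys.dm_exec_procedure_stats`, which reports per-procedure numbers directly. A minimal sketch (substitute your own database name; it too only covers procs with a cached plan):

    ```sql
    -- Per-procedure execution stats from the DMV (SQL Server 2008+).
    -- total_worker_time is in microseconds.
    SELECT DB_NAME(ps.database_id)                    AS DBName,
           OBJECT_NAME(ps.object_id, ps.database_id)  AS StoredProcedure,
           ps.execution_count,
           ps.total_worker_time,
           ps.total_worker_time / (ps.execution_count * 1000000.0) AS avg_cpu_time_in_sec,
           ps.last_execution_time
    FROM sys.dm_exec_procedure_stats ps
    WHERE DB_NAME(ps.database_id) = 'Database_Name_Goes_here'
    ORDER BY StoredProcedure;
    ```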

    Any help or direction would be greatly appreciated.

    [font="Tahoma"]John Burris:hehe:
    MCITP Database Administrator[/font]

  • John, when a proc runs quickly in SSMS and slowly elsewhere, one of the first things I look for is parameter sniffing (you can search for that term on this site and get a LOT of info). It's also possible that the SET options (like ANSI_WARNINGS, etc.) differ between the two connections and cause the issue, but in my experience it's parameter sniffing first and foremost.
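    If it does turn out to be parameter sniffing, the usual workarounds look something like this (the proc and parameter names here are hypothetical, just to illustrate the pattern):

    ```sql
    -- Hypothetical proc showing two common parameter-sniffing workarounds.
    CREATE PROCEDURE dbo.GetOrders @CustomerID int
    AS
    BEGIN
        -- Workaround 1: recompile this statement on every call, so the plan
        -- is built for the actual parameter value each time.
        SELECT o.OrderID, o.OrderDate
        FROM dbo.Orders o
        WHERE o.CustomerID = @CustomerID
        OPTION (RECOMPILE);

        -- Workaround 2: ignore the sniffed value and optimize for the
        -- "average" case instead:
        -- SELECT ... WHERE o.CustomerID = @CustomerID
        -- OPTION (OPTIMIZE FOR (@CustomerID UNKNOWN));
    END
    ```

    RECOMPILE trades a little compile-time CPU on every call for a plan that always fits the parameter; OPTIMIZE FOR UNKNOWN keeps one cached plan but builds it from average density statistics.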

    The first thing I would do is the following:

    Review the stored proc and see what tables it touches.

    Then, for each table it touches, run UPDATE STATISTICS dbo.YourTableName WITH FULLSCAN.

    It doesn't take too many modified rows on large tables to throw the stats into a state where the execution plans slow down.

    does the performance from C# improve after that?
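    The steps above can be sketched as follows (assuming SQL Server 2008+, and substituting your own proc and table names):

    ```sql
    -- Step 1: find the tables the proc references.
    SELECT DISTINCT referenced_schema_name, referenced_entity_name
    FROM sys.dm_sql_referenced_entities('dbo.YourProcName', 'OBJECT')
    WHERE referenced_minor_id = 0;  -- 0 = the object itself, not a column

    -- Step 2: refresh statistics on each of those tables with a full scan.
    UPDATE STATISTICS dbo.Table1 WITH FULLSCAN;
    UPDATE STATISTICS dbo.Table2 WITH FULLSCAN;
    UPDATE STATISTICS dbo.Table3 WITH FULLSCAN;
    ```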

    Next, you'd probably want to post the proc here for a little peer review.

    Lowell


    --help us help you! If you post a question, make sure you include a CREATE TABLE... statement and INSERT INTO... statement into that table to give the volunteers here representative data. with your description of the problem, we can provide a tested, verifiable solution to your question! asking the question the right way gets you a tested answer the fastest way possible!

  • After more research and investigation: I refreshed the statistics and the problem went away.

    Thanks for the help.

    Can you tell me how the statistics can get that far out of whack, that fast, to slow a SP from milliseconds to over 30 seconds?

    John

    [font="Tahoma"]John Burris:hehe:
    MCITP Database Administrator[/font]

  • Others will chime in with more details, but here are the basics as I understand them:

    SQL Server builds a query plan to access the data for the stored procedure based on the statistics at the time it was first compiled.

    Those statistics might say the values in a given column are very close to being unique. That allows SQL to create a plan tailored to how unique the data is.

    The plan uses an index and an execution strategy that expects maybe one row, or a handful of rows.

    Now the data changes, but less than 20% of the rows in the table are updated, added, or deleted, so auto-update stats doesn't kick in.

    However, the values are now not nearly as unique as when the query was originally compiled; now there are 500 or more rows in the database meeting the same criteria.

    The plan is not efficient anymore: maybe it used an index seek before, and now an index scan would be better, but the plan says use a seek, so that's what it does. Now instead of seeking for one row, it seeks (over and over again) for the 500+ rows that really get returned.

    Joins in the plan that are efficient for one row are not efficient for lots of rows, so the merging of the data slows down too (nested loops instead of merge joins, for example).

    Rebuilding stats allows the engine to realize the current plan is no longer good enough, and it creates a new one.
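    If you want to see how stale the statistics on a table actually were, you can check when each one was last updated; `STATS_DATE` and `sys.stats` are both available on 2008 R2 (the table name here is a placeholder):

    ```sql
    -- When was each statistic on a table last updated?
    SELECT s.name AS stats_name,
           STATS_DATE(s.object_id, s.stats_id) AS last_updated
    FROM sys.stats s
    WHERE s.object_id = OBJECT_ID('dbo.YourTableName')
    ORDER BY last_updated;
    ```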

    If I'm wrong on the process I tried to describe above, the big posters here will help correct me, but that's how I seem to misremember it.

    Lowell


