March 3, 2010 at 10:26 am
I have a stored procedure that typically takes less than 10 seconds to run. It builds a dynamic SQL statement against several multi-million-row tables; one is approximately 2M rows and another approximately 4M rows. This had been running fine for several months with no change in performance and no code changes. I know for a fact it was running smoothly as of last night.
This morning the stored procedure suddenly started timing out. I've been able to reproduce the issue in SQL Server Management Studio, and right now it's up to 11 minutes and counting. The query plan shows lots of index seeks and no table scans.
Something has changed and I cannot figure out what. I doubt it's a multi-user locking issue, because nobody is using the system right now and only one person was using it when this started happening.
Can anybody suggest which system tables/DMVs, or general diagnostics, to use to help me figure out what the heck is going on?
I've added a TRY/CATCH block to the SP but it never even makes it there. While executing the dynamically built UPDATE statement it just runs forever until I finally cancel the query. It's up to 20 minutes now. What the heck!?
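To be concrete, I'm thinking of checks along these lines, though I'm not sure they're the right things to look at (just a rough sketch; the session-id filter and the columns are guesses on my part):

-- What is the long-running session actually doing right now?
-- Shows the running statement, its current wait type, and any blocking session.
SELECT r.session_id,
       r.status,
       r.wait_type,
       r.wait_time,
       r.blocking_session_id,
       r.cpu_time,
       r.logical_reads,
       t.text AS running_sql
FROM   sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE  r.session_id > 50;   -- skip system sessions

-- Is any session blocked by another session?
SELECT wt.session_id,
       wt.wait_type,
       wt.wait_duration_ms,
       wt.blocking_session_id
FROM   sys.dm_os_waiting_tasks AS wt
WHERE  wt.blocking_session_id IS NOT NULL;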
Thanks,
Bill, Charlotte NC
March 3, 2010 at 10:32 am
Just a shot in the dark, but have you tried updating statistics on the database, or at least on the tables the query uses?
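Something along these lines, for example (the table names are just placeholders for whatever the query actually touches):

-- Rebuild statistics on the specific tables with a full scan of the data
UPDATE STATISTICS dbo.BigTable1 WITH FULLSCAN;
UPDATE STATISTICS dbo.BigTable2 WITH FULLSCAN;

-- Or, as a lighter-weight catch-all, a sampled update across the whole database
EXEC sp_updatestats;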
March 3, 2010 at 11:10 am
Thanks for the suggestion. I updated the statistics on the tables used in the query and performance came back to normal. I understand what happened, but frankly I'm amazed that statistics could cause such a dramatic difference in performance.
The tables I updated statistics on have 2,136,954 rows, 18 rows, 633,516 rows, and 4,189,124 rows. There is also a view with 633,516 rows.
The table with 2M rows typically has inserts of 10,000 rows or more per transaction, for a total of 10,000 to 100,000 new rows a day. The table with 4M rows has a complete flush and refill via SSIS once every 2 weeks or so. So there is a fair amount of data churn going on.
This database and its WinForms application do a lot of batch-type work where large quantities of rows are added and/or modified.
I will update my data-loading process to update the stats on tables when they are flushed and refilled, and will add a nightly maintenance SQL Agent job to update the stats on the other tables in the mix.
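Roughly what I have in mind, as a sketch (the table names are placeholders, and I may trade FULLSCAN for a sampled update depending on how long it takes):

-- See when each statistic on the big tables was last updated
SELECT OBJECT_NAME(s.object_id)            AS table_name,
       s.name                              AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM   sys.stats AS s
WHERE  s.object_id IN (OBJECT_ID('dbo.BigTable1'),
                       OBJECT_ID('dbo.BigTable2'))
ORDER  BY last_updated;

-- Post-load step after the SSIS flush-and-refill
UPDATE STATISTICS dbo.RefreshedTable WITH FULLSCAN;

-- Nightly catch-all for the rest (sampled)
EXEC sp_updatestats;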
Can anybody confirm that statistics being out of whack can cause such voodoo?
Thanks.
March 3, 2010 at 11:28 am
Stats get used during execution plan building and validation, so bad stats usually mean a bad execution plan.
If, for example, the optimizer underestimated how many records would be affected, some of your joins might have been set to use methods (such as nested loop joins) that only work well with small result sets.
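If you want to see that mismatch directly, capture the actual execution plan (or turn on the profile output) and compare estimated versus actual row counts for each operator. A rough sketch, with a placeholder procedure and parameter:

SET STATISTICS PROFILE ON;   -- returns per-operator estimated vs. actual row counts
GO
EXEC dbo.YourProc @SomeParam = 1;   -- placeholder: the procedure that went slow
GO
SET STATISTICS PROFILE OFF;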
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?
March 3, 2010 at 11:45 am
Yeah, I basically understand all that, and thanks for the information. Needless to say, I will be working on my maintenance scripts this afternoon. I was just surprised that this could bring my database to its knees from one day to the next.
If you're not making mistakes you're not learning.
March 3, 2010 at 7:14 pm
William Plourde (3/3/2010)
Can anybody confirm that statistics being out of whack can cause such voodoo?
Absolutely. The task of the query optimizer is quite hard enough, without it being fed duff information in the first place. Good statistics are vital to the production of a close-to-optimal plan.
I don't often see this problem in the real world, except occasionally on very large tables with unusual data patterns - do you have statistics set to automatically update? Modern versions of SQL Server are normally pretty good at detecting stale statistics.
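That setting is easy to check, and easy to switch on if it turns out to be off (the database name in the ALTER is a placeholder):

-- Check the auto-stats settings for the current database
SELECT name,
       is_auto_create_stats_on,
       is_auto_update_stats_on,
       is_auto_update_stats_async_on
FROM   sys.databases
WHERE  name = DB_NAME();

-- Enable automatic statistics updates if they are off
ALTER DATABASE YourDatabase SET AUTO_UPDATE_STATISTICS ON;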
Paul