November 20, 2014 at 5:28 pm
Hi everyone,
Background:
We have a database that is restored daily each morning for Reporting Services (SSRS 2008 R2) before users come into the office. Early in the morning those users run 2-4 reports (driven by a stored proc) directly from SSRS. Each completes in about 2-3 minutes.
About 30 minutes later, an SSRS subscription initiates 2 more reports with different parameters passed to the stored proc; these also run in about 2-3 minutes.
Problem:
Every few weeks (it's inconsistent) the reports do not complete for hours, so we end up killing the running sessions in SQL Server and then recompiling the sproc with sp_recompile, which resolves it.
In SSRS the first report fails with "rsInternalError" - I couldn't find anything in the report server logs.
All the subsequent calls to that same report get 'rsProcessingAborted'.
Note that other SSRS reports are running and being processed, so to my knowledge 'reportserver' is operational even though this query failed.
In SQL Server all the sessions show as actively 'runnable' for several hours, so we kill them, since our normal expected duration for results is 2-3 minutes.
I'm unclear on the issue because:
The proc cache is empty every morning. The first run by the users always uses a freshly compiled plan, since the database is restored every night, which frees up the proc cache. Yet it still runs long (we kill it because it runs for hours, when it normally takes less than 3-4 minutes).
And yet killing the spids and recompiling the sproc fixes it every time.
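For reference, the fix we apply looks something like this (the session id and proc name below are placeholders, not our actual values):

```sql
-- Kill the stuck report session (find the spid in sys.dm_exec_requests;
-- 123 below is just a placeholder)
KILL 123;

-- Mark the proc for recompilation so the next call builds a fresh plan
-- (dbo.usp_ReportData is a placeholder name for our report proc)
EXEC sp_recompile N'dbo.usp_ReportData';
```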
I would be interested in theories and suggestions. Thanks for reading! 🙂
--------------------------------------------------
...0.05 points per day since registration... slowly crawl up to 1 pt per day hopefully 😀
November 21, 2014 at 1:44 am
Looks like bad parameter sniffing and/or stale statistics.
Here's a good article on the subject: http://www.mssqltips.com/sqlservertip/3257/different-approaches-to-correct-sql-server-parameter-sniffing/
-- Gianluca Sartori
December 2, 2014 at 4:11 pm
spaghettidba (11/21/2014)
Looks like bad parameter sniffing and/or stale statistics. Here's a good article on the subject: http://www.mssqltips.com/sqlservertip/3257/different-approaches-to-correct-sql-server-parameter-sniffing/
Thanks for the reply and link. I would think so too, but since the cache is cleared as part of normal SQL Server behavior when a database is restored, I would assume there could be no parameter sniffing? (This db is restored nightly for use as a reporting db.)
I thought parameter sniffing occurs when something is cached in the proc cache; in this case there isn't anything cached.
Also, this only happens occasionally, which makes it mysterious. The amount of data we have in the db is normal.
I would suggest the developers use OPTION (RECOMPILE), but since there is nothing in the proc cache, the first run would technically be freshly compiled anyway.
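As a sketch, the suggestion to the developers would look something like this (the proc, table, and column names are made up for illustration):

```sql
-- OPTION (RECOMPILE) compiles a fresh plan for this statement on every
-- execution, sniffing the actual parameter values each time
ALTER PROCEDURE dbo.usp_ReportData   -- placeholder proc name
    @StartDate date,
    @EndDate   date
AS
BEGIN
    SELECT ReportDate, Region, SUM(Amount) AS TotalAmount
    FROM dbo.ReportFacts             -- placeholder table name
    WHERE ReportDate BETWEEN @StartDate AND @EndDate
    GROUP BY ReportDate, Region
    OPTION (RECOMPILE);
END;
```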
December 2, 2014 at 4:21 pm
Stale statistics could be the cause.
Plans compiled against stale statistics are more likely to be sub-optimal.
-- Gianluca Sartori
December 2, 2014 at 7:41 pm
spaghettidba (12/2/2014)
Stale statistics could be the cause. Plans compiled against stale statistics are more likely to be sub-optimal.
That's a possibility thanks!
So maybe the long-running plan was compiled from stale stats, but while it ran it triggered an auto update of statistics on a table/column. If that's the case, do you think that if I ran a query to check the stats date, I would see the current day's stats?
It would also make sense that the sproc only runs well after a recompile (which gets a new plan), as opposed to clearing the plan cache.
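Something like this is what I had in mind for checking the stats dates (the table name is just a placeholder):

```sql
-- When were the statistics on this table last updated?
SELECT s.name AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID(N'dbo.ReportFacts')   -- placeholder table
ORDER BY last_updated DESC;
```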
December 3, 2014 at 2:23 am
If the cache is cleared, a new plan is needed for each statement.
If the stats are stale, the plans are compiled against stale stats. All the objects referenced by a statement need stats, and some may be good while others may not. The threshold for an auto stats update is 20% of rows modified (unless you have activated trace flag 2371). For big objects that threshold may be too high, so when a plan is compiled while the number of modified rows is just below the threshold, you get the worst possible stats (and possibly the worst plan). Wait a few minutes, let the number of modified rows cross the threshold so auto stats kicks in, and you get fresh stats.
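If you want to see how close a table is to the threshold, something like this should work (sys.dm_db_stats_properties requires SQL Server 2008 R2 SP2 or later; the table name is a placeholder):

```sql
-- Rows modified since the last stats update, as a percentage of the table
SELECT s.name AS stats_name,
       sp.rows,
       sp.modification_counter,
       CAST(sp.modification_counter * 100.0 / NULLIF(sp.rows, 0)
            AS decimal(5, 2)) AS pct_modified   -- 20% is the classic threshold
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID(N'dbo.ReportFacts');
```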
So, long story short, it's possible.
-- Gianluca Sartori
December 3, 2014 at 10:50 am
I suggest RECOMPILE as well. That adds some compile time, but it will be trivial compared to a terrible plan.
Also, if you leave auto stats update on, explicitly set it to be async instead of the default sync so that current queries do not wait for the stats update(s) to complete.
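For example (the database name below is a placeholder):

```sql
-- Keep auto-update of statistics on, but run it asynchronously so the
-- triggering query doesn't wait for the stats update to finish
ALTER DATABASE ReportingDB SET AUTO_UPDATE_STATISTICS ON;
ALTER DATABASE ReportingDB SET AUTO_UPDATE_STATISTICS_ASYNC ON;
```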
SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
December 3, 2014 at 11:19 am
Thanks SpaghettiaDBA and ScottPletcher for your feedback! I appreciate it 😀