May 5, 2015 at 7:45 am
Why is there often such a dramatic discrepancy between the logical reads recorded in the trace file versus the output of STATISTICS IO?
In the server-side trace I have running, I found a reporting procedure showing 136,949,501 reads (yes, hundreds of millions), and it's taking 13,508 seconds to complete.
So I pulled the code from the trace and executed it via SSMS - it runs in under 1 second and only generates about 4,000 reads (using various different parameters, I get the same result).
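For the record, the SSMS check was just something along these lines (the procedure and parameter names here are placeholders, not the real report code):

-- Sketch of the manual comparison run in SSMS (names are made up)
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

EXEC dbo.usp_SomeReport @AccountID = 12345;   -- placeholder proc and parameter

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;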
The execution plan shows nothing unusual
Has anyone else seen this?
______________________________________________________________________________
Never argue with an idiot; they'll drag you down to their level and beat you with experience
May 5, 2015 at 7:49 am
User-defined functions.
http://sqlinthewild.co.za/index.php/2009/04/29/functions-io-statistics-and-the-execution-plan/
Or, looking at the duration stats...
Could be different plans. Management Studio uses different SET options from what most apps use, so running the query from SSMS may have generated a different execution plan, one optimal for the parameters it was run with, whereas the app was using a plan generated for other parameter values and which is not efficient here. Bad parameter sniffing, in other words.
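One quick way to check for that is to compare the cached plans and their SET options for the procedure, something along these lines (the object name is a placeholder):

-- Rough sketch: look for multiple cached plans with differing SET options
SELECT  pa.attribute,
        pa.value,
        qs.plan_handle,
        qs.execution_count,
        qs.total_logical_reads
FROM    sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_plan_attributes(qs.plan_handle) AS pa
WHERE   st.objectid = OBJECT_ID('dbo.usp_SomeReport')   -- placeholder name
  AND   pa.attribute = 'set_options';

Multiple rows for the same object with different set_options values would mean SSMS and the app are getting separate plans.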
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 5, 2015 at 9:55 am
Unless you're operating in a development environment, I wouldn't suggest even using STATISTICS IO. I've actually noticed it negatively impacting queries. Instead I capture events using extended events (you can use trace, but I don't like it). It's a pain because you have to look in two places to see the IO, but it's safer.
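As a rough sketch, an extended events session for that might look something like this (the session name, file name, and duration filter are placeholders; adjust to taste):

-- Sketch: capture completed statements along with their reads
CREATE EVENT SESSION [CaptureReads] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
(
    ACTION (sqlserver.sql_text, sqlserver.database_name)
    WHERE (duration > 1000000)                 -- longer than 1 second (microseconds)
)
ADD TARGET package0.event_file (SET filename = N'CaptureReads.xel');
GO
ALTER EVENT SESSION [CaptureReads] ON SERVER STATE = START;

I believe logical_reads comes back as a field on the event itself; the file target keeps the collection overhead away from the query.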
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
May 5, 2015 at 10:08 am
@Grant - yes, it's production (reporting), and typically I don't do it either, but I wanted a comparison (it's strictly for troubleshooting, as I'm not overly familiar with extended events, which are on my "to do" list). Usually I use STATS IO in the DEV environment and tweak/tune as needed; then it gets deployed to production, where it remains untouched. But I was blown away by the difference in logical reads. I mean, 100+ million compared to fewer than 20,000 reads is insane.
I wanted to see it for myself...
______________________________________________________________________________
Never argue with an idiot; they'll drag you down to their level and beat you with experience
May 5, 2015 at 10:11 am
Yeah, those discrepancies can make you nuts. I usually don't even attempt to resolve them and just say "This one is different than that one" and keep going. Makes my head hurt less.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
May 5, 2015 at 11:19 am
Gail, looks like you hit the nail on the head...
Looking through the reporting procedure I did notice a TVF being called (although it looks harmless and has a solid execution plan), and I also found some execution plans in the cache with a very high size_in_bytes value. In one of those plans I saw a parameter table with about 3.5 million records involved in (at least I think it was) a hash match with data from another index seek operation.
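The plan cache check was basically a query along these lines (a sketch only, with an arbitrary TOP):

-- Sketch: find the largest cached plans and the statements behind them
SELECT TOP (20)
        cp.size_in_bytes,
        cp.usecounts,
        cp.objtype,
        st.text
FROM    sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY cp.size_in_bytes DESC;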
So parameter sniffing could be the culprit - perhaps a bad plan got generated and reused from a small account versus a large account, and it impacted the rest of the report runs. When our reporting kicked off this morning it was flying through the reports, but it came to a crawl when it hit this specific one... and while some still ran rather quickly, the majority of them took 40+ minutes to execute.
Another thing I noticed was extremely high CPU. These same reports were killing the CPU...
As a quick fix (which I will change later today) I added WITH RECOMPILE to the problematic procedure, restarted the process, and everything's running fine. After our reports complete in a couple of hours I will drill down to the statements inside the proc and perhaps add an OPTION (RECOMPILE) to the problematic statement, or an OPTIMIZE FOR hint, and see if I can correct it.
The CPU went down to milliseconds per execution...
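For reference, the alternatives I'm weighing for later look roughly like this (procedure, table, and parameter names are placeholders, not the real report code):

-- Placeholder procedure showing the statement-level alternatives
CREATE PROCEDURE dbo.usp_SomeReport        -- placeholder name
    @AccountID INT
-- WITH RECOMPILE                          -- the current quick fix: recompile the whole proc
AS
BEGIN
    -- Alternative 1: recompile only the problematic statement each run
    SELECT  SUM(d.Amount)
    FROM    dbo.ReportDetail AS d          -- placeholder table
    WHERE   d.AccountID = @AccountID
    OPTION (RECOMPILE);

    -- Alternative 2: always compile for a representative large account instead
    -- ... OPTION (OPTIMIZE FOR (@AccountID = 42));
END;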
Fun Time :hehe:
______________________________________________________________________________
Never argue with an idiot; they'll drag you down to their level and beat you with experience