March 25, 2016 at 7:29 am
Hello, each week a process is run from the front-end system that sometimes takes at least twice as long as at other times.
During the slow times, I see sp_execute taking a lot longer.
I am on SQL Server 2008 R2 SP3.
I have traced the process during both the fast times and the slow, and after some analysis, sp_execute is performing more reads during the slow times. However, the line above the sp_execute statement in the trace (the actual statement being executed) is doing the same number of reads, but still taking longer.
See attached for the comparison.
I know from my monitoring software that the statements are using the same index and query plan; however, I cannot see a query plan for sp_execute.
Is there a way to see this query plan?
Do you have any ideas as to why I am seeing the differences on the spreadsheet attached:
a. the same number of reads but a longer duration
b. the increased number of reads and duration during sp_execute
Please discount the date differences; I took the example of the fast query ages ago, but we had an example of the process running fast just the other day, and a painfully slow example yesterday.
The system was busy to start with, but even when this had dropped off I saw the same performance.
When I have seen this previously on the same system, sp_execute is slow for a number of users.
Please have a look and let me know your thoughts.
Thanks
March 25, 2016 at 8:16 am
From what I can tell, the statement completed event is just the definition of the prepared statement that the sp_execute command is going to run. There are differences between the slow runs and the fast ones in the parameter values being passed. That could account for the differences all by itself. Even if the exec plans are the same, slow to fast, maybe they shouldn't be. It's hard to say more without seeing the plans and the structure. However, just differences in parameter values can easily explain this: one parameter returns 1 row, the next 100.
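If you want to pull the plans for those prepared statements out of cache, something along these lines against the plan cache DMVs should get you there (the LIKE filter is just a placeholder for a fragment of your own query text):

-- Sketch: find cached plans for prepared statements.
SELECT  cp.usecounts,
        cp.objtype,                 -- 'Prepared' for sp_prepare/sp_execute plans
        st.text       AS query_text,
        qp.query_plan
FROM    sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle)   AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE   cp.objtype = 'Prepared'
AND     st.text LIKE '%YourTableName%';            -- hypothetical filter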
You know how READ UNCOMMITTED works? You do know that it can lead to duplicate or missing rows? It's always worth pointing out just how dangerous a practice it is.
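For anyone reading along, a rough illustration of the two ways it usually gets switched on (dbo.Orders is just a made-up table):

-- Both of these read uncommitted data. An allocation-ordered scan under
-- READ UNCOMMITTED can return duplicate or missing rows as pages split or move.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;     -- session level
SELECT OrderID, Total FROM dbo.Orders;

SELECT OrderID, Total FROM dbo.Orders WITH (NOLOCK);  -- per-table hint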
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
March 25, 2016 at 9:30 am
Thanks Grant, I will upload the plans when I am able, to provide a bit more to go on.
I see your point about the same plan not necessarily being ideal for two different sets of parameters; I will try to find a closer comparison of queries to analyse.
What doesn't make too much sense is that it seems to be across the board. The process executes numerous different queries via sp_execute; the examples I provided were 2 of thousands of queries executed!
Yes, I am aware of the implications of READ UNCOMMITTED. This is a 3rd party application so we don't get a lot of say in the queries, but I may bring it to their attention.
March 27, 2016 at 3:10 pm
Think I have spotted something.
In the attachment I added to this post, each of the sp_execute commands is doing at least three times the number of reads as the corresponding StmtCompleted statement. NB: the example I provided features just two instances of the thousands of queries that the process runs.
In the 'fast' trace, these reads correspond to each other, i.e. the same number of reads are performed by sp_execute as by the StmtCompleted statements. I'm pretty sure these differing reads are what's causing the performance hit, and this makes me think the query plans I am seeing relate to the statement and not to sp_execute.
I think I need to find the execution plan that the corresponding sp_prepare is using, in order to determine why sp_execute is slow and performing extra reads. However, sp_prepare is not picked up by the trace because I started it too late. It is also not picked up by my monitoring software because it has no impact on the system.
Is there a way to find the execution plan that the sp_prepare statement used, after the event? Do you agree with my reasoning, or am I on the wrong track?
Any help is greatly appreciated.
March 28, 2016 at 6:57 am
You're kind of on the wrong tack.
What's going on is that your code is preparing statements and then executing those statements. The prepare itself is a very minor operation (low reads, very fast); the execution is the sp_execute statements you're seeing, which just pass in the identifier for the prepared statement and any necessary parameters. You're not really going to see a query plan for the prepare statement; you're seeing query plans for the execution of those statements. A prepared statement is more or less a fancy way to do stored procedures without actually doing stored procedures: it's a parameterized query submitted through a prepared statement. This older document describes it (although the comments about pre-compiling are crap; it doesn't get compiled until it gets executed).
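To make the mechanics concrete, here's roughly what the client driver is doing under the covers (table and parameter names are invented for the example):

DECLARE @handle int;

-- Define the parameterized statement; this step is cheap.
EXEC sp_prepare @handle OUTPUT,
     N'@CustNo int',
     N'SELECT * FROM dbo.Orders WHERE CustomerNo_ = @CustNo';

EXEC sp_execute @handle, 42;   -- plan is compiled here, on first execution
EXEC sp_execute @handle, 99;   -- reuses the cached plan

EXEC sp_unprepare @handle;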
The issue is more than likely around the parameter values being passed in. You're going to see parameter sniffing (a good thing most of the time) occur for these prepared statements. It's going to use the value that is passed into the prepared statement the first time it's called to create the execution plan. Those values could call for seeks or scans or certain join types. Then, another value is passed the next time the statement is called, but this value would be better off with a different data access method or a different join, so performance drops. Or, it's simply that one value is returning 10 rows, a certain number of reads, and the next value is returning 100 rows, more reads.
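If you want to see which value a given plan was sniffed for, the compiled parameter values are recorded in the plan XML, so a query along these lines (using the standard showplan attributes) can dig them out:

;WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT  st.text,
        pr.pv.value('@Column', 'sysname')                      AS parameter_name,
        pr.pv.value('@ParameterCompiledValue', 'varchar(200)') AS sniffed_value
FROM    sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle)   AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY qp.query_plan.nodes('//ParameterList/ColumnReference') AS pr(pv)
WHERE   cp.objtype = 'Prepared';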
So the key is to focus in on the parameter values, the data, the statistics, and the execution plans. Understand why the query is slow sometimes and fast others. Is it because the plan is different and problematic? Is it because the data just results in differences in performance? Heck, is there something else entirely going on and you're just seeing resource contention which can lead to differences in speed (although not differences in the number of reads, that has to be data, statistics, or execution plans).
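On the statistics side, a quick way to see how stale they are on a suspect table (dbo.Orders and the index name are stand-ins for your real objects):

SELECT  s.name                               AS stats_name,
        STATS_DATE(s.object_id, s.stats_id)  AS last_updated
FROM    sys.stats AS s
WHERE   s.object_id = OBJECT_ID('dbo.Orders');

DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_CustomerNo');  -- histogram detail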
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
March 28, 2016 at 10:58 am
Well, I have all but ruled out resource contention, on account of the differing number of reads as well as some comparisons I have made regarding resource usage; pretty similar for fast and slow runs.
So, as you say, I will focus on data/statistics etc.
You are right, I need to find out what is different about each run. I'm going to make sure I catch the sp_prepares in my trace so I can try to replicate using the correct parameter data types.
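In case it helps anyone else, rather than Profiler I'm thinking of an Extended Events session along these lines to catch the RPC calls (the session name and database_id are placeholders for my own):

CREATE EVENT SESSION capture_prepares ON SERVER
ADD EVENT sqlserver.rpc_completed
    (ACTION (sqlserver.sql_text, sqlserver.session_id)
     WHERE (sqlserver.database_id = 7))          -- your database id
ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION capture_prepares ON SERVER STATE = START;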
Thanks for the pointers Grant, I'll let you know how I get on!
April 27, 2016 at 7:47 am
Hi, thought I'd provide an update. Thanks for your advice Grant; as a result I have implemented a weekly update-stats routine that runs prior to the user's process, and so far I haven't had any further issues.
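For anyone who hits the same thing, the sort of job step I mean is essentially just this (dbo.Orders stands in for the main tables the process touches):

-- Update any statistics that SQL Server considers out of date.
EXEC sp_updatestats;

-- Or, targeted at a specific table with a full scan for better histograms:
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;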
Thanks again!
April 27, 2016 at 8:20 am
Thanks for reporting back. Glad what I said was helpful.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
April 27, 2016 at 10:37 pm
cunningham (3/28/2016)
Well, I have all but ruled out resource contention, on account of the differing number of reads as well as some comparisons I have made regarding resource usage; pretty similar for fast and slow runs. So, as you say, I will focus on data/statistics etc.
Create a clustered index on ([CustomerNo_], [Due Date]).
A non-clustered index on ([CustomerNo_], [Document Class], [Status]) INCLUDE ([CurrencyCode]) would also be useful.
Then you may stop worrying about data, stats, etc.
The only thing is that you might need to defragment those indexes from time to time, preferably on a regular basis.
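As DDL, that would be something like the following (dbo.Orders and the index names are assumptions; use the real 3rd-party table):

CREATE CLUSTERED INDEX CIX_Orders
    ON dbo.Orders ([CustomerNo_], [Due Date]);

CREATE NONCLUSTERED INDEX IX_Orders_CustClassStatus
    ON dbo.Orders ([CustomerNo_], [Document Class], [Status])
    INCLUDE ([CurrencyCode]);

-- Periodic maintenance, e.g. in a weekly job:
ALTER INDEX CIX_Orders ON dbo.Orders REORGANIZE;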
_____________
Code for TallyGenerator