June 28, 2018 at 1:49 am
Hi all,
I'm having trouble separating the total query duration into SQL processing time and network transmission time to the client.
The queries run consistently in ~50 ms on the SQL Server itself using SSMS / Plan Explorer, but when the exact same query with the same parameters is run from (don't laugh) an OLEDB (MDAC 2.8) based webserver we see 1-3 second runtimes.
I suspect that the webserver is not picking up the answers right away when it's under average load.
But how do you prove (or disprove) that?
The network IO wait stat is not helping either, as it's always near zero. And using Wireshark is not an option, as I can't separate the SPIDs in the network streams, nor time-sync them to a trace / extended events session.
Any ideas would be welcome 🙂
grtz
Theo
June 28, 2018 at 4:47 am
Theo Ekelmans - Thursday, June 28, 2018 1:49 AM
Hi all,
I'm having trouble separating the total query duration into SQL processing time and network transmission time to the client.
The queries run consistently in ~50 ms on the SQL Server itself using SSMS / Plan Explorer, but when the exact same query with the same parameters is run from (don't laugh) an OLEDB (MDAC 2.8) based webserver we see 1-3 second runtimes.
I suspect that the webserver is not picking up the answers right away when it's under average load.
But how do you prove (or disprove) that?
The network IO wait stat is not helping either, as it's always near zero. And using Wireshark is not an option, as I can't separate the SPIDs in the network streams, nor time-sync them to a trace / extended events session.
Any ideas would be welcome 🙂
grtz
Theo
I suppose that if you are executing a proc, you can add a new variable at the start of your proc:
DECLARE @StartTime datetime = GETDATE()
Then at the end of the proc, you can return the duration of the proc in ms:
RETURN DATEDIFF(ms, @StartTime, GETDATE())
Now your calling process will get a return value which indicates how many milliseconds it took to actually exec the proc.
If you already have return values with meaning, you could use an OUTPUT parameter instead.
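Put together, the OUTPUT-parameter variant of the pattern above might look like this sketch (the proc, table, and column names are illustrative, not from the thread):

```sql
-- Hypothetical proc returning its own server-side elapsed time
CREATE PROCEDURE dbo.usp_GetOrders
    @CustomerID int,
    @DurationMs int OUTPUT   -- server-side elapsed time in ms, for the caller
AS
BEGIN
    DECLARE @StartTime datetime2 = SYSDATETIME();

    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;

    -- Everything after this point (network transfer, client fetch) is NOT included
    SET @DurationMs = DATEDIFF(ms, @StartTime, SYSDATETIME());
END
```

The caller then subtracts @DurationMs from its own wall-clock measurement; whatever remains is network plus client-side time.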
June 28, 2018 at 5:06 am
Hi Des,
If only I could; alas, the SQL code is generated in Java at runtime.
Good suggestion though 🙂
June 28, 2018 at 5:11 am
Theo Ekelmans - Thursday, June 28, 2018 5:06 AM
Hi Des,
If only I could; alas, the SQL code is generated in Java at runtime.
Good suggestion though 🙂
Could the Java code also add the extra variable, and return it as a second result set?
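Since the SQL text is built at runtime anyway, the Java code could wrap whatever it generates in a timing shell along these lines (a sketch; the inner query is a placeholder for the generated statement):

```sql
DECLARE @StartTime datetime2 = SYSDATETIME();

-- ... the runtime-generated query goes here, e.g.:
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CustomerID = 42;

-- Second result set: server-side duration in milliseconds
SELECT DATEDIFF(ms, @StartTime, SYSDATETIME()) AS ServerDurationMs;
```

The client reads the first result set as usual, then advances to the second one to pick up the server-side duration.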
June 28, 2018 at 6:54 am
Theo Ekelmans - Thursday, June 28, 2018 1:49 AM
Hi all,
I'm having trouble separating the total query duration into SQL processing time and network transmission time to the client.
The queries run consistently in ~50 ms on the SQL Server itself using SSMS / Plan Explorer, but when the exact same query with the same parameters is run from (don't laugh) an OLEDB (MDAC 2.8) based webserver we see 1-3 second runtimes.
I suspect that the webserver is not picking up the answers right away when it's under average load.
But how do you prove (or disprove) that?
The network IO wait stat is not helping either, as it's always near zero. And using Wireshark is not an option, as I can't separate the SPIDs in the network streams, nor time-sync them to a trace / extended events session.
Any ideas would be welcome 🙂
grtz
Theo
I'd use extended events to capture the query metrics. The performance measurement is at the server itself. The duration will tell you how long the query ran on the server. It doesn't include transmission time. With that, you have everything you need to know.
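As a concrete starting point, a minimal extended events session along these lines would capture per-batch duration for the webserver's connections (the session name, file name, and app-name filter are placeholders you would adjust):

```sql
-- Minimal XE session sketch: capture completed batches from one application
CREATE EVENT SESSION [QueryDuration] ON SERVER
ADD EVENT sqlserver.sql_batch_completed (
    ACTION (sqlserver.client_app_name, sqlserver.session_id)
    -- Assumption: filter on whatever app name the webserver reports
    WHERE sqlserver.client_app_name = N'MyWebApp')
ADD TARGET package0.event_file (SET filename = N'QueryDuration.xel');
GO
ALTER EVENT SESSION [QueryDuration] ON SERVER STATE = START;
```

The duration column in the captured events is server-side elapsed time (microseconds); the gap between that and the 1-3 seconds seen at the webserver is network plus client processing.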
By the way, you might want to look at the execution plans, just in case. It's possible that the remote connection is using different ANSI/SET options than SSMS (ARITHABORT is the classic culprit with older OLEDB drivers), and this can lead to different execution plans and different performance for the very same query text.
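To check whether the webserver's connections really do carry different SET options, you can compare sessions side by side with a query like this (run it while both an SSMS session and a webserver session are connected):

```sql
-- Compare SET options between the webserver's sessions and your SSMS session
SELECT session_id, program_name,
       ansi_nulls, arithabort, quoted_identifier
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;
```

If the ARITHABORT or ANSI_NULLS bits differ between the two, you are likely getting two separate cached plans, and the slow one deserves its own tuning pass.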
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning