I have 20 identical client programs, each installed on Windows XP. Each client issues a specific query to SQL Server 2008 R2, against a database with compatibility level 80 (SQL Server 2000). The clients use ODBC. Performance suddenly dropped badly on every machine except one, and now I'm wondering why. I attach 2 trace files: one from one of the machines that has the problem and one from the machine that is OK. Since it still runs fine on one machine, it seems to be a local problem and has to do with the way each machine issues the query. So I changed the ODBC driver from SQL Server (2000) to SQL Server Native Client 10, but nothing changed. The next step is to reinstall the client application. I am very confused.
March 24, 2021 at 7:08 pm
The trace files were discarded, so I am attaching them again as XML.
March 24, 2021 at 7:44 pm
What version/patch # is the server at?
What ODBC driver version is each client using? Do they have identical hardware (CPU, amount of RAM, speed of RAM, etc.)?
Are they pointing at the same server/database?
Execution plans would have been helpful. Row counts might also have been useful, to know whether we're comparing apples to apples, since the clients aren't processing the same IDs. You probably need to query the tables and determine whether the number of rows processed by a sample of the transactions is comparable.
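Something along these lines, purely as a sketch (the table and column names are placeholders, not taken from your traces):

-- Compare how many rows each sampled transaction actually touches
SELECT TransactionID, COUNT(*) AS RowsProcessed
FROM dbo.YourTransactionTable              -- placeholder table name
WHERE TransactionID IN (1001, 1002, 1003)  -- sample IDs pulled from each client's trace
GROUP BY TransactionID
ORDER BY RowsProcessed DESC;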
The high stats in the problem trace all seem to be associated with cursor processing. No solution to offer there... just an observation. I saw some links related to cursor performance on SQL Server 2012, but not 2008 R2. Since you're running in 2000 compatibility mode, perhaps the issue dates back that far? You're really running past the expiration date on these systems.
I wish you good luck. Please share if you find a solution outside the forum.
March 24, 2021 at 10:19 pm
As ratbak noted, cursors are slow. Cursors work on a row-by-row basis rather than on the set of data as a whole, so the more rows you have, the slower the cursor gets. It may be that you have hit a bottleneck that you cannot solve with hardware or software, short of rewriting the SQL code not to use cursors.
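Purely as an illustration of what that rewrite means (the table, columns, and calculation are invented, not taken from your traces):

-- Row-by-row: a cursor that touches one row per iteration
DECLARE @ID int, @Qty int;
DECLARE c CURSOR LOCAL FAST_FORWARD FOR
    SELECT OrderID, Quantity FROM dbo.Orders;   -- placeholder table
OPEN c;
FETCH NEXT FROM c INTO @ID, @Qty;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Orders SET Total = @Qty * 9.99 WHERE OrderID = @ID;
    FETCH NEXT FROM c INTO @ID, @Qty;
END
CLOSE c;
DEALLOCATE c;

-- Set-based: one statement does the same work in a single pass
UPDATE dbo.Orders SET Total = Quantity * 9.99;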
As for a solution to performance issues, there is so much to unpack there. Upgrading to a newer SQL Server may help, as the query optimizer is improved and you get some new tools to help with your queries (such as the LAG and LEAD functions, which may or may not help in your case; a small example follows). On top of that, do you know where the performance issue is? Is it on the client side or the server side? Windows XP is pretty ancient technology (and not a secure OS by any means), and the problem could be entirely on the client side.
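Just to show what LAG and LEAD buy you (they require SQL Server 2012 or later, and the table and columns below are invented): they let a single set-based query look at the previous or next row, which is a common reason people reach for a cursor in the first place.

-- Each row sees its neighbours without any row-by-row looping
SELECT OrderID,
       OrderDate,
       LAG(OrderDate)  OVER (ORDER BY OrderDate) AS PreviousOrderDate,
       LEAD(OrderDate) OVER (ORDER BY OrderDate) AS NextOrderDate
FROM dbo.Orders;                                 -- placeholder table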
Now, onto analyzing those XMLs: they appear to be doing different things. From a very quick look, the "OK" one declares and sets a bunch of variables and then runs an execute-cursor command. The problem one first inserts a bunch of data (it looks like it does that via dynamic SQL). Just skimming through the XML files, I don't see an identical query to use for a performance comparison, and the XML doesn't contain a lot of useful information. Execution plans are what you are going to want in order to review performance differences.
If stored procedures are involved, you may have parameter sniffing problems. Statistics could be out of date. It could be network hiccups.
My first step would be to determine whether this is a client machine issue or a server issue. If nothing changed on the SQL side and it was fast before, run some tests of your own against the queries that are being reported as slow (a quick sketch is below). If the queries are not slow, I'd look at the XP machines.
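One simple way to do that, assuming you can paste the traced query into SSMS on the server: turn on the timing and I/O counters and run it directly, so the client and the network are taken out of the picture.

-- Run the suspect query directly on the server
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

-- <paste the query captured in the trace here>

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;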
The above is all just my opinion on what you should do.
As with all advice you find on a random internet forum - you shouldn't blindly follow it. Always test on a test server to see if there are negative side effects before making changes to live!
I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.
March 25, 2021 at 10:56 am
I have not yet read the previous 2 answers, but I want to jump in and add an important new fact. Ultimately the problem is not the computers but the users. If I log in to the client application only with this one user (AGOO_27), then there is no problem, as shown in the trace!
Capture the execution plans between the two users, the one that's good and the one that's not. Compare them (you can use the SSMS tool to do this). Look specifically at the ANSI settings for the execution plans. They're in the Properties of the first operator. See if there are differences in the ANSI settings. These can lead to different execution plans and different performance.
In all likelihood the problem is something else (statistics, data, database settings), but this is a possibility. A quick way to compare the two users' session SET options directly is sketched below.
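Assuming you can catch both users while they're connected, something like this (the second login name is a placeholder) shows the session-level ANSI/SET options, which is where ODBC drivers and applications often differ:

-- Compare session-level SET options for the good and the bad user
SELECT session_id, login_name, host_name, program_name,
       ansi_nulls, ansi_padding, ansi_warnings,
       arithabort, quoted_identifier, concat_null_yields_null
FROM sys.dm_exec_sessions
WHERE login_name IN ('AGOO_27', 'OTHER_USER');   -- replace OTHER_USER with the problem login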
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
March 25, 2021 at 11:09 am
I read the answers and I am back to add another piece of evidence. The problem occurred the day I reindexed and updated the database statistics. But I have been doing this every 2 months for 5 years without any problems.
March 25, 2021 at 11:27 am
The way that the query optimizer works is it uses the statistics to come up with row counts (estimated or actual). Based on those counts, it makes decisions on how to configure the plan (scans vs. seeks, loops vs. hash vs. merge, etc.). Over time, query behavior can change, not because you've changed code or structures, but because the data & statistics are now producing new row counts.
If possible, again, we're back to the execution plans: get a plan from before the stats update and one from after, and spot the differences. That can lead you to what's going on.
Also, it's possible that you've always done a sampled update and now a full scan would be better because of changes to the data. Or, vice versa, you've always done a full scan, but now a sampled update might be better. However, you won't know this until you gather information, and that information is in the execution plans for the problem queries.
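For reference, a quick sketch of the two styles of statistics update and how to see what you currently have (the table name is a placeholder, and sys.dm_db_stats_properties needs 2008 R2 SP2 or later):

-- Sampled update (the percentage is just an example)
UPDATE STATISTICS dbo.YourTable WITH SAMPLE 10 PERCENT;

-- Full-scan update
UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;

-- See when each statistic was last updated and how many rows were sampled
SELECT s.name, sp.last_updated, sp.rows, sp.rows_sampled
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('dbo.YourTable');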
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
March 25, 2021 at 2:34 pm
Another thought: it could be a parameter sniffing problem too. If user A passes in 'hello' as a parameter and user B passes in 'world', the cached plan may be well optimized for 'hello' but perform poorly when 'world' is the parameter.
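If a stored procedure turns out to be the culprit, these are the usual workarounds, sketched with made-up procedure, table, and parameter names:

-- Force a fresh plan for one call
EXEC dbo.YourProcedure @SearchValue = 'world' WITH RECOMPILE;

-- Or, inside the procedure, recompile just the problem statement
SELECT *
FROM dbo.YourTable
WHERE SomeColumn = @SearchValue
OPTION (RECOMPILE);

-- Or tell the optimizer which value to plan for
SELECT *
FROM dbo.YourTable
WHERE SomeColumn = @SearchValue
OPTION (OPTIMIZE FOR (@SearchValue = 'hello'));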
The above is all just my opinion on what you should do.
As with all advice you find on a random internet forum - you shouldn't blindly follow it. Always test on a test server to see if there are negative side effects before making changes to live!
I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.
March 26, 2021 at 10:31 am
As I have already mentioned, the problem occurred the day I reindexed and updated the database statistics. I ran the statistics update a second time and everything is OK now. Thank you.
March 26, 2021 at 12:27 pm
Sounds like either bad parameter sniffing or out of date statistics or a combination of the two.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning