October 14, 2003 at 5:08 pm
Hello,
I've got a development NT4 box with a dual installation of MSSQL7 and 2000. I've taken a backup of my MSSQL7 database and restored it on 2000. When I point my app server to the MSSQL7 database and run my test suite, the test suite runs in the expected amount of time. I then shut down MSSQL7 and start 2000. I point my app server to the MSSQL 2000 port and run my test suite. The performance is terrible. The likely reason: when I run my test suite against the MSSQL2000 instance, the CPU is pegged at 100%.
So with everything else being equal, the same set of queries run fine against MSSQL7 and peg the CPU on MSSQL2000.
I've done a reindex of all tables, and updated stats. Anything else I'm missing?
Thanks for your help,
Dave
October 15, 2003 at 5:05 am
I would watch it with Profiler. Also, if this is a multiple-instance situation where 7 and 2000 are side by side, 2000 may not have taken over the needed resources. You should go in and configure the maximum amount of memory each instance can use in its settings to see if that corrects it.
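For example, you can cap the memory each instance may take with sp_configure; a minimal sketch (the 512 MB value is just a placeholder, size it for your box):

```sql
-- Sketch: cap this instance's memory so two instances can coexist.
-- Run against each instance; 512 is an illustrative value in MB.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory', 512
RECONFIGURE
```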
October 15, 2003 at 7:21 am
Thanks for the reply. I will ensure MSSQL2000 has max memory available. I don't expect that this is the issue, because (1) I don't have 7 and 2000 running at the same time, and (2) there is no disk paging occurring. This is purely a CPU issue as far as I can see. What specifically should I look for with the profiler?
Thanks,
Dave
October 15, 2003 at 7:55 am
Try running DBCC DBREINDEX and a statistics update against your database to see whether you get some improvement. If not, do as Antares686 suggested: use Profiler to find the queries that take the most CPU time and examine their execution plans.
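A minimal sketch of those two steps for a single table ('table1' is a placeholder name):

```sql
-- Rebuild all indexes on the table, then refresh its statistics
DBCC DBREINDEX ('table1')
UPDATE STATISTICS table1 WITH FULLSCAN

-- Or refresh out-of-date statistics database-wide in one shot
EXEC sp_updatestats
```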
October 15, 2003 at 8:03 am
Hi Allan,
I've already done a dbreindex and statistics update. I will run the profiler and take a look at the CPU and plan.
Thanks,
Dave
October 15, 2003 at 9:27 am
Actually, how are you determining that it is SQL pegging the CPU? Make sure you are looking at CPU under Windows Task Manager on the Processes tab. Watch sqlservr.exe to confirm it really is SQL doing the pegging.
October 15, 2003 at 9:37 am
When we upgraded from 7.0 to 2000, we had a couple of queries that exhibited the same problem. We have a field in our database that stores only integer data in a varchar column (inherited system). These queries were doing an "in" statement with several of these integer values separated only by commas. In SQL 7.0 this did not cause any problems. I can only assume SQL 7.0 converted the values to strings in the query before executing. SQL 2000, however, appears to convert each value in the column to an integer instead. These queries would run for hours, peg out the CPUs, and bring our system to a halt. Once we wrapped each integer in quotes, the queries would run in a matter of seconds.
IE. select col1 from table1 where col2 in (1234,2343,2345)
changed to
select col1 from table1 where col2 in ('1234','2343','2345')
Hope this helps.
October 15, 2003 at 10:39 am
Some implicit conversions changed between 7 and 2000, so what is described here is a likely cause as well. It is always better to do an explicit conversion rather than an implicit one to avoid this.
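A sketch of making the conversion explicit on the parameter side rather than letting the server convert the column (table and column names follow the earlier example):

```sql
-- Convert the integer parameter to match the varchar column,
-- instead of letting SQL implicitly convert the column to int.
-- The varchar(20) length is an assumption for illustration.
DECLARE @id int
SET @id = 1234
SELECT col1 FROM table1 WHERE col2 = CAST(@id AS varchar(20))
```

Because the column itself is left untouched, any index on col2 remains usable.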
October 16, 2003 at 8:47 am
It is definitely the sqlserver process chewing up the CPU. I think I now know the cause of the problem. I ran the profiler and found a slow running query, and compared the execution plans from 7 to 2000. This particular query is a single table query where we are locating a single row by PK. The primary key is defined with a non-clustered, unique index. The primary key is a varchar.
In 7, the query runs with an index SEEK against the PK index (I/O cost 0.00632, CPU 0.000080).
In 2000, the query runs with an index SCAN against the PK index (I/O cost 3.09, CPU 1.10).
What's happening is that my app server is generating SQL like:
SELECT * FROM foo WHERE pk = N'thepkvalue'
In 7, the implicit UNICODE conversion caused no apparent harm. In 2000, the conversion causes the optimizer to choose a bad plan.
When I change the query to:
SELECT * FROM foo WHERE pk = 'thepkvalue'
the plan uses the expected PK index SEEK.
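If the app server can't easily be changed to stop emitting N'' literals, one possible workaround is to cast the Unicode literal back to varchar so the column side stays untouched; a hedged sketch using the example names above (the varchar(50) length is an assumption):

```sql
-- Casting the N'' literal, not the column, lets the optimizer
-- seek the non-clustered PK index as in the plain-string case
SELECT * FROM foo WHERE pk = CAST(N'thepkvalue' AS varchar(50))
```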
Thanks for your help.
Dave
October 16, 2003 at 9:02 am
We had the same problem for many queries.
Setting the compatibility level to 70 solved it. Not a nice solution, but it worked.
I thought it had something to do with Centura/Gupta programs using ODBC connections.
But we also have implicit conversions. (I never found a way to make programmers stop using them.)
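For reference, on SQL 2000 the compatibility level is set per database with sp_dbcmptlevel ('mydb' is a placeholder):

```sql
-- Run SQL 2000 with 7.0 query behavior for one database
EXEC sp_dbcmptlevel 'mydb', 70
```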