August 23, 2006 at 2:59 pm
We have two servers, one of which is being used as a production server and the other as an upgrade server. Both are running SQL Server 2000. The production server runs Windows 2000 Advanced Server, the new one Windows Server 2003 Standard. The new server has the same RAM and number of processors (2), but almost three times the clock speed (3.8 GHz versus 1.3 GHz).
I backed up the database on the existing server and restored it on the new server and got all the users set up. When I powered up Tomcat on the new server, it came up fine, didn't seem to have any problems, and I was all set.
Except that in some cases the application runs faster on the existing server than it does on the new one.
At our application vendor's request, we ran a trace on both systems. When I submitted the traces to the vendor, they told me that part of the problem was that, for identical queries, the existing server did a Clustered Index Seek while the new one did an Index Scan.
I have updated the stats, reindexed the tables, and cleared the procedure cache, to no avail.
How can I rectify this? If the SQL Server on the new box is choosing a less efficient execution plan than the existing box, something is not set up right, and our application vendors are saying the problem is within SQL Server 2000.
August 24, 2006 at 10:56 am
There will be some differences - you can only truly compare if everything is the same, which it isn't. Are the SQL Server editions and setup the same, and likewise the database settings? Seeks vs. scans are based upon how the optimiser views the data, and just a couple of rows or updated stats can cause that change in a table. I've certainly seen this sort of thing between SQL editions with different service packs. A full DBCC DBREINDEX will also update stats and cause recompilation of procs and views. One last thing you might like to do is run a DBCC UPDATEUSAGE on each database - I always do this one.
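For reference, a rough sketch of those two commands - the table and database names here are just placeholders for your own:

    -- rebuild all indexes on one table (also refreshes its statistics and forces recompiles)
    DBCC DBREINDEX ('dbo.YourTable', '', 90)
    -- correct the row and page counts held in sysindexes for the whole database
    DBCC UPDATEUSAGE ('YourDatabase') WITH COUNT_ROWS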
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
August 24, 2006 at 1:02 pm
I have gotten a recommendation from our application developers to use the script wizard to drop and recreate the tables and indices, then use BCP to load the data in.
I have already reindexed, and reindexed again, to no avail. I have used the sp_updatestats procedure, DBCC FREEPROCCACHE, and DBCC UPDATEUSAGE with COUNT_ROWS; it is still doing the same thing.
I do know the newer server has SP4; the older one does not (the newer one came with SP4 preloaded). What is troubling is that the execution plan on the old system uses a clustered index seek, while the new system uses an index scan (for some reason it skips over the primary key index and uses a secondary index) followed by a series of bookmark lookup (findbookmark) steps for the same query.
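For what it's worth, I'm comparing the plans by running the same statement through Query Analyzer on each box with showplan switched on - something like this, where the SELECT is only a stand-in for the vendor's real query:

    SET SHOWPLAN_TEXT ON
    GO
    -- run the traced query on both servers and compare the operators reported
    SELECT Col1, Col2 FROM dbo.SomeTable WHERE KeyColumn = 12345
    GO
    SET SHOWPLAN_TEXT OFF
    GO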
How do I correct this?
August 24, 2006 at 1:11 pm
Have you tried to force it to use the index that is being used on your old server? Check BOL for Index Hints.
I don't recommend using this all the time, but possibly you can use it to do some testing to see if that truly is the source of your problem.
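Something along these lines, purely for testing - the table, column and index names here are made up, you'd substitute the ones from your traced query:

    -- force the optimizer to use the clustered primary key, as the old server does
    SELECT Col1, Col2
    FROM dbo.SomeTable WITH (INDEX(PK_SomeTable))
    WHERE KeyColumn = 12345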
Recreating your entire solution and using bcp seems a little extreme to me, when backup/restore "should" work fine...
As an alternative, when you say that you have 'reindexed', how do you mean? Are you dropping and recreating the indexes or using DBCC? If it were me, depending on the amount of data, I would probably try to drop and recreate indexes before doing a complete rework from bcp.
You have mentioned that the processors on the two systems are very different; are there any other significant differences? RAM? HDD configuration?
August 24, 2006 at 1:16 pm
One problem is that I don't have access to the code, and I was under the impression that index hints only work within a query - which in this case is hidden inside a JAR file.
I have already recreated the tables through the script wizard and am waiting until after hours to back up the data for a bcp, so that part is already underway.
I use DBCC DBREINDEX with a script someone posted on here. I used to use that script a lot on the existing server and it really helped performance. Some tables have hundreds of thousands of rows (and that's AFTER archiving old data), and there are a hundred-plus tables whose indexes I would have to drop and recreate. Isn't that, too, a little extreme?
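For what it's worth, that script essentially just runs DBCC DBREINDEX against every user table, something like this (sp_MSforeachtable is undocumented, so treat it as a convenience rather than a supported feature):

    -- rebuild every index on every user table with a 90% fill factor
    EXEC sp_MSforeachtable 'DBCC DBREINDEX (''?'', '''', 90)'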
August 24, 2006 at 4:07 pm
I had a big reply in here, but it was lost when the posting process decided to refresh this page. I should have learned from these forums to save off big posts by now... oh well... LOL.
I just keep it short now. I don't envy you not being able to get to the code. Being at the mercy of a vendor can be a mixed blessing.
Good luck with the BCP. I am very interested to see how that turns out for you. It would be great if you could keep this thread alive with your progress.
In regards to dropping and reindexing, I was really trying to suggest doing it on the table the vendor identified as the issue, and then going further if needed. Yes, dropping and recreating indexes is extreme, and you should not need to do that after just a backup and restore. But it is a lot less time-consuming than dropping the entire table, BCPing the data out of the live system, BCPing it into the new one, and then reindexing. I don't see what they intend to accomplish by moving the data this way that couldn't be accomplished by dropping and recreating the indexes.
Good luck!
August 24, 2006 at 4:26 pm
>>the new one Windows 2003 Server Standard. The new server has the same RAM and # of processors (2),
How much RAM?
Which edition of SQL Server is on each server? (Standard or Enterprise)
Have you compared BOOT.INI on both servers? Does one have the /3GB switch?
August 24, 2006 at 10:25 pm
Which build of SP4 are you running? The initial release of SP4 had some big problems; if you're on anything less than 2187, you should patch up. I've seen this many times as well, and usually index hints will get around it (note "get around", not "fix"). Not nice, but a decent stopgap until you do find the problem. I've also noticed that sometimes setting MAXDOP to 1 will get around it too. This is hotly debated on the internet, but you have got to do whatever alleviates the symptoms. Just make sure you keep coming back at it - these types of performance issues are very difficult to troubleshoot.
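If you're not sure which build you're on, something like this will tell you (2039 is the base SP4 release; 2187 is the later hotfix build):

    -- reports e.g. 8.00.2039 and the service pack level
    SELECT SERVERPROPERTY('ProductVersion') AS Build,
           SERVERPROPERTY('ProductLevel') AS ServicePack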
August 25, 2006 at 6:08 am
Enterprise Manager shows 8.00.2039 as the version. It's SQL Server Standard, so I don't know what good the /3GB switch would do (both machines are running SQL Server 2000 Standard with 4GB of RAM).
August 25, 2006 at 8:28 am
>> It's SQL Server Standard, so I don't know what good the /3GB switch would do
It wouldn't do any good. But you hadn't mentioned Standard Edition or total RAM, so I wanted to eliminate the possibility that you had Enterprise Ed. with 1 server using /3GB but the other not.
What if you run sp_configure on both servers and compare the SQL Server configs side by side? Does anything stand out as different?
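Something like this on each box, then line the two result sets up next to each other ('show advanced options' is needed to see every setting):

    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    -- lists every option with its configured and run values
    EXEC sp_configure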
August 25, 2006 at 8:37 am
Priority Boost is enabled on the old server, disabled on the new server (I was told never to use this option)
Remote Query Timeout is 600 on the new server, 0 on the old
Min Memory per query is 1024 on the new server, 512 on the old
Min server memory is 2157 on the new server, 2048 on the old (my guess is that the 2048 is as large as it should go, but would this affect anything?)
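If it's worth trying, I could set the new box to match the old one with something along these lines (some of these are advanced options, so 'show advanced options' has to be on first; the values below are just the old server's settings from above):

    EXEC sp_configure 'min memory per query', 512
    EXEC sp_configure 'remote query timeout', 0
    EXEC sp_configure 'min server memory', 2048
    RECONFIGURE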
August 25, 2006 at 8:39 am
Also, I did a search on Infoseek, and all I saw were articles that seemed to say SP4 was the cure-all for SQL Server 2000. I guess I could upgrade to 2187 and see, but if something blows up, is there a way to revert back to 2039?
August 25, 2006 at 10:48 am
It's unusual for a plan to change this way, but changes to the optimiser are not documented, which is why Microsoft say you should be very wary of using hints. So the plans are different - but what is the actual difference in I/O vs. time? (You must test this on the server(s) themselves to avoid network issues.) As one who tunes databases, you have to look at the overall performance of the queries and not get distracted - so what are the I/O, execution and CPU differences?
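The quickest way to get those numbers is to run the problem query in Query Analyzer on each server with statistics switched on - something along these lines, where the SELECT is only a stand-in for whatever the application actually sends:

    SET STATISTICS IO ON
    SET STATISTICS TIME ON
    -- compare scan count, logical reads, CPU time and elapsed time between the two servers
    SELECT Col1 FROM dbo.SomeTable WHERE KeyColumn = 12345
    SET STATISTICS IO OFF
    SET STATISTICS TIME OFF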
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
August 25, 2006 at 11:17 am
The reason this came up in the first place was that a query that took maybe 15 seconds on the old server is taking more than a minute on the new server. When we traced the query, it was doing an index scan rather than a clustered index seek. It blows my mind why a faster system would behave this way.
What should I do on each server to test what you're talking about?
August 25, 2006 at 11:26 am
The only thing that stands out about the settings you posted above is the Min Server Memory being 2157. At the moment my brain is failing me; I can see reasons why this both would and would not be a problem. It is a simple thing to lower it and see.
Any results from the BCP process yet?
What is your "max degree of parallelism" set to? Someone mentioned it earlier, but I haven't seen a response to it. If I had to guess, your old system did not support Hyper-Threading and the new system does, or it has dual cores. That means the new system would think it has 4 processors to work with instead of 2, which could greatly change the plans that are created - sometimes not for the better. It would be worth trying to set it to 1 and testing again, assuming it is not 1 or 2 now.
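If you want to test it, something like this will cap it at one processor - it's an advanced option but takes effect immediately, no restart needed:

    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    -- 1 = no parallel plans; set it back to 0 (use all processors) to undo the test
    EXEC sp_configure 'max degree of parallelism', 1
    RECONFIGURE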