December 16, 2004 at 8:39 am
Some months ago we migrated our main database (10GB) from SQL 7 to SQL 2000. The upgrade was performed by restoring the most recent SQL 7 backup onto the SQL 2000 server.
Almost immediately two serious problems were encountered. Some queries simply failed to work: they caused the CPU usage of the server to stick at 100% but never completed. This was identified and fixed very rapidly, surprisingly, by editing the queries and adding keywords/hints etc. that should have been the defaults anyway.
The second problem is more irritating. Despite performing index rebuilds, updating the statistics etc. on every table and index (we did everything we could think of on every table/index), the query optimizer consistently fails to select the most obvious and efficient indexes to use for joins, look-ups etc. This means we have to use copious index hints to get queries to run efficiently. By "failing to use the obvious indexes" I mean things like ignoring the primary key when it is the field being joined on, and electing to do a table scan using an entirely different index containing fields not even mentioned in the query.
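To give a flavour of it, the hints we end up adding look like this (the table and index name here are just for illustration, assuming a primary key index called PK_Orders):

```sql
-- Hypothetical example: without the hint the optimizer scans via the
-- wrong index; with it, it seeks on the primary key as it should anyway.
SELECT o.*
FROM Orders AS o WITH (INDEX(PK_Orders))  -- PK_Orders is an assumed index name
WHERE o.[Order Number] = @OrderNumber
```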
Has anyone else encountered such problems? Is it related to the fact that the database was "restored" on the SQL 2000 box instead of being transferred using something like DTS? Can anyone suggest anything else we can try that will enable the query optimizer to start acting sensibly?
December 16, 2004 at 5:19 pm
I have restored backups from SQL 7 onto SQL 2000 many times and didn't have any problems. Microsoft recommends running some optimisations after the restore. I use this script:
USE master
GO
EXEC DB_Name.dbo.sp_updatestats
GO
EXEC sp_dboption @dbname = 'DB_Name',
    @optname = 'single user', @optvalue = 'TRUE'
GO
DBCC CHECKDB ('DB_Name', REPAIR_REBUILD)
GO
EXEC sp_dboption @dbname = 'DB_Name',
    @optname = 'single user', @optvalue = 'FALSE'
GO
December 17, 2004 at 3:15 am
Svetlana,
Many thanks for the suggestion. We have already tried sp_updatestats and also DBCC DBREINDEX. These made no difference at all.
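For reference, the per-table maintenance we ran was along these lines (table name is just an example from our schema):

```sql
-- Rebuild every index on the table.
DBCC DBREINDEX ('dbo.Orders')
GO
-- Recompute statistics from a full scan rather than the default sample.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN
GO
```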
I tried your script and was pleasantly surprised at how quickly it ran on our database (DB size now 20GB, and it took only 20 minutes), but this also seems to have made no difference at all.
I would appreciate any other suggestions anyone has.
P.
December 17, 2004 at 8:34 am
When we upgraded our servers from 7.0 to 2000 we had immediate performance problems as well. We found that if we switched the database to 7.0 compatibility everything ran normally, but when we switched back to 8.0 compatibility we had issues. We finally found the problem to be what I consider bad coding in the SQL. In 2000, changes were made with respect to data type precedence. As I recall, in 7.0, if a column in a WHERE clause was compared to a variable of a different type, the column's type would be used for the comparison (assuming an implicit conversion could be done). In 2000 there is a precedence hierarchy for data types (see BOL: data types - converting - data type precedence). So, if an implicit conversion is done, the data type used is based on the hierarchy, not the column, which means that indexes can become worthless in the queries.
Because we had a lot of sloppy implicit data type conversions going on, the optimizer was having fits because it could no longer find the right indexes to use. We had to go through each stored procedure and either fix the data types or implement explicit conversions. This rather lengthy effort fixed our problems.
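A minimal sketch of the kind of fix we made (table, column, and variable names are made up for illustration): nvarchar outranks varchar in the precedence hierarchy, so comparing a varchar column to an nvarchar variable converts the *column*, and the index on it can no longer be used for a seek.

```sql
-- Assume Customers.[Customer Number] is varchar(20) and indexed.

-- Sloppy: @Cust is nvarchar, which has higher precedence than varchar,
-- so the column is implicitly converted and its index becomes useless.
DECLARE @Cust nvarchar(20)
SET @Cust = N'ABC123'
SELECT * FROM Customers WHERE [Customer Number] = @Cust

-- Fixed: declare the variable with the column's own type so the
-- comparison can use an index seek again.
DECLARE @Cust2 varchar(20)
SET @Cust2 = 'ABC123'
SELECT * FROM Customers WHERE [Customer Number] = @Cust2
```

An explicit CONVERT on the variable (never on the column) achieves the same thing where the declaration can't be changed.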
BTW, the way we tackled it was to run profiler to identify the 50 longest running procs and attack those first.
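If it helps, once a Profiler trace has been saved to file, the longest-running statements can also be pulled out with a query rather than by eyeballing the GUI (the file path here is illustrative):

```sql
-- Load a saved Profiler trace and list the 50 longest-running events.
SELECT TOP 50 TextData, Duration, Reads, Writes
FROM ::fn_trace_gettable('C:\traces\upgrade.trc', DEFAULT)
WHERE Duration IS NOT NULL
ORDER BY Duration DESC
```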
December 17, 2004 at 9:26 am
I would jump at your explanation but for the fact that most if not all of our joins and WHERE clauses use fields/parameters of identical type and size.
Consider a prime example using the following three tables:
"Orders" containing fields Order Number INT primary key, Customer Number varchar(20) indexed non-unique.
"Invoices" containing fields Order Number INT indexed non-unique, Invoice Number INT primary key and Date datetime indexed non unique.
"Items" containing fields Item Number INT primary key, and Invoice Number INT indexed non unique
The following query correctly selects the most obvious indexes: Orders.[Order Number], Invoices.[Order Number] and Items.[Invoice Number]
SELECT Items.*
FROM Orders
INNER JOIN Invoices ON Orders.[Order Number] = Invoices.[Order Number]
INNER JOIN Items ON Invoices.[Invoice Number] = Items.[Invoice Number]
WHERE Orders.[Order Number] = @OrderNumber
  AND Invoices.[Date] BETWEEN @FromDate AND @ToDate
Change the WHERE clause in the query:
SELECT Items.*
FROM Orders
INNER JOIN Invoices ON Orders.[Order Number] = Invoices.[Order Number]
INNER JOIN Items ON Invoices.[Invoice Number] = Items.[Invoice Number]
WHERE Orders.[Customer Number] = @Customer
  AND Invoices.[Date] BETWEEN @FromDate AND @ToDate
and suddenly the optimizer decides to do a table scan on the Items table instead, even though it still uses the Orders.[Customer Number] and Invoices.[Order Number] indexes.
This is not nice behaviour when there are 21 million records in the Items table.
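The only way we have found to make the second query behave is an explicit hint on Items (the index name here is assumed; it is whatever your [Invoice Number] index is called):

```sql
SELECT Items.*
FROM Orders
INNER JOIN Invoices ON Orders.[Order Number] = Invoices.[Order Number]
INNER JOIN Items WITH (INDEX(IX_Items_Invoice))  -- assumed name of the [Invoice Number] index
    ON Invoices.[Invoice Number] = Items.[Invoice Number]
WHERE Orders.[Customer Number] = @Customer
  AND Invoices.[Date] BETWEEN @FromDate AND @ToDate
```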