July 2, 2003 at 10:03 am
Through Profiler, I found a query that runs 2,000 times within 10 minutes. It is:
select * from Cust where cust_no = N'1234567'
My Cust table has 200,000 rows in it. I dropped a non-clustered index on cust_no and created a clustered index on it. The performance is a little better. It now shows:
CPU = 110
Reads = 475
Duration = 110
The showplan says it should take only 3 reads to reach a data page, since it is using the clustered index, and EstimateCPU is very small. Why does it take this much time in reality? Or is this time normal? How can I find out? Any help greatly appreciated.
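For reference, the index change described above might look something like this (the index names here are assumed for illustration, not taken from the actual database):

```sql
-- Drop the old non-clustered index and cluster the table on cust_no
-- (index names are assumed)
DROP INDEX Cust.IX_Cust_cust_no
CREATE CLUSTERED INDEX CIX_Cust_cust_no ON Cust (cust_no)
```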
July 2, 2003 at 10:26 am
Oops. My showplan is not using the clustered index. It uses a non-clustered index on title. There are 8 distinct title values in the 200,000-row table, including nulls.
By the way, my cust_no is a varchar(10), and most cust_no values are 7 or 8 digits long. Changing from varchar(10) to char(10) might help, but I cannot make that change on this database now. How can I make it use the clustered index? Can I just drop the title index? I think it is used in report queries that run at night, and I have not seen any query with a WHERE clause on title = ''.
July 2, 2003 at 10:34 am
select * from Cust where cust_no = '1234567'
uses the clustered index.
select * from Cust where cust_no = N'1234567' does not use it. It uses the nonclustered index on title instead. But my front-end application is sending this query, and I will not be able to change it. How can I tune it to use the clustered index? (Does it have anything to do with ANSI nulls, etc.?) Thanks much.
July 3, 2003 at 6:24 am
You can make SQL Server use a particular index with a hint like:
SELECT * FROM [table_name] WITH (INDEX(index_name))
WHERE etc.
Try it.
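Applied to the query in this thread, the hint might look like this (the clustered index name is assumed):

```sql
-- Force the clustered index on cust_no (index name assumed)
SELECT *
FROM Cust WITH (INDEX(CIX_Cust_cust_no))
WHERE cust_no = N'1234567'
```

One caveat: because the N'' literal is nvarchar, the hint may only turn the nonclustered access into a clustered index scan rather than a seek, since the varchar column still has to be converted for the comparison.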
July 3, 2003 at 7:37 am
I think the "N" causes it to convert the text to nvarchar, which is a different data type from your clustered key column. That's probably why it's not using the clustered index.
Nvarchar is for Unicode. You probably don't need that here. But if you're stuck with it, you'll have to try nikpanov's query-hint suggestion.
Alternatively, you could try changing the data type of cust_no in the Cust table (in a development environment first, so you aren't going to hose the production data). If the data types match, it just might use the clustered index naturally. However, you would be doubling the size of that field, which increases the size of the table, which may impact performance.
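A sketch of that change (assuming cust_no is NOT NULL and that the clustered index, whose name is assumed here, is the only thing referencing the column; ALTER COLUMN fails while the column is indexed, so the index has to be rebuilt around it):

```sql
-- Sketch only: rebuild the clustered index around the type change
DROP INDEX Cust.CIX_Cust_cust_no
ALTER TABLE Cust ALTER COLUMN cust_no nvarchar(10) NOT NULL
CREATE CLUSTERED INDEX CIX_Cust_cust_no ON Cust (cust_no)
```

With the column as nvarchar, the N'1234567' literal matches the column type and no conversion is needed for the comparison.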
July 3, 2003 at 12:27 pm
Thanks much both of you. After doing more research, unicode is causing this problem. I will take it from here.
July 10, 2003 at 7:07 pm
How are you going to "take it from here"? I have the same problem with our software and it's KILLING the performance on the search. I can't modify their code in any way, so I'm wondering what I could do on the database side to get things working better.
Hmmmm....
cl
Signature is NULL
July 11, 2003 at 4:33 am
If you are using SQL 2000, I would use bigint instead of varchar(10), which is about 2 bytes larger per row than bigint (an 8-character value stores 8 bytes of data plus 2 bytes of length overhead, versus a fixed 8 bytes for a bigint).
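If the values really are all numeric, that change might look like this (sketch only; assumes cust_no is NOT NULL and the clustered index, whose name is assumed, is rebuilt around the ALTER):

```sql
-- Sketch: convert cust_no to bigint (index name assumed)
DROP INDEX Cust.CIX_Cust_cust_no
ALTER TABLE Cust ALTER COLUMN cust_no bigint NOT NULL
CREATE CLUSTERED INDEX CIX_Cust_cust_no ON Cust (cust_no)
```

A side benefit: bigint has higher data type precedence than nvarchar, so in `cust_no = N'1234567'` the literal gets converted rather than the column, and the index should remain usable even with the unchanged front-end query.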
Why are you using a Unicode literal anyway when the data is non-Unicode?
July 11, 2003 at 11:55 am
Well, the software we're using (eCRM) was developed to run against either an Oracle or T-SQL db, so I guess they figured using queries that requested Unicode values would be more stable. But requesting char values as Unicode causes major performance issues (i.e., tons of table scans).
Their code is sub-optimal at best, and it's all buried in java classes that we aren't supposed to be able to modify. Limitations like this kill me; in an open source environment I'd be able to fix their code, but as it stands my hands seem to be tied.
Signature is NULL
July 11, 2003 at 6:08 pm
I would definitely bring this to their attention. It seems like a waste if they don't help you with their solution or take your findings into account. Otherwise, it sounds like a real pickle.