April 8, 2006 at 8:49 am
I have a query which is used for a report and runs slowly. It returns all processed records for the day.
The problem is that although I have an index on Load_ID, it is not unique and the query returns about 60,000 rows on average. The other 2 tables are lookup tables with 10-12 rows in each.
Any ideas on how I can optimize this query would be greatly appreciated.
SELECT "UMF_SUM_MATCHLOG"."LOAD_ID", "UMF_SUM_MATCHLOG"."TABLE_NAME",
       "NUM_TYPE"."NUM_TYPE", "ATTR_TYPE"."ATTR_TYPE",
       "UMF_SUM_MATCHLOG"."MATCH_PRECISION", "UMF_SUM_MATCHLOG"."ALERTED",
       "UMF_SUM_MATCHLOG"."RECORD_COUNT", "UMF_SUM_MATCHLOG"."RECORD_TYPE"
FROM (UMF_SUM_MATCHLOG "UMF_SUM_MATCHLOG"
      LEFT OUTER JOIN ATTR_TYPE "ATTR_TYPE"
        ON "UMF_SUM_MATCHLOG"."RECORD_TYPE" = "ATTR_TYPE"."ATTR_TYPE_ID")
     LEFT OUTER JOIN NUM_TYPE "NUM_TYPE"
        ON "UMF_SUM_MATCHLOG"."RECORD_TYPE" = "NUM_TYPE"."NUM_TYPE_ID"
WHERE "UMF_SUM_MATCHLOG"."LOAD_ID" = :1
ORDER BY "UMF_SUM_MATCHLOG"."TABLE_NAME",
         "UMF_SUM_MATCHLOG"."MATCH_PRECISION",
         "UMF_SUM_MATCHLOG"."ALERTED",
         "UMF_SUM_MATCHLOG"."RECORD_TYPE"
Thanks a lot.
mj
April 10, 2006 at 2:18 am
Hi mj,
it would be very helpful if you posted the actual execution plan and the table structures/indexes, so it would be obvious where most of the time is spent. Anyway, the general rules for SQL tuning that I see as applicable here are:
1. Make sure that data types of JOIN and filter columns match 100%.
2. Reduce the number of OUTER JOINs (if possible).
3. Split the SQL into two parts: one that will execute the core query, and the other that will use the result of the first one and add "cosmetic" data from lookup tables (if necessary).
Of course, the execution plan may reveal inefficient index use or other causes.
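To illustrate point 3, something along these lines (an untested sketch: the temp-table syntax assumes SQL Server, and @load_id stands in for the report's bind parameter):

```sql
-- Step 1: the core query - pull only the matching rows into a temp table.
-- This is the part that does the real work against the big table.
SELECT LOAD_ID, TABLE_NAME, MATCH_PRECISION, ALERTED,
       RECORD_COUNT, RECORD_TYPE
INTO   #core
FROM   UMF_SUM_MATCHLOG
WHERE  LOAD_ID = @load_id;

-- Step 2: add the "cosmetic" data from the small lookup tables afterwards.
SELECT c.LOAD_ID, c.TABLE_NAME, n.NUM_TYPE, a.ATTR_TYPE,
       c.MATCH_PRECISION, c.ALERTED, c.RECORD_COUNT, c.RECORD_TYPE
FROM   #core c
       LEFT OUTER JOIN ATTR_TYPE a ON c.RECORD_TYPE = a.ATTR_TYPE_ID
       LEFT OUTER JOIN NUM_TYPE  n ON c.RECORD_TYPE = n.NUM_TYPE_ID
ORDER BY c.TABLE_NAME, c.MATCH_PRECISION, c.ALERTED, c.RECORD_TYPE;
```

The point is that each half can then be timed and tuned on its own.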
Regards,
Goce.
April 10, 2006 at 7:08 am
I would like to extend Goce's 3rd point.
First of all, split the query into two parts (your SELECT part and the FROM part, for which you can use a temp table), then look at the execution plan to see how much time each part takes - 50-50, or say 60-40, etc. That will tell you which part you need to tune first to get the maximum performance gain. The index seeks, index scans, table scans and their % values are the figures to consider as well.
Hope this helps,
Kindest Regards,
Sameer Raval [Sql Server DBA]
Geico Insurance
RavalSameer@hotmail.com
April 10, 2006 at 9:49 am
I run a similar query all the time. If this is your primary query against UMF_SUM_MATCHLOG, you should create a unique CLUSTERED index on (load_id, RECORD_TYPE, ...). If no such index can be created, you should probably take a look at your data model.
If you have to cluster on another column, then Goce's suggestions are the ones to pursue.
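For example, something like this (SQL Server syntax; the trailing TABLE_NAME column is only a hypothetical stand-in for whatever extra columns actually make the key unique in your model):

```sql
-- Sketch: cluster the table on the report's filter column first, so all
-- rows for one LOAD_ID are stored together and can be read sequentially.
-- The columns after LOAD_ID are placeholders - adjust to your real key.
CREATE UNIQUE CLUSTERED INDEX IX_UMF_SUM_MATCHLOG_LOAD
    ON UMF_SUM_MATCHLOG (LOAD_ID, RECORD_TYPE, TABLE_NAME);
```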
Good luck,
Michael
April 10, 2006 at 10:48 am
Hang on a minute. What are you trying to do here? You have a query which is essentially "SELECT xxx FROM UMF_SUM_MATCHLOG WHERE LOAD_ID = :1" (ignoring the joins). This query returns the rows that match LOAD_ID - you say there are 60,000 matching rows on average. Then you say the query is slow - but any query returning 60,000 rows is bound to be slow.
The next obvious question is: what are you doing with these rows? Is each row a line on your report? If you are processing the data further (e.g. counting, summing, calculating averages), then maybe you can do that within the SQL query and return only the results you need. This would probably be much faster.
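For instance, if the report is really a summary, the aggregation can be pushed into the query itself. A rough sketch (the grouping columns are only a guess based on the original ORDER BY; adjust to match the report):

```sql
-- Sketch: return one summary row per group instead of 60,000 detail rows.
SELECT TABLE_NAME, RECORD_TYPE, ALERTED,
       COUNT(*)          AS row_count,
       SUM(RECORD_COUNT) AS total_records
FROM   UMF_SUM_MATCHLOG
WHERE  LOAD_ID = :1
GROUP BY TABLE_NAME, RECORD_TYPE, ALERTED
ORDER BY TABLE_NAME, RECORD_TYPE, ALERTED;
```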