November 21, 2007 at 12:28 pm
I am baffled by the difference in performance between the old and new version of this query. Could someone explain why it is so big? It takes less than a second on the new version while the old one takes at least 8.
Thanks,
Ed
OLD
SELECT RL.ReportingLevelID,
RL.ReportingLevelTypeCD,
P.Descrip
FROM teReportingLevel RL
LEFT OUTER JOIN dbm.dbo.vPartyDescripShort P
ON P.PartyID = RL.ReportingLevelID
WHERE P.IsActive = 1
AND P.IsTest = 0
AND P.Descrip IS NOT NULL
NEW
SELECT RL.ReportingLevelID,
RL.ReportingLevelTypeCD,
(SELECT P.Descrip FROM dbm.dbo.vPartyDescripShort P
WHERE P.PartyID = RL.ReportingLevelID) AS Descrip
FROM teReportingLevel RL
LEFT OUTER JOIN dbm.dbo.vPartyDescripShort P
ON P.PartyID = RL.ReportingLevelID
AND P.IsActive = 1
AND P.IsTest = 0
AND P.Descrip IS NOT NULL
November 21, 2007 at 12:41 pm
Well, it probably has to do with the number of reads. While your old query is written as a LEFT JOIN, the query optimizer will treat it as an INNER JOIN because you are filtering on the outer table in the WHERE clause. You are saying that certain values from the outer table must be present, hence it is an INNER JOIN and not an OUTER JOIN. Moving the filtering criteria into your JOIN clause, as in your new query, changes where the optimizer applies the filter. It would help if you posted the execution plans for each. Also, make sure you are clearing out the buffers and cache between each run if you are attempting to compare query executions (don't do this in production).
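To make the difference concrete, here is a minimal sketch (reusing the tables from the original post) of how moving a predicate between WHERE and ON changes what a LEFT JOIN returns:

```sql
-- Predicate in WHERE: NULL-extended rows fail "P.IsActive = 1"
-- (NULL compared to 1 is unknown), so they are filtered out and
-- the LEFT JOIN behaves exactly like an INNER JOIN.
SELECT RL.ReportingLevelID, P.Descrip
FROM teReportingLevel RL
LEFT OUTER JOIN dbm.dbo.vPartyDescripShort P
    ON P.PartyID = RL.ReportingLevelID
WHERE P.IsActive = 1;

-- Predicate in ON: it only restricts which rows of P qualify for
-- the join; unmatched RL rows survive with a NULL Descrip.
SELECT RL.ReportingLevelID, P.Descrip
FROM teReportingLevel RL
LEFT OUTER JOIN dbm.dbo.vPartyDescripShort P
    ON  P.PartyID = RL.ReportingLevelID
    AND P.IsActive = 1;
```

The two forms can return different row counts, so they are not interchangeable; which one is correct depends on whether you want unmatched reporting levels in the result.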
Also, just curious but why did you decide to put the sub-query in the SELECT? You should change this back to the way you did it in your old query.
November 22, 2007 at 12:19 am
Your new query contains a correlated subquery (a subselect in the SELECT clause).
Depending on the optimiser, it's quite likely that the subquery will be executed for each row of the outer query. Essentially, it's a cursor in disguise.
Sometimes the optimiser is smart enough to convert the subquery into a join and evaluate it just like the first option, but not always.
Plus, you're joining vPartyDescripShort into the query twice with the same join criteria (once in the SELECT, once in the FROM). Why?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
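For what it's worth, here is a sketch of the combined form Gail is hinting at: a single reference to the view, with all the filters kept in the ON clause so the LEFT JOIN semantics survive. This is untested against the actual schema, so treat it as a starting point rather than a drop-in replacement:

```sql
-- One reference to vPartyDescripShort instead of two; filters
-- stay in the ON clause so unmatched RL rows are still returned
-- (with NULL Descrip) rather than being silently dropped.
SELECT RL.ReportingLevelID,
       RL.ReportingLevelTypeCD,
       P.Descrip
FROM teReportingLevel RL
LEFT OUTER JOIN dbm.dbo.vPartyDescripShort P
    ON  P.PartyID = RL.ReportingLevelID
    AND P.IsActive = 1
    AND P.IsTest = 0
    AND P.Descrip IS NOT NULL;
```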
November 24, 2007 at 8:49 pm
Have you evaluated the execution plans for both queries? The second has a correlated subquery, but that really shouldn't make much of a difference because it's effectively doing the same thing. However, the execution plans might reveal something interesting. Maybe SQL Server is better able to generate a more efficient execution plan in the first case. If so, it may be good to take what SQL Server is doing and use it as a hint in the first query and see if the performance suddenly gets much better.
K. Brian Kelley
@kbriankelley
November 26, 2007 at 7:15 am
If I don't put the subquery in the SELECT the performance degrades terribly. FYI, the left table has 250 rows & right has ~1 million. I have attached, in a zip file, the execution plans for both queries. This is a database that I inherited and many of the queries that run against it are outside SQL Server in the middle tier.
Thanks,
Ed
November 26, 2007 at 7:18 am
Hi Gail,
This is old code that has surfaced now that we have moved to SQL Server 2005. It is kept in the middle tier (there is a lot of resistance from the middle-tier group to moving queries to the server), and it appears to be the sole user of that view. We are looking into the uses of that view because, if there is no other reason for it than this query, it might be best to get rid of it altogether.
Thanks,
Ed
November 26, 2007 at 7:21 am
Hi Brian,
From the execution plan it appears to be using indexes that might not be needed. We are currently in the planning stages of a comprehensive set of tests to determine which indexes are no longer, or never were, used and which new ones we might need. I suspect the "not used" list is going to be a long one. The application that uses these databases was developed with a "full speed ahead, damn the torpedoes" philosophy, and it's showing now that the system is used more.
Thanks,
Ed
November 26, 2007 at 8:31 am
Eduardo Olivera (11/26/2007)
Hi Brian, We are currently in the planning stages of a comprehensive set of tests that would allow us to determine which indexes are no longer, or never were, used and which new ones we might need.
There's a fairly easy way to do that in SQL 2005. Take a look at the sys.dm_db_index_usage_stats and sys.dm_db_missing_index_details DMVs. (For more on the missing indexes, there's a very good blog article.)
They won't do everything for you, but they're a place to start. Just note that they only contain data gathered since the last time SQL Server was started.
Gail Shaw
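As an example of using the first of those DMVs (a sketch; column names are as documented for SQL 2005, and the query only sees activity since the last restart):

```sql
-- Index usage in the current database since SQL Server last started.
-- Indexes with no seeks/scans/lookups but many updates are
-- candidates for removal; indexes absent from the DMV entirely
-- have not been touched at all since the restart.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name                   AS index_name,
       s.user_seeks,
       s.user_scans,
       s.user_lookups,
       s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
    ON  i.object_id = s.object_id
    AND i.index_id  = s.index_id
WHERE s.database_id = DB_ID();
```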
November 26, 2007 at 8:41 am
Thanks, Gail, I'm glad we finally moved this application to 2005. I keep finding or being alerted to new useful features to make my life easier.
Ed
November 27, 2007 at 7:44 am
Gail,
In the sys.dm_db_index_usage_stats DMV, some object IDs are returned that do not return a name when passed to the OBJECT_NAME function. Why would that be?
Thanks,
Ed
December 3, 2007 at 12:33 am
They're in a different database to the one you're in. Look at the DB_ID column.
If you're using SQL 2005 SP2, you can use the enhanced version of OBJECT_NAME that takes the database ID as a second parameter; otherwise OBJECT_NAME only works in the current database.
Gail Shaw
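A sketch of the two-argument form Gail mentions (available from SQL 2005 SP2 onwards), applied to the usage DMV so names resolve regardless of which database the object lives in:

```sql
-- OBJECT_NAME(object_id, database_id) resolves names across
-- databases; the one-argument form returns NULL for objects
-- outside the current database.
SELECT s.database_id,
       DB_NAME(s.database_id)                  AS db_name,
       OBJECT_NAME(s.object_id, s.database_id) AS object_name
FROM sys.dm_db_index_usage_stats AS s;
```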