May 22, 2014 at 11:07 am
Any hints on how to decrease CPU utilization on this one? I inherited ...
SELECT
DS.LocationID
,M.InventoryItemID
,SUM(DS.OrderPrice) As TheoSale
FROM dbo.DayItemSaleLight DS WITH (NOLOCK)
INNER JOIN dbo.LocationGroupMember LM WITH (NOLOCK) ON LM.LocationID = DS.LocationID AND LM.LocationGroupID = 22
INNER JOIN dbo.Item I WITH (NOLOCK) ON I.ItemID = DS.ItemID
INNER JOIN dbo.Recipe R WITH (NOLOCK) ON R.RecipeID = I.RecipeID
INNER JOIN dbo.Period W WITH (NOLOCK) ON W.PeriodType=1 AND DS.DOB BETWEEN W.StartDate AND W.EndDate
INNER JOIN dbo.RecipeMap M WITH (NOLOCK) ON M.RecipeID = R.RecipeID
AND M.DOB = W.StartDate
INNER JOIN dbo.InventoryItem II WITH (NOLOCK) ON II.InventoryItemID = M.InventoryItemID
LEFT JOIN dbo.GLCode GL WITH (NOLOCK) ON GL.GLCodeID = II.GLCodeID
WHERE DS.DOB BETWEEN '2014-05-12 00:00:00' AND '2014-05-14 00:00:00'
AND (''='' OR '' <>'' AND II.InventoryItemName LIKE 'JUICE ORANGE')
GROUP BY DS.LocationID, M.InventoryItemID
I've attached the actual execution plan.
It is so CPU-intensive that it usually gives up with a timeout.
May 22, 2014 at 11:16 am
Table definitions, index definitions please.
Are the business users happy with the chance of getting incorrect results from this query from time to time?
And, what's the point of this?
AND (''='' OR '' <>'' AND II.InventoryItemName LIKE 'JUICE ORANGE')
AND has precedence over OR, so that will reduce to (TRUE OR (FALSE AND II.InventoryItemName LIKE 'JUICE ORANGE')), which truth tables tell us becomes
(TRUE OR FALSE)
Which becomes just
TRUE
In other words, it's a wasted predicate because it doesn't remove any rows.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 22, 2014 at 11:19 am
The query has a relatively low cost but is going parallel.
It looks like you might have set maxdop to 2 but left the cost threshold at 5.
You might want to adjust the maxdop on the query or adjust the server cost threshold from 5 (5 is extremely low).
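If you want to experiment with either knob, here's a minimal sketch (the values are illustrative, not recommendations):
-- Per-query: append a MAXDOP hint to the end of the statement, e.g.
--   GROUP BY DS.LocationID, M.InventoryItemID
--   OPTION (MAXDOP 2);
-- Server-wide: raise the cost threshold for parallelism (the default of 5 is very low).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 50; -- illustrative value
RECONFIGURE;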
And as Gail has requested, please provide table structures etc.
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
May 22, 2014 at 11:25 am
GilaMonster (5/22/2014)
Table definitions, index definitions please.
Are the business users happy with the chance of getting incorrect results from this query from time to time?
And, what's the point of this?
AND (''='' OR '' <>'' AND II.InventoryItemName LIKE 'JUICE ORANGE')
AND has precedence over OR, so that will reduce to (TRUE OR (FALSE AND II.InventoryItemName LIKE 'JUICE ORANGE')), which truth tables tell us becomes
(TRUE OR FALSE)
Which becomes just
TRUE
In other words, it's a wasted predicate because it doesn't remove any rows.
I know, I know... it has been really hard for me to convince people here about the proper use of NOLOCK. But our data is mainly static, so let's say that's another battle I have.
The actual/original filter is ...
WHERE DS.DOB BETWEEN @pStartDate AND @pEndDate
AND (@pInventoryItemName='' OR @pInventoryItemName <>'' AND II.InventoryItemName LIKE @pInventoryItemName)
... due to the application design, it's hard for me to isolate and test with actual parameters, so I replaced some of them.
May 22, 2014 at 11:32 am
You'll also notice that the Recipe table has been removed from the execution plan. It looks like you probably have a trusted constraint between Recipe and RecipeMap. Based on the output, you could shorten your query and remove the join to Recipe because it doesn't add anything in this case (RecipeMap has the RecipeID); a sketch of the trimmed joins follows.
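A minimal sketch, using only what's already in the posted query:
-- These two joins ...
--   INNER JOIN dbo.Recipe R WITH (NOLOCK) ON R.RecipeID = I.RecipeID
--   INNER JOIN dbo.RecipeMap M WITH (NOLOCK) ON M.RecipeID = R.RecipeID AND M.DOB = W.StartDate
-- ... collapse into one join straight from Item to RecipeMap:
INNER JOIN dbo.RecipeMap M WITH (NOLOCK) ON M.RecipeID = I.RecipeID
AND M.DOB = W.StartDate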
Looking at the plan, it appears you have a bit of a possible Cartesian product occurring between Item and RecipeMap (4k items joined to 446k RecipeMap rows produces 1.5M rows).
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
May 22, 2014 at 11:36 am
sql-lover (5/22/2014)
... due to the application design, it's hard for me to isolate and test with actual parameters, so I replaced some of them.
Doing so could change the query's execution plan so much that any tuning you do on the modified query may have no effect at all on the original. For example, the optimiser is capable of doing that logical reduction I did and will just ignore that predicate.
Also, the original catch-all form of the predicate will result in sub-optimal execution plans, whereas your revised version won't.
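For what it's worth, a common workaround for that kind of catch-all predicate (a general technique, not something prescribed in this thread) is to keep the parameterised form but add OPTION (RECOMPILE), so the plan is compiled with the actual parameter values and the dead branch is pruned:
WHERE DS.DOB BETWEEN @pStartDate AND @pEndDate
AND (@pInventoryItemName = ''
OR @pInventoryItemName <> '' AND II.InventoryItemName LIKE @pInventoryItemName)
GROUP BY DS.LocationID, M.InventoryItemID
OPTION (RECOMPILE); -- recompiled each run with the sniffed values; the unused branch drops out
The trade-off is a compile on every execution, which is usually acceptable for a heavy reporting query like this one.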
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 22, 2014 at 11:38 am
SQLRNNR (5/22/2014)
The query has a relatively low cost but is going parallel. It looks like you might have set maxdop to 2 but left the cost threshold at 5.
You might want to adjust the maxdop on the query or adjust the server cost threshold from 5 (5 is extremely low).
I got that feeling too. But it is going to be trickier than usual. This runs in a virtualized environment; I actually moved it from a VM that has 4 vCPUs to one with 8.
May 22, 2014 at 11:42 am
sql-lover (5/22/2014)
SQLRNNR (5/22/2014)
The query has a relatively low cost but is going parallel. It looks like you might have set maxdop to 2 but left the cost threshold at 5.
You might want to adjust the maxdop on the query or adjust the server cost threshold from 5 (5 is extremely low).
I got that feeling too. But it is going to be trickier than usual. This runs in a virtualized environment; I actually moved it from a VM that has 4 vCPUs to one with 8.
Both plans are the same; the optimizer picked the same plan for both.
Unfortunately, I don't have the other one anymore, but it was identical.
I can't recreate the problem again unless it's on the live box. My plan is to set up Extended Events to capture data in real time, but that won't happen until a few days from now.
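For the record, a minimal sketch of such a session (the event choice, CPU threshold, and file name are all assumptions to adapt):
CREATE EVENT SESSION HighCpuStatements ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
ACTION (sqlserver.sql_text, sqlserver.database_name)
WHERE cpu_time > 1000000 -- microseconds; tune to your pain threshold
)
ADD TARGET package0.event_file (SET filename = N'HighCpuStatements');
ALTER EVENT SESSION HighCpuStatements ON SERVER STATE = START;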
May 22, 2014 at 11:45 am
Out of curiosity, what is generating the query?
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
May 23, 2014 at 4:47 pm
This is a shot in the dark, but try the following:
SELECT
DS.LocationID
,M.InventoryItemID
,SUM(DS.OrderPrice * isnull((select count(w.StartDate)
from dbo.Period w -- aliased w, not m: "m" would shadow the outer RecipeMap alias M
where w.PeriodType = 1
AND w.StartDate = M.DOB
and w.EndDate >= DS.DOB), 0)) As TheoSale
FROM dbo.DayItemSaleLight DS WITH (NOLOCK)
INNER JOIN dbo.LocationGroupMember LM WITH (NOLOCK) ON LM.LocationID = DS.LocationID AND LM.LocationGroupID = 22
INNER JOIN dbo.Item I WITH (NOLOCK) ON I.ItemID = DS.ItemID
INNER JOIN dbo.RecipeMap M WITH (NOLOCK) ON M.RecipeID = I.RecipeID
and m.dob <= ds.dob
INNER JOIN dbo.InventoryItem II WITH (NOLOCK) ON II.InventoryItemID = M.InventoryItemID
WHERE DS.DOB BETWEEN '2014-05-12 00:00:00' AND '2014-05-14 00:00:00'
AND (''='' OR '' <>'' AND II.InventoryItemName LIKE 'JUICE ORANGE')
GROUP BY DS.LocationID, M.InventoryItemID
On the surface this code looks like it should perform worse, but I think you might be surprised. I'm not sure if the surprise will be performance-related, or related to solving the wrong problem though...
If you can analyze the data in the Period table and add additional filters on the possible values of m.dob, you might be able to improve things even more; for instance:
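Purely as an illustration (the 35-day bound is a made-up assumption; derive the real one from your Period data), an extra lower bound on the join could look like:
INNER JOIN dbo.RecipeMap M WITH (NOLOCK) ON M.RecipeID = I.RecipeID
and m.dob <= ds.dob
and m.dob >= dateadd(day, -35, ds.dob) -- assumes no PeriodType=1 period spans more than ~5 weeks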
Dan Guzman - Not the MVP (7/22/2010)
All questions have to be prefaced by Server version and 'according to MS Docs' or 'my own personal opinion based on how much detail I felt like digging into at the time.'
June 3, 2014 at 5:31 am
After some thought and running a trace, I fixed it this way.
I created a temporary table for RecipeMap, the biggest table, and ran a simple SELECT against it filtered by the DOB range (the needed date range).
Then, in the join, I replaced M with this temporary table or data set. This drastically reduced the CPU usage; there was a hash match operator that was getting half a million rows there. The change took the overhead away because the data for the join on M is already reduced.
It seems that the SQL engine was scanning the whole table for the joined values and not applying the filter until the end. Discarding the unneeded values from that table in advance helps a lot.
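A minimal sketch of that fix, assuming the column names from the original query (note the temp table's DOB filter has to cover period start dates, which can fall before the report range):
-- Pre-filter RecipeMap, keeping only rows whose DOB is the start date
-- of a PeriodType=1 period overlapping the report range.
SELECT M.RecipeID, M.InventoryItemID, M.DOB
INTO #RecipeMap
FROM dbo.RecipeMap M
WHERE M.DOB IN (SELECT W.StartDate
FROM dbo.Period W
WHERE W.PeriodType = 1
AND W.StartDate <= '2014-05-14'
AND W.EndDate >= '2014-05-12');
-- Then join the reduced set instead of the full table:
-- INNER JOIN #RecipeMap M ON M.RecipeID = R.RecipeID AND M.DOB = W.StartDate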