August 8, 2014 at 12:10 pm
ScottPletcher (8/8/2014)
TheSQLGuru (8/8/2014)
ScottPletcher (8/8/2014)
My first choice would be to use RECOMPILE and just force SQL to rebuild a plan every time. But if a HASH join is the "good" plan, forcing a HASH join is much safer overall than forcing a LOOP join. You might try that for a range of values and verify that it works OK across all of them. This, too, may not be the "best" solution, but it should be a workable solution.
I am curious why you say HASH force would be safer. I would say just the opposite...
My thinking is:
LOOP is extremely -- even prohibitively -- expensive on a very large number of rows.
HASH might not be ideal for a smaller number of rows, but it shouldn't be awful either.
Expensive in terms of lots of logical IOs, yes. But those can be exceedingly quick due to cached iterative hits on the same page for multiple rows. More important, in my experience, are the page locks that will (hopefully) be taken, which can DRASTICALLY improve concurrency. Those blocking index/table scans are a killer from that perspective.
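To make the options being debated concrete, here is a minimal sketch of the hints in question (the Orders/OrderLines tables and @CustomerID parameter are made up purely for illustration):

    -- Option 1: recompile on every execution so the optimizer sees the
    -- actual parameter value and picks loop vs. hash itself.
    DECLARE @CustomerID int = 42;

    SELECT o.OrderID, l.ProductID, l.Quantity
    FROM dbo.Orders AS o
    JOIN dbo.OrderLines AS l ON l.OrderID = o.OrderID
    WHERE o.CustomerID = @CustomerID
    OPTION (RECOMPILE);

    -- Option 2: pin the join algorithm with a statement-level hint.
    -- OPTION (HASH JOIN) or OPTION (LOOP JOIN) applies to every join in
    -- the statement; an inline INNER HASH JOIN hint targets a single join
    -- but also forces the join order from that point on.
    SELECT o.OrderID, l.ProductID, l.Quantity
    FROM dbo.Orders AS o
    JOIN dbo.OrderLines AS l ON l.OrderID = o.OrderID
    WHERE o.CustomerID = @CustomerID
    OPTION (HASH JOIN);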
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
August 8, 2014 at 12:15 pm
TheSQLGuru (8/8/2014)
ScottPletcher (8/8/2014)
TheSQLGuru (8/8/2014)
ScottPletcher (8/8/2014)
My first choice would be to use RECOMPILE and just force SQL to rebuild a plan every time. But if a HASH join is the "good" plan, forcing a HASH join is much safer overall than forcing a LOOP join. You might try that for a range of values and verify that it works OK across all of them. This, too, may not be the "best" solution, but it should be a workable solution.
I am curious why you say HASH force would be safer. I would say just the opposite...
My thinking is:
LOOP is extremely -- even prohibitively -- expensive on a very large number of rows.
HASH might not be ideal for a smaller number of rows, but it shouldn't be awful either.
Expensive in terms of lots of logical IOs, yes. But those can be exceedingly quick due to cached iterative hits on the same page for multiple rows. More important, in my experience, are the page locks that will (hopefully) be taken, which can DRASTICALLY improve concurrency. Those blocking index/table scans are a killer from that perspective.
I've just not had the experience of loops being "exceedingly quick" once the number of rows gets too large. Indeed, it often seems to me that the only reason SQL is using a loop at all is that it couldn't accurately estimate the row cardinality ahead of time.
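One easy way to check which side of that crossover a given query sits on: run it once with each join type forced and compare the STATISTICS IO / TIME output for both small and large parameter values. A quick sketch, again using hypothetical table and column names:

    SET STATISTICS IO ON;    -- logical reads per table
    SET STATISTICS TIME ON;  -- CPU and elapsed time

    DECLARE @CustomerID int = 42;

    -- Nested loops: cheap for a handful of outer rows; logical reads
    -- grow roughly linearly with the number of outer rows.
    SELECT o.OrderID, l.ProductID
    FROM dbo.Orders AS o
    JOIN dbo.OrderLines AS l ON l.OrderID = o.OrderID
    WHERE o.CustomerID = @CustomerID
    OPTION (LOOP JOIN);

    -- Hash: pays a build-side setup cost, so it's rarely ideal for tiny
    -- inputs, but it degrades far more gracefully as row counts climb.
    SELECT o.OrderID, l.ProductID
    FROM dbo.Orders AS o
    JOIN dbo.OrderLines AS l ON l.OrderID = o.OrderID
    WHERE o.CustomerID = @CustomerID
    OPTION (HASH JOIN);

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;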
SQL DBA, SQL Server MVP (07, 08, 09)
"It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." -- "Norm", on "Cheers"
Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies."