May 19, 2016 at 8:47 am
I've been working on a large, multi-user, multi-customer SQL database system for several years now and have always tried to reduce blocking and deadlocking, in the belief that this would result in better performance for the users. And by "better performance" I mean them getting things done more quickly (and with fewer deadlocks, obviously).
So we've been following the (presumably) usual path: identify the worst offenders, fix them, monitor, then identify and fix the next-worst offenders, and so on - i.e. rinse and repeat.
However, somebody has just suggested that any one of these cycles, by effectively moving the blocking / deadlocking on from one statement to another, could actually mean that users get worse performance.
While this struck me as being counter to everything I believe to be true, I've also learned over the years that, when it comes to performance, "Forget it, Jake, it's Chinatown" usually applies.
So - could reducing blocking / deadlocking ever be BAD for performance? What are your thoughts?
May 19, 2016 at 9:19 am
This sentence does not make sense to me:
However, somebody has just suggested that any one of these cycles, by effectively moving the blocking / deadlocking on from one statement to another, could actually mean that users get worse performance.
Look, a lot of people say a lot of things. Unless someone has an example where they're running sp_who2 and showing you what they're talking about, I would move forward under the assumption that reducing and/or eliminating deadlocks is a good thing for users, as is reducing blocking. The faster the query returns data, the happier everyone is. Blocking and deadlocking slow queries down.
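For what it's worth, sp_who2 is fine, but I usually reach for the DMVs. A rough sketch (standard DMVs only, nothing specific to your system) that shows who is blocked and by whom right now:

-- Current requests that are blocked, and the session blocking each one.
SELECT
    r.session_id,
    r.blocking_session_id,            -- the session holding the lock
    r.wait_type,
    r.wait_time AS wait_time_ms,
    r.wait_resource,
    s.program_name,
    t.text AS running_sql
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = r.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;     -- only show blocked requests

If that comes back empty while users are complaining, blocking probably isn't your problem.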
My philosophy is that the best way to reduce blocking and deadlocking is to have a server powerful enough for the job and configured correctly: correct disk allocation, memory settings, parallelism settings, etc. That's followed by a good database design: modeled correctly, indexed correctly, using the correct data types, etc. Then, and this is paramount, writing high-performing, RBAR-free queries designed to take advantage of your high-performing server, good database design, and healthy, well-designed indexes.
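To show what I mean by RBAR-free (the table and column names here are made up purely for illustration), compare the row-by-row version with the set-based one:

-- RBAR: a cursor updates one row at a time, holding locks far longer than needed.
DECLARE @OrderID int;
DECLARE c CURSOR LOCAL FAST_FORWARD FOR
    SELECT OrderID FROM dbo.Orders WHERE ShippedDate IS NULL;
OPEN c;
FETCH NEXT FROM c INTO @OrderID;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Orders SET Status = 'Late' WHERE OrderID = @OrderID;
    FETCH NEXT FROM c INTO @OrderID;
END;
CLOSE c;
DEALLOCATE c;

-- Set-based: one statement does the same work in a single pass.
UPDATE dbo.Orders SET Status = 'Late' WHERE ShippedDate IS NULL;

The set-based version finishes sooner and holds its locks for a fraction of the time, which is exactly what reduces blocking.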
-- Itzik Ben-Gan 2001
May 19, 2016 at 9:41 am
Is it possible, i.e., not a logical contradiction? Sure.
All it takes is for the queries blocking one another to tax a resource (most likely CPU or storage) whose response times get exponentially worse once load passes a certain point. So, for example, let's say there are a few queries that, when run simultaneously, typically block each other and end up getting executed one at a time. Let's also say that these queries heavily stress your storage. Let's finally say that your storage is right at the brink, and any more IO is going to lead to latency increases disproportionate to the increase in IO.
In that case it's quite possible that if the only thing that changes is that the queries no longer block each other, then in aggregate they will take longer than before, because the latency increase from all of them running simultaneously is so disproportionately higher than linear. For example, if three IO-heavy queries each take 10 seconds when forced to run one at a time, the last finishes after 30 seconds; but if running all three at once pushes storage latency up five-fold, every one of them takes 50 seconds.
As my introductory question might have indicated, while possible, that's not particularly common. After all, it's not usually the case that blocking/deadlocking is removed and all else remains equal. Usually reducing blocking/deadlocking means reducing the reads and duration of the queries involved.
If that point is supposed to be more than interesting trivia about something that is technically possible (for example, if it's used to support "let's not be so aggressive about reducing blocking/deadlocking"), then it's a pretty weak reason. As Alan indicated, the operating assumption should be that tuning your code to minimize blocking/deadlocking is a desirable goal.
Having said all that, the point is still technically correct, although it's really true of any bottleneck, not just blocking/deadlocking. It is almost always possible that removing one bottleneck just exposes another bottleneck that is more painful than the first. That doesn't mean we shouldn't work towards minimizing bottlenecks in our systems, though 🙂
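If you want to check whether you've just traded one bottleneck for another, the aggregate wait stats are a reasonable place to look. A quick sketch using the standard DMV (the exclusion list is abbreviated; there are many more benign wait types you'd normally filter out):

-- Top waits since the last restart (or since the stats were cleared).
-- If lock waits shrink after a fix but PAGEIOLATCH_* waits balloon,
-- the bottleneck has likely moved from locking to storage.
SELECT TOP (10)
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'SQLTRACE_BUFFER_FLUSH', N'WAITFOR')
ORDER BY wait_time_ms DESC;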
Cheers!
May 19, 2016 at 10:01 am
Look, a lot of people say a lot of things.
As the person who said this particular thing, I thought I might elaborate on my thought process a bit:
Let's say you have a chain of stored procedures:
SP1
SP2
SP3
Let's also assume there are background tasks happening that block/deadlock with this SP chain.
SP1 usually blocks for 15 seconds, then deadlocks, preventing further progress down the chain. This is deemed unacceptable to the user, so we fix SP1 to stop it from deadlocking so often.
SP2 then starts blocking for 45 minutes, then deadlocks.
This in theory has made performance worse for the end user. It is better to get a quick deadlock so the user can try again than to wait a long time due to blocking or contention with a background task, then deadlock anyway and still have to restart.
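To make that concrete, what I'd rather see is the call failing fast and retrying, something like this rough sketch (the procedure name, timeout, and retry count are made up for illustration):

SET LOCK_TIMEOUT 15000;        -- stop waiting on locks after 15 seconds (error 1222)

DECLARE @Tries int = 0;
WHILE @Tries < 3
BEGIN
    BEGIN TRY
        EXEC dbo.SP1;          -- hypothetical first proc in the chain
        BREAK;                 -- success, stop retrying
    END TRY
    BEGIN CATCH
        -- 1205 = chosen as deadlock victim, 1222 = lock request timeout
        IF ERROR_NUMBER() IN (1205, 1222)
            SET @Tries += 1;   -- quick failure, so a quick retry
        ELSE
            THROW;             -- anything else is a genuine error
    END CATCH;
END;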
Happy for people to tell me I am an idiot and wrong 🙂
May 19, 2016 at 8:09 pm
simon.friling (5/19/2016)
Look, a lot of people say a lot of things.
As the person who said this particular thing, I thought I might elaborate on my thought process a bit:
Let's say you have a chain of stored procedures:
SP1
SP2
SP3
Let's also assume there are background tasks happening that block/deadlock with this SP chain.
SP1 usually blocks for 15 seconds, then deadlocks, preventing further progress down the chain. This is deemed unacceptable to the user, so we fix SP1 to stop it from deadlocking so often.
SP2 then starts blocking for 45 minutes, then deadlocks.
This in theory has made performance worse for the end user. It is better to get a quick deadlock so the user can try again than to wait a long time due to blocking or contention with a background task, then deadlock anyway and still have to restart.
Happy for people to tell me I am an idiot and wrong 🙂
Nah... not an idiot and not wrong. At least not for the information given. The real problem hasn't been addressed in the thought process, though, and that is that all the code in that chain has more than one performance and resource usage problem. Sometimes, to fix the car, you have to change more than one flat tire at a time. An even better thing to do would be to identify the cause of the multiple flat tires and fix that, so you don't have to spend so much time fixing flats. The hard part is that a lot of people just want to fix a flat and move on to the next milestone without ever understanding what the real problem is, never mind actually fixing it.
And, to be sure, the low-performing, high-resource-usage code that causes such blocking and deadlocking is nothing more than a symptom of the things that actually need to be fixed.
--Jeff Moden
Change is inevitable... Change for the better is not.
May 19, 2016 at 9:36 pm
simon.friling (5/19/2016)
Look, a lot of people say a lot of things.
As the person who said this particular thing, I thought I might elaborate on my thought process a bit:
Let's say you have a chain of stored procedures:
SP1
SP2
SP3
Let's also assume there are background tasks happening that block/deadlock with this SP chain.
SP1 usually blocks for 15 seconds, then deadlocks, preventing further progress down the chain. This is deemed unacceptable to the user, so we fix SP1 to stop it from deadlocking so often.
SP2 then starts blocking for 45 minutes, then deadlocks.
This in theory has made performance worse for the end user. It is better to get a quick deadlock so the user can try again than to wait a long time due to blocking or contention with a background task, then deadlock anyway and still have to restart.
Happy for people to tell me I am an idiot and wrong 🙂
First, welcome to the SSC forum.
I certainly wasn't trying to offend anyone. Your explanation makes sense and there's no need for me to elaborate on what Jeff said.
By "a lot of people say a lot of things" - I was saying that, without convincing details, take what you're told with a grain of salt. Throughout my career as a DBA, SQL Developer and BI Developer I think I've received almost as much bad advice as good advice; actually I'd say it's been like 33% good, 33% bad and 33% useless. I always encourage people to question the advice they're given.
The best advice I've gotten has been from SSC; I'd encourage you to visit again, Simon. 😛
-- Itzik Ben-Gan 2001
May 20, 2016 at 2:29 am
First, welcome to the SSC forum.
I certainly wasn't trying to offend anyone. Your explanation makes sense and there's no need for me to elaborate on what Jeff said.
No offence taken and thanks for the welcome 🙂
May 20, 2016 at 3:16 am
It is possible to eliminate deadlocks in such a way that it degrades overall performance; I'm fixing a system that did just that.
In order to eliminate deadlocks on ImportantTableA, they used applocks to ensure that only one session could ever access ImportantTableA. Needless to say, that's made everything slooooow (hence why I'm fixing it).
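For anyone who hasn't seen the pattern, it looked roughly like this (simplified, and the resource name comes from my description above, not the real system):

-- The anti-pattern: serialise ALL access to the table behind an applock.
BEGIN TRANSACTION;
EXEC sp_getapplock
    @Resource    = 'ImportantTableA',
    @LockMode    = 'Exclusive',   -- one session at a time, full stop
    @LockOwner   = 'Transaction',
    @LockTimeout = -1;            -- wait forever
-- ... all work against ImportantTableA happens here ...
COMMIT TRANSACTION;               -- the applock is released with the transaction

No deadlocks, sure, because there's no concurrency left to deadlock.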
But that's not normally how one fixes deadlocks.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 20, 2016 at 4:23 pm
Alan.B (5/19/2016)
Throughout my career as a DBA, SQL Developer and BI Developer I think I've received almost as much bad advice as good advice; actually I'd say it's been like 33% good, 33% bad and 33% useless.
As my Dad used to say "Half of all that is written is wrong and the rest is written in such a fashion that you can't tell if it's wrong". 😛
--Jeff Moden
Change is inevitable... Change for the better is not.
May 20, 2016 at 4:26 pm
GilaMonster (5/20/2016)
It is possible to eliminate deadlocks in such a way that it degrades overall performance; I'm fixing a system that did just that. In order to eliminate deadlocks on ImportantTableA, they used applocks to ensure that only one session could ever access ImportantTableA. Needless to say, that's made everything slooooow (hence why I'm fixing it).
But that's not normally how one fixes deadlocks.
There must be a disturbance in the Force. I'm going through that very same thing right now. It also uses/reserves extra blocks of memory, ironically driving some of the faster code out of cache.
--Jeff Moden
Change is inevitable... Change for the better is not.