Incident Review

Something will always go wrong with our software systems. It might happen regularly or rarely, but something is going to go wrong, and a customer or client will complain. Often the way we handle these situations determines whether our customers continue to do business with us or look for an alternative.

In my career, I've been a part of many incidents, many of which were service outages for customers. I've attended some as a technical person working to diagnose the issue. In some incidents, I've been the developer who had to fix code. In others, my role was as a manager trying to ensure that information moved smoothly between resources and that our "fix" didn't cause another problem. At other times, I've also had to take part in a post-incident review. Unfortunately, that has happened far less often than it should.

When I read this description of a post-incident review, it is unlike many of the after-action meetings I've attended. Usually there is one meeting, someone is being blamed, and senior management is often there, putting pressure on everyone in attendance to "never let this happen again." I haven't known anyone who wanted to go through another outage or another post-incident meeting, but with complex systems, and humans managing them, something is bound to go wrong. What we want is for the same type of incident to never happen again, and that only comes about if we learn from our mistakes and design better protocols and systems that help us catch silly errors. We should accept that mistakes will happen and try to find ways to detect the problem quickly, limit the scope of impact, and share that knowledge with other workers. Depending on humans to be more perfect in the future isn't likely to be successful.

These days, when I read modern post-incident reviews, the internal ones we publish after an outage, I find them fact-based, focusing on what went wrong without blaming a person. They include analysis not only of the actual issue but of the conditions that led to the hardware/software failure or the decision that was made. There are learnings about how we might have prevented something with a time machine, while still assuming that humans will make mistakes or a component might fail. There are also suggestions for improvements in hardware, software, training, or monitoring that might assist in quicker recovery in the future.

Coming out of an incident with a positive mindset is the best way to try to prevent a repeat of the same incident in the future. This requires that we not only avoid blaming someone for an error but also take steps to limit the potential for future errors. If the issue is someone clicking the wrong selection in a drop-down or pressing "OK" when they meant to press "Cancel", there are limited ways to prevent those issues. However, we can adopt the mindset that an outage is a team failure and build a habit of double-checking each other. That's much better than blaming one person and giving the job to another human, who might easily make the same mistake.

Many humans struggle to avoid placing blame on others and to just accept that some mistakes will happen. A DevOps mindset, with blameless reviews, instead focuses on how we can do better as a group rather than on how we failed as individuals. This little change helps us build a better team, one that often performs better in the future.

Steve Jones - SSC Editor