A while ago I was part of the Cardiff pilot of Practical Strategies for Learning from Failure (#LFFdigital). My job was to explain the James Reason Swiss Cheese Failure Model in 300 seconds (5 minutes).
This is what I did.
The Swiss Cheese Model of Accident Causation (to give it its full name) was developed by Professor James T. Reason at the University of Manchester about 25 years ago. The original 1990 paper, “The Contribution of Latent Human Failures to the Breakdown of Complex Systems”, published in the Philosophical Transactions of the Royal Society of London, makes it clear that these are complex human systems, which is important.
Also well worth reading is the March 2000 British Medical Journal (BMJ) paper, ‘Human error: models and management’. It gives an excellent explanation of the model, along with the graphic I’ve used here.
The Swiss Cheese Model, my 300-second explanation:
- Reason compares Human Systems to Layers of Swiss Cheese (see image above).
- Each layer is a defence against something going wrong (mistakes & failure).
- There are ‘holes’ in the defence – no human system is perfect (we aren’t machines).
- Something breaking through a hole isn’t a huge problem – things go wrong occasionally.
- As humans we have developed to cope with minor failures/mistakes as a routine part of life (something small goes wrong, we fix it and move on).
- Within our ‘systems’ there are often several ‘layers of defence’ (more slices of Swiss Cheese).
- You can see where this is going…
- Things become a major problem when failures follow a path through all of the holes in the Swiss Cheese – all of the defence layers have been broken because the holes have ‘lined up’.
- Source: Energy Global Oilfield Technology http://www.energyglobal.com/upstream/special-reports/23042015/Rallying-against-risk/
Who uses it? The Swiss Cheese Model has been used extensively in Health Care, Risk Management, Aviation, and Engineering. It is very useful for explaining the concept of cumulative effects.
The idea of successive layers of defence being broken down helps us understand that things are linked within the system, and that intervention at any stage (particularly early on) could stop a disaster unfolding. In activities such as petrochemicals and engineering it provides a very helpful visual tool for risk management. The graphic from Energy Global, who cover oilfield technology, helpfully puts the model into a real context.
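If it helps to make the ‘holes lining up’ and cumulative effect ideas a bit more concrete, here is a minimal sketch in Python. It’s my own illustration rather than anything from Reason or the BMJ paper: each layer of defence is assumed to stop a failure independently with some probability, and an incident only happens when a failure slips through every layer. The layer names and the 0.9 figures are invented for the example.

```python
import random

# A minimal sketch of the "holes lining up" idea (my illustration, not Reason's):
# each layer of defence is assumed to stop a failure independently with some
# probability, and an incident only happens when a failure slips through every
# layer. The layer names and the 0.9 figures are invented for the example.

LAYERS = {
    "Organisational policies & procedures": 0.9,  # chance this layer stops the failure
    "Team roles & behaviours": 0.9,
    "Individual skills & behaviours": 0.9,
}

def breaches_every_layer(layers):
    """Return True if a single failure finds a 'hole' in every layer of defence."""
    return all(random.random() > p_stop for p_stop in layers.values())

trials = 100_000
incidents = sum(breaches_every_layer(LAYERS) for _ in range(trials))
print(f"Failures that got through every layer: {incidents} of {trials}")
# With three independent layers each stopping 90% of failures, roughly
# 0.1 ** 3 = 1 in 1,000 failures finds a hole in every layer.
```

With three layers each stopping 90% of failures, only about 1 in 1,000 gets all the way through – the worry in Reason’s model is the latent conditions that stop the layers being independent, so the holes line up far more often than that simple arithmetic suggests.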
Other users of the model have gone as far as naming each of the Slices of Cheese / Layers of Defence, for example:
- Organisational Policies & Procedures
- Senior Management Roles/Behaviours
- Professional Standards
- Team Roles/Behaviours
- Individual Skills/Behaviours
- Technical & Equipment
What does this mean for Learning from Failure? In the BMJ paper Reason talks about the System Approach and the Person Approach:
- Person Approach – failure is a result of the ‘aberrant mental processes of the people at the sharp end’, such as forgetfulness, tiredness and poor motivation. There must be someone ‘responsible’, or someone to ‘blame’, for the failure. Countermeasures are targeted at reducing this unwanted human behaviour.
- System Approach – failure is an inevitable result of human systems – we are all fallible. Countermeasures are based on the idea that “we cannot change the human condition, but we can change the conditions under which humans work”. So, failure is seen as a system issue, not a person issue.
This thinking helpfully allows you to shift the focus away from the ‘Person’ to the ‘System’. In these circumstances, failure can become ‘blameless’ and (in theory) people are more likely to talk about it, and consequently learn from it. The paper goes on to reference research in the aviation maintenance industry (well known for its focus on safety and risk management), where 90% of quality lapses were judged ‘blameless’ (system errors) and therefore opportunities to learn from failure.
It’s worth looking at the paper’s summary of research into failure in high reliability organisations (below) and reflecting: do these organisations have a Person Approach or a System Approach to failure? Would failure be seen as ‘blameless’ or ‘blameworthy’?

It’s not all good news. The Swiss Cheese Model has attracted a few criticisms. I’ve written about this previously in ‘Failure Models, how to get from a backwards look to real-time learning’.
It is worth looking at the comments on the post for a helpful analysis from Matt Wyatt. Some people feel the Swiss Cheese model represents a neatly engineered world. It is great for looking backwards at ‘what caused the failure’, but is of limited use for predicting failure. The suggestion is that organisations need to maintain a ‘consistent mindset of intelligent wariness’. That sounds interesting…
There will be more on this at #LFFdigital, and I will follow it up in another post.
So, What’s the PONT?
- Failure is inevitable in Complex Human Systems (it is part of the human condition).
- We cannot change the human condition, but we can change the conditions under which humans work.
- Moving from a Person Approach to a System Approach to failure helps the shift from ‘blameworthy’ to ‘blameless’ failure, and opens up opportunities to learn.
I can’t help thinking that when Reason says “complex human systems” he really means ‘complicated human designs’. After all, the whole premise of a complex system is that it’s non-linear. Things don’t have to line up, and the relationship between cause and effect can be oblique. It would be like one layer of cheese being a baked Camembert (it just slows problems down), one an American slice (bouncing problems off in all directions), and one a thin wedge of unbreakable Parmesan. What’s more, the line of failure could be the equivalent of a hot wire that simply slices through, holes or not. That’s enough of the metaphor.
Cognitive science has moved on in the past 25 years, and what used to be described as failures in human cognition is now more clearly recognised as contextual strengths, not failures. If you work in a widget factory full of machines designed for specific purposes, then we expect them to do exactly what they are supposed to do. In this context, it’s a big machine with a few annoying biological bits mucking up the teleological perfection. Health is not that.
Health is an ecosystem, a biology with the odd stupid inert mechanical bit doing the boring stuff. In this sense, we don’t want high reliability – quantitative efficiency – out of the qualitative context. For example, consistently giving every third person an infection is highly reliable. In health we’ve suffered from the bell curve effect. NICE set up most of their advice for the middle line of an efficient normal distribution of idealised patients. What that means is that the perfectly designed best practice works perfectly for a tiny proportion of the world. The job is actually more about tailoring every decision to fit the individual. Sounds mad, doesn’t it, but that’s why it takes 14 years to become a doctor. Unlike factories and boats, in complex systems there are different outcomes, in different directions, for people with different wants and needs. In the end everybody dies, so in Reason’s terms the whole health system is one massive failure.
Health doesn’t need to be highly reliable, like a machine – although some parts, like labs, radiology and theatres, are more like the ships and power stations of the research. The majority of health needs to be resilient. Going wrong is all part of being alive; the trick is, as you say, to be sensitive to the present, spot the inevitable variations early and make a choice each time. It’s why zero harm campaigns don’t work. We’re in the business of harm: we exchange one harm (appendicitis) for a lesser harm (appendectomy). So harm can’t, in itself, be a failure.
Just stirring up your head ready for the conversation.
Thanks for the mention, “axe wielding” made me laugh out loud.
I haven’t got past the Camembert, American Cheese Slice and Parmesan metaphor for the minute.
The Matt Wyatt Cheese of The World / Exotic Cheese Board Model of Failure could be the 21st Century version.
You should work on the graphic.
It would be brilliant.
Welcome back from holidays, I’ve missed you.
Exotic Cheese Board of Failure graphic, coming up!