The James Reason Swiss Cheese Failure Model in 300 Seconds

James Reason Swiss Cheese Model. Source: BMJ 2000 Mar 18;320(7237):768-770

A while ago I was part of the Cardiff pilot of Practical Strategies for Learning from Failure (#LFFdigital). My job was to explain the James Reason Swiss Cheese Failure Model in 300 seconds (5 minutes).

This is what I did.

The Swiss Cheese Model of Accident Causation (to give it its full name) was developed by Professor James T. Reason at the University of Manchester about 25 years ago. The original 1990 paper, “The Contribution of Latent Human Failures to the Breakdown of Complex Systems”, published in the Philosophical Transactions of the Royal Society of London, makes it clear that these are complex human systems, which is important.

Also well worth reading is the March 2000 British Medical Journal (BMJ) paper, ‘Human error: models and management’. It gives an excellent explanation of the model, along with the graphic I’ve used here.

The Swiss Cheese Model, my 300-second explanation:

  • Reason compares Human Systems to Layers of Swiss Cheese (see image above).
  • Each layer is a defence against something going wrong (mistakes & failure).
  • There are ‘holes’ in the defence – no human system is perfect (we aren’t machines).
  • Something breaking through a hole isn’t a huge problem – things go wrong occasionally.
  • As humans we have developed to cope with minor failures/mistakes as a routine part of life (something small goes wrong, we fix it and move on).
  • Within our ‘systems’ there are often several ‘layers of defence’ (more slices of Swiss Cheese).
  • You can see where this is going…
  • Things become a major problem when failures follow a path through all of the holes in the Swiss Cheese – all of the defence layers have been broken because the holes have ‘lined up’ (there’s a small simulation sketch of this below).
Source: Energy Global Oilfield Technology http://www.energyglobal.com/upstream/special-reports/23042015/Rallying-against-risk/
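As an aside for anyone who likes a concrete illustration, here is a minimal sketch of the ‘holes lining up’ idea as a Monte Carlo simulation in Python. This is my own illustration, not anything from Reason’s papers, and the three layers and 10% hole probabilities are purely hypothetical numbers.

```python
import random

def hazard_gets_through(hole_probabilities):
    """One hazard meets each layer of defence in turn; it only becomes
    an accident if it finds an open 'hole' in every single layer."""
    return all(random.random() < p for p in hole_probabilities)

def breach_rate(hole_probabilities, trials=200_000):
    """Estimate the fraction of hazards that slip through all layers."""
    hits = sum(hazard_gets_through(hole_probabilities) for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    # Hypothetical figures: each layer's hole is 'open' 10% of the time.
    layers = [0.1, 0.1, 0.1]
    print(f"One layer:    ~{breach_rate(layers[:1]):.4f} of hazards become accidents")
    print(f"Three layers: ~{breach_rate(layers):.4f} of hazards become accidents")
```

Because the layers are independent, the breach rate is simply the product of the hole probabilities (0.1 × 0.1 × 0.1 = 0.001 here, against 0.1 for a single slice) – which is exactly the cumulative effect the model illustrates.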

Who uses it? The Swiss Cheese Model has been used extensively in Health Care, Risk Management, Aviation, and Engineering. It is very useful for explaining the concept of cumulative effects.

The idea of successive layers of defence being broken down helps us understand that things are linked within the system, and that intervention at any stage (particularly early on) could stop a disaster unfolding. In industries such as petrochemicals and engineering it provides a very helpful visual tool for risk management. The graphic from Energy Global, who deal with Oilfield Technology, helpfully puts the model into a real context.

Other users of the model have gone as far as naming each of the Slices of Cheese / Layers of Defence, for example:

  • Organisational Policies & Procedures
  • Senior Management Roles/Behaviours
  • Professional Standards
  • Team Roles/Behaviours
  • Individual Skills/Behaviours
  • Technical & Equipment

What does this mean for Learning from Failure? In the BMJ paper Reason talks about the System Approach and the Person Approach:

  • Person Approach – failure is a result of the ‘aberrant mental processes of the people at the sharp end’, such as forgetfulness, tiredness, poor motivation etc. There must be someone ‘responsible’, or someone to ‘blame’, for the failure. Countermeasures are targeted at reducing this unwanted human behaviour.
  • System Approach – failure is an inevitable result of human systems – we are all fallible. Countermeasures are based on the idea that “we cannot change the human condition, but we can change the conditions under which humans work”. So, failure is seen as a system issue, not a person issue.

This thinking helpfully allows you to shift the focus away from the ‘Person’ to the ‘System’. In these circumstances, failure can become ‘blameless’ and (in theory) people are more likely to talk about it, and consequently learn from it. The paper goes on to reference research in the aviation maintenance industry (well known for its focus on safety and risk management) where 90% of quality lapses were judged ‘blameless’ – system errors, and therefore opportunities to learn from failure.

It’s worth taking a look at the paper’s summary of research into failure in high reliability organisations (below) and reflecting: do these organisations have a Person Approach or a System Approach to failure? Would failure be seen as ‘blameless’ or ‘blameworthy’?

High Reliability Organisations. Source: BMJ 2000 Mar 18;320(7237):768-770

It’s not all good news; the Swiss Cheese Model has attracted a few criticisms, which I have written about previously in ‘Failure Models, how to get from a backwards look to real-time learning’.

It is worth looking at the comments on the post for a helpful analysis from Matt Wyatt. Some people feel the Swiss Cheese model represents a neatly engineered world. It is great for looking backwards at ‘what caused the failure’, but is of limited use for predicting failure. The suggestion is that organisations need to maintain a ‘consistent mindset of intelligent wariness’. That sounds interesting…

There will be more on this at #LFFdigital, and I will follow it up in another post.

So, What’s the PONT?

  1. Failure is inevitable in Complex Human Systems (it is part of the human condition).
  2. We cannot change the human condition, but we can change the conditions under which humans work.
  3. Moving from a Person Approach to a System Approach to failure helps move from ‘blameworthy’ to ‘blameless’ failure, and learning opportunities.

About WhatsthePONT

I'm from Old South Wales and I'm interested in almost everything. Narrowing it down a bit: cooperatives, social enterprises, decent public services, complexity science, The Cynefin Framework, behavioural science and a sustainable future. In 2018/19 I completed a Winston Churchill Travelling Fellowship, looking at big cooperative enterprises and social businesses in NE Spain and the USA. You can find out more here: https://whatsthepont.com/churchill-fellowship/

23 Responses

  1. I can’t help thinking that when Reason says “complex human systems” he really means ‘complicated human designs’. After all, the whole premise of a complex system is that it’s non-linear. Things don’t have to line up and the relationship between cause and effect can be oblique. It would be like one layer of cheese being a baked Camembert (it just slows problems down), one an American Slice (bouncing problems off in all directions) and one a thin wedge of unbreakable Parmesan. What’s more, the line of failure could be the equivalent of a hot wire that simply slices through, holes or not. That’s enough of the metaphor.

    Cognitive science has moved on in the past 25 years, and what were described as failures in human cognition are now more clearly recognised as contextual strengths, not failures. If you work in a widget factory full of machines designed for specific purposes, then we expect them to do exactly what they are supposed to do. In this context, it’s a big machine with a few annoying biological bits mucking up the teleological perfection. Health is not that.

    Health is an ecosystem, a biology with the odd stupid inert mechanical bit doing the boring stuff. In this sense, we don’t want high reliability – quantitative efficiency – out of the qualitative context. For example, consistently giving every third person an infection is highly reliable. In health we’ve suffered from the bell curve effect. NICE set up most of their advice for the middle line of an efficient normal distribution of idealised patients. What that means is that the perfectly designed best practice works perfectly for a tiny proportion of the world. The job is actually more about tailoring every decision to fit the individual. Sounds mad doesn’t it, but that’s why it takes 14 years to become a Doctor. Unlike factories and boats, in complex systems there are different outcomes, in different directions, for people with different wants and needs. In the end everybody dies, so in Reason’s terms the whole health system is one massive failure.

    Health doesn’t need to be highly reliable, like a machine – albeit some parts, like labs, radiology and theatres, are more like the ships and power stations of the research. The majority of health needs to be resilient. Going wrong is all part of being alive; the trick is, as you say, to be sensitive to the present, spot inevitable variations early and make a choice each time. It’s why zero harm campaigns don’t work. We’re in the business of harm: we exchange one harm (appendicitis) for a lesser harm (appendectomy). So harm can’t be a failure.

    Just stirring up your head ready for the conversation.
    Thanks for the mention, “axe wielding” made me laugh out loud.
