EEJIT-PROOF ORGANISATIONS?

Tricky questions are my favourite sort. They stimulate the grey matter and get me thinking. Courtesy of a new colleague in a networking group, I got a pearler earlier this month.

He positioned his question in the context of solving the burgeoning crises associated with fossil fuels. He wonders, as do I, whether nuclear power is the solution.

Like many people, I have a real issue with conventional atomic fission. It produces dangerous waste materials that are too often weaponised. Moreover, the impact of nuclear accidents is severe (e.g., Chernobyl, Fukushima Daiichi). My reading of contemporary science is that fusion, the cleaner, greener sibling of fission, will be in relatively common use in my lifetime (I’m 58 at the time of writing).

All the same, the generation of nuclear power through whatever means is one of those things that has to be unquestionably ‘eejit-proof.’ Organisations such as those generating nuclear power are prone to what Charles Perrow termed ‘normal accidents’: unanticipated interactions of multiple failures in systems that are both complex and tightly coupled. Examples of ‘normal accidents’ include the Three Mile Island nuclear incident, the Challenger and Columbia shuttle disasters, the Bhopal chemical leak, the Tenerife air crash, the Mann Gulch forest fire, and the Black Hawk friendly-fire incident in Iraq.

Where organisations succeed in avoiding such incidents while sharing the same broad characteristics as their less fortunate counterparts, they’re known as ‘high-reliability organisations’ (HROs). The term was developed by a group of scholars including Karlene Roberts, Todd La Porte, Gene Rochlin, Karl Weick and Kathleen Sutcliffe (I acknowledge their work in this post). HROs are characteristically defined by:

  1. Hypercomplexity with an extreme variety of components, systems, and organisational levels or layers.
  2. Tight coupling, wherein many organisational units or layers exhibit reciprocal interdependence.
  3. Extreme hierarchical differentiation, with multiple organisational levels, each having its own elaborate control and regulating mechanisms.
  4. Complex communication networks that connect large numbers of decision-makers, and which are characterised by redundancy in control and information systems.
  5. A degree of accountability that rarely exists in other organisations, with deviations from standards attracting severe adverse consequences.
  6. Decision-making that attracts high-frequency, rapid feedback.
  7. Significant activities measured in seconds.
  8. Multiple critical outcomes that must happen simultaneously, signifying complex operations and considerable difficulty in reversing or modifying operational decisions.

Typical HROs include air traffic control systems, naval aircraft carriers, and nuclear power operations. I’ve also applied these principles to fire and rescue services in the UK, and I believe they potentially apply to a range of industrial and commercial settings. Wherever reliability is central to operational success (e.g., airline operations, large-scale financial services, health care), these ideas are appropriate.

So how do they stop the eejits and what can we learn from them? 

HROs embody ‘organisational mindfulness’ in that they must successfully operate ‘here and now.’ There can be no excuse for dwelling in the past or looking too far ahead. It’s now that matters. 

They share five characteristics:

  1. HROs expect and seek out failure. They treat anomalies as symptoms of a systemic problem. Latent organisational weaknesses that contribute to small errors can also develop into more significant issues. Hence, mistakes are promptly reported so problems can be found and fixed.
  2. HROs are reluctant to simplify interpretations. They deliberately and systematically seek a comprehensive understanding of their complex operating environment. Hence, they look across system boundaries to determine the path of problems. They look for where problems started and where they might end up. A diversity of experience and opinions is valued.
  3. HROs are sensitive to operations, and particularly to changed conditions. Situational awareness is critical to HROs. They monitor their systems’ safety and security, ensuring that barriers and controls remain in place and operate as designed.  
  4. HROs are committed to resilience. They develop capabilities to detect, contain, and recover from errors. Errors will happen, but they do not paralyse HROs.
  5. HROs defer to expertise. They follow the conventional communication hierarchy during routine operations. However, they defer to experts to solve problems during irregular events. During a crisis, front-line staff make decisions, and authority migrates to the person who can solve the problem, regardless of their position in the organisation.

Superficially, this looks like a massive commitment. However, if we pare back the scale of ‘typical’ HROs, we can see things that all organisations, regardless of size, should aspire to:

  1. Learn from small moments of truth.
  2. Look for the actual cause of problems; don’t be satisfied with the obvious.
  3. Be aware; all business contexts change quite quickly nowadays.
  4. Be resilient. Work on mental toughness.
  5. Listen to experts, especially those working directly with customers.

If you can’t be an HRO, at least be an RSO (reliability-seeking organisation). It’s a chance to control the eejits that blight organisational life.