ORC HSE Book Review:
“How Could This Happen?”

Managing Errors in Organizations
If you feel happy, is it time to be concerned? You have likely seen the video of singer Pharrell Williams’ infectious song “Happy,” with its refrain “Because I’m happy.” When it comes to safety, we may need to sing a different tune.
How Could This Happen? Managing Errors in Organizations is a compilation of 17 papers by authors with wide-ranging approaches to error management.
Edmondson and Verdin quote John Carroll’s research that in many organizations “workers are worried, supervisors are concerned, managers are mixed, and executives are happy!” Carroll’s observation reminds me of the Iceberg of Ignorance frequently presented by Allergan’s David Eherts. Carroll’s and Eherts’ ideas strike me as two sides of the same coin. What do you think?
Important information about safety problems and near-miss events is often not shared with senior officials in an organization. Fear of negative consequences can deter reporting and sharing. In some cases, bad news is shared upwards but not acted upon, which also discourages reporting.
Carroll explains that “many reporting systems are ‘black holes’ from the viewpoint of the workforce ‒ reports go in and nothing comes out. … The feeling is that they were not heard, not respected, not appreciated. … Management does not listen; management does not care.” Effective worker engagement in safety, which is a key aspect of the Safety Differently model, is not possible when ineffective reporting systems breed cynicism.
Lessons from Tenerife
Giolito and Verdin examine the 1977 Tenerife air disaster, in which two Boeing 747 jets operated by KLM and Pan Am collided, killing nearly 600 people. The authors explain that in the airline industry, “safety first” is a cardinal rule. Detailed procedures back up the industry’s commitment to this goal. One example is the requirement to double-check that the plane is cleared for takeoff. But when the KLM captain mistakenly took off without clearance, the flight crew failed to speak up. The captain was one of the most respected pilots at KLM; the crew complied with the rule of hierarchy rather than the safety rule. As this demonstrates, in the real world, safety goals compete with other goals such as efficiency and production.
One outcome of the tragedy was a new emphasis on crew resource management training to ensure that less senior people will challenge the captain when they have a safety concern.
Lessons from the Davis-Besse near-disaster
Dechy, Dien, Marsden, and Rousseau explain that learning from incidents should include not just events in your own organization, but in other organizations including those not in your business sector. The 2002 incident at the Davis-Besse nuclear power plant in Ohio provides lessons for many organizations.
A cavity the size of a football was discovered in the head of the plant’s reactor vessel. Over several years, corrosion from boric acid had eroded nearly seven inches of the steel shell; only two inches remained. Key underlying organizational causes included:
A production focus rather than a safety orientation;
Lack of management of organizational changes;
Lack of learning from internal and external events;
Excessive reliance on inspections;
Normalization of excessive corrosion problems;
Corrective actions only addressed symptoms; and
Complacency by the safety regulator (the Nuclear Regulatory Commission) despite ongoing severe problems.
The authors note that the investigation of the 2005 BP Texas City oil refinery disaster uncovered many similarities, notably a failure to learn from past incidents. Effective learning from problems and mistakes requires “an open and just culture in which blame is considered to be counterproductive.”
Is Safety Really First?
When management slogans such as “safety first” do not match a company’s actions, safety messages lose their credibility and become counterproductive. The friction between safety and production is a complex dynamic and distinctions are easily blurred.
Dechy et al. observe that “performance pressures and individual adaptation push systems in the direction of failure and lead organizations to gradually reduce their safety margins and take on more risk.” This process often is imperceptible, taking place slowly over an extended period. “As these steps are usually small, they often go unnoticed. A ‘new normal’ is repeatedly established, and no significant problems are noticed until it is too late.”
Safety and the Burden of Proof
Investigations of the Challenger and Columbia space shuttle disasters found that the way organizations such as NASA assign the burden of proof for safety is important. At NASA, the burden of proof rested with safety staff to prove that production (launches) must be stopped. There was no corresponding requirement for production staff to prove that all was well or that a problem would only result in a fail-safe situation.
In this context, past success can blind organizations to the seriousness of potential safety problems. When it comes to infrequent but high-consequence accidents, the absence of past serious incidents is a poor indicator of future success.