No theory forbids me to say "Ah!" or "Ugh!", but it forbids me the bogus theorization of my "Ah!" and "Ugh!" - the value judgments. - Theodor Julius Geiger (1960)

Safety Management - Advances in supporting a fundamental need

People have a fundamental need to be and feel safe. Organizations therefore must, and want to, do their best to prevent accidents (Provan and Rae, 2020). Accidents are the result of a dynamic convergence of technology, people and organizations. While the organizational budget fixes expectations for a year or longer, organizational goals emerge from a bargaining process among groups, and some of these goals are set to make sense of unplanned adaptations and accidental developments (Perrow, 2014). Safety goals may be framed as keeping "risks as low as reasonably practicable". The risks might be calculated, or proxy measures may be used. This is part and parcel of organizational life. But safety management has some serious pitfalls we have to reckon with.

Prevention versus Success

How do you determine a risk? Whole bookcases have been written about this. Instead of a rich description of risk, a simplified measure is often chosen, such as an injury frequency rate. But what does this number say? Not every hour worked is the same as the next: in any given hour, how many chances were there that something bad would happen (Provan and Rae, 2020)? By using simplified risk measures, we sometimes forget that, in the end, managing risks is all about supporting normal work (Provan et al., 2020) in our primary processes, outside the office. How do we ensure that this work is as successful as possible, which also means safe (and timely, etc.) production?
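To make that concrete, here is a minimal sketch (in Python) of how an injury frequency rate is typically computed. The normalization per one million hours worked is one common convention (others, such as per 200,000 hours, exist), and the two sites with their exposure profiles are purely hypothetical numbers invented for illustration:

    PER_HOURS = 1_000_000  # one common convention: injuries per million hours worked

    def injury_frequency_rate(injuries: int, hours_worked: float) -> float:
        """Recordable injuries, normalized per million hours worked."""
        return injuries / hours_worked * PER_HOURS

    # Hypothetical sites: same injuries, same hours, very different work.
    # Site A: 400,000 hours of mostly routine assembly work.
    # Site B: 400,000 hours, of which 100,000 are high-hazard night shifts.
    for site, injuries, hours in [("A", 2, 400_000), ("B", 2, 400_000)]:
        print(site, injury_frequency_rate(injuries, hours))  # both: 5.0

Both sites report exactly 5 injuries per million hours, yet the hours behind the numbers expose people to very different chances of harm; the single rate cannot show that.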

Damned Lies… and Statistics

A manager can become nervous if several accidents occur at a certain location within a short period of time. The call to do something about it quickly follows. After that, things can remain quiet for a while, until several accidents again occur in a short time, and the urge to act comes up again. But accidents happen more or less randomly (they follow a Poisson distribution; Nicholson and Wong, 1993), so a cluster of accidents is sometimes nothing more than statistical noise. Of course, that is no reason to refrain from improving anything!
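A small simulation makes this visible. In the sketch below (Python; the rate of one accident per month and the 60-month window are illustrative assumptions), monthly accident counts are drawn from a Poisson distribution with a constant underlying rate, so by construction the real risk never changes; months with three or more accidents will nevertheless typically show up:

    import math
    import random

    def poisson_sample(lam: float) -> int:
        """Draw one Poisson-distributed count (Knuth's algorithm)."""
        threshold = math.exp(-lam)
        k, p = 0, 1.0
        while True:
            k += 1
            p *= random.random()
            if p <= threshold:
                return k - 1

    random.seed(42)  # reproducibility only; any seed tells the same story
    MEAN_ACCIDENTS_PER_MONTH = 1.0  # constant underlying risk

    # Five years of monthly accident counts at one location.
    counts = [poisson_sample(MEAN_ACCIDENTS_PER_MONTH) for _ in range(60)]

    # Months that would trigger a nervous manager, despite an unchanged rate.
    clusters = [(month, c) for month, c in enumerate(counts, start=1) if c >= 3]
    print(counts)
    print("Months with 3+ accidents:", clusters)

With a true mean of one accident per month, the chance of three or more accidents in any single month is about 8 percent, so a five-year record almost always contains a few alarming-looking clusters that reflect nothing but chance.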

The Shock Wave of a New Worldview

By now almost everyone has heard of Safety-I and Safety-II. Every revolution in our worldview generates emotional and existential shock waves, and this also applies to Safety-I and Safety-II. There are people who say they have something better in store, e.g. Safety-III. There are also people who say that Safety-II is old wine in new bottles: "we wrote that more than thirty years ago". People prefer to stick to their own doctrine and keep desperately winning souls for it.

Erik Hollnagel already indicated in 2014 that the way ahead is a combination of existing Safety-I practices with the complementary practices of Safety-II (Hollnagel, 2014, p. 178). Some Safety-I approaches could be replaced with Safety-II approaches, either for greater safety gains or because the latter are more social in nature. Problems the "old" Safety-I approaches left unsolved can be solved with Safety-II, while the many problems Safety-I has already solved remain solvable in the future (see Pigliucci, 2018, p. 235). This advance in safety science is thus of the normal, cumulative sort (see Kuhn, 2002, p. 12), so the baby does not have to be thrown out with the bathwater, as Carsten Busch aptly put it (Busch, 2019).

The Blame Game is a Dead End

The idea that humans are our biggest problem has to be overcome. This image of humans is attributed to Safety-I; it is not in the theory, but rather in certain applications and interpretations. Of course, disaster begs for an explanation. But calling someone a fool - e.g. "you can't fix stupid" - or otherwise blaming people is a stopgap, an excuse for not having to look for better explanations (Boudry, 2015). The use of "error" in management and regulation reflects its apparent value as a means of social control (Cook and Nemeth, 2010).

Although it is very human to look for patterns and to want to predict the behavior of other people, blaming does not help organizational learning. This way of looking at human behavior is a meme: a piece of cultural information that propagates itself by leaping from brain to brain via a process of imitation (Dawkins, 1976). If these kinds of moral intuitions and social rules still exist in your organization, cultural evolution is needed to create memes that are better adapted to the - always messy - organizational environment. After all, we cannot choose what to believe when we are confronted with the facts, but we can, to some extent, choose which facts to expose ourselves to (Boudry, 2015).

The Hard Lesson and the Tortuous Road

We can’t prevent all accidents by forcing people to comply with rules, and we can’t know all cause-and-effect relationships, especially when they are systemic and dynamic in nature. System performance is always variable (Hollnagel et al., 2006). Where does that leave us as safety professionals? One of the important lessons Safety-II offers us: try to prevent accidents, but while you do so, also look at where things did not go wrong. Understand why the work goes well there, and try to maintain or improve those organizational conditions so that things go well more often (Provan and Rae, 2020). That serves both the reason your organization exists and people's fundamental need to be and feel safe. It’s hard work - it often seems like a tortuous road - but in the end it’s a more gratifying journey.

Acknowledgement

Thanks to Carsten Busch for his valuable suggestions.

Sources

Boudry, M. (2015), Illusies voor gevorderden – Of waarom waarheid altijd beter is [Illusions for the Advanced – Or Why Truth Is Always Better]; Antwerpen: Polis.

Busch, C. (2019), Brave New World: Can Positive Developments in Safety Science and Practice also have Negative Sides?, in: MATEC Web of Conferences, Vol. 273, International Cross-industry Safety Conference (ICSC) – European STAMP Workshop & Conference (ESWC) (ICSC-ESWC 2018).

Cook, R.I., Nemeth, C.P. (2010), “Those found responsible have been sacked”: some observations on the usefulness of error, in: Cognition, Technology & Work, 12: 87-93.

Dawkins, R. (1976), The Selfish Gene; Oxford: Oxford University Press.

Hollnagel, E. (2014), Safety-I and Safety-II – The Past and Future of Safety Management; Farnham, Surrey: Ashgate.

Hollnagel, E., Woods, D.D., Leveson, N. (Eds.) (2006), Resilience Engineering – Concepts and Precepts; Boca Raton, Florida: CRC Press.

Kuhn, T.S. (2002), The Road since Structure: Philosophical Essays, 1970-1993, with an Autobiographical Interview; Chicago: University of Chicago Press.

Nicholson, A., Wong, Y.D. (1993), Are accidents Poisson distributed? A statistical test, in: Accident Analysis & Prevention, 25 (1): 91-97.

Perrow, C. (2014), Complex Organizations – A Critical Essay; third edition reprint; Brattleboro, Vermont: Echo Point Books & Media, LLC.

Pigliucci, M. (2018), Nonsense on Stilts – How to Tell Science from Bunk; second edition; Chicago: The University of Chicago Press.

Provan, D.J., Woods, D.D., Dekker, S.W.A., Rae, A.J. (2020), Safety II professionals: How resilience engineering can transform safety practice, in: Reliability Engineering and System Safety, Vol. 195, March 2020, 106740.

Provan, D.J., Rae, A.J. (2020), The Safety of Work Podcast – Ep. 57, 58 & 59; www.safetyofwork.com