No theory forbids me to say "Ah!" or "Ugh!", but it forbids me the bogus theorization of my "Ah!" and "Ugh!" - the value judgments. - Theodor Julius Geiger (1960)

Hindsight and Foresight

As safety professionals, we want both to look back and “find causes” after an accident has occurred, and to look ahead to anticipate safety risks. Looking back can introduce biases that work against thorough investigation, on top of the political pressures and practical limitations of any investigation (Rasmussen, 1990). In hindsight we often see events as simpler, more linear and more predictable than they were (Dekker, 2014). But foresight is not immune to bias either. Both in looking back and in looking ahead we often assume that the world can be described by successive and unambiguous relationships between causes and effects. When we focus on elements of safety management systems, we need to stay aware of the context and circumstances that can influence the validity of these elements. Below, I first describe Taylor’s legacy and the systemic approach as an alternative. In the second half of this article, I apply these frameworks to foresight for safety management.

 Taylor's legacy

Built on the Newtonian paradigm, Frederick Taylor's scientific management optimizes system performance by optimizing its functional sub-components. It does so through a bureaucratic hierarchy that provides coordination and accountability through performance reviews and compliance management. Because, in Taylor's view, future states of a system can be derived from knowing its initial state and the forces acting on it, we are able to distribute tasks, exchange parts, use standard procedures and check for quality. Following this view, we organize hierarchically: management thinks and workers work (Dooley et al, 1995). Taylor’s approach was based on determinism and centralized control. Processes would run predictably and safely, step by step. Any problems would be detected, identified and remedied without disrupting the system. The output would be monitored by a staff department and checked against specified tolerances.

Scientific management has had a huge influence on the way we organize, including our safety management systems. Some safety professionals, and researchers who have become consultants (e.g. Reason, 2016, p. 110), may prefer the simple analysis methods that have emerged from scientific management, but these might come at a cost: they might not be able to give a thorough account of the systems in which they are used. Waiting impatiently for quick results, we search for a cookbook recipe from one of the gurus in our field, who are happy to take advantage of this situation (see also Dooley et al, 1995).

 A systemic approach, first in quality management

Systems theory was developed as a reaction against reductionism (von Bertalanffy, 1968).

“A fault in the interpretation of observations, seen everywhere, is to suppose that every event (defect, mistake, accident) is attributable to someone (usually the nearest at hand), or is related to some special event. The fact is that most troubles with service and production lie in the system.” - W. Edwards Deming (1982)

Deming, together with Walter Shewhart, was a founder of statistical process control and of quality management in production companies. He emphasized the importance of an integral view of any system, knowledge of the variation in it, a theory of knowledge, and psychology (Deming, 1993). The bottom line is that constantly adjusting a process in response to every non-conformity increases variation and deteriorates quality.
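
Deming illustrated this point with his well-known funnel experiment. The following is a minimal sketch of my own in that spirit (the target, the noise level and the adjustment rule are illustrative assumptions, not Deming's original setup): a process left alone shows only the system's natural variation, while a process adjusted after every deviation ends up with roughly 40% more spread.

```python
import random

random.seed(1)
TARGET, SIGMA, N = 0.0, 1.0, 10_000  # illustrative values only

def run(tamper: bool) -> float:
    """Simulate N outputs of a noisy process and return their standard deviation."""
    setting = TARGET
    outputs = []
    for _ in range(N):
        out = setting + random.gauss(0.0, SIGMA)  # common-cause variation
        outputs.append(out)
        if tamper:
            # "Tampering": move the setting to compensate for the last deviation.
            setting -= (out - TARGET)
    mean = sum(outputs) / N
    return (sum((x - mean) ** 2 for x in outputs) / N) ** 0.5

print("left alone:", round(run(tamper=False), 2))  # ~1.0, the system's own spread
print("tampered  :", round(run(tamper=True), 2))   # ~1.41, about 40% more spread
```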

The fact that Deming was most successful in Japan might have something to do with East Asian dialectics, which, unlike Western logic with its focus on unshakable identities, non-contradiction and being versus non-being, focuses on contradictions, on processes of change, and on the fact that parts only have meaning in relation to the whole (see Nisbett, 2015).

 A systemic approach for safety management

 “Risk does not correspond to any given ‘state of things’, human or nonhuman, but rather arises from the interactions between entities that give rise to a particular ‘state of things’. As a result the greater the number of interactions the greater the risk OR uncertainty. So risk intensifies in line with the number of interacting entities and the volume of interactions. Work or effort is always required to maintain these interactions in a form in which risk is not performed.” - Stephen Healy, 2004

Unplanned and unforeseen interactions can lead to disruptions and unforeseen incidents. Problems cannot simply be recognized as they emerge; hence accidents occur. Normal Accident theory states, among other things, that in loosely coupled systems there is time and flexibility to fix problems without catastrophic consequences when things go wrong; it also states that analysts should accept as a given that not all accidents are preventable (Perrow, 1984; Downer, 2010). Systems are dynamically coupled, alternately loosely and tightly (Snook, 2000). Moreover, frontline operators, management, regulators and government exert and/or experience economic, social and political pressures as well as individual work pressures. Systemic research can help us analyze, learn and change, and forgive those involved (Kee et al, 2017).
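
To make Healy's point about the volume of interactions more tangible, here is a back-of-the-envelope illustration of my own (not taken from the cited sources): if every pair of entities in a system can in principle interact, the number of potential pairwise interactions grows roughly quadratically with the number of entities, which suggests why the work needed to keep interactions "in a form in which risk is not performed" grows so quickly as systems grow.

```python
# Illustrative arithmetic only: potential pairwise interactions among n entities.
def pairwise_interactions(n: int) -> int:
    return n * (n - 1) // 2  # each unordered pair counted once

for n in (5, 10, 50, 100):
    print(f"{n:>3} entities -> {pairwise_interactions(n):>4} potential interactions")
# 5 -> 10, 10 -> 45, 50 -> 1225, 100 -> 4950
```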

If we want to help people to work safely, our starting point is to investigate the dynamics of work relationally: as researchers, together with the people who work in the system, we describe the functions, the factors and the relationships between the functions within the system (Hollnagel, 2012). We then test the model in practical situations. Make no mistake: such a complex model description does not provide a neat standard. It is also more difficult work than evaluating generic, predefined components (Haavik, 2014).
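
As a very rough sketch of what such a relational description can look like (my own simplified illustration; the function names and the needs/produces structure are assumptions for the example, not Hollnagel's FRAM aspects or tooling), one can list the functions of everyday work and derive the couplings between them, as a starting point for discussing with practitioners where variability in one function can propagate to another:

```python
from collections import defaultdict

# Minimal, illustrative description of work as a set of coupled functions.
functions = {
    "plan shift":        {"needs": ["staffing info"],                "produces": ["work plan"]},
    "prepare equipment": {"needs": ["work plan"],                    "produces": ["ready equipment"]},
    "execute job":       {"needs": ["work plan", "ready equipment"], "produces": ["completed job"]},
}

# Index which function produces which item.
producers = defaultdict(list)
for name, f in functions.items():
    for item in f["produces"]:
        producers[item].append(name)

# A coupling exists where one function's output is another function's input;
# these couplings are where variability can propagate through the system.
for name, f in functions.items():
    for item in f["needs"]:
        for source in producers.get(item, ["<outside the model>"]):
            print(f"{source} --[{item}]--> {name}")
```

Even such a toy description makes dependencies visible that an evaluation of isolated, predefined components would not reveal, which is part of why this work is more demanding.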

It is understandable that safety professionals, who like everyone else trade off efficiency and thoroughness, use simpler models like RCA, STEP and MORT in certain circumstances. Examples of more complex models are systemic and dynamic models like STAMP (Leveson, 2011), Accimap (Rasmussen and Svedung, 2000) and FRAM (Hollnagel, 2012). Because effective use of systemic tools requires considerable theoretical and multidisciplinary knowledge, and because preparing an accident report with these models takes a long time, they have not yet been widely adopted (Underwood and Waterson, 2012). This may also be because organizations continue to look for simple ways to identify “errors” as a means of social control (Cook and Nemeth, 2010).

 Now, let’s look at the origins of foresight.

 Origins of foresight

 The mathematician and philosopher Alfred North Whitehead described the elements of foresight:

- recognizing developments before they become trends;

- seeing patterns before they fully emerge;

- understanding relevant characteristics of social currents that are likely to determine the direction of future events;

- looking for generality where there is variation;

- looking for variation where there is generality (Whitehead, 1933).

 “Foresight is the ability to see through apparent confusion” - Alfred North Whitehead.

The philosopher John Dewey stated that the purpose of foreseeing consequences is to determine the meaning of current activities and, as far as possible, to safeguard those activities according to that meaning. It is about constantly and systematically assessing current desires and habits for their tendency to produce certain consequences. Original expectations are confronted with ultimate results, and expected futures are confronted with current practices (Dewey, 2012 [1921]).

 Weak signals

Weak signals, or precursors of accidents, have been identified as instrumental for proactive safety management in High Reliability Organizations theory (Sutcliffe, 2011) and, along similar lines, in earlier work, for example that of Herbert W. Heinrich (Heinrich, 1929). In HROs, people actively look for surprises and for weak signals that the system is acting in unexpected ways. They try to recreate an outage or minor problem in a safe-to-fail way so that they learn more about their system and about what can interrupt it.

Organizations already struggle to learn from accidents and to take preventive and innovative measures that should prevent recurrence (Waring, 2015). How can they foresee a type of accident that they have not yet experienced? Barry Turner described the stages of events associated with the failure of such foresight (Turner, 1976): initially, there are culturally accepted beliefs about the world and its dangers. Associated precautionary standards are laid down in laws, codes of practice, morals and folk customs. An unnoticed chain of events accumulates, contrary to the accepted beliefs about dangers and the standards for their avoidance. Only when disaster strikes does it attract attention and change general perceptions.

The communication problems and unheeded warnings that Turner considers central to the stages leading up to a disaster can only be identified afterwards (Perrow, 1981). Significant noise mixes with possible warning signs and masks the warnings; they can only be distinguished from normal signals and false warnings in hindsight (Gephart, 1984).

In her PhD thesis, Eve Guillaume identified organizational factors that block the treatment of weak signals (Guillaume, 2011). Data on latent and recurring technical failures, weaknesses in work coordination and underestimation of risks were ignored by the organizations she studied, a steel-making plant and a refinery. This information was filtered out because it was not predefined in the safety management tools. The fragmented organizational structure appeared to contribute to this: complex prescribed rules made adjustments necessary and were a fertile breeding ground for informal practices, including the persistent weakness of the signals. Weak signals do not appear to be inherently weak, but are made or kept weak by the way they are handled by the company's learning systems and by their ability to detect, code, transmit and process signals. Diane Vaughan made similar observations about the Challenger launch decision (Vaughan, 1997).

 Foresight as strategic and tactical conversation

McKelvey and Boisot (2009) view foresight as a form of strategic conversation and as a vehicle to promote learning in an organization. Jean-Christophe Le Coze illustrates the important place of strategy in safety work: strategic economic, political, social and technological actions result in technological and organizational changes at different levels. These actions are important because they often affect the design and/or implementation of safety barriers. A safety department or function monitors this situation by processing signals, and challenges the organization, supported by external or internal safety assessments (Le Coze, 2013). Foresight regarding safety is about tactically coordinating information flows and actions in the network system (e.g. Wetmore, 2008). It is also about anticipating future operating conditions, reviewing risk models, keeping reserve resources available to respond to what arises, and learning proactively about the boundaries of the system, gaps in understanding, trade-offs and reprioritization (Hollnagel, 2017; Provan et al, 2020).

According to David Provan and colleagues, in their article about Safety-II professionals, the safety professional has to provide foresight based on empirical data, by leveraging his or her domain knowledge and deep understanding of the organization (Provan et al, 2020). Resilience implies reacting to and relying on variety. Instead of attempting to guard against every evil, only the most likely or most dangerous would be covered, with the full expectation that whatever was missed would be countered as and after it occurred (Douglas and Wildavsky, 1982). The organization would seek to learn from the situations in which surprises and new information emerged and in which it was able to revise its plans and models and adapt successfully. The safety professional supports the organization in understanding how this successful adaptation takes place, what information and resources are used, how they are interpreted and deployed, and what further capabilities are essential in these situations (Provan et al, 2020).

 Facilitating foresight

Just as hindsight has its advantages and pitfalls, foresight is not without pitfalls either. Foresight carries the risks of connecting unconnected things, struggling with randomness, and taking shortcuts that go wrong (Vyse, 2014). Due to change, contradiction and multiple influences, our knowledge about systems remains uncertain (Nisbett, 2015). The safety professional has to be relatively independent, cognitively, socially and organizationally, in order to fulfill this foreseeing role (Perrow, 1999; Woods, 2005).

When the safety department or function wants to provide more foresight, the people involved must have critical thinking skills: they cannot hold onto ideas about risk out of stubborn loyalty or because someone they regard highly says so. Anticipating well also means we don't do something just because it seems reasonable or “makes sense”. It requires theory-building and hypothesis-testing (Vyse, 2014) through safe-to-fail experimentation (Snowden and Boone, 2007). For strategic developments, using a systemic model like FRAM, Accimap or STPA might help.

 Helping people to be successful

To avoid the pitfalls of hindsight as much as possible and to maximize the benefits of foresight, safety management should focus more on helping the organization to be successful. Line management should provide empowerment and resources so that the safety professional can investigate precarious work situations, challenge organizational goals in order to maintain safety margins, and question and investigate technical specialists and all levels of management (Provan et al, 2020). This role shift from control officer to organizational guide requires a number of additional skills, e.g. conducting diagnostic, confrontational and process-oriented inquiry (Schein, 2011).

 Conclusion

In our complex systems, like organizations and the “safety market” as a whole (Le Coze, 2019), cause and effect can only be deduced in retrospect, without any guarantee of their validity. Nor can we simply know the future state by knowing the current state and the influences acting on it. We must manage systems to stay in control and at the same time learn through (safe-to-fail) experimentation. This requires a helping mindset rather than just a compliance mindset; it also requires additional skills from the safety professional, and it requires management to empower the safety professional to be successful in this new role.

 Discussion

Our knowledge of safety in complex organizations is far from complete, and it never will be. Rather than converging on a set of normative practices through ISO standards, organizations can actively experiment with different approaches and learn from and communicate about theory and practice (Dooley et al, 1995). Paraphrasing Kenneth Burke: any way of seeing safety is also a way of not seeing safety (Burke, 1935). Any terminology with which we want to say something about safety implies a selection and a deflection of reality (Burke, 1966). We can ask ourselves which aspects of safety we pay attention to and which we avoid, and which actions and values our way of looking at safety makes possible and which it makes impossible. Those are important questions, both in hindsight and in foresight.

 Acknowledgements

I would like to thank Carsten Busch and Jos Villevoye for their helpful suggestions.

 References

 o   Bertalanffy, L. von (1968), General System Theory as Integrating Factor in Contemporary Science and in Philosophy, in: Akten des XIV. Internationalen Kongresses für Philosophie, Kolloquien, pp. 335-340.

o   Burke, K. (1935), Permanence and Change – An Anatomy of Purpose, New York: New Republic Inc.

o   Burke, K. (1966), Language as Symbolic Action, Oakland: University of California Press.

o   Cook, R.I., Nemeth, C.P. (2010), “‘Those found responsible have been sacked’: some observations on the usefulness of error”, in: Cognition, Technology & Work, 12: 87-93.

o   Dekker, S. (2014), The Field Guide to Understanding ‘Human Error’, Third Edition, Farnham, Surrey, England: Ashgate.

o   Deming, W.E. (1982), Out of the Crisis, Cambridge Massachusetts: MIT Center for Advanced Engineering Study.

o   Deming, W.E. (1993), The New Economics for Industry, Government, Education, Cambridge Massachusetts: MIT Center for Advanced Engineering Study.

o   Dewey, J. (2012 [1921]), Human Nature and Conduct, Mineola NY USA: Dover Publications.

o   Dooley, K., Johnson, T., Bush, D. (1995), “TQM, Chaos, and Complexity”, in: Human Systems Management, 14(4): 1-16.

o   Douglas, M., Wildavsky, A. (1982), Risk and Culture, Berkeley: University of California Press.

o   Downer, J. (2010), Anatomy of a Disaster: Why Some Accidents Are Unavoidable, DISCUSSION PAPER NO: 61, Economic & Social Research Council, London: London School of Economics and Political Science.

o   Gephart, R.P. (1984), “Making Sense of Organizationally Based Environmental Disasters”, in: Journal of Management 10 (2) pp. 205-225.

o   Guillaume, E.M.E. (2011), Identifying and responding to weak signals to improve learning from experiences in high-risk industry – Dissertation, Delft, the Netherlands: Delft University of Technology.

o   Haavik, T.K. (2014), “On the ontology of safety”, in: Safety Science, Volume 67, August 2014, Pages 37-43.

o   Healy, S. (2004), “A ‘post‐foundational’ interpretation of risk: risk as ‘performance’”, in: Journal of Risk Research, 7:3, pp. 277-296.

o   Heinrich, H.W. (1929), “The Foundation of a Major Injury”, in: National Safety News, 19 (1): 9-11.

o   Hollnagel, E. (2012), FRAM: The Functional Resonance Analysis Method: Modelling Complex Socio-technical Systems, Boca Raton FL: CRC Press.

o   Hollnagel, E. (2017), Safety-II in Practice – Developing Resilience Potentials, Routledge.

o   Kee, D., Jun, G.T., Waterson, P., Haslam, R. (2017), “A systemic analysis of South Korea Sewol ferry accident – Striking a balance between learning and accountability”, in: Applied Ergonomics, Vol. 59, Part B, March 2017, pp. 504-516.

o   Le Coze, J.C. (2013), New models for new times: An anti-dualist move, in: Safety Science, Vol. 59, pp. 200-218.

o   Le Coze, J.C. (2019), How safety culture can make us think, in: Safety Science, Volume 118, October 2019, Pages 221-229.

o   Leveson, N.G. (2011), Engineering a Safer World: Systems Thinking Applied to Safety, Cambridge MA: The MIT Press.

o   McKelvey, B. and Boisot, M. (2009), “Redefining strategic foresight: ‘fast’ and ‘far’ sight via complexity science”, in: Costanzo, L.A., MacKay, R.B. (eds.), Handbook of Research on Strategy and Foresight, Cheltenham, UK: Edward Elgar.

o   Nisbett, R.E. (2015), Mindware – Tools for Smart Thinking, New York: Farrar, Straus and Giroux.

o   Perrow, C. (1981), Normal accident at Three Mile Island, in: Society, July/August, 17-26.

o   Perrow, C. (1999 [1984]), Normal Accidents – Living with High-Risk Technologies, Revised Edition, Princeton NJ: Princeton University Press.

o   Perrow, C. (1999), Organizing to Reduce the Vulnerabilities of Complexity, in: Journal of Contingencies and Crisis Management, Vol. 7, No. 3, pp. 150-155.

o   Provan, D.J., Woods, D.D., Dekker, S.W.A., Rae, A.J. (2020), Safety II professionals: How resilience engineering can transform safety practice, in: Safety Science, Volume 195, March 2020, 106740.

o   Rasmussen, J. (1990), Human error and the problem of causality in analysis of accidents, in: Philosophical transactions of the Royal Society of London, Series B, Biological sciences, Volume 327, Issue 1241, 12 Apr 1990, Pages 449-460; discussion 460-462.

o   Rasmussen, J., Svedung, I. (2000), Proactive Risk Management in a Dynamic Society, Karlstad Sweden: Swedish Rescue Services Agency.

o   Reason, J. (2016), Organizational Accidents Revisited, Boca Raton FL: CRC Press.

o   Schein, E. (2011), Helping: How to Offer, Give and Receive Help, Berrett-Koehler Publishers Inc.

o   Snook, S.A. (2000), Friendly Fire: The Accidental Shootdown of U.S. Black Hawks Over Northern Iraq, Princeton NJ: Princeton University Press.

o   Snowden, D.J., Boone, M.E. (2007), A Leader’s Framework for Decision Making, in: Harvard Business Review, November 2007.

o   Sutcliffe, K.M. (2011), “High Reliability Organizations (HROs)”, in: Best Practice & Research Clinical Anaesthesiology, Vol. 25, Issue 2, pp. 133-144.

o   Turner, B.A. (1976), The Organizational and Interorganizational Development of Disasters, in: Administrative Science Quarterly, Vol. 21, No. 3, (Sep. 1976), pp. 378-397.

o   Underwood, P., Waterson, P. (2012), A critical review of the STAMP, FRAM and Accimap systemic accident analysis models, in: Stanton, N. (Ed.), Advances in Human Aspects of Road and Rail Transportation, Chapter: 39.

o   Vaughan, D. (1997), The Challenger Launch Decision: Risky Technology, Culture and Deviance at NASA, Chicago: The University of Chicago Press.

o   Vyse, S. (2014), Believing in Magic – The Psychology of Superstition, updated edition, Oxford/New York: Oxford University Press.

o   Waring, A. (2015), Managerial and non-technical factors in the development of human-created disasters: A review and research agenda, in: Safety Science, Volume 79, November 2015, Pages 254-267.

o   Wetmore, J.M. (2008), Engineering with Uncertainty: Monitoring Airbag Performance, in: Sci Eng Ethics (2008) 14: 201–218.

o   Whitehead, A.N. (1933), Adventures of Ideas, The Free Press.

o   Woods, D.D. (2005), “Creating Foresight: Lessons for Enhancing Resilience from Columbia”, in: Starbuck, W.H., Farjoun, M. (Eds.), Organization at the Limit: Lessons from the Columbia Disaster, Wiley-Blackwell.