No theory forbids me to say "Ah!" or "Ugh!", but it forbids me the bogus theorization of my "Ah!" and "Ugh!" - the value judgments. - Theodor Julius Geiger (1960)

Disowning Fukushima

The occurrence of a disaster doesn't just signal a change in our world; it's a stark realization that what we believed about that world may have been fundamentally wrong. John Downer's "Disowning Fukushima" dives deep into this shift, suggesting that disaster investigations aren't just about preventing future mishaps: they're also a desperate attempt to reaffirm our existing cultural beliefs against the disruption to our understanding.

In 2011, the Fukushima nuclear disaster caught the world off guard. Nuclear authorities disassociated themselves from the event to protect their credibility. The disaster's unexpectedness reveals a fundamental societal blindness to nuclear risks, unlike other industries where accidents are anticipated. The lack of preparedness at Fukushima, both in plant procedures and in the broader bureaucratic response, points to a systemic issue across nations rather than a solely Japanese problem. Disaster planning tends to be insincere and insufficient, as illustrated by the clustering of reactors and the inadequate emergency guidelines. Even when plans exist, they lack institutional authority, leading to negligence in their implementation.

There was a paradoxical lack of preparedness for nuclear disasters despite the recent emphasis on resilience in policymaking for other threats, such as terrorism or flooding. The institutional confidence that made contingency planning for nuclear accidents seem unnecessary is rooted in the belief that nuclear risks, unlike other threats, are objectively calculable and entirely preventable.

Regulatory agencies shifted their focus from assessing the consequences of nuclear accidents to emphasizing the likelihood of such events, aiming to distinguish "credible" from merely "hypothetical" accidents. This transition was marked by a turn to probabilistic reliability analysis, epitomized by the influential 1975 WASH-1400 study. Despite skepticism within the scientific community about the adequacy of probabilistic approaches, these methods became central to nuclear regulatory oversight. Today, almost all nuclear regulatory decisions rely on Probabilistic Risk Assessment (PRA), whose calculations put the likelihood of severe accidents so low that they legitimize nuclear power in public discourse. Despite criticism and the recognized imperfections of these approaches, probabilistic risk assessment remains entrenched in nuclear regulation, shaping policy discussions and downplaying the costs and risks of nuclear accidents.
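
To see why PRA can produce such reassuringly small numbers, consider a minimal fault-tree sketch in the style such studies use (an illustration only: the scenario and all probabilities below are invented assumptions, not figures from Downer or any real plant):

    # Minimal PRA-style fault-tree sketch (hypothetical numbers, not real plant data).
    # In such a model a severe accident requires several safety barriers to fail
    # together; if the failures are assumed independent, their probabilities multiply.

    initiating_event = 1e-2    # assumed yearly frequency of, e.g., losing offsite power
    backup_power_fails = 1e-3  # assumed chance the diesel generators then fail to start
    cooling_fails = 1e-3       # assumed chance emergency cooling is also unavailable

    core_damage_per_year = initiating_event * backup_power_fails * cooling_fails
    print(f"Modeled core-damage frequency: {core_damage_per_year:.0e} per reactor-year")
    # -> 1e-08: once per hundred million reactor-years, on paper. The tiny result
    # rests on the independence assumption; a common cause (a tsunami that floods
    # the generators and the cooling pumps at once) silently breaks it.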

There is a disconnect between nuclear engineers' nuanced discussions of the limitations of probabilistic assessments and the dialogues of the public and policymakers, where such nuances are largely absent. Although experts are aware of the calculations' limitations, institutional practices uphold an "ideal of mechanical objectivity," promoting the notion that nuclear risk can be precisely quantified and guaranteed.

There is a prevalent belief among regulatory bodies and the industry that complex technological properties like risk and reliability can be objectively known through mathematical calculation. This idealization of objectivity characterizes most modern technological discourse and is especially prominent in the nuclear context, where the severity of potential accidents demands declarative assurances and calculative certainty. Sociological perspectives criticize this mechanical notion of technology assessment: critics argue that such assessments limit discussion of social priorities and that formal risk calculations often exclude other relevant considerations. Normal Accident Theory, for instance, holds that accidents arising from improbable events are probable in systems that offer numerous opportunities for such events to occur.
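
The arithmetic behind that last claim is simple. Here is a back-of-envelope sketch (the per-reactor probability and fleet size are assumptions chosen for illustration, not data from the text):

    # Back-of-envelope illustration of Normal Accident Theory (hypothetical numbers).
    # An event that is improbable at any single plant becomes probable across a
    # large fleet observed for decades, because the opportunities multiply.

    p_per_reactor_year = 1e-4  # assumed chance of a serious accident per reactor-year
    reactors, years = 400, 40  # assumed fleet size and observation window

    p_at_least_one = 1 - (1 - p_per_reactor_year) ** (reactors * years)
    print(f"P(at least one accident in {years} years): {p_at_least_one:.0%}")
    # -> roughly 80%: individually improbable, collectively probable.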

Technological knowledge has an irreducible epistemological ambiguity, especially in the realm of complex, safety-critical systems. Why do the Fukushima disaster, and the broader history of nuclear accidents, fail to speak for themselves, despite sociological arguments highlighting the insufficiency of nuclear risk calculations?

Nuclear experts enjoy an enduring credibility despite the failures of nuclear risk assessments. Contrary to the typical assumption that crises damage the credibility of expertise, nuclear disasters like Windscale, Three Mile Island, Chernobyl, and Fukushima haven't significantly undermined public faith in nuclear risk calculations. Even though Fukushima led to policy changes in Japan and Germany, it didn't substantially erode trust in nuclear risk assessments globally. Despite the accidents, forecasts continued to anticipate a resurgence in nuclear power: many countries were planning or constructing new reactors, and by 2012, projections suggested a significant global increase in nuclear energy use.

This scenario is like the recurring Peanuts gag in which Lucy convinces Charlie Brown that this time she will hold the football, only to pull it away again. Similarly, the nuclear establishment seems adept at explaining away past failings without denting the credibility of its assurances. Despite the undeniable impact of past accidents, the narrative of calculative certainty persists, allowing nuclear reliability calculations to retain credibility even in the aftermath of disasters like Fukushima.

Various defenses are employed to rationalize the Fukushima disaster without undermining the credibility of nuclear reliability assessments. Much as scientific theories are often retained despite conflicting evidence, four core arguments were used to shield reliability assessments from the disaster:
(1) The Interpretive Defense denies the failure by exploiting the ambiguity of what counts as "failure" in a nuclear plant. If an incident occurs but doesn't breach certain safety parameters, experts can argue that it doesn't qualify as a failure.
(2) The Relevance Defense claims that the failure of one assessment doesn't discredit the others, treating the failure as an isolated incident rather than a reflection on the entire system of reliability assessment.
(3) The Compliance Defense holds that the assessments were sound but that people failed to adhere to the rules or protocols, attributing the failure to human error rather than to the assessments themselves.
(4) The Redemption Defense concedes that past assessments were flawed but insists that they have since been rectified, implying that present assessments are trustworthy.

The Interpretive Defense leans in particular on the concept of "beyond design basis": events that exceed the preconceived design parameters of a nuclear plant, experts claim, cannot be counted as failures. For example, if a flood surpasses the estimated maximum, engineers might argue that the plant didn't fail, since the calculations never claimed to cover such extreme conditions.

The defense further argues that Fukushima's catastrophic consequences didn't technically breach the plant's expected limits. It points to the tsunami's unprecedented scale, framing the disaster as an "Act of God" and arguing that the plant functioned as designed, or at least within its designed capabilities, despite the overwhelming natural event. Experts and authorities used terms like "success" and "extremely unlikely event" to downplay the failures in the face of extraordinary circumstances.

The Relevance Defense seeks to isolate the failed assessment, suggesting it's not representative of assessment practices as a whole. It often highlights differences in technology or regulatory environments between the affected plant (Fukushima) and others to imply uniqueness in the failed assessment.

The Compliance Defense centers on attributing the disaster to malfeasance or incompetence on the part of operators and overseers. This narrative portrays the incident as a "man-made" disaster caused by human error, emphasizing the noncompliance of those responsible for operations.

Alongside this cataloguing of errors and shortcomings runs a consistent theme: the problems are fixable. The narrative holds that the issues exposed by the Fukushima disaster can be addressed by periodically re-evaluating safety measures and adopting evolving best practices.

These defenses serve to minimize the disaster's impact on broader assessment practices by attributing the failure to specific circumstances, whether technological, regulatory, or human. The arguments reinforce the idea that the disaster was an exception, not a reflection of inherent flaws in nuclear reliability assessments, and that improvements and corrective measures can prevent similar incidents in the future.

The correctional approach acknowledges errors in the risk calculations but emphasizes that experts have since identified and rectified them, pointing to the reviews undertaken after Fukushima and the purported improvements in assessment practices.

While acknowledging some truth in the redemption narratives (such as genuine differences in assessment practices), Downer questions whether they can fully redeem the assessment failures exposed by Fukushima.

Downer scrutinizes the interpretive defense, with its ambiguous definition of failure and its "beyond-design-basis" arguments, and criticizes the notion that the Fukushima incident doesn't constitute an assessment shortfall simply because the event lay beyond the plant's design basis.

Downer challenges the notion of Fukushima's exceptionalism, arguing against claims that the disaster was an exception rather than a reflection of broader assessment practices. He questions assertions that Fukushima's unique circumstances make it irrelevant to other reactor designs and assessment practices.

Downer also critically examines the compliance defense, with its emphasis on human error and malpractice, questioning the narrative that perfectly compliant operators could have avoided disaster. He delves into the complexities of human behavior within complex systems, suggesting that even the most stringent rule compliance doesn't ensure safety.

Downer draws parallels between previous nuclear accidents (Three Mile Island, Chernobyl) and Fukushima, highlighting how error attribution, though present, doesn't absolve flawed risk assessments. He challenges the premise that perfect assessments are achievable if everyone behaves rationally and follows rules.

Downer discusses the inherent unpredictability of human behavior within complex systems, citing academic literature on human error in technological contexts. He emphasizes that noncompliance and malpractice are not exceptional in such environments, challenging the assumption that assessments can rely on perfect compliance for their accuracy.

The ambiguity and interpretive nature of the rules governing complex systems are also highlighted: even the most comprehensive regulations require interpretation. Downer argues that rule ambiguity and the unpredictability of human performance are significant problems for nuclear risk assessments.

The narrative of having found and fixed the errors post-Fukushima is likewise questioned; Downer argues against the promise of complete error eradication, challenging the notion that lessons learned from accidents can yield perfect risk assessments and calling the expectation of perfection in nuclear safety unrealistic.

Downer discusses the limitations of error margins in risk assessments, noting that unexpected events ("unknown unknowns") can disrupt calculations even when margins are generous. He suggests that certain events might challenge nuclear plants beyond any anticipated margin, questioning the adequacy of the risk assessments.
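
The margin problem can be made concrete with a small sketch (the scenario names and numbers are invented for illustration): a safety factor scales only the risks a model already contains, so an omitted scenario contributes nothing no matter how generous the factor.

    # Why error margins don't cover "unknown unknowns" (hypothetical numbers).
    modeled = {"station_blackout": 1e-6, "large_pipe_break": 1e-7}  # assumed risks/year
    safety_factor = 10  # a generous margin applied to every modeled scenario

    conservative_total = safety_factor * sum(modeled.values())
    print(f"Conservative modeled risk: {conservative_total:.1e} per year")  # 1.1e-05

    # A scenario the model omits (say, a tsunami above the design-basis wave height)
    # is implicitly assigned probability zero, and no margin can widen zero:
    print(safety_factor * 0.0)  # 0.0 -> the "unknown unknown" stays invisible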

Ref.

Downer, J. (2014). Disowning "Fukushima": Managing the credibility of nuclear reliability assessment in the wake of disaster. Regulation & Governance, 8(3), 287-309.