No theory forbids me to say "Ah!" or "Ugh!", but it forbids me the bogus theorization of my "Ah!" and "Ugh!" - the value judgments. - Theodor Julius Geiger (1960)

A Guide to Luhmann's Systems Theory for HSE Management

Have you ever seen the opening credits of Monty Python and the Holy Grail? When a subtitle contains an error, the screen announces, "Those responsible have been sacked." A moment later, another announcement: "Those responsible for sacking the people who have just been sacked, have been sacked." It's absurd, but it highlights a very real tendency in organisations. When things go wrong, we look for someone to blame. This cycle of blame often prevents real learning.

My book introduces a different way of seeing safety, not as a simple matter of individual actions and errors, but as a complex product of communication systems. Using the ideas of sociologist Niklas Luhmann, I explore how organisations function, how they fail, and how we can ask better questions to understand safety in a more insightful way.


The core idea: Systems are made of communication instead of people and parts

The fundamental shift in Luhmann's systems theory is this: a social system, like an organisation or a company, is composed of communications. People, with their thoughts and intentions, are part of the system's environment. They can create communications, but it's the system itself that determines how those communications are processed.

"We must not focus on individual actions, but on dynamics within the system. We need to understand actions as part of an interconnected system."

Let's make this concrete with an example: a safety report that gets ignored.
Imagine a construction worker notices a serious safety issue and responsibly writes a report. This report is a communication. What happens next has less to do with the worker's diligence and more to do with the organisation's internal programming.
- The communication: The worker submits the safety report.
- The system's response:

1. Does the system even recognise the report as valid information? (Selection failure)

2. Is there a reliable process for getting it to the right person, or can it get lost in a digital folder? (Dissemination failure)

3. Does the recipient have the authority and context to understand its importance, or do they dismiss it as a non-issue? (Understanding failure)

A valid concern can be filtered out at any of these stages. If nothing happens, the failure is not simply the fault of an individual who didn't act. The failure is a property of a communication system that was not designed to process that specific communication into further action.
This concept is the key to understanding the rest of the theory. Systems survive by managing the overwhelming complexity of the world, and they do this by filtering and processing communications according to their own internal logic.
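For readers who like to think in models, here is a minimal, purely illustrative sketch in Python (not from the book) of the three filtering stages described above. All of the names (SafetyReport, CommunicationSystem, the routing table and threshold) are hypothetical; the only point is that whether a valid report becomes action depends on the system's configuration, not on the diligence of the person who wrote it.

```python
# Illustrative sketch only: a toy model of how a communication (here, a safety
# report) can be filtered out at each of the three stages described above.
# All class and field names are hypothetical, not taken from the book.

from dataclasses import dataclass


@dataclass
class SafetyReport:
    author: str
    category: str    # how the report is labelled when submitted
    recipient: str   # where the reporting channel routes it
    severity: int    # 1 (minor) .. 5 (critical), as judged by the recipient


class CommunicationSystem:
    """A system 'sees' only what its own filters let through."""

    def __init__(self, recognised_categories, routing_table, attention_threshold):
        self.recognised_categories = recognised_categories
        self.routing_table = routing_table            # recipient -> can act on it?
        self.attention_threshold = attention_threshold

    def process(self, report: SafetyReport) -> str:
        # 1. Selection: is this even recognised as relevant information?
        if report.category not in self.recognised_categories:
            return "filtered out (selection failure): not recognised as information"
        # 2. Dissemination: does it reach someone who can act on it?
        if not self.routing_table.get(report.recipient, False):
            return "filtered out (dissemination failure): lost before a decision-maker"
        # 3. Understanding: is it interpreted as important enough to act on?
        if report.severity < self.attention_threshold:
            return "filtered out (understanding failure): dismissed as a non-issue"
        return "processed into further action"


system = CommunicationSystem(
    recognised_categories={"incident", "near-miss"},
    routing_table={"site_hse_advisor": True, "shared_inbox": False},
    attention_threshold=3,
)
report = SafetyReport(author="worker", category="near-miss",
                      recipient="shared_inbox", severity=4)
print(system.process(report))  # -> dissemination failure, despite a valid concern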

How systems survive: Reducing complexity (and creating blind spots)
The world is infinitely complex. For any organisation to function, it must simplify this reality by deciding what information is relevant and what can be ignored. This reduction of complexity is a system's primary survival mechanism.


"Organisations can only function by defining what does not need to be known; through decision premises, routines, and culture, so that not everything must be decided or communicated at every moment."


This strategic simplification is a functional necessity! However, it has a crucial consequence: it creates blind spots. By focusing on certain things, a system necessarily ignores others.

The Deepwater Horizon disaster serves as a case study. The failure wasn't just a matter of engineering; it was also the organisation's inability to doubt itself. The system was designed in a way that filtered out observations that challenged its core assumptions. The project design systematically eliminated pauses for reflection. The priority was cost and completion, which meant there was no time built into the process to step back, re-evaluate, and question what was happening. While procedural steps were followed, the observations made along the way only served to confirm the existing belief that everything was on track. The system lost its ability to reflect on its own assumptions. Doubt was not a communicable concept within the system's logic at that moment. When a system is this focused on its own internal goals and procedures, it can become trapped in its own reality. This leads us to the concept of operational closure.

Trapped in the loop: Operational closure in practice
Operational closure means that a system can only observe and react based on its own internal state and its own communications. It cannot see the real world directly. Instead, it constructs its own version of reality based on the information circulating within it. This can lead to situations where the system's reality dangerously diverges from the actual state of the world.

Case Example - The Tank Cleaning Incident
A formal permit for a dangerous tank entry was never issued. However, internal and external operational departments made verbal suggestions and partial preparations. Each subsystem acted as if approval had been given, observing the actions of the others. The system as a whole began to behave as if a final decision had been made, creating its own reality in which the work had been approved. It operated based on its internal communications, not the external fact that no permit had been signed.


The NCC SAFA Tanker Fatality
The crew believed a cargo tank was safe because previous communications, consisting of old permits and routine assumptions, had declared it so. A leaking valve silently filled the tank with nitrogen, changing the reality. But no new communication, such as a fresh gas test, interrupted the system's existing belief. When the first worker collapsed, the mate's instinct to help a crewmate, a powerful cultural norm, overrode any procedural pause for a new risk assessment. The system repeated its previous declaration of safety, which was now lethally inaccurate.

These cases show how systems can act on information that is internally consistent but factually wrong. This internal logic also influences how systems interpret failure, often leading to the convenient but misleading label of human error.

Re-thinking failure: The myth of human error
From a systems theory perspective, human error is often a label a system applies when it fails to recognise its own design flaws. It's an easy explanation that places blame on an individual while leaving the system's blind spots unexamined.

Consider this next incident.

The double mouse bridge operator
An operator in the Netherlands was controlling two separate bridges from a single remote station. The two bridges had nearly identical computer interfaces, each with its own mouse. In a critical moment, the operator used the wrong mouse, causing a ship to collide with a bridge. The operator was held legally responsible. However, a deeper analysis reveals a structural flaw. The system was designed with identical interfaces and lacked basic safeguards like confirmation prompts or clear labels. The system was unable to recognise its own fragility and instead placed the entire burden of failure on the individual operator.

“The judge should take a course in ergonomics, and empathy.”

This expert critique highlights the systemic blindness. The error wasn't caused by a careless operator, but rather by a system design that made such a mistake likely.
Errors often reveal a system's structural blind spots. This insight also applies to how we think about rules and procedures. Systems create rules for stability, but they also create conditions where those same rules must be broken.

The two faces of rules: Legitimacy vs. Reality
Rules and procedures play a dual role in organisations. They provide a formal structure for how things should work, but they must also contend with the messy reality of getting work done. Niklas Luhmann argued that procedures are more than the bureaucratic burdens we often take them for. They are essential mechanisms for creating trust and legitimacy in complex systems where no single person can understand everything. Following a formal procedure signals fairness, impartiality, and continuity. It stabilises expectations and allows different parts of an organisation to trust each other, even when they don't have full visibility into each other's work.

While rules provide legitimacy, sometimes breaking them is necessary to achieve a system's goals. Luhmann called this useful illegality. A historical parable illustrates this perfectly.

Prince Friedrich von Homburg 
In a battle, the Prince, a cavalry general, defied a direct order to hold his position and not attack without explicit command. He ordered a charge anyway, and his bold, unsanctioned move led to a decisive victory.

Instead of being rewarded, the Prince was arrested and sentenced to death. Why did this happen, you ask? Because the military command structure had to defend its procedural integrity. Rewarding the Prince's disobedience, even though it worked, would have undermined the entire chain of command. He was punished precisely because his rule-breaking was successful. The Prince was only pardoned after he accepted the legitimacy of his sentence and submitted to the system's authority. He was celebrated not for his defiance, but for his ultimate obedience.
This dynamic happens in modern organisations all the time. A frontline worker who bypasses a clumsy protocol to avert a disaster might be reprimanded, even if they saved the day. The system first reasserts its rules before it can reframe the action as heroic.


A new way of asking questions
When we try to understand safety through Luhmann's systems theory, we're not finding a new set of answers; we're learning to ask a new set of questions. It's a shift from blaming individuals to understanding the hidden logic of the system itself.
We go from focusing on individual actions and errors to focusing on patterns of communication and systemic dynamics. We go from seeing order as the norm and failure as a deviation to seeing order as an improbable and continuous achievement that needs constant work. And we go from aiming to eliminate all errors to aiming to enhance the system's ability to adapt and anticipate variability. This perspective encourages us to move beyond asking "Who made a mistake?" and instead to probe the system itself. As you continue to learn, keep these more powerful questions in mind:
- What risks are communicable within your system’s categories?

- What gets filtered out?

- Who or what defines ‘what counts’?

Want to read more? Buy the book!

The book is for sale at Amazon and Barnes & Noble:

Hardcover

Paperback/Kindle