No theory forbids me to say "Ah!" or "Ugh!", but it forbids me the bogus theorization of my "Ah!" and "Ugh!" - the value judgments. - Theodor Julius Geiger (1960)


Hollnagel, E. (2021), Synesis – The Unification of Productivity, Quality, Safety and Reliability, Abingdon/New York: Routledge.


In a time when adaptability and resilience are hugely important, navigating the complexities of modern organizational management demands a fresh perspective. Erik Hollnagel offers such a perspective with his book "Synesis – The Unification of Productivity, Quality, Safety, and Reliability" (2021).

Hollnagel challenges the traditional siloed approach to the organizational challenges mentioned in the book's subtitle. He stresses the need to holistically manage internal and external factors shaping an organization’s performance. Synesis is about adaptability. It shifts our focus from isolated concerns to embracing interconnectedness. It advocates for a dynamic, function-centric approach over static structures, prioritizing processes over outcomes, and understanding change as a constant.

Hollnagel’s insights transcend conventional change management models. Synesis invites us to view organizations as living, evolving entities, urging us to explore the breadth-before-depth problem-solving approach. It emphasizes the human factor, acknowledging cognitive limitations and biases in decision-making, shaping more pragmatic and effective strategies.

In essence, Synesis invites leaders and managers to rethink their approaches, nurture a culture of adaptability, embrace variability, and understand the interdependence of productivity, quality, safety, and reliability. It's a call to action to unify diverse aspects of organizational performance, paving the way for enhanced resilience and sustainable success.


Coping with multiple issues in concert

This book discusses the challenges faced by organizations in today's rapidly changing environment, including the need to manage internal and external factors that impact their performance. Organizations must be dynamic, agile, and resilient to thrive and survive in a competitive market. Management is seen as an attempt to cope with internal and external unpredictability, where the former can in principle be controlled while the latter cannot. The management of how a system or organization performs is always relative to one or more issues or criteria. Organizations need to consider multiple issues, including safety, quality, and productivity, rather than focusing on a single issue.


The historical reason for the fragmented approach to these issues is that they achieved prominence at different times, and the psychological reason is that people tend to deal with problems as they happen rather than in advance. Quality and safety became major issues in the early 20th century, once productivity problems had been solved. Reliability became a concern with the introduction of electronic components and devices in military and civilian systems. The motivation for addressing these issues came from cost and efficiency concerns rather than from a concern for people's well-being or the greater good of society. Fragmentation is a fundamental feature of the physical world; Western scientific thinking divides problems into smaller, constituent problems, and departments become isolated from other specialties.


Looking at functions and change instead of structures and stability

Sociologist Robert K. Merton's law of unanticipated consequences states that there will always be outcomes that have neither been foreseen nor are intended by purposeful action. When something unexpected happens, the natural response is to try to understand why, so that steps can be taken to prevent it from happening again. This find-and-fix attitude makes efficiency, and particularly the speed of response, more important than thoroughness, which often leads to solutions that are incomplete and reduce the tractability of the system. As systems become more interconnected and interdependent, they can become entangled, so that no part can be understood unless all the other parts are accounted for. The current complexity of the world exceeds humans' limited cognitive capacity. It is therefore necessary to find a way to describe the world that is both comprehensible and able to make challenges manageable. The approach advocated by Hollnagel is to think in terms of functions rather than structures, processes rather than outcomes, and change and dynamics rather than stability and states.


The Productivity Silo

There is a lack of understanding of the interconnections between different issues and problems, leading to difficulties in addressing them effectively. Productivity is defined as the ability to generate goods and services that satisfy human needs, while the costs needed to produce something must be less than the value of the outcome. The drive to increase productivity has always been present, and Adam Smith proposed that growth is rooted in the increasing division of labor. Frederick Winslow Taylor also contributed to this field by applying scientific management to improve efficiency and productivity in the workplace. After the IT revolution in the late 1940s, work changed from being manual to being cognitive, and new technologies increased the speed and precision of industrial processes. This led to the emergence of human factors engineering, which aimed to improve the design of work systems and equipment to enhance performance, safety, and well-being.


The Quality Silo

The history of quality control is rooted in scientific management and mass production: task decomposition was used to identify the basic steps of a task, and people were selected based on their capabilities to ensure specified performance. Standardization of work, which is an essential aspect of productivity and as old as humankind, and compliance with standards were important to achieve the one best way to carry out a task, and the paramount concern was the elimination of waste. Quality mattered to scientific management because variable quality made it impossible to achieve the level of standardization required for productivity: the variability of output quality was an impediment to the drive to standardize the production process and make it as efficient as possible. Walter Shewhart's book "Economic Control of Quality of Manufactured Product" identified the problem of variable quality and provided a practical solution for quality control. Shewhart developed statistical process control to ensure that the distribution of output quality stayed within acceptable limits, and thereby established the foundation for quality control and for monitoring a process through sampling. The unspoken assumption was that stable process averages could be expected, and that small variations could be explained as due to chance causes: extraneous influences that could not be eliminated by improved design, or that perhaps were not worth the effort. Hollnagel argues that this assumption was reasonable in the 1920s, but not today: systems and organizations are now recognized as socio-technical systems, which are intractable, and it cannot be assumed that things go well simply because processes are flawlessly designed and effectively managed.
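Shewhart's core idea can be illustrated with a minimal sketch of a control chart calculation. This is my own illustration, not from the book: the measurements are invented, and real control charts are built from subgroup statistics and tabulated constants rather than a bare three-sigma rule.

```python
import statistics

def control_limits(baseline):
    """Shewhart-style limits: process mean plus/minus three standard
    deviations, estimated from a period when the process was in control."""
    mean = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return mean - 3 * sigma, mean + 3 * sigma

def out_of_control(points, baseline):
    """Points outside the limits signal assignable causes; points inside
    are attributed to chance causes and left alone."""
    lo, hi = control_limits(baseline)
    return [x for x in points if x < lo or x > hi]

# Hypothetical measurements of some product dimension:
baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9]
new_points = [10.0, 10.3, 13.5]
print(out_of_control(new_points, baseline))  # [13.5]
```

The point of the sketch is the division of labor Shewhart proposed: variation inside the limits is tolerated as chance, and only excursions beyond them trigger a search for a cause.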


The Safety Silo

Herbert William Heinrich believed that accidents should be studied scientifically and that the prevention of accidents is a science. (Hollnagel remarks that it is not treated scientifically today.) Heinrich proposed a sequence of causes and effects in which a cause leads to an accident, which in turn leads to an injury. He believed that psychology lay at the root of the sequence of accident causes. The solution to accident prevention was to find the root cause and eliminate it, an approach now known as root cause analysis. The approach was so successful that it has practically become second nature in how we think about safety. The understanding of safety as a condition where as little as possible goes wrong, where there are as few accidents as possible, where the possibility of harm to persons or property damage is reduced to and maintained at an acceptable level, or any other definition along the same lines, quickly became the established truth.


Safety-I, defined by the absence of harm, tries to achieve its goal by means of elimination, prevention, and protection; safety is therefore measured by something that gets smaller and smaller until it becomes zero. Safety-II is productive and tries to make as much as possible go well, the simple logic being that if something goes well, it simultaneously cannot fail. Therefore, the more things that go well, the fewer accidents there will be. The safety issue created a third silo. The legacy of safety includes linear causality and root cause analysis, Safety-I, and Safety-II.


The Reliability Silo

Reliability has historically been associated with safety rather than productivity because component failure can result in accidents and harm. Reliability was defined by the U.S. military in the 1940s to mean that a product or piece of equipment would operate as required for a specified period of time. In 1950 the U.S. Department of Defense investigated reliability methods for military equipment and proposed three lines of activity to address the core of the reliability problem: systematically collect data from the field, use quantitative reliability requirements as a contractual obligation, and develop ways to estimate and predict component reliability before equipment is built and tested. The so-called AGREE report, published in 1957, provided assurance that reliability could be specified, allocated, and demonstrated.


Reliability became necessary when some parts of production were delegated to others and were no longer under direct control, and it became even more crucial when production depended on external sources such as supply lines for electrical power, water, or communication. Quality and reliability are closely linked, even though they are chronologically separate. Reliability problems turned out not to be limited to technical components; they also included humans and organizations. The accident at the Three Mile Island nuclear power plant in 1979 showed that the reliability assessment of the installation had missed the essential human component, the operator. The response to the accident led to the establishment of the field of human reliability assessment, which grew rapidly during the 1980s and 1990s. Charles Perrow's book Normal Accidents: Living with High-Risk Technologies analyzed the social side of technological risk and argued that the engineering approach to ensuring safety did not work because it led to increasingly complex systems in which failures were inevitable.


A limited span of attention

The limited span of attention is a biological, or biologically based, constraint on human attention and focus. Our understanding of how the mind works is itself limited by this fragmented view, and a better understanding of human thinking is necessary to improve organizational management.

William James and George Miller wrote about the limited span of attention and its effects on perception, decision-making, discrimination, and memory. Miller's research took place in the context of technological developments that resulted in information input overload: a relative condition in which the capacity to process information is insufficient, either because of an increased rate or amount of input or because of a temporarily reduced processing capacity. The consequence is a fragmented view or understanding of the information presented. Coping strategies for information input overload range from temporary non-processing to abandoning the task completely. These reactions reflect how closely the treatment of information is coupled to what the situation actually requires. Many systems and frameworks have been developed to help make sense of a confusing reality, such as the abstraction hierarchy, living systems theory, the viable system model, and the Cynefin framework, all offering a solution by proposing a small set of discrete categories.


Human constraints make us pragmatic and social decision-makers

In ancient Greece, Aristotle explored the ideal of a perfectly rational decision maker. Human decision-making is often not rational due to psychological limitations and cognitive constraints. Decision theory has attempted to develop more psychologically realistic models of decision-making, such as descriptive decision-making and naturalistic decision-making. Management theory has challenged the idea of the rational decision maker and proposed alternative descriptions, such as the concept of satisficing by Simon and muddling through by Lindblom. These alternative descriptions suggest that decision-making is a more pragmatic, social process than a purely rational one. The limitations of human decision-making have consequences for how organizations are managed, the quality of expectations and anticipations, and strategic and tactical planning. Humans also frequently use heuristics instead of rational deliberation.



The efficiency-thoroughness trade-off (ETTO) principle refers to the balance between the resources used and the quality of output achieved in a task. People and organizations often have to make a trade-off between the resources spent on preparing to do something and those spent on actually doing it, and people frequently need to adjust their performance to match demands and current conditions. Thoroughness means that an activity is carried out only if the necessary and sufficient conditions are in place, so that it can achieve its objective without creating unwanted side effects; efficiency, by contrast, means keeping the time, effort, and resources spent on an activity as low as reasonably possible.



People and organizations use shortcuts, heuristics, and rationalizations to make their work, problem-solving, and decision-making more efficient. Confirmation bias describes the tendency of people to search for, interpret, favour, and recall information that confirms their pre-existing beliefs, hypotheses, or assumptions. Confirmation bias can contribute to overconfidence in personal beliefs and can maintain or strengthen beliefs in the face of contrary evidence.


Human constructions

Humans experience the world through constructions, or habitual ways of describing and perceiving the surrounding world. These constructions allow people to address problems one by one, but they make it difficult to see that things are interconnected. Humans prefer to see the world as relatively undivided or, at most, as one concept together with its opposite. This is known as dichotomous or binary thinking, which is easy on the mind and seems helpful in overcoming the limitations of attention, but which also limits the ability to understand complex problems. Humans prefer simple or monolithic explanations because they fit well with the various limitations in thinking and reasoning, yet attempts to provide simple explanations for complex problems only result in solutions that are incapable of actually improving anything. Reasoning from counterfactual causes takes the form: if X had been present, then Y would not have happened. This is a standard type of explanation in which the cause is seen as the lack of something. Human understanding of cause and effect is limited and leads to the concept of the efficient cause; the fragmentary view that results from this kind of thinking favors linear cause-effect reasoning.


Breadth Before Depth

Hollnagel describes the Breadth-Before-Depth approach to problem-solving, which involves considering several alternatives at once and adopting a more comprehensive, thorough view of the problem. We need models and methods that correspond to the real world. Synesis is about unifying different issues rather than separating them. Synesis is an indication of how to improve productivity, efficiency, quality, and safety in managing intractable organizations.


Navigating change

Knowing one's position accurately is important in any context, whether physical, as in navigation at sea, or symbolic, as in business and change management. In two naval disasters, the inability to determine the ship's position accurately resulted in significant loss of life and property. The voyage metaphor is often used in change management, and managing change is more like a sea voyage than a journey over land.


From steady-state to never-ending change

Interventions in organizations are often derived from existing trends, industry practices, or good models from other organizations. It is difficult to completely understand how an organization works because organizations, unlike technical systems such as a cruise liner or a factory, are not designed; they grow and evolve in ways that are far from completely understood. Yet in order to manage organizational changes, it is necessary to have a good understanding of how an organization works. Organizations generally fare better during periods of stability, and managing them under such conditions is called steady-state management. But management today is rarely limited to ensuring stable production or to maintaining constant performance and quality under stable conditions. Instead, management should be prepared to face a never-ending series of changes in order to cope with unstable conditions and to meet current as well as future challenges.



Accidents can be viewed in terms of entropy: the world becomes increasingly disordered and uncertain. Systems are built for specific purposes, and organizations, businesses, and other entities rely on these systems to keep the ever-increasing entropy temporarily at bay. There are two systems to consider: the target system and the system that controls or manages it. These systems can be nested within each other, but at some point the system under consideration must be closed and left to itself; examples include the world economy, the environment, the population of the Earth, and pollution. Anthropogenic entropy is the increasing disorder and decreasing predictability caused by human interventions that are made inadequately or without sufficient understanding. It exists because of the psychological need for fragmentation and because it is difficult to fully comprehend the whole situation.


Unanticipated consequences

Situations and events often do not develop as intended or hoped for. The occurrence of unanticipated consequences is as old as humankind itself. The factors or conditions that contribute to unanticipated consequences are ignorance, error, the imperious immediacy of interest, basic values, and self-defeating predictions. As the world and its systems become more disorderly and uncertain, the number of unanticipated consequences will increase. The law of requisite variety states that the variety of a controller must at least match the variety of the system to be controlled. If something unexpected happens, control may be lost.
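Ashby's law of requisite variety can be made concrete with a toy example. This is entirely my own construction, not from the book: disturbances and responses combine into an outcome, and the controller can hold the outcome at its goal only if it has at least as many distinct responses as there are distinct disturbances.

```python
def achievable_outcomes(disturbances, responses):
    """For each disturbance the controller picks the response that best
    serves its goal (outcome 0); the outcome of disturbance d met by
    response r is (d + r) % 3. Returns the set of outcomes that remain
    unavoidable even with the best available response."""
    return {min((d + r) % 3 for r in responses) for d in disturbances}

disturbances = {0, 1, 2}

# Requisite variety: three responses absorb three disturbances,
# so the goal outcome 0 can always be reached.
print(achievable_outcomes(disturbances, {0, 1, 2}))  # {0}

# Insufficient variety: two responses cannot absorb three disturbances,
# so some disturbances force an outcome the controller cannot prevent.
print(achievable_outcomes(disturbances, {0, 1}))  # {0, 1}
```

The second call is the "control may be lost" case in miniature: when the environment produces more distinct disturbances than the controller has distinct answers, some deviation from the goal is mathematically unavoidable.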


Limited knowledge of socio-technical systems

In our time, physical distances are becoming less significant, and the transmission of information is becoming faster and more efficient. As the complexity of organizations grows, it becomes increasingly difficult to control and manage them effectively, and incomplete knowledge can have serious consequences. Achieving complete knowledge in socio-technical systems is impossible due to their dynamic nature and because all necessary parameters can’t be fully defined or identified.


Misunderstanding complex systems

Decomposition is the basis for building and understanding systems, including socio-technical systems. It has been successful in explaining how things happen and is used widely across society. But socio-technical systems are dynamic rather than static and hence change over time.

Two troublesome assumptions lie at the core of change management: the ceteris paribus assumption (other things being equal; essential for the established scientific method and the empirical testing of hypotheses) and the substitution myth (the assumption that introducing a change will have no other, unintended effects). Both assumptions are wrong and can lead to misunderstandings of complex systems.


Managing productivity and…

While managing productivity involves controlling procedures and collecting productivity data, it also relies on managing people and identifying obstacles that need to be overcome. One particular version of managing productivity is the principle of faster, better, cheaper, which aims to optimize these three criteria simultaneously. Faster, better, cheaper may lead to problems, since the three criteria are mutually dependent: the principle oversimplifies complex problems by assuming that all three can be optimized at the same time. Managing productivity is also intertwined with managing quality and safety. For products it is essential that each one meets the same standard; the same applies to services, although there individualized treatment may also be necessary. Safety management focuses on managing the absence of safety, such as reducing unwanted outcomes, while reliability management focuses on identifying and addressing potential sources or causes of problems. All of these issues are essential to consider when managing businesses or organizations.


Human reliability

Reliability in technological equipment was commonly measured as the probability of failure-free operation for a specific period of time in a specified environment, and the calculation of the failure probability became the common measure of reliability. The partial meltdown of reactor number two of the Three Mile Island Nuclear Generating Station in 1979 demonstrated the need to take the human component into account, and human reliability assessment was created to address this issue. Managing human reliability presents a problem that technical systems do not: humans are not designed. The reliability of human performance is nevertheless crucial, since all systems are socio-technical systems that include and depend on human performance on many levels. The concept of high reliability organizations (HROs) emerged to address this problem and was explored further by a group of researchers at the University of California, Berkeley. Five qualities are commonly held to characterize HROs (in Weick and Sutcliffe's well-known formulation: preoccupation with failure, reluctance to simplify, sensitivity to operations, commitment to resilience, and deference to expertise). The fragmentation of change management thinking with regard to reliability, safety, risk, and resilience is evident here as well: the focus tends to be on managing the lack of organizational reliability rather than on managing reliability itself.
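The quantitative measure mentioned here is usually formalized with a constant failure rate. The following is a standard textbook model, not specific to Hollnagel's argument, and the component parameters are invented: the probability of failure-free operation over a mission time t is R(t) = exp(-λt).

```python
import math

def reliability(failure_rate: float, mission_time: float) -> float:
    """Probability of failure-free operation for the given mission time,
    assuming a constant failure rate (the exponential model)."""
    return math.exp(-failure_rate * mission_time)

# Hypothetical component with a mean time between failures of 10,000 hours,
# i.e. a failure rate of 1/10,000 per hour, on a 100-hour mission:
print(round(reliability(1 / 10_000, 100), 3))  # 0.99
```

The point of the silo critique is visible even in this tiny formula: it quantifies the technical component perfectly well, but nothing in it accounts for the operator whose actions Three Mile Island showed to be part of the system.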


Three change management approaches

Change management requires the competence to set a course, adjust it, and ensure that it is followed until the set waypoint or target is reached. Change management should be systematic and follow a suitable plan of action, which requires sufficient time to prepare. It should not be an isolated activity but an ongoing process that supports, sustains, and improves the normal functioning of the organization. There have been three major suggestions for change management, in chronological order:

  1. The Shewhart cycle, or specification-production-inspection cycle, whose steps must go in a circle instead of a straight line and which constitutes a dynamic, scientific process of acquiring knowledge. A fourth step, act, closes the loop, and the cycle later developed into Deming's Plan-Do-Study-Act (PDSA) cycle.
  2. Action research has been the main inspiration for change management in social systems. It is a research methodology developed in the 1940s to bring about transformative change by taking action, doing research, and linking the two through critical reflection. Kurt Lewin's model of action research is an iterative process in which research leads to action, and action leads to evaluation and further research. Lewin's model includes three main parts: examining the idea, planning, and fact-finding. It is less neat than Deming's PDSA cycle but more realistic, since it includes the possibility of modifying or revising the overall plan. Change in social systems requires careful planning and preparation, as the system and its people may resist the change.
  3. The OODA (observe-orient-decide-act) loop was developed by USAF Colonel John Boyd to describe decision-making as a recurring cycle: observations are used to formulate a concept, and the concept in turn shapes future observations. The loop is suited to adversarial change management, where external changes happen faster than responses can be made; change management can fail both when changes happen too slowly and when they happen too quickly. The OODA loop differs from PDSA and Lewin's model in that it assumes an active opponent, or even an enemy, and rapidly developing events. The loop is essential in military operations, where the opponent's response to an action must be observed.
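The shared structure of these approaches, a recurring plan-act-evaluate loop, can be sketched schematically. This rendering is my own, not from the book, and the convergence example with its half-step correction is invented purely for illustration.

```python
def pdsa(state, plan, do, study, act, cycles=3):
    """Run a Plan-Do-Study-Act loop: each pass plans an intervention,
    carries it out, studies the feedback, and acts on what was learned
    before planning again."""
    for _ in range(cycles):
        intervention = plan(state)          # Plan
        observation = do(state, intervention)  # Do
        lessons = study(observation)        # Study
        state = act(state, lessons)         # Act
    return state

# Toy usage: nudge a process value toward a target of 100.
target = 100
result = pdsa(
    state=80,
    plan=lambda s: (target - s) * 0.5,   # plan a half-step correction
    do=lambda s, i: s + i,               # carry it out
    study=lambda obs: obs,               # read back the new value
    act=lambda s, lessons: lessons,      # adopt the observed state
    cycles=5,
)
print(round(result, 2))  # 99.38 -- approaches 100 without overshooting
```

What the sketch makes visible is the point the text insists on: the value comes from going around the circle repeatedly, not from executing the steps once in a straight line.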


Fragmentation in change management

A lack of stability in organizations and their surroundings needs to be recognized as a fundamental premise for change management. The three change management approaches are burdened by several types of fragmentation. One kind of fragmentation is due to the habit of focusing on a single issue or concern at a time, while another is a consequence of attempting to understand larger systems by means of decomposition. Fragmentation of focus and scope results in limited understanding of complex systems, making it challenging to implement effective change management strategies.

The assumption that systems undergoing change remain passive during the change may be true for technical systems, but it does not hold for social systems or socio-technical systems. Change management for socio-technical systems does not usually include an adversary, and therefore it does not need to put the same premium on speed as in the OODA loop. Social systems are never stable and are always changing because they are open systems in partly unknown surroundings.


The disregard of other influences

The order of events should not be mistaken for a cause-and-effect relationship, although it often is. The planned interventions are the signal, and the signal is supposed to be stronger than the noise. Yet, the question is whether change management can take for granted that there are no extraneous inputs that may have as much effect or even a larger effect than the signal. All change management approaches assume that the planned intervention is so strong that any other influences or noise can be disregarded. This is a risky assumption to make. Change management for socio-technical systems needs an approach that carefully times small changes that are then iteratively adjusted to fit the system's dynamics.


Three ways of fragmentation

Making and managing changes is not a simple exercise. Instead, it is a complex and constantly evolving process that requires ongoing attention to the organization's performance and functions. Change management is fragmented in at least three ways: in terms of focus, scope, and time. Overcoming these types of fragmentation requires uncomfortable solutions that may require significant resources and energy. The long-term benefits of spending the effort may outweigh the short-term gains.

People are attracted to simple or monolithic solutions and tend to solve problems one by one rather than seeing them all together. This approach may lead to psychological fragmentation and a delay in action. The urge to relieve uncertainty quickly is universally present, but it is essential to trace something unfamiliar back to its roots to understand and solve problems effectively.


Quality, productivity, safety and reliability in work and production

Improving quality can reduce expenses and increase productivity and market share. A lack of safety can lead to disruptions or to the failure of production, and a lack of stability or dependability of processes can likewise affect productivity. As technology became more complicated, other sources of noise, such as human performance and organizational reliability, became important.

Productivity, quality, safety, and reliability can be seen as dynamic actions rather than static conditions. A workplace must be understood as a complex socio-technical system, in which both social and technical factors contribute to organizational performance. An enlarged view of work is needed: one that considers the technology supporting the organization, the life cycle of equipment and infrastructure, and the interdependence of ongoing activities.


From states to functions

Which characteristics of a system matter is defined by the purpose of the model: a system can be described structurally, by what it is made of, or functionally, by what it does, and a model of, say, the brain can be based on either view, although in some cases the material is more important than the functional aspect. A diagram that represents neither the structure nor the functioning of the system cannot be used to understand the system or to make decisions about it; this is a common problem with simple block diagrams and flowcharts. A model for change management should capture the internal workings of the system or organization to be managed and help choose interventions that will lead to the desired outcomes. The model should enable steering the organization in the right direction by making clear what needs to be done to move from the current position towards the target. The elements of the model and their relationships should be meaningful, but most models fail in this respect: a flowchart does not constitute a model unless the links between its elements are defined. To describe the mutual relations of the four issues (productivity, quality, safety, and reliability) in the Plan-Do-Study-Act (PDSA) approach, a causal loop model may be used, and the causal loop model can represent functions rather than states. One way to develop a description of the issues as functions is the functional resonance analysis method (FRAM), which produces a functional model of how an organization works by interpreting the four issues as four functions. The description of a function includes six aspects: input, output, precondition, resource, control, and time. By using FRAM, it gradually becomes clear what the four issues actually entail as functions or activities, and therefore how they may be mutually dependent.
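A FRAM-style function description can be sketched as a data structure. The six aspects come from FRAM itself; everything else below, including the example functions and the coupling-finding helper, is invented for illustration and is not Hollnagel's notation.

```python
from dataclasses import dataclass, field

@dataclass
class Function:
    """One FRAM function, described by its six aspects."""
    name: str
    inputs: list = field(default_factory=list)        # what the function transforms
    outputs: list = field(default_factory=list)       # what it produces
    preconditions: list = field(default_factory=list) # what must be true first
    resources: list = field(default_factory=list)     # what it consumes or needs
    controls: list = field(default_factory=list)      # what supervises or regulates it
    time: list = field(default_factory=list)          # temporal constraints

def couplings(functions):
    """Find potential couplings: one function's output feeding an aspect
    of another function. These couplings are where variability can spread."""
    links = []
    for f in functions:
        for out in f.outputs:
            for g in functions:
                if g is f:
                    continue
                for aspect in ("inputs", "preconditions", "resources",
                               "controls", "time"):
                    if out in getattr(g, aspect):
                        links.append((f.name, out, g.name, aspect))
    return links

produce = Function("Produce", inputs=["orders"], outputs=["products"],
                   resources=["staff"], controls=["quality standard"])
inspect = Function("Inspect", inputs=["products"], outputs=["quality standard"])
print(couplings([produce, inspect]))
```

Even this two-function toy shows the book's point about mutual dependence: production's output is inspection's input, while inspection's output acts as a control on production, so neither function can be understood in isolation.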


Pragmatic justification for accepting fragmentation of scope in organizations

The practical approach is to define a narrower boundary that includes the problem in its local context but considers everything else as part of the surroundings. This artificially limits the scope of the changes that are considered, although usually without fully understanding whether this is reasonable. Whenever a change or an improvement is planned, it is tacitly assumed that we know what will happen and that nothing unexpected will happen that may jeopardize the outcome of the intended changes either inside the (temporary) boundary or in the surroundings. Given that changes happen all the time, though, both inside the system and in its surroundings, it is more likely that problems may happen in the surroundings. This unpredictability is called entropy, which cannot be eliminated but only distributed. Since changes in the surroundings are mostly uncontrolled and quite possibly larger and with greater impact than the planned improvements, the veracity of the ceteris paribus principle can no longer be taken for granted. The idealized development can be represented as taking place in isolation, but the planned change is never the only thing that happens in an organization. The intended change is just one of the many changes that happen throughout the system, some before, some overlapping in time, and some following. Therefore, it is wise to try to understand as well as possible what is likely to happen outside the chosen boundary and to find a way to create synergy with that.


Using variability or noise rather than suppressing it

Hollnagel suggests developing a good functional model of how the "waves" occur, in order to anticipate what changes will happen outside the limited scope. To overcome the fragmentation of scope, it is necessary to accept that no clean cut or clear boundary can be drawn between the system and its surroundings. Long-term goals can be broken into short-term subgoals that can be achieved more quickly, on the assumption that the surroundings remain stable for that long; each subgoal should be chosen so that the expected time to reach it is shorter than the mean time between significant changes in the surroundings. The subgoals should also be chosen so that the surroundings can adapt and remain flexible while the smaller change takes place. The analogy of the coracle illustrates how to make changes by looking out for developments in the surroundings and working with the natural forces rather than battling against them.
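The timing rule for subgoals reduces to a simple check: every step of the plan must complete faster than the surroundings typically shift, so that the ceteris paribus assumption holds for each step. A minimal sketch, with entirely hypothetical durations:

```python
def subgoals_feasible(subgoal_durations, mean_time_between_external_changes):
    """A stepwise change plan is workable only if each subgoal can be
    reached before the surroundings are likely to shift under it."""
    return all(d < mean_time_between_external_changes for d in subgoal_durations)

# Hypothetical plan: subgoals of 2-4 weeks; surroundings shift roughly every 6 weeks.
print(subgoals_feasible([3, 2, 4, 3], 6))  # True: every step outruns the drift
print(subgoals_feasible([8, 2], 6))        # False: the first step is too slow
```

The point of the check is not the arithmetic but the framing: the pace of change in the surroundings, not ambition, sets the maximum useful size of a subgoal.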


Limited traditional cause-and-effect thinking in change management

David Hume's rules for causality were reasonable in the eighteenth century, but they do not suffice in today's world. The fragmentation of time assumes linearity and tractability, which may not always hold. The concept of emergence, proposed by George Henry Lewes, suggests that the whole is more than the sum of its parts, so emergent effects may not be traceable to individual causes. The idea of entanglement, introduced by Erwin Schrödinger as a quantum-mechanical phenomenon, further complicates the traditional understanding of causality. These limitations make it difficult to determine the causes and effects of change, and hence to manage change effectively.


Considering the duration of a change

A recent school reform in Denmark illustrates how changes can be evaluated too soon. The reform aimed to improve education in primary schools, but after five years students had not become more proficient in Danish and mathematics. It is essential to understand how long a change will take to implement fully, and it can be challenging to find a principle or approach that overcomes this fragmentation in time. Changes are usually considered within a limited duration, yet the consequences of a change only become visible after some time has passed. It is therefore crucial to understand the timing of the change, including the earliest and latest starting times, in order to make the most of the window of opportunity for introducing it. The time window should be defined by understanding the system and its dynamics, relative to the chosen scope, and not based on convenience or habit.

The basic dynamics of making a change to a system can be illustrated by a causal loop diagram, which shows the importance of spending sufficient time and effort on the planning and preparation stages of a change. The diagram also shows that the more time is spent analyzing feedback, the less time is available for planning the next step. It should not be seen as a realistic model of change management, but as a representation of some of the basic dynamics of the process.
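The trade-off in the diagram can be sketched as a toy balancing loop: each PDSA cycle splits a fixed time budget between analyzing feedback from the last step and planning the next one, and poorly planned steps generate more feedback to analyze. All quantities and the update rule below are invented for illustration; this is not Hollnagel's diagram translated into code.

```python
def pdsa_cycles(cycle_budget, n_cycles, analysis_fraction):
    """Toy PDSA dynamics: analysis and planning compete for one time budget.
    Plan quality grows with planning time; poor plans leave more feedback
    to analyse in the next cycle (a balancing loop)."""
    feedback_load = 1.0  # normalized amount of feedback awaiting analysis
    history = []
    for _ in range(n_cycles):
        analysis_time = min(cycle_budget, feedback_load * analysis_fraction * cycle_budget)
        planning_time = cycle_budget - analysis_time
        plan_quality = planning_time / cycle_budget  # 0..1
        feedback_load = 1.0 - plan_quality           # worse plans -> more feedback
        history.append(round(plan_quality, 2))
    return history

print(pdsa_cycles(cycle_budget=10, n_cycles=4, analysis_fraction=0.5))
```

In this toy version the loop settles down as plans improve and the feedback burden shrinks; the point it shares with the book's diagram is only the structural one, that time spent on feedback analysis is time taken from planning.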


Challenges associated with managing change

The principle of incrementalism is a tactic in which changes are limited to smaller systems or subsystems whose surroundings will remain stable; it represents an intentional reduction of scope that makes the consequences easier to foresee. The challenge of managing change is the need to think of things together, unified rather than fragmented. We need to understand what lies behind the challenges and opportunities that individuals and organizations face in managing change. There is no universally accepted classification in the social and behavioral sciences; the relevant knowledge is organized around the practical problems that must be solved. For change management to succeed, one needs knowledge of systems, of organizations, and of individuals.


Structure: From stability to functions

The organizational structure of a company determines how roles and responsibilities are assigned, controlled, and coordinated, and how information flows between different parts of the organization. It is, however, more useful to describe an organization in terms of its functions, which define the intended overall performance and how it is organized. Fittingly, the word organization derives from the Latin organum, meaning instrument or tool.


A system of necessary knowledge

An alternative to structure is a system or organization of necessary knowledge, defined as the set of coupled functions required to produce a certain performance. The functional definition is not only more precise; it also makes it possible to refer to an organization as something that exists, rather than something still being constructed. A set of functions can be thought of as active or inactive: the active set changes continuously in relation to the others, while the inactive set is more stable and predictable. There may also be dormant functions that are needed in the longer term but are best characterized as inactive relative to a given focus of activity. March and Simon (1993) used this approach to describe an organization, arguing that only a small number of functions can be adaptive, undergoing change, at any point in time, while the remainder are stable. The basic argument is that a system or organization cannot function if everything changes at the same time; there must be some stable (background) functions that can serve as a frame of reference for change and adaptation, and these must be more stable than the active (foreground) functions undergoing change. By thinking of sets of functions with different degrees of stability, it becomes possible to think about organizational structure without the need for decomposition into components.
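March and Simon's point, that only a few functions can adapt at once while the rest form a stable background, can be expressed as a simple viability check on a change plan. The function names and the one-quarter threshold below are invented for the example.

```python
def can_change(functions, proposed_changes, max_adaptive_fraction=0.25):
    """A change plan is viable only if enough stable (background) functions
    remain as a frame of reference while the foreground functions adapt.
    The threshold is an illustrative assumption, not a figure from the book."""
    return len(proposed_changes) <= max_adaptive_fraction * len(functions)

# Hypothetical set of functions in a clinic.
org = ["intake", "triage", "treatment", "discharge",
       "billing", "scheduling", "supplies", "training"]
print(can_change(org, ["triage", "scheduling"]))  # 2 of 8 adapting: viable
print(can_change(org, org[:5]))                   # 5 of 8 adapting: no stable background
```

However the threshold is set, the qualitative conclusion is the one in the text: a plan that puts most functions in motion at once leaves nothing stable to steer by.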


Understanding the dynamics of a system

To suggest specific interventions that bring about changes in a system, it is important to understand its dynamics. The environment surrounding the system is an important source of noise that can either weaken or strengthen the system's signal. The boundary of a system or organization is relative rather than absolute, and rests on criteria derived from the system's function rather than its structure. The boundary can be defined by considering two aspects for each candidate function: whether it is important to the system's purpose, and whether the system can actually perform it in a given context. Some functions are more important than others, so a pragmatic definition of the system boundary can be used to decide which functions are analyzed and which are excluded, based on their relevance to the system's purpose.
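The two boundary criteria, importance to the purpose and feasibility in context, amount to a filter over candidate functions. The candidate functions and scores below are invented to illustrate the idea:

```python
def inside_boundary(functions, importance_threshold=0.5):
    """Keep a candidate function inside the system boundary only if it is
    both important to the system's purpose and feasible in the context."""
    return [name for name, (importance, feasible) in functions.items()
            if importance >= importance_threshold and feasible]

# Hypothetical candidate functions for a hospital ward (scores invented).
candidates = {
    "administer treatment": (0.9, True),
    "schedule staff": (0.7, True),
    "national drug procurement": (0.8, False),  # important, but outside the ward's control
    "decorate waiting room": (0.2, True),       # feasible, but marginal to the purpose
}
print(inside_boundary(candidates))
```

The filter makes the relativity of the boundary concrete: change the purpose (the importance scores) or the context (the feasibility flags), and different functions end up inside the system.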


From assignable or special causes to conditions for adjustment

Hollnagel proposes focusing on the conditions that lead to performance adjustments. The philosophy of eliminating "assignable" or "special" causes is misguided, because it assumes that unacceptable and acceptable outcomes happen for different reasons, when in fact they happen for the same reasons. The variability of outcomes is a necessary part of work in socio-technical systems. Knowledge about variability and performance patterns is important for retaining control in work situations.


Understanding human performance in organizations and systems

Human performance is heuristic rather than algorithmic, and heuristics are found throughout an organization, including in change management. Regularity and efficiency are important and interconnected aspects of human performance. To ensure the existence and sustainability of an organization, the common issues must be addressed by the system as a whole, not in isolation or fragmentation.