SYSTEMANTICS: ZEN AND THE ART OF AMPLIFIER MAINTENANCE
EPISODE 12: MUSIC IN PHASE SPACE
The Systems Bible: The Beginner’s Guide to Systems Large and Small by John Gall
Summary
Systems are seductive. They promise to do a hard job faster, better, and more easily than you could do it by yourself. But if you set up a system (let's say you build yourself a knockoff copy of a Marshall JCM 900 from a pile of capacitors, fuse holders, lights, jack plates, potentiometers, switches, tubes, semiconductors, and hardware), you are likely to find your time and effort now being consumed in the care and feeding of the system itself. New problems are created by its very presence. Once set up, it won't go away; it grows and encroaches. It begins to do strange and wonderful things and breaks down in ways you never thought possible.
The strange behavior (antics) of a complex system kicks back, gets in the way, and opposes the system's own proper function. Your own perspective becomes distorted. You become anxious and push on the system to make it work. Eventually you come to believe that it actually delivers what you really wanted all along. At that point, encroachment is complete. You have become absorbed. You are now a systems-person.
In other words, the system has a severely censored and distorted view of reality, supplied by biased, filtering sensory organs, and that view displaces understanding of the actual real world, which pales and tends to disappear. In addition to negatively affecting those inside it, the system attracts people who are optimized for the pathological environment it creates. Thus, systems attract systems-people.
Systemantics
General Systemantics is a systems engineering treatise by John Gall in which he offers practical principles of systems design, although it is more about how not to design systems. The primary precept is that large complex systems are extremely difficult to design correctly despite the best intentions, so care must be taken to design smaller, less complex ones, and to do so with incremental functionality based on close and continual touch with user needs.
The term systemantics is a commentary on prior work by Alfred Korzybski called General Semantics, which conjectured that all system failures could be attributed to a single root cause: a failure to communicate. Dr. Gall observes that, instead, system failure is a feature, not a bug.
What Is a System?
In the most basic sense, a system is any group of interacting, interrelated, or interdependent parts that form a complex and unified whole that has a specific purpose. Without such interdependencies, we have just a collection of parts, not a system.
Sometimes we cannot identify distinct systems. Consider something like music literacy. It consists of a mix of interplaying states at differing degrees of complexity: some chaotic-generative and random, some emergent and self-organizing, some stable, ordered, and sustaining, and some rigid and collapsing or near to collapse.
Most complexity/design thinkers will say the problems they wish to address "involve human beings, particularly those very large systems such as national governments, nations themselves, religions, the railway system, the post office…", though the intention is that the principles are general to any system.
Reality is more complex than it seems, and complex systems always exhibit unexpected behavior. A system is not a machine. Its behavior cannot be predicted even if you know its mechanism, because:
Everything is a system.
Everything is part of a larger system.
The universe is infinitely systematized, both upward (larger systems) and downward (smaller systems).
All systems are infinitely complex.
All systems are open, overlapping, and adjacent to one another, and contain non-linear connections, both direct and indirect.
All systems consist of many sub-problems, where addressing one can lead to unintended consequences for another, or create new problems.
All systems contain a high diversity of actors and resources, interactions between them, and multiple indirect feedback loops.
All systems are dynamic in the sense that they are always adapting, emerging, and resolving at different rates; they are neither fully measurable nor predictable.
PURPOSE
Systems have purpose. The purpose, however, is a property of the system as a whole and not of any of the parts. For example, the purpose of an automobile is to provide a means to take people and things from one place to another. This purpose is a property of the automobile as a whole and cannot be detected in just the wheels, the engine, or any other part.
You’ll never encounter a situation where you wake up one morning and your car has changed its purpose to be a lawnmower. Complex systems, on the other hand, are continually evolving and have the capacity to change their purpose, temporarily or permanently.
PARTS
All parts must be present for a system to carry out its purpose optimally. If you can take pieces away from something without affecting its functioning, then you have a collection of parts, not a system. In our toolbox, it doesn’t matter whether the screwdrivers are piled on top or buried at the bottom of the box. In a system, however, the arrangement of all the parts matters a great deal. (Imagine trying to randomly rearrange the parts in your automobile!)
Practical systems design
The Vector Theory of Systems: Systems run better when designed to run downhill. Loose systems last longer and work better. (Efficient systems are dangerous to themselves and to others.)
WHAT DO SYSTEMS DO — Linear vs. Feedback
The linear view sees the world as a series of unidirectional cause-and-effect relationships: A causes B causes C causes D, etc.
A → B → C → D → …
The feedback loop perspective, on the other hand, sees the world as an interconnected set of circular relationships, where something affects something else and is in turn affected by it: A causes B causes C causes A, etc.
When we take the linear view, we tend to see the world as a series of events that flow one after the other. For example, if sales go down (event A), I take action by launching a promotions campaign for the release of a live album (event B). I then see orders increase (event C), sales rise (event D), and backlogs increase (event E). Then I notice sales decreasing again (event F), to which I respond with another live or studio album (event G) . . .
From a feedback loop perspective, I would be continually asking myself “How do the consequences of my actions feed back to affect the system?” So, when I see sales go down (event A), I launch a promotions campaign (event B). I see orders increase (event C) and sales rise (change in event A). But I also notice that backlogs increase (event D) (another eventual effect of event B), which affects orders and sales (change in events C and A), which leads me to repeat my original action (event B).
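To make the contrast concrete, here is a toy simulation of that loop, a minimal sketch in Python. The promotion trigger, the capacity limit, and every constant below are illustrative assumptions invented for this example, not anything from Gall's book.

```python
# Minimal sketch of the sales/promotion/backlog feedback loop described
# above. All names, constants, and dynamics are illustrative assumptions.

def simulate(steps: int = 20) -> None:
    sales, orders, backlog = 80.0, 80.0, 0.0
    target = 100.0
    for t in range(steps):
        promoting = sales < target           # event B: react to low sales
        if promoting:
            orders += 30.0                   # promotion boosts orders (event C)
        backlog += max(0.0, orders - 100.0)  # orders beyond capacity pile up
        orders -= 0.5 * backlog              # backlog discourages new orders
        sales = 0.7 * sales + 0.3 * orders   # sales follow orders with a lag
        backlog *= 0.8                       # backlog is slowly worked off
        print(f"t={t:2d} sales={sales:6.1f} orders={orders:6.1f} "
              f"backlog={backlog:6.1f} promo={promoting}")

simulate()
```

Run it and you can watch each promotion build a backlog that drags orders and sales back down, triggering the next promotion: the circular A-causes-B-causes-C-causes-A structure that the linear view misses.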
But here’s a key insight in systems thinking: How we describe our actions in the world affects the kinds of actions we take in the world.
New Systems mean new problems. Once a system is set up to solve some problem, the system itself engenders new problems relating to its development, operations and maintenance. The author points out that the additional energy required to support the system can consume the energy it was meant to save.
One thing worth adding to that picture: "In order to optimize the whole, we must accept the sub-optimality of the parts."
FEEDBACK
The most important feature of feedback is that it provides information to the system that lets it know how it is doing relative to some desired state. For example, normal human body temperature is 98.6 degrees Fahrenheit. If you run, the exertion warms your body beyond that desired temperature. This change activates your sweat glands until the cooling effect of the perspiration readjusts your temperature back to the norm.
Or, in our car example, imagine that you are steering your car into a curve. If you turn too sharply, you receive feedback in the form of visual cues and internal sensations that you are turning too much for the speed at which you’re traveling. You then make adjustments to correct the degree of your turn or alter your speed, or some combination of both.
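Both examples describe negative feedback: a correction that opposes the deviation from a desired state. Here is a minimal sketch of that mechanism as a proportional controller; the gain and the starting temperature are made-up values for illustration.

```python
# Sketch of negative feedback toward a desired state, in the spirit of the
# body-temperature example. The gain and values are illustrative assumptions.

SETPOINT = 98.6   # desired body temperature (degrees Fahrenheit)
GAIN = 0.5        # how strongly the "sweat glands" respond to the error

temp = 101.0      # exertion has pushed the body past the setpoint
for step in range(10):
    error = temp - SETPOINT   # feedback: distance from the desired state
    temp -= GAIN * error      # corrective action proportional to the error
    print(f"step {step}: temp = {temp:.2f}")
```

Each pass shrinks the error by the same fraction, so the temperature settles smoothly back to the setpoint.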
Systems tend to oppose their own proper function
Calling it “feedback” doesn’t mean that it has actually fed back. It hasn’t fed back until the system changes course. The reality that is presented to the system must also make sense if the system is to make an appropriate response. The sensory input must be organized into a model of the universe that by its very shape suggests the appropriate response.
Too much feedback can overwhelm the response channels, leading to paralysis and inaction. Systems that don't know how much feedback there will be, or which sources of feedback are critical, begin to fear feedback and regard it as hostile, even dangerous, to the system. A system that ignores feedback has already begun the process of terminal instability: it will be shaken to pieces by repeated violent contact with the environment it is trying to ignore.
Nature is only wise when feedbacks are rapid. Like nature, systems cannot be wise when feedbacks are unduly delayed. Feedback is likely to cause trouble if it is either too prompt or too slow. However, feedback is always a picture of the past. The future is no more predictable now than it was in the past, but you can at least take note of trends.
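A toy illustration of that timing problem, under arbitrary assumed values for the gain and delay: the identical corrective rule that settles smoothly with prompt feedback overshoots and oscillates once it acts on a stale picture of the past.

```python
# Sketch of why unduly delayed feedback causes trouble: the same
# proportional correction applied to an old reading. Gain and delay
# lengths are illustrative assumptions.

from collections import deque

def run(delay: int, steps: int = 12) -> None:
    setpoint, gain = 0.0, 0.9
    state = 10.0
    history = deque([state] * (delay + 1), maxlen=delay + 1)
    print(f"delay={delay}:")
    for _ in range(steps):
        observed = history[0]                  # feedback is a picture of the past
        state -= gain * (observed - setpoint)  # correct against the old reading
        history.append(state)
        print(f"  state = {state:8.2f}")

run(delay=0)   # prompt feedback: settles quickly
run(delay=3)   # delayed feedback: overshoot and growing oscillation
```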
Advanced systems functions
Before they even begin to oppose their own proper function, systems develop goals of their own the instant they come into being, and intrasystem goals come first. The system is its own best explanation: it is a law unto itself. Systems don't work for you or me. They work for their own goals and behave as if they have a will to live.
Some apropos laws. The Functional Indeterminacy Theorem (F.I.T.): In complex systems, malfunction and even total non-function may not be detectable for long periods, if ever. The Newtonian Law of Systems Inertia: A system that performs a certain way will continue to operate in that way regardless of the need or of changed conditions.
Not only do systems expand well beyond their original goals, but as they evolve they tend to oppose even their own original goals. This is seen as a systems theory analog of Le Chatelier’s principle that suggests chemical and physical processes tend to counteract changed conditions that upset equilibrium until a new equilibrium is established.
Why systems behave poorly
Complicated systems produce unexpected outcomes [Generalized Uncertainty Principle].
- The Aswan Dam diverts the Nile's fertilizing sediment to Lake Nasser (where it is useless), requiring the dam to operate at full electrical generating capacity to power the artificial fertilizer plants needed to replace the diverted sediment.
- The Vehicle Assembly Building at Kennedy Space Center, designed to protect space vehicles from the weather, is so large that it produces its own weather.
If a big system doesn't work, it won't work: pushing on systems doesn't help, and adding manpower to a late project typically doesn't help either. However, some complex systems do work, and these should be left alone; don't change anything. A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be made to work; you have to start over, beginning with a working simple system. Few areas offer greater potential reward than understanding the transition from a working simple system to a working complex system.
What’s in a name
People performing roles in systems often do not perform the role suggested by the name the system gives that person, nor does the system itself perform the role its name suggests. People in systems do not actually do what the system says they are doing [Functionary's Falsity], and the system itself does not actually do what it says it is doing [The Operational Fallacy]. In the same vein, a larger system does not perform the same function as the smaller system it grew out of, and the larger the system, the less the variety in the product.
System failure
Most large systems are operating in failure mode most of the time. So it is important to understand how a system fails, how it works when its components aren't working well, and how well it works in failure mode. The failure modes typically cannot be determined ahead of time, and the crucial variables tend to be discovered by accident.
A complex system can fail in an infinite number of ways. (If anything can go wrong, it will; see Murphy’s law.) The mode of failure of a complex system cannot ordinarily be predicted from its structure.
The larger the system, the greater the probability of unexpected failure. “Success” or “Function” in any system may be failure in the larger or smaller systems to which the system is connected. The Fail-Safe Theorem: When a Fail-Safe system fails, it fails by failing to fail safe.
Systems tend to malfunction conspicuously just after their greatest triumph. In complex systems, malfunction and even total non-function may not be detectable for long periods, if ever. Large complex systems tend to be beyond human capacity to evaluate.
There will always be bugs, and we can never be sure whether they're local or not. Cherish these bugs; study them, for they significantly advance you along the path of avoiding failure. Life isn't a matter of just correcting occasional errors, bugs, or glitches. Form may follow function, but don't count on it. As systems grow in size and complexity, they tend to lose basic functions (supertankers can't dock).
Colossal systems cause colossal errors, and these errors tend to escape notice. If an error is grandiose enough, it may not even be comprehended as an error (50,000 Americans die each year in car accidents, but this is not seen as a flaw of the transportation system, merely a fact of life). Total systems tend to run away and go out of control.
LEVELS OF PERSPECTIVE
The observer effect — the system is altered by the probe used to test it. However, there can be no system without its observer, and no observation without its effects.
“ACTING” IN DIFFERENT MODES
Events – Reactive
Events are very compelling because they often require an instant response. For example, if a house is burning, we react by immediately trying to put out the fire. Putting out the fire is an appropriate action, but if that's all we ever did, it would be inadequate from a systemic perspective. Why? Because it has solved the immediate problem (the burning house), but it has done nothing to alter the fundamental structures that caused that event (inadequate building codes, lack of fire detectors, lack of fire-prevention education, etc.). Nor has it addressed the mental models and vision that have generated the problematic systemic structures.
Patterns — Adaptive
If we look at the problem over a period of time, we may notice a pattern, such as slower internet at certain times of the day. We can then adapt our processes to make the best use of the current system, perhaps in this case by simply accepting the fact that there’s going to be slower speed. Notice that we are not trying to change the pattern; instead, we’re simply adapting to it.
Systemic Structures — Creative
Systemic structures produce the patterns and events that make up our day-to-day reality. They are also the mechanisms through which mental models and vision get translated into action.
By creating new systemic structures (either through redesigning existing ones or making new ones), we can change the events and patterns we get. We alter the system, rather than just adapting or reacting to it. This is the level at which reorganisations, process redesign, reengineering, compensation schemes, and the like happen.
Mental Models — Reflective
Complex systems tend to produce complex responses (not solutions) to problems. Great advances are not produced by systems designed to produce great advances. The bigger the system, the narrower and more specialized the interface with individuals. Systems-delusions are the delusion-systems that are almost universal in our modern world. Designers of systems tend to design ways for themselves to bypass the system. If a system can be exploited, it will be, and any system can be exploited.
Catalytic managership is based on the premise that trying to make something happen is too ambitious and usually fails, resulting in a great deal of wasted effort and lowered morale. On the other hand, it is sometimes possible to remove obstacles in the way of something happening. A great deal may then occur with little effort on the part of the manager, who nevertheless (and rightly) gets a large part of the credit.
Catalytic managership will only work if the system is so designed that something can actually happen — a condition that commonly is not met. Gandhi is reported to have said, "There go my people. I must find out where they are going, so I can lead them." Choosing the correct system is crucial for success in catalytic managership. Our task, correctly understood, is to find out which tasks our system performs well and use it for those. Utilize the principle of utilization.
Vision — Generative
The system itself does not solve problems. The system represents someone’s solution to a problem. The problem is a problem precisely because it is incorrectly conceptualized in the first place, and a large system for studying and attacking the problem merely locks in the erroneous conceptualization into the minds of everyone concerned. What is required is not a large system, but a different approach. Solutions usually come from people who see in the problem only an interesting puzzle, and whose qualifications would never satisfy a select committee. Great advances do not come out of systems designed to produce great advances. Major advances take place by fits and starts.
Most innovations and advancements come from outside the field
Malfunction is the rule and flawless operation the exception. The height and depth of practical wisdom lies in the ability to recognize and not to fight against the Laws of Systems. The most effective approach to coping is to learn the basic laws of systems behavior. Problems are not the problem; coping is the problem.
Systems don’t enjoy being fiddled with and will react to protect themselves and the unwary intervenor may well experience an unexpected shock.
INFORMATION
In setting up a new system, tread softly. You may be disturbing another system that is actually working. It is impossible not to communicate — but what gets communicated isn't always what you want. The meaning of a communication is the behavior that results.
Knowledge is useful in the service of an appropriate model of the universe, and not otherwise. Information decays and the most urgently needed information decays fastest. However, one system’s garbage is another system’s precious raw material. The information you have is not the information you want. The information you want is not the information you need. The information you need is not the information you can obtain.
What Can Be Done
Inevitability-of-Reality Fallacy — things have to be the way they are and not otherwise because that's just the way they are. The person or system that has a problem and doesn't realize it has two problems: the problem itself and the meta-problem of unawareness.
Creative Tack — if something isn't working, don't keep doing it. Do something else instead; do almost anything else. Search for problems that can be neatly and elegantly solved with the resources (or systems) at hand. The formula for success is not commitment to the system but commitment to Systemantics.
The very first principle of systems design is a negative one: do without a new system if you can. Two corollaries: do it with an existing system if you can; do it with a small system if you can. Almost anything is easier to get into than out of; taking a system down is often more tedious than setting it up.
Systems run best when designed to run downhill. In essence, avoid uphill configurations; go with the flow. In human terms, this means working with human tendencies rather than against them. Loose systems last longer and function better. If a system is built too tight, it will seize up, peter out, or fly apart. Looseness looks like simplicity of structure; looseness in everyday functioning; "inefficiency" in the efficiency expert's sense of the term; and a strong alignment with basic primate motivations.
Slack in the system (redundancy, "inefficiency") doesn't cost; it pays.
Bad design can rarely be overcome by more design, whether bad or good. In other words, plan to scrap the first system when it doesn't work; you will anyway.
What the pupil must learn, if he learns anything, is that the world will do most of the work for you, provided you cooperate with it by identifying how it really works and identifying with those realities. — Joseph Tussman
It is generally easier to aim at changing one or a few things at a time and then work out the unexpected effects than to go to the opposite extreme; attempting to correct everything in one grand design is appropriately designated grandiosity. In dealing with large systems, the striving for perfection is a serious imperfection. Striving for perfection produces a kind of tunnel vision resembling a hypnotic state: absorbed in the pursuit of perfecting the system at hand, the striver has no energy or attention left over for considering other, possibly better, ways of doing the whole thing.
Nipping disasters in the bud, limiting their effects, or, better yet, preventing them, is the mark of a truly competent manager. Imagination in disaster is required — the ability to visualize the many routes of potential failure and to plug them in advance, without being paralyzed by the multiple scenarios of disaster thus conjured up. In order to succeed, it is necessary to know how to avoid the most likely ways to fail. Success requires avoiding many separate possible causes of failure.
Be wary of the positive feedback trap. If things seem to be getting worse even faster than usual, consider that the remedy may be at fault: escalating the wrong solution does not improve the outcome. The author proposes a new word for escalated delusion, "escalusion" (delusion-squared, or D²). If things are acting very strangely, consider that you may be in a feedback situation. Alternatively, when problems don't yield to common-sense solutions, look for the "thermostat" (the trigger creating the feedback).
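As a numerical caricature (values invented for this example), here is what the trap looks like when the "remedy" reinforces the deviation instead of opposing it; escalating the dose only accelerates the runaway.

```python
# Illustrative sketch of the positive feedback trap: a remedy applied with
# the wrong sign feeds the very deviation it is meant to fix.

def apply_remedy(deviation: float, dose: float, steps: int = 8) -> None:
    print(f"dose={dose}:")
    for step in range(steps):
        deviation += dose * deviation   # the "fix" amplifies the problem
        print(f"  step {step}: deviation = {deviation:10.2f}")

apply_remedy(deviation=1.0, dose=0.5)   # modest remedy: still exponential growth
apply_remedy(deviation=1.0, dose=1.0)   # escalated remedy ("escalusion"): faster runaway
```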
Reframing is an intellectual tool which offers hope of providing some degree of active mastery in systems. A successful reframing of the problem has the power to invalidate such intractable labels as “crime”, “criminal”, or “oppressor” and render them as obsolete and irrelevant as “ether” in modern physics. When reframing is complete, the problem is not “solved” — it doesn’t even exist anymore. There is no longer any problem to discuss, let alone a solution.
If you can’t change the system, change the frame — it comes to the same thing. The proposed reframing must be genuinely beneficial to all parties or it will produce a destructive kickback.
In order to remain unchanged, the system must change. Specifically, the changes that must occur are changes in the patterns of changing (or strategies) previously employed to prevent drastic internal change. The capacity to change in such a way as to remain stable when the ground rules change is a higher-order level of stability, which fully deserves its designation as Ultra-stability.
SCALING LAWS
There is, potentially, an in-between place where we may not have a theory for the way a system works in infinite detail, but Geoffrey West has shown that you can generally get what physics calls a "coarse-grained description": an understanding of the idealized, average behavior of these systems. That's what the scaling laws represent. They show that if you tell me the size of a mammal (its mass, how much it weighs), I can tell you to 80 or 90 percent accuracy how much food it needs to eat a day, how fast its heart beats, how long it is going to live, the length and radius of the fifth branch of its circulatory system, the flow rate of blood in a typical capillary, the structure of its respiratory system, how long it needs to sleep, and so on.
You can answer all of these kinds of questions for the average idealized mammal of a given size, and the answers will be correct to an 85 or 90 percent level. I can predict the metabolism of an elephant, for example, but give me a specific elephant and I won't be able to tell you exactly what its metabolic rate is. By metabolic rate, I mean it in colloquial terms: how much it is going to need to eat each day. To bring it closer to home, one can roughly predict the lifespan of a mammal of a given size, and in particular where this lifespan of a hundred years comes from. I can tell you what the parameters are that control that, but I won't be able to tell you in detail how long you're going to live.
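A sketch of what those coarse-grained predictions look like in practice. The quarter-power exponents are the published allometric scaling relations (Kleiber's law and its relatives); the prefactor constants below are rough illustrative values chosen for plausibility, not fitted data.

```python
# Coarse-grained scaling-law sketch: quarter-power allometry for mammals.
# Exponents are the standard published values; prefactors are rough
# illustrative assumptions, so expect order-of-magnitude accuracy only.

def metabolic_rate_watts(mass_kg: float) -> float:
    return 3.4 * mass_kg ** 0.75        # Kleiber's law: B ~ M^(3/4)

def heart_rate_bpm(mass_kg: float) -> float:
    return 240.0 * mass_kg ** -0.25     # heart rate ~ M^(-1/4)

def lifespan_years(mass_kg: float) -> float:
    return 10.0 * mass_kg ** 0.25       # lifespan ~ M^(1/4)

for name, mass in [("mouse", 0.025), ("human", 70.0), ("elephant", 5000.0)]:
    print(f"{name:9s} {metabolic_rate_watts(mass):7.1f} W  "
          f"{heart_rate_bpm(mass):5.0f} bpm  {lifespan_years(mass):5.1f} yr")
```

Note the trade-off the exponents encode: heart rate falls as M^(-1/4) while lifespan rises as M^(1/4), which is why the total number of heartbeats per lifetime comes out roughly constant across mammals.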