Ric Amurrio
15 min read · Dec 12, 2018


Fair Use Disclaimer. This article includes unlicensed copyright-protected works for the purpose of criticism, research or comment.


Most people in the Western cultural tradition still believe in the Victorian ideal of progress, “the assumption that a pattern of change exists in the history of mankind . . . that it consists of irreversible changes in one direction only, and that this direction is towards improvement.” Progress is a law of nature: the mammal is swifter than the reptile, the ape subtler than the ox, and man the cleverest of all.

What is progress? Progress is the attempt to make something better, which implies hierarchical thinking: if there is something better, this means that there is also something worse. During the Italian Renaissance, artists strove to make things better, to paint better, to build better, to compose better. They were modern as a result of their intention to be better, and not the other way around.

Our technological culture measures human progress by technology: the club is better than the fist, the arrow better than the club, the bullet better than the arrow. We came to this belief because it delivered. But these ideas of material progress have been significant only in the past three hundred years or so, coinciding closely with the rise of science and industry and the decline of traditional beliefs. First, the term is inherently optimistic: if change is not for the better, it is regress. Secondly, progress requires a belief, however hazy, in a future state of perfection or in the perfectibility of things. At the very least, it necessitates a continuous forward process which may someday arrive at its intended destination. That new knowledge about the world can be acquired and put to use, that the current state of affairs represents a material advance over the past, and that the future might be better than the present may have been taken as a given in 1950, but in the 1600s this was a radical idea.

Our practical faith in progress has ramified and hardened into an ideology — a secular religion which, like the religions that progress has challenged, is blind to certain flaws in its credentials. Progress, therefore, has become “myth” in the anthropological sense. Successful myths are powerful and often partly true. “Myth is an arrangement of the past, in patterns that reinforce a culture’s deepest values and aspirations. . . . Myths are so fraught with meaning that we live and die by them. They are the maps by which cultures navigate through time.”

Classical thinkers conceived of history’s motion as essentially cyclical, born of their observations of celestial movement. The Christian era’s introduction of Judgment Day and its emphasis on the afterlife laid the groundwork for a linear conception of time, with not just a beginning but also a middle and an end. In repudiating the authority of antiquity as well as the doctrinal finality of the church, Enlightenment thinkers established the modern age.

Mathematically, progress means that some new information is better than past information, not that the average of new information will supplant past information, which means that it is optimal for someone, when in doubt, to systematically reject the new idea, information, or method. Clearly and shockingly, always. Why?

Nassim Taleb

We often unthinkingly pick up a narrative of progress in which each generation of technology is an improvement on the last, from abacus to iPhone. We marvel that we carry more computing power in our pockets than was used to put a man on the moon in 1969. What we have at our fingertips is smaller, faster and more complicated than before.

But is it necessarily better? We are living at a time when inequality in incomes and living standards is rising. But at least we have iPhones, the thinking goes. There may be collateral damage from technological progress, but the end-product elevates us all. But from a historical perspective, technological progress has not always resulted in the betterment of humanity.

Take the spread of watermills for grinding corn, which began around AD 1000 in Europe. Watermills have always been presented as an example of enlightened development, enabling people to grind much larger quantities of grain at once. Yet milling by hand preserves more nutrients in the grain than mechanized milling, and a move to watermills — generally owned by feudal lords — was imposed by force on a reluctant peasant population. Around this time the average height of European peasants began to decrease, indicating a worsening diet.

In the computer age, we are similarly spun into cycles of obsolescence and upgrades that benefit us little but which are difficult to opt out of. The economics of microchip production — where factories must operate at enormous scale and only the very latest products make a profit — dictates a relentless pace of device upgrades, regardless of what consumers really need.

This either/or mindset bestrides the landscape. Again and again we hear that, whatever its failings, GAME A reigns supreme when it comes to delivering progress. If we want wonderful things we need the cut-and-thrust and inevitable casualties of GAME A competition. “Boycott Foxconn and you boycott the 21st century.”


All the new computer and mobile technology of the past 20 years has not led to an increase in productivity. Employees must constantly learn new ways to perform the same tasks as technology changes, but this does not necessarily increase the speed at which jobs are done.

Moreover, modern computers and mobile phones are hampered by a design flaw that dates back to the 1940s: a clock that dictates that only one tiny process can happen at a time, a small patch of frenetic activity while most of the device remains idle.

There are other routes that we could have taken with technology. Until the 1960s around half the world’s computers were still analogue: in fact, it was analogue computers that enabled that first moon landing. Analogue computers had many advantages. They could be more intuitive to use and even in the 1980s were significantly faster and cheaper than their digital rivals. They could be made from a variety of materials.


Almost all today’s computers still use an architecture devised in the 1940s for fragile, valve-based machines to do precisely one thing at a time — and no more. This is why the typical modern computing device needs such a fast, energy-hungry processor.
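To make the point concrete, here is a toy sketch of that 1940s design; the instruction set, program, and memory layout are invented purely for illustration. Each clock tick fetches and executes exactly one instruction, while every other cell of memory sits idle.

```python
# Toy von Neumann machine (instruction set and program invented for illustration).
# One clock tick = one instruction executed; program and data share one memory.

memory = {0: ("LOAD", 7), 1: ("ADD", 8), 2: ("STORE", 9), 3: ("HALT", None),
          7: 40, 8: 2, 9: 0}
acc, pc, ticks = 0, 0, 0       # accumulator, program counter, clock ticks

while True:
    op, arg = memory[pc]       # fetch: only one cell is consulted per tick
    pc += 1
    ticks += 1
    if op == "LOAD":           # execute: exactly one tiny operation
        acc = memory[arg]
    elif op == "ADD":
        acc += memory[arg]
    elif op == "STORE":
        memory[arg] = acc
    elif op == "HALT":
        break

print(memory[9], ticks)        # prints: 42 4
```

Even this four-instruction addition takes four strictly serial ticks; scaling the same loop to billions of ticks per second is exactly why the modern descendant of this design needs such a fast, power-hungry processor.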

Yes, it uses technologies (von Neumann architecture, binary arithmetic units, random access memory, Unix or a similar operating system, TCP/IP, DHCP, HTTP, USB, Ethernet). Most of them are surprisingly old, developed long before GAME A turned its full attention to computing. And they often do less than they did, or were meant to do, when they were new.

Its operating system (Unix, or one of its Open Source workalikes) began life in the late 1960s to serve multiple users simultaneously from a single machine.

The big breakthroughs came to an end when computers started to become commodities in the 1970s, and the failures became legion. For example, computer firms have all struggled to produce reliable operating systems (the software that co-ordinates a computer’s components and essential processes). Today’s OSs are nearly all based on one written in his scanty spare time by a programmer called Ken Thompson in 1969 (Unix, and its free GNU/Linux successors).

Computer disk drives have been the workhorses of the computer revolution; “cloud computing” would be unthinkable without the billions of them that fill data-centres. The promised “weightless economy” is now one of the world’s greatest producers of greenhouse gases. In 2010, Greenpeace found that the internet’s global population of servers was consuming more electricity than Germany or even India. How did this happen? Largely thanks to the demands of on-line commercial competition and advertising.

Other manifestations of GAME A tell similar tales. In 1958 (after the USSR had shocked the West by launching the world’s first man-made satellite, Sputnik) Oxford’s John Jewkes carried out a wide-ranging study of industrial innovation. He found that the USSR, another form of GAME A, wasn’t in fact much good at innovation, but that capitalist GAME A firms weren’t much better. In aviation, plastics, photography, transport and every other industry he looked at, innovation had slowed to a crawl the moment big firms came to dominate, at best making only tiny incremental improvements on already-proven technologies.

Where GAME A excels is in its tendency to “run away” with any technology that it eventually decides to adopt, forcing it into every possible nook and cranny of economic life that might yield even a short-term profit, and stifling other technologies that might threaten the profit stream — hence the winner-take-all character of technological change in capitalist economies.

Technologies that start out full of promise turn into juggernauts. The better the technology, the worse its impact. Here is the paradox: every new technology aims to achieve more with less, yet the moment the forces of economic competition get involved, the equation is thrown into reverse. The more efficient the technology, the bigger its environmental impact.

The problem was first defined by the economist William Stanley Jevons in 1865. The British Government had asked him to examine why, although steam engines were becoming more efficient with each passing year, requiring less and less coal to do the same amounts of work, Britain seemed to be getting through its coal reserves at a faster and ever faster rate. Jevons’s verdict was that this was indeed a genuine phenomenon, probably an intractable one, and not confined to steam technologies: where there is price competition, greater efficiency can lead to greater demand, which negates any savings in energy consumption.
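Jevons’s logic can be sketched numerically. In this back-of-the-envelope model the demand curve, the elasticity value, and all the numbers are hypothetical, chosen only to show the mechanism: when demand for useful work is sufficiently price-elastic, doubling an engine’s efficiency raises total coal consumption rather than lowering it.

```python
# Illustrative rebound-effect arithmetic (all figures hypothetical).
# Greater efficiency lowers the cost of useful work; if demand responds
# strongly enough (elasticity > 1), total fuel burned goes UP, not down.

def fuel_used(efficiency, elasticity, base_demand=100.0, base_efficiency=1.0):
    """Fuel consumed under a constant-elasticity demand curve."""
    cost_ratio = base_efficiency / efficiency          # work gets cheaper
    demand = base_demand * cost_ratio ** (-elasticity) # demand rises as cost falls
    return demand / efficiency                         # fuel = work / efficiency

before = fuel_used(efficiency=1.0, elasticity=1.5)
after = fuel_used(efficiency=2.0, elasticity=1.5)      # engines twice as efficient
print(round(before, 1), round(after, 1))               # prints: 100.0 141.4
```

With an elasticity below 1 the same function shows consumption falling, which is why the strength of the demand response, not the efficiency gain itself, decides whether Jevons’s trap springs.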

Since then, many economists have tried to show that Jevons was wrong, or that better technology would eventually, as it were, achieve “escape velocity” and start delivering the hoped-for reductions: a phenomenon known as “decoupling”. This is often the basis of “green growth” arguments, but actual instances of decoupling are very hard to find. For example, LED lights seem miraculously efficient. They need minuscule amounts of electricity in use — but they use enormous amounts in manufacture, to which must be added the environmental and human costs of extracting and refining the raw materials that go into them, packaging them into affordable, attractive products and shipping them, all of which are borne far from the point of use.


A progress trap is the condition human societies experience when, in pursuing progress through human ingenuity, they inadvertently introduce problems they do not have the resources or political will to solve. The error is often to extrapolate from what appears to work well on a small scale to a larger scale, which depletes natural resources and causes environmental degradation. Large-scale implementation also tends to be subject to diminishing returns. This prevents further progress and sometimes leads to societal collapse.

In a progress trap, those in positions of authority are unwilling to make changes necessary for future survival. If they were to do so they would need to sacrifice their current status and political power at the top of a hierarchy. They may also be unable to raise public support and the necessary economic resources, even if they try.

In the early Stone Age, improved hunting techniques in vulnerable areas caused the extinction of many prey species, leaving the enlarged populace without an adequate food supply. The only apparent alternative, agriculture, also proved to be a progress trap: salination, deforestation, erosion and urban sprawl led to disease, malnutrition and, hence, shorter lives. A new source of natural resources can provide a reprieve. The European discovery and exploitation of the “New World” is one example of this, but seems unlikely to be repeated today: present global civilization has covered the planet to such an extent that there are no new resources in sight.

Almost any sphere of technology can prove to be a progress trap, as in the example of medicine and its possibly inadequate response to the drawbacks of the high-density agricultural practices (e.g. factory farming) it has enabled. Ronald Wright uses weapons technology, gradually escalating to the threat of total nuclear destruction, to illustrate this point. Ultimately, Wright strives to counter at least the Victorian notion of “modernity” as an unconditionally good thing.

Behavioral causes

Besides vested interests and socioeconomic compliance, individual behavior is a significant contributing factor to progress traps, which are not limited to technology. This can be seen in new findings from the neurosciences, notably on lateralization of brain function, where the short-term goals of a man-made world are increasingly favored over long-term global interests. Research in this area shows how institutions and societies can become committed to an exclusive form of technocratic rationalism. In this scenario, humans diverge from a default interdependence with nature, with the result that short-term technical preoccupations slowly inhibit creativity and long-term problem solving, thus compromising long-term interests.


If you look at a cow from the Lascaux cave paintings, dated to something like 17,000 BC, you see that that cow is unsurpassed in translating observed nature into an elegant and refined understanding of the essence of a particular animal. It is as sophisticated in terms of drawing as any cow in the history of art. Of the many different kinds of drawn or painted cows, none is better than that Lascaux cow. They are just different. 17th century Dutch painters like Aelbert Cuyp painted cows very beautifully, and Picasso’s much more abstracted bulls might, in their way, be as good as the Lascaux cow, but I can’t say any of those depictions of a cow are better than the one painted 19,000 years ago.

If we were to talk about progress in art, we would have to posit the perfectibility of forms. Plato worked this out when he argued that no particular work of art, but only the ideal form of Beauty, can be perfectly beautiful, and that no particular cow, but only the ideal form of “cow”, can perfectly embody what a cow is.

What does “being more like a form” involve? The forms, so far as this is compatible with there being more than one of them, exhibit that metaphysical perfection. Whereas particular works of art — a particular painting, for example — may fade away, the form of Beauty, Beauty-in-itself, Plato tells us, can never be destroyed, nor can the form of Cow-in-itself or the form of Goodness-in-itself. The forms, that is, are eternal — or, at the very least, everlasting.

That ordinary things — particulars — pass away is, he suggests, a necessary consequence of the fact that they are complex, composite, changeable. The forms, therefore, as eternal, must be simple, indivisible, unchangeable. The soul is “like the forms” precisely in respect to these metaphysical perfections.

In the last century, the concept of “progress” was very often projected upon the arts as a measurement of quality: “good art” was “progressive art.” If an artist did not commit some “groundbreaking” artistic deed, his work was considered worthless. The discovery of perspective by Brunelleschi in the 15th century was also something like progress, as was the “sfumato” brushwork developed by Leonardo da Vinci.

Surely we can all point to a few achievements here and there, such as linear perspective, which have enhanced our knowledge and added to our ability to create augmented fidelity to life. But expression, artistic vision, and the quality of execution have never been dependent upon the physical means of an art form: Vermeer has not been superseded in terms of artistic quality by Picasso or Pollock, Bach not by Mahler or Boulez, Michelangelo not by Giacometti or Moore, Palladio not by Gropius or Le Corbusier. The mimetic properties of Greek and Roman sculpture are as good as any figurative sculpture created in the 21st century, and in many cases they’re superior.

Forms crop up, become highly valued, and then fade away as cultural tides shift. Byzantine icon painting was concerned with the depiction of saints in heaven, not mimesis; as such, its forms privilege stylization and expression. The Byzantines’ ancestors, on the other hand, wanted realism in their sculpture and perhaps, as the Fayum funerary images suggest, in their painting as well. The French Impressionists, mirroring developments on the political front, set their work in opposition to the dominant trends of the day, elevating freedom and spontaneity.

The Modernist illusion of progress is in effect a historical accident brought on by a cultural belief in a theory of it. What looks like an empirically verifiable progression from representation to abstraction is simply a reflection of a cultural attitude that comes to expect progress in all walks of life. Progress becomes a self-fulfilling prophecy. The reality is that the look of 20th century painting merely reflects cultural changes, in the same way Byzantine icon painting represented a cultural shift in the 12th century. No more, no less.

For comparison, put up a slide of Picasso’s Woman With a Fan, from 1905, beside the Lascaux cave painting. The device that makes clear that the legs on the far side of the cow come from behind the cow’s body is precisely the same one Picasso used to spatially separate the forearm of his woman from her torso. In both cases there is a lightening of color behind what is in front to interrupt the otherwise continuous contour surrounding the body.

Look at a Cycladic sculpture head from about 2,500 BC and consider the kind of abstraction the Cycladic sculptures have; Constantin Brâncuși, whose abstract figurative sculptures might make the best 20th century comparison, comes to mind. The Cycladic heads have no eyes, which is highly unlikely for any kind of depiction of a head in almost any culture. I have no idea why that was, or how a cave man could draw with the intelligence we can see in the Lascaux paintings. Ancient Roman marble portrait heads from the first century BC to the end of the first century AD are remarkable in their realism, and are great portraits. Then there are the wax encaustic Fayum portrait heads, from the Coptic Egyptians, dated from about 160 AD: the best of those are as beautiful as any portrait of any period, anywhere. Ife and Yoruba terra cotta and bronze heads from West Africa in the 12th–14th centuries are also unsurpassed in realism and elegance, and they are surprisingly advanced compared to what was being made in Europe in the same period.

If we accept that there is no progress in art, and that this kind of brilliance emerged independently in isolated cultures many centuries and continents apart, we must be looking at archetypal imagery. And perhaps there is an explanation for that intelligence which suggests that we, in our own culture, are less able to create the beauty and the stylistic and symbolic meaning we see in those examples than were the artist-craftsmen of those ancient civilizations and the so-called “primitive” cultures. And maybe that is because we in the West are lost in a confusion of our own making, starting in childhood when we were fed a visual diet of cartoons, Muppets, electronic games, factory-made toys, sitcoms with canned laughter, and the rest.

We are exposed to a profusion of things of mediocre quality and an over-abundance of choices. Fast food and fast everything else replaced what could be more meaningful. I could say more inspirational, but that would suggest an intention that just wasn’t there.

Alan Feltus

The French painter Paul Gauguin suffered acutely from cosmological vertigo induced by the work of Darwin and other Victorian scientists. In the 1890s, Gauguin ran away from Paris, family, and a stockbroking career to paint native girls in the tropics. But it seems he could not escape from himself, despite great efforts to do so with the help of drink and opium. At the bottom of his disquiet lay a longing to find what he called the “savage” — primordial man, humanity in the raw, the elusive essence of our kind.

Although there is no progress in art in the sense of art getting better through the centuries, what we artists can do is make art that is from within ourselves. It may take years to find what we can believe to be our own tendencies, interests, and ideas about making art, but in general we won’t learn about our personal vision by means of an intellectual approach. Rather, we observe our own tendencies within the art we produce, paying close attention to what does and what does not hold our interest.

The obligation to be “modern” closes off the arsenal of means that developed in the past, the result being that the range of possibilities becomes ever narrower. And in the end, all available material means seem to be “exhausted,” since the artist looks upon the material level as the most important one.

The modernist composer György Ligeti said in an interview that he felt imprisoned between, on one hand, the past, and on the other, modernism — the avant-garde which he himself had helped into being but which he felt he had somehow to transcend, because “progress” meant to him having to “go forward” all the time on the line of historical development. For Ligeti, modernism had become petrified into a mentality which had to be “overcome,” had to be “surpassed.”