"The art of politics, under democracy, is simply the art of ringing it. Two branches reveal themselves. There is the art of the demagogue, and there is the art of what may be called, by a shot-gun marriage of Latin and Greek, the demaslave. They are complementary, and both of them are degrading to their practitioners. The demagogue is one who preaches doctrines he knows to be untrue to men he knows to be idiots. The demaslave is one who listens to what these idiots have to say and then pretends that he believes it himself." ~ H.L. Mencken
Chaology and Market-Based Economics
Column by L.K. Samuels.
Exclusive to STR
The article below contains excerpts from L.K. Samuels’ new book, In Defense of Chaos: The Chaology of Politics, Economics and Human Action.
Market chaologists and complexity economists recognize Adam Smith’s “invisible hand” metaphor as the earliest reference to the way economic order spontaneously emerges within society. In this way, many of the tenets of chaos theory and complexity science validate the autonomous, self-assembling character of market-based, laissez-faire (“let it be”) economics, a term that Benjamin Franklin helped introduce to the English-speaking world in the book Principles of Trade.1
Adam Smith and a diverse array of market–based economists were on the right track when they suggested that people create the most wealth and cooperation when they are set free to act as self-governing agents. This is exactly what John H. Holland demonstrated during his search for ways to model the human brain.
Holder of one of the world’s first Ph.D.s in computer science and a leading figure at the Santa Fe Institute, Holland wanted to mimic some of the processes he had observed in evolution so as to allow computers to evolve artificial intelligence. He sought to “harness the mechanisms of evolution” and to “breed” software programs that “solved problems even when no person can fully understand their structure.”2
When Holland set out to create his algorithms, he learned that MIT had already attempted a “command-and-control” computer model for an artificial intelligence project. It had not only failed, but was considered completely unworkable. Thinking outside the box, Holland modified the computer program by making each digital brain “synapse” into an economic unit. “Individual synapses were paid off for solving problems, with a ‘bucket brigade’ to distribute the rewards to participating units.”3 The program performed extremely well. By the 1970s, Holland’s “genetic algorithms” had shown that self-interest and the profit motive are powerful self-organizing forces. More than that, they showed that systems do best when built on bottom-up, decentralized structures in which individual agents determine their own courses of action.
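Holland’s full classifier systems are far richer than any short example, but the “bucket brigade” payment idea can be sketched in miniature. The Python toy below is a hypothetical illustration, not Holland’s code: three rules fire in a chain, only the last one is paid by the environment, and each rule’s “bid” passes a share of that payment back to its supplier, so credit eventually reaches the rule that started the chain.

```python
# Hypothetical miniature of Holland's "bucket brigade" credit assignment.
# Rule A must fire, then B, then C; only C is rewarded by the environment.
strengths = {"A": 10.0, "B": 10.0, "C": 10.0}
chain = ["A", "B", "C"]        # order in which the rules fire
BID_FRACTION = 0.1             # share of strength a rule bids to fire
REWARD = 5.0                   # external payoff for solving the problem

for episode in range(200):
    # Each rule pays its bid to the rule that set the stage for it.
    for supplier, consumer in zip(chain, chain[1:]):
        bid = strengths[consumer] * BID_FRACTION
        strengths[consumer] -= bid
        strengths[supplier] += bid
    # The environment pays only the final, problem-solving rule.
    strengths[chain[-1]] += REWARD

# All three rules end up stronger than they started, even though
# only the last rule ever touched the external reward.
print(strengths)
```

Run long enough, the early rules in the chain profit from the final payoff, which is the whole point: credit flows backward through the “economy” without any central accountant.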
Holland’s key innovation was to introduce market bidding processes and cost estimates as the engine of artificial-intelligence learning. “We used competition as the vehicle for credit assignment,” Holland wrote in 1986. Rules were treated as intermediaries that self-organized capital, suppliers, payments, consumers, and bids.4 With the help of cellular automata, he started with a population of random individuals. Every new generation was then evaluated for fitness. Individual agents adapted, imitated, learned, and replicated in order to search the problem space and optimize solutions. But the secret of genetic algorithms’ success is that solutions are allowed to emerge and evolve rather than being engineered or calculated. Inspired by evolutionary biology, Holland’s algorithms are now used by a majority of Fortune 500 companies to solve data-fitting, trend-spotting, worker-scheduling, and budgeting problems, among others.
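The generational loop described above can be shown in a few lines. The Python sketch below is the textbook genetic-algorithm cycle of fitness evaluation, selection, crossover, and mutation, not Holland’s classifier system; the objective (counting 1-bits, the standard “OneMax” toy problem) and all parameters are illustrative assumptions.

```python
import random

random.seed(42)

GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 60

def fitness(genome):
    # Toy objective ("OneMax"): count the 1-bits. Real applications
    # score schedules, budgets, or data fits instead.
    return sum(genome)

def select(population):
    # Tournament selection: the fitter of two random individuals
    # wins the right to reproduce -- competition as credit assignment.
    a, b = random.sample(population, 2)
    return a if fitness(a) > fitness(b) else b

def crossover(p1, p2):
    # Single-point crossover recombines two parent solutions.
    point = random.randrange(1, GENOME_LEN)
    return p1[:point] + p2[point:]

def mutate(genome, rate=0.02):
    # Occasional bit flips keep the population exploring new territory.
    return [1 - g if random.random() < rate else g for g in genome]

# Start from random individuals; no one designs the answer.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(fitness(best))  # best fitness found after 60 generations
```

No line of this program computes the answer directly; good solutions emerge because fitter candidates out-bid their rivals for a place in the next generation.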
In his book Adaptation in Natural and Artificial Systems, Holland also showed that genetic algorithms strive for more than raw fitness. He knew that strict competition does not always produce the best results, and he sought ways to strike a balance between exploration and exploitation.5 An exploitation strategy, he understood, carries hidden costs: to exploit is to use up resources and time that could be spent discovering truly novel strategies. Holland contended that “improvements come from trying new, risky things,” so purely exploitative methods could actually undermine success. He was looking for healthy doses of both cooperation and competition, a winning scenario in his eyes. Even Adam Smith understood this balance over 200 years ago, writing in The Theory of Moral Sentiments that “Nothing pleases us more than to observe in other men a fellow-feeling with all the emotions of our own breast.” This is a roundabout way of saying, according to economist Daniel Klein, that “man yearns for coordinated sentiment like he yearns for food in his belly.”6
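The trade-off Holland wrestled with is often illustrated today with a “two-armed bandit,” a problem Holland himself analyzed in that book. The sketch below is a standard epsilon-greedy strategy, not Holland’s formulation: an agent facing two slot machines mostly exploits whichever arm has paid best so far, but reserves a small fraction of pulls for exploration. The payoff probabilities and the 10% exploration rate are illustrative assumptions.

```python
import random

random.seed(1)

# Two slot machines: arm 1 pays off more often, but the agent
# does not know that in advance.
PAYOFFS = [0.3, 0.7]

def pull(arm):
    return 1 if random.random() < PAYOFFS[arm] else 0

def run(epsilon, trials=2000):
    counts = [0, 0]     # pulls per arm
    totals = [0.0, 0.0] # winnings per arm
    reward = 0
    for _ in range(trials):
        if random.random() < epsilon or 0 in counts:
            arm = random.randrange(2)   # explore: try a random arm
        else:
            # exploit: play the arm with the better observed average
            arm = 0 if totals[0] / counts[0] > totals[1] / counts[1] else 1
        r = pull(arm)
        counts[arm] += 1
        totals[arm] += r
        reward += r
    return reward / trials

avg = run(epsilon=0.1)
print(avg)  # average reward per pull with modest exploration
```

Set epsilon to zero and the agent risks locking onto whichever arm happened to pay first; a small, steady dose of risky exploration is what lets it discover the genuinely better machine.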
What Holland demonstrated mathematically was the concept of “spontaneous order” that was championed by economist Friedrich A. Hayek in his 1960 book, The Constitution of Liberty. Hayek had theorized that order would flow naturally out of a market economy separated from the state. In Hayek’s view, if markets were left to themselves, they would “spontaneously optimize the wishes and desires of its participants—even though all are only pursuing their own self-interest.”7 In accordance with self-organizing principles, the pursuit of self-interest would create systems far more stable, resilient, and efficient than would any so-called scientific planning imposed by administrative fiat.
By the 1990s, other computerized scenarios with “digital organisms” yielded similar conclusions. Stuart Kauffman and John Holland, working in conjunction with the Santa Fe Institute, the premier center for the study of complexity science, designed a program called “Patches” to see how groups of individuals solve problems when information is limited. The computerized landscape was populated with blind agents assigned to congregate around the highest peak. The experiment succeeded in a one-peak world, but what about when the topography was more complex, with oddly shaped hills and irregular mountains that shifted randomly? There, the agents were thwarted and failed to reach the highest point. Only when they were allowed to exchange information could the agents locate the highest peak.
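The Patches program itself is not reproduced here; the Python sketch below is a hypothetical miniature of the reported result. Blind hill-climbers wander a rugged landscape with several peaks (all positions and heights invented for illustration). Climbing alone, most agents stall on whatever lesser hill they started near; when they occasionally adopt the best position any member of the group has found, the group as a whole ends up higher.

```python
import random

random.seed(7)

# A rugged one-dimensional landscape: three hills of different heights.
PEAKS = {10: 3.0, 35: 9.0, 70: 5.0}   # position -> peak height

def height(x):
    return max(h - 0.5 * abs(x - p) for p, h in PEAKS.items())

def climb(start_positions, share, steps=200):
    positions = list(start_positions)
    for _ in range(steps):
        for i, x in enumerate(positions):
            step = random.choice([-1, 1])
            if height(x + step) >= height(x):  # blind, greedy local move
                positions[i] = x + step
        if share:
            # Information exchange: agents occasionally adopt the
            # best position anyone in the group has found so far.
            best = max(positions, key=height)
            positions = [best if random.random() < 0.1 else x
                         for x in positions]
    # Average altitude reached by the whole group
    return sum(height(x) for x in positions) / len(positions)

agents = [random.randrange(100) for _ in range(10)]
alone = climb(agents, share=False)
together = climb(agents, share=True)
print(alone, together)  # average altitude without vs. with communication
```

The greedy rule alone cannot cross a valley, so isolated agents are trapped by their starting basins; the communication step is what lets knowledge of the tallest peak spread through the group.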
These experiments affirmed the efficacy of self-organizing systems that emerge at the edge of chaos—a point “somewhere between a rigid order that is unresponsive to new information, and a system that is so overloaded with new information that it dissolves into chaos.”8
1 George Whatley and Benjamin Franklin, Principles of Trade, 1774. However, Franklin gave full credit to Whatley for writing a book in which they both had a hand. Laissez-faire is French for “Let (people) do (as they think best).” In ancient Chinese culture this noninterference concept was known as wu wei, or “active not-doing.”
2 John H. Holland, “Genetic Algorithms: Computer programs that ‘evolve’ in ways that resemble natural selection can solve complex problems even their creators do not fully understand,” Scientific American, July 1992, pp. 66–72.
3 William Tucker, “Complex Questions.” Reason, January 1996, pp. 34–38.
4 John H. Holland, “Escaping Brittleness: The Possibilities of General Purpose Machine Learning Algorithms Applied to Parallel Rule-Based Systems.” In Machine Learning II, R.S. Michalski, J.G. Carbonell, and T.M. Mitchell (eds.), Los Altos, CA: Morgan Kaufmann, 1986, pp. 593–623.
5 John H. Holland, Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence, Cambridge, MA: The MIT Press, 1992.
6 Daniel B. Klein, “The People’s Romance: Why People Love Government,” The Independent Review, vol. 10, no. 1, Summer 2005.
7 F.A. Hayek, The Constitution of Liberty, Chicago: University of Chicago Press, 1978, originally published in 1960.
8 William Tucker, “An Ownership Society Evolves: Who says individualized accounts are a better way to solve social problems? The laws of nature,” The American Enterprise, Vol. 16, No. 2, p. 30, March 2005.