"[T]here are, at bottom, basically two ways to order social affairs. Coercively, through the mechanisms of the state -- what we can call political society. And voluntarily, through the private interaction of individuals and associations -- what we can call civil society. ... In a civil society, you make the decision. In a political society, someone else does. ... Civil society is based on reason, eloquence, and persuasion, which is to say voluntarism. Political society, on the other hand, is based on force." ~ Ed Crane
Column by Glen Allport.
Exclusive to STR
The dangers inherent in superintelligent machines have inspired any number of science fiction books, films, and short stories. From Asimov's Robot stories (collected in I, Robot, which loosely inspired the 2004 movie of the same name starring Will Smith) to James Cameron's Terminator and its sequels, from Johnny Depp's recent Transcendence to the classic 2001: A Space Odyssey, intelligent machines have caused unexpected havoc and often death – megadeath in Terminator's case – either as side-effects of pursuing positive goals or through outright maliciousness born of unforeseen conflicts between what the machine wants and what humans need to survive.
You'd think that with so many popular, high-profile stories about the potential dangers of AI (artificial intelligence), researchers, programmers, engineers, entrepreneurs, and military planners would be exercising extreme caution in their attempts to create broad, human-level artificial intelligence. You would especially think that building terminator drones and robots, designed to be intelligent AND autonomous – designed to kill people without a human in the decision-making loop – would be so obviously stupid and dangerous that no one, not even people as bat-shit crazy as those who built and then USED atomic bombs in World War II, would actually make such things. But it turns out people just aren't very smart, apparently, because they're doing exactly that, right now (details are in James Barrat's Our Final Invention: Artificial Intelligence and the End of the Human Era).
Is that really a problem? Yes, and a serious one. Increasing numbers of smart and frightened people believe mankind will probably not survive the coming of machine superintelligence. Nick Bostrom, in Superintelligence: Paths, Dangers, Strategies, opines that “Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear, we can hear a faint ticking sound.”
Of course, the people creating a machine superintelligence (or a broad, human-level intelligence that might improve its own code, even without being told to) could take precautions – although many seem not to be doing so, and indeed most, according to James Barrat and others, are seemingly unaware of the dangers.
If you were among the more thoughtful AI/ASI [artificial superintelligence] researchers, you might create your new smarter-than-us intelligence within a disconnected computer environment, with no link to the Internet or to other computers. Barrat describes how laughably ineffective that would likely be: “Now, really put yourself in the ASI's shoes. Imagine awakening in a prison guarded by mice. Not just any mice, but mice you could communicate with.”
Barrat discusses what might follow in detail, but you already know the outcome: even before the mice get scammed into letting the ASI out of the box with the promise of protecting micekind from the evil cat nation – which is surely building an ASI of its own – the mice would probably be toast.
As will we, absent incredibly good fortune. Not because the ASI would be hostile to humans; not because it would hate us or even necessarily fear us, but simply because it would no more care about us or consult us about its plans than we care about or consult mice before plowing their burrows under to plant corn. The inherent drives of an ASI would eventually (meaning, in ASI-time, at any moment) cause it to appropriate materials that we humans have important uses for: our bodies, for example. Assuming the ASI has created nanotech that can make use of whatever molecules it finds at hand to build the things (such as programmable matter) it needs, keeping human bodies and those numerous other things we need for life separate from the feedstock being transformed by the nano-assemblers might not be something the ASI would worry about or even take notice of.
Superintelligent machines won't so much become our “robot overlords” as they will something further beyond us, and more alien to us, than we are to ants. We share much (a stunning amount, actually) of our DNA with ants and other “lower” creatures, and as biological creatures that have evolved on the same planet, we share significant physical needs and low-level motivations. Machine intelligence will share essentially nothing with us, and even careful programming is unlikely to ensure long-term safety. Has any complex system, any program larger than the old DOS “copy” command, any modern computerized device, ever NOT crashed or glitched or done something that surprised its own creators? I adore Apple's products and I respect Apple's programmers, but even with millions of people beta testing Apple hardware and software, problems are common. They don't cause human extinction, but unexpected problems occur with every new release of hardware, of operating systems, and often with smaller pieces of software as well. Superintelligent machines will be no different, except for the potential to create extinction-level events in a world where nearly everything is computerized and in most cases connected to the Internet.
How long do we have? No one knows, in part because AI/ASI researchers are spread across dozens of nations, and many efforts are being conducted in secrecy. Chillingly, much of the funding (perhaps the majority) is coming from military sources such as DARPA, with, one can only believe, the explicit intent to use machine intelligence to kill the enemy or to disable or destroy the enemy's infrastructure – which would, as we have seen in Iraq and elsewhere, kill thousands, millions, or (in, say, the United States or China) perhaps hundreds of millions of civilians. Another reason we can only guess at the time remaining in the Human Era is that for many types of thinking, machines are millions of times faster than humans, and they never sleep. At the point where turning an AI into something a thousand times smarter than Einstein might take a human team decades, the AI itself might do the same job in minutes – and do it in the background, without people even noticing what was happening.
We can say this with certainty: Time grows short. “IBM Chip Processes Data Similar to the Way Your Brain Does” was published in Technology Review a few days ago (August 7, 2014). It would take a largish number of those chips to create something with human-level general intelligence, but as I survey the human world, I increasingly perceive mouse-like characteristics in place of the grander qualities we have always insisted on seeing in our kind. Compared to what is coming, mice-versus-humans seems an almost level pairing.
For more detail, I recommend the previously mentioned Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat. Pleasingly well-written, thoughtful, and heavily researched, Barrat's book is an enjoyable read, aside from the mounting terror it creates.
Nick Bostrom's Superintelligence: Paths, Dangers, Strategies is more technical and spends much time on genetic and other methods of human intelligence amplification – a path I fear we won't have time to follow for long, Ray Kurzweil's cheerleading notwithstanding – but is also quite readable and thought-provoking.