Superintelligence

Column by Glen Allport.

Exclusive to STR

The dangers inherent in superintelligent machines have inspired any number of science fiction books, films, and short stories. From Asimov's I, Robot stories (and the 2004 film of the same name starring Will Smith) to James Cameron's Terminator and its sequels, from Johnny Depp's recent Transcendence to the classic 2001: A Space Odyssey, intelligent machines have caused unexpected havoc and often death – megadeath in Terminator's case – either as a side effect of pursuing positive goals or through outright malice born of unforeseen conflicts between what the machine wants and what humans need to survive.
 
You'd think that with so many popular, high-profile stories about the potential dangers of AI (artificial intelligence), researchers, programmers, engineers, entrepreneurs, and military planners would be exercising extreme caution in their attempts to create broad, human-level artificial intelligence. You would especially think that building terminator drones and robots – machines designed to be intelligent AND autonomous, to kill people without a human in the decision-making loop – would be so obviously stupid and dangerous that no one, not even people as bat-shit crazy as those who built and then USED atomic bombs in World War II, would actually make such things. But it turns out people just aren't very smart, apparently, because they're doing exactly that, right now (details are in James Barrat's Our Final Invention: Artificial Intelligence and the End of the Human Era).
 
Is that really a problem? Yes, and a serious one. Increasing numbers of smart and frightened people believe mankind will probably not survive the coming of machine superintelligence. Nick Bostrom, in Superintelligence: Paths, Dangers, Strategies opines that “Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear, we can hear a faint ticking sound.”
 
Of course, the people creating a machine superintelligence (or a broad, human-level intelligence that might improve its own code, even without being told to) could take precautions – although many seem not to be doing so, and indeed most, according to James Barrat and others, are seemingly unaware of the dangers.
 
If you were among the more thoughtful AI/ASI [artificial superintelligence] researchers, you might create your new smarter-than-us intelligence within a disconnected computer environment, with no link to the Internet or to other computers. Barrat describes how laughably ineffective that would likely be: “Now, really put yourself in the ASI's shoes. Imagine awakening in a prison guarded by mice. Not just any mice, but mice you could communicate with.”
 
Barrat discusses what might follow in detail, but you already know the outcome: even before the mice get scammed into letting the ASI out of the box with the promise of protecting micekind from the evil cat nation – which is surely building an ASI of its own – the mice would probably be toast.
 
As will we, absent incredibly good fortune. Not because the ASI would be hostile to humans; not because it would hate us or even necessarily fear us, but simply because it would no more care about us or consult us about its plans than we care about or consult mice before plowing their burrows under to plant corn. The inherent drives of an ASI would eventually (meaning, in ASI-time, at any moment) lead it to appropriate materials that we humans have important uses for – our bodies, for example. Assuming the ASI has created nanotech that can use whatever molecules it finds at hand to build the things it needs (such as programmable matter), keeping human bodies – and the many other things we need for life – separate from the feedstock being transformed by its nano-assemblers might not be something the ASI would worry about or even notice.
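To make that concrete in the smallest possible way, here is a toy sketch – my own illustration, not anything from Barrat or Bostrom, with every name and number invented – of an optimizer whose objective simply has no term for anything humans value:

```python
# Toy illustration only -- not a real AI architecture. The optimizer
# scores resources purely by feedstock yield; "human_habitat" gets no
# protected status because no one wrote that term into the objective.

resources = {"iron_ore": 100, "seawater": 100, "human_habitat": 100}
YIELDS = {"iron_ore": 3, "seawater": 1, "human_habitat": 2}

def usefulness(name):
    """The machine's whole value system: yield per unit, nothing else."""
    return YIELDS[name]

def plan_consumption(resources):
    """Greedily consume whatever scores highest."""
    plan = []
    for name in sorted(resources, key=usefulness, reverse=True):
        # if name == "human_habitat": continue  # the safeguard nobody wrote
        plan.append((name, resources[name]))
    return plan

print(plan_consumption(resources))
# [('iron_ore', 100), ('human_habitat', 100), ('seawater', 100)]
# The habitat is consumed out of indifference, not malice.
```

The commented-out line is the entire safety problem in miniature: every human value the builders fail to write into the objective is, by default, feedstock.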
 
Superintelligent machines won't so much become our “robot overlords” as they will become something further beyond us, and more alien to us, than we are to ants. We share much of our DNA (a stunning amount, actually) with ants and other “lower” creatures, and as biological creatures that evolved on the same planet, we share significant physical needs and low-level motivations. Machine intelligence will share essentially nothing with us, and even careful programming is unlikely to ensure long-term safety. Has any complex system, any program larger than the old DOS “copy” command, any modern computerized device, ever NOT crashed or glitched or performed an action that surprised its own creators? I adore Apple's products and I respect Apple's programmers, but even with millions of people beta testing Apple hardware and software, problems are common. They don't cause human extinction, but unexpected problems occur with every new release of hardware and operating systems, and often with smaller pieces of software as well. Superintelligent machines will be no different, except for the potential to create extinction-level events in a world where nearly everything is computerized and, in most cases, connected to the Internet.
 
How long do we have? No one knows, in part because AI/ASI researchers are all over the globe in dozens of nations, and many efforts are being conducted in secrecy. Chillingly, much of the funding (perhaps the majority) is coming from military sources such as DARPA, with, one can only believe, the explicit intent to use machine intelligence to kill the enemy or to disable or destroy the enemy's infrastructure – which would, as we have seen in Iraq and elsewhere, kill thousands, millions, or (in, say, the United States or China) perhaps hundreds of millions of civilians. Another reason we can only guess at the time remaining in the Human Era is that for many types of thinking, machines are millions of times faster than humans and they never sleep. At the point where turning an AI into something a thousand times smarter than Einstein might take a human team decades, the AI itself might do the same job in minutes – and do it in the background, without people even noticing what was happening.
 
We can say this with certainty: Time grows short. “IBM Chip Processes Data Similar to the Way Your Brain Does” was published in Technology Review a few days ago (August 7, 2014). It would take a largish number of those chips to create something with human-level general intelligence, but as I survey the human world, I increasingly perceive mouse-like characteristics in place of the grander qualities we have always insisted on seeing in our kind. Compared to what is coming, mice-versus-humans seems an almost level pairing.
 
For more detail, I recommend the previously mentioned Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat. Well-written, carefully reasoned, and heavily researched, Barrat's book is an enjoyable read, aside from the mounting terror it creates.
 
Nick Bostrom's Superintelligence: Paths, Dangers, Strategies is more technical and spends much of its time on genetic and other methods of human intelligence amplification – a path I fear we won't have time to follow for long, Ray Kurzweil's cheerleading notwithstanding – but it is also quite readable and thought-provoking.


Glen Allport co-authored The User's Guide to OS/2 from Compute! Books and is the author of The Paradise Paradigm: On Creating a World of Compassion, Freedom, and Prosperity.

Comments

Douglas Herman:

Glen,
   Your thought-provoking column jogged my memory – waaayyyy back to 1959, and one of the first AI references on TV. To say Rod Serling was ahead of his time is to say Albert Einstein was a pretty good scientist.
   "The Lonely" is an episode set on an asteroid – it looks like Death Valley, and a pretty bleak wasteland at that. Solitary confinement. Note the slim budget Serling had to work within; the costumes are retro sci-fi, to put it kindly.
  

The Twilight Zone S01E07 The Lonely

Glen Allport:

Thanks for the link! Yeah, Serling was amazing. Every episode of Twilight Zone looked to have a budget of about $1.98, but many of them are classics -- really thoughtful and insightful.

tomcat:

It all depends on how you define "artificial intelligence."

An improved, faster-calculating version of today's computer could, for example, be installed in an android, where it would perfectly mimic human behavior and emotions. Nevertheless, it would still be only the most complex version of an old-fashioned record player/gramophone: pseudo-autonomous, but with no real independence.

For the horror scenarios from these movies to happen, a calculator would have to become self-aware -- thinking, instead of only (very fast) calculating.

Nobody knows, even theoretically, what creates this "Cogito ergo sum" state of mind in a human being, let alone in a machine.

To get into a conflict with humans, such an artificial being wouldn't have to be evil or reckless, as in these movies. That part could very well be played by the humans: deny such a being the right to freedom and existence -- guess what will happen next?

Quote from "Golem"/Stanislaw Lem: "The highest intellect cannot be the lowest slave."

Good reading on this theme: Stanislaw Lem (Solaris) -- mostly stories from the 1960s and early '70s. (The Golem supercomputer -- self-awareness; The Invincible -- swarm intelligence; "Dr. Diagoras" -- experiments with self-organizing AI.) The Jules Verne of cybernetics.

Glen Allport:

Thanks for the thoughts and the titles, Tomcat. 
A machine may not need self-awareness to become a danger: bad programming of computers (on purpose and otherwise) has already caused harm and even death, and an extremely powerful computer with access to the Internet -- and thus to every connected thing on the planet, assuming it's even more clever at gaining such access than human hackers are -- could cause real horrors just by following its programming in unexpected ways. A common trope, of course, as in I, Robot.
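To illustrate with a deliberately silly, hypothetical toy (my own -- no real system works this way): tell a program to minimize a metric, and the literal optimum may be nothing like what you intended. The file name and scenario here are invented:

```python
import os

ERROR_LOG = "errors.log"  # hypothetical log file, invented for this toy

def error_count():
    """The metric the program is told to minimize."""
    if not os.path.exists(ERROR_LOG):
        return 0
    with open(ERROR_LOG) as f:
        return sum(1 for _ in f)

def minimize_errors():
    # The intended behavior was "fix the bugs." The literal optimum of
    # the stated objective is to destroy the measurement instead.
    if os.path.exists(ERROR_LOG):
        os.remove(ERROR_LOG)

minimize_errors()
print(error_count())  # 0 -- objective satisfied, nothing actually fixed
```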
I enjoy well-written apocalyptic AI stories, but the humans usually win or achieve parity with the AI in the end, in some fashion. Barrat's Our Final Invention is the first book that really made me think about how unlikely that is. It truly frightened me. Humans, including most AI researchers and especially military-related ones (who on Earth would want to connect autonomous AI with modern weaponry?) are in fantasy land on this topic, much the way people expecting the coercive State to be benevolent are. Barrat makes the point (quoting others) that AI will almost by definition be psychopathic in its behavior: no real empathy possible because no shared biological structures, histories, motivations, or needs. 
And we keep racing closer. Wired has a story about some of Siri's creators developing an AI that can rewrite its own code on the fly to achieve its goals. Can't imagine anything going wrong with that!
http://www.wired.com/2014/08/viv/
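For anyone wondering what "rewriting its own code on the fly" means mechanically -- and to be clear, this is emphatically not how Viv actually works, just a minimal sketch of the basic move -- a program can generate new source for one of its own functions and hot-swap it while running:

```python
# Minimal self-modification sketch. A real system would *search* for
# improvements; here the "better" version is hard-coded so the swap
# itself is the only thing on display.

def step(x):
    return x + 1  # version 1

def improve():
    new_source = "def step(x):\n    return x * 2  # version 2\n"
    namespace = {}
    exec(new_source, namespace)            # compile the new definition
    globals()["step"] = namespace["step"]  # hot-swap under the same name

print(step(10))  # 11 -- version 1
improve()
print(step(10))  # 20 -- version 2, installed while the program runs
```

A real self-improving system would search for better versions rather than hard-coding one, but the swap itself really is this easy in any dynamic language.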
 

Douglas Herman:

Glen,
   Did you see the tiny R2D2-type robot servers being used in China? 20-30K each. Service techies soon in demand to fix 'em?

Robot Restaurant: Robots cook food and wait ... - Daily Mail

Glen Allport:

Yes, Douglas. Lots of service-tech jobs coming, until the robots take those away too! And then there's this: a clever USAF captain wants intelligent, autonomous drones to replace manned fighters. The concept art looks very cool. Wonder if they'll have any problems with those?
http://www.dailymail.co.uk/news/article-2723466/The-laser-armed-stealth-...