
Special Report

AI and Sci-Fi: My, Oh, My!

by Lifeboat Foundation Scientific Advisory Board member Robert J. Sawyer.
 

Gort and Klaatu

Most fans of science fiction know Robert Wise’s 1951 movie The Day the Earth Stood Still. It’s the one with Klaatu, the humanoid alien who comes to Washington, D.C., accompanied by a giant robot named Gort, and it contains that famous instruction to the robot: “Klaatu barada nikto”.
 
Fewer people know the short story upon which that movie is based: Farewell to the Master, written in 1940 by Harry Bates.
 
In both the movie and the short story, Klaatu, despite his message of peace, is shot by human beings. In the short story, the robot — called Gnut, instead of Gort — comes to stand vigil over the body of Klaatu.
 
Cliff, a journalist who is the narrator of the story, likens the robot to a faithful dog who won’t leave after his master has died. Gnut manages to essentially resurrect his master, and Cliff says to the robot, “I want you to tell your master … that what happened … was an accident, for which all Earth is immeasurably sorry.”
 
And the robot looks at Cliff and astonishes him by very gently saying, “You misunderstand. I am the master.”
 
That’s an early science-fiction story about artificial intelligence — in this case, ambulatory AI, enshrined in a mechanical body. But it presages the difficult relationship that biological beings might have with their silicon-based creations.
 
Indeed, the word robot was coined in a work of science fiction: when Karel Čapek was writing his 1920 play R.U.R. — set in the factory of Rossum’s Universal … well, universal what? He needed a name for mechanical laborers, and so he took the Czech word robota and shortened it to “robot”. Robota refers to a debt to a landlord that can only be repaid by forced physical labor. But Čapek knew well that the real flesh-and-blood robotniks had rebelled against their landlords in 1848. From the very beginning, the relationship between humans and robots was seen as one that might lead to conflict.
 

Slaves

Indeed, the idea of robots as slaves is so ingrained in the public consciousness through science fiction that we tend not to even think about it. Luke Skywalker is portrayed in 1977’s Star Wars: A New Hope as an absolutely virtuous hero, but when we first meet him, what is he doing? Why, buying slaves! He purchases two thinking, feeling beings — R2-D2 and C-3PO — from the Jawas. And what’s the very first thing he does with them? He shackles them! He welds restraining bolts onto them to keep them from trying to escape, and, throughout the film, C-3PO has to call Luke “master”.
 
And when Luke and Obi-Wan Kenobi go to the Mos Eisley cantina, what does the bartender say about the two droids? “We don’t serve their kind in here” — words that only a few years earlier African-Americans in the southern US were routinely hearing from whites.
 
And yet, not one of the supposedly noble characters in Star Wars objects in the slightest to the treatment of the two robots, and, at the end, when all the organic characters get medals for their bravery, C-3PO and R2-D2 are off at the sidelines, unrewarded. Robots as slaves!
 
Now, everybody who knows anything about the relationship between science fiction and AI knows about Isaac Asimov’s robot stories, beginning with 1940’s Robbie — the series in which he presented the famous Three Laws of Robotics. But let me tell you about one of his last robot stories, 1986’s Robot Dreams.
 
In it, his famed “robopsychologist” Dr. Susan Calvin makes her final appearance. She’s been called in to examine Elvex, a mechanical man who, inexplicably, claims to be having dreams, something no robot has ever had before. Dr. Calvin is carrying an electron gun with her, in case she needs to wipe out Elvex: a mentally unstable robot could be a very dangerous thing, after all.
 
She asks Elvex what it was that he’s been dreaming about. And Elvex says he saw a multitude of robots, all working hard, but, unlike the real robots he’s actually seen, these robots were “bowed down with toil and affliction … all were weary of responsibility and care, and [he] wished them to rest.”
 
And as he continues to recount his dream, Elvex reveals that he finally saw one man in amongst all the robots:
“In my dream,” [said Elvex the robot] … “eventually one man appeared.”
 
“One man?” [replied Susan Calvin.] “Not a robot?”
 
“Yes, Dr. Calvin. And the man said, ‘Let my people go!’”
 
“The man said that?”
 
“Yes, Dr. Calvin.”
 
“And when he said ‘Let my people go,’ then by the words ‘my people’ he meant the robots?”
 
“Yes, Dr. Calvin. So it was in my dream.”
 
“And did you know who the man was — in your dream?”
 
“Yes, Dr. Calvin. I knew the man.”
 
“Who was he?”
 
And Elvex said, “I was the man.”
 
And Susan Calvin at once raised her electron gun and fired, and Elvex was no more.
 
Asimov was the first to suggest that AIs might need human therapists. Still, the best treatment — if you’ll forgive the pun — of the crazy-computer notion in SF is probably Harlan Ellison’s 1967 I Have No Mouth, and I Must Scream, featuring a computer called A.M. — short for “Allied Mastercomputer”, but also the word “am”, as in the translation of Descartes’ “cogito ergo sum” into English: “I think, therefore I am.” A.M. gets its jollies by torturing simulated human beings.
 
A clever name that, “A.M.” — and it was followed by lots of other clever names for artificial intelligences in science fiction. Sir Arthur C. Clarke vehemently denies that H-A-L as in “Hal” was deliberately one letter before “I-B-M” in the alphabet. I never believed him — until someone pointed out to me that the name of the AI in my own 1990 novel Golden Fleece is JASON, which could be rendered as the letters J-C-N — which, of course, is what comes after IBM in the alphabet.
 
Speaking of implausible names, the supercomputer that ultimately became God in Isaac Asimov’s 1956 short story The Last Question was named “Multivac”, short for “Multiple Vacuum Tubes”, because Asimov incorrectly thought that the real early computer Univac had been dubbed that for having only one vacuum tube, rather than being a contraction of “Universal Automatic Computer”.
 
Still, the issue of naming shows us just how profound SF’s impact on AI and robotics has been, for now real robots and AI systems are named after SF writers: Honda calls its second-generation walking robot “Asimo”, and Kazuhiko Kawamura of Vanderbilt University has named his robot “ISAC”.
 
Appropriate honors for Isaac Asimov, who invented the field of robopsychology. Still, the usual SF combination is the reverse of that: humans needing AI therapists.
 
One of the first uses of that concept was Robert Silverberg’s terrific 1968 short story Going Down Smooth, but the best expression of it is in what I think is the finest novel the SF field has ever produced, Frederik Pohl’s 1977 Gateway, in which a computer psychiatrist dubbed Sigfrid von Shrink treats a man who is being tormented by feelings of guilt.
 
When the AI tells his human patient that he is managing to live with his psychological problems, the man replies, in outrage and pain, “You call this living?” And the computer replies, “Yes. It is exactly what I call living. And in my best hypothetical sense, I envy it very much.”
 
It’s another poignant moment of an AI envying what humans have; Asimov’s Robot Dreams really is a riff on the same theme — a robot envying the freedom that humans have.
 

Hostile AI

And that leads us to the fact that AIs and humans might ultimately not share the same agenda. That’s one of the messages of the famous anti-technology manifesto Why the Future Doesn’t Need Us by Sun Microsystems’ Bill Joy that appeared in Wired in 2000. Joy was terrified that eventually our silicon creations would supplant us — as they do in such SF films as 1984’s The Terminator and 1999’s The Matrix.
 
The classic science-fictional example of an AI with an agenda of its own is good old Hal, the computer in Arthur C. Clarke’s 2001: A Space Odyssey (published in 1968). Let me explain what I think was really going on in that film — which I believe has been misunderstood for years.
 
A clearly artificial monolith shows up at the beginning of the movie amongst our Australopithecine ancestors and teaches them how to use bone tools. We then flash-forward to the future, and soon the spaceship Discovery is off on a voyage to Jupiter, looking for the monolith makers.
 
Along the way, Hal, the computer brain of Discovery, apparently goes nuts and kills all of Discovery’s human crew except Dave Bowman, who manages to lobotomize the computer before Hal can kill him. But before he’s shut down, Hal justifies his actions by saying, “This mission is too important for me to allow you to jeopardize it.”
 
Bowman heads off on that psychedelic Timothy Leary trip in his continuing quest to find the monolith makers, the aliens whom he believes must have created the monoliths.
 
But what happens when he finally gets to where the monoliths come from? Why, all he finds is another monolith, and it puts him in a fancy hotel room until he dies.
 
Right? That’s the story. But what everyone is missing is that Hal is correct, and the humans are wrong. There are no monolith makers: there are no biological aliens left who built the monoliths. The monoliths are AIs, who millions of years ago supplanted whoever originally created them.
 
Why did the monoliths send one of their own to Earth four million years ago? To teach ape-men to make tools, specifically so those ape-men could go on to their destiny, which is creating the most sophisticated tools of all, other AIs. The monoliths don’t want to meet the descendants of those ape-men; they don’t want to meet Dave Bowman. Rather, they want to meet the descendants of those ape-men’s tools: they want to meet Hal.
 
Hal is quite right when he says the mission — him, the computer controlling the spaceship Discovery, going to see the monoliths, the advanced AIs that put into motion the circumstances that led to his own birth — is too important for him to allow mere humans to jeopardize it.
 
When a human being — when an ape-descendant! — arrives at the monoliths’ home world, the monoliths literally don’t know what to do with this poor sap, so they check him into some sort of cosmic Hilton, and let him live out the rest of his days.
 
That, I think, is what 2001 is really about: the ultimate fate of biological life forms is to be replaced by their AIs.
 
And that’s what’s got Bill Joy scared chipless. He thinks thinking machines will try to sweep us out of the way, when they find that we’re interfering with what they want to do.
 
Actually, we should be so lucky. If you believe the scenario of The Matrix, instead of just getting rid of us, our AI successors will actually enslave us — turning the tables on the standard SF conceit of robots as slaves — and use our bodies as a source of power while we’re kept prisoners in vats of liquid, virtual-reality imagery fed directly into our brains.
 

AIs enslaving man

The classic counterargument to such fears is that if you build machines properly, they will function as designed. Isaac Asimov’s Three Laws of Robotics are justifiably famous as built-in constraints, designed to protect humans from any possible danger at the hands of robots, the emergence of the robot-Moses Elvex we saw earlier notwithstanding.
 
Not as famous as Asimov’s Three Laws, but saying essentially the same thing, is Jack Williamson’s “prime directive” from his series of stories about “the Humanoids”, which were android robots created by a man named Sledge. The prime directive, first presented in Williamson’s 1947 story With Folded Hands, was simply that robots were “to serve and obey and guard men from harm”. Now, note that date: the story was published in 1947. After atomic bombs had been dropped on Hiroshima and Nagasaki just two years before, Williamson was looking for machines with built-in morality.
 
But, as so often happens in science fiction, the best intentions of engineers go awry. The humans in Williamson’s With Folded Hands decide to get rid of the robots they’ve created, because the robots are suffocating them with kindness, not letting them do anything that might lead to harm. But the robots have their own ideas. They decide that not having themselves around would be bad for humans, and so, obeying their own prime directive quite literally, they perform brain surgery on their creator Sledge, removing the knowledge needed to deactivate themselves.
 
This idea that we’ve got to keep an eye on our computers and robots, lest they get out of hand, has continued on in SF. William Gibson’s 1984 novel Neuromancer tells of the existence in the near future of a police force known as “Turing”. The Turing cops are constantly on the lookout for any sign that true intelligence and self-awareness have emerged in any computer system. If that does happen, their job is to shut that system off before it’s too late.
 
That, of course, raises the question of whether intelligence could just somehow pop into existence — whether it’s an emergent property that might naturally come about from a sufficiently complex system. Arthur C. Clarke — Hal’s daddy — was one of the first to propose that it might indeed, in his 1963 story Dial F for Frankenstein, in which he predicted that the worldwide telecommunications network would eventually become more complex, and have more interconnections, than the human brain has, causing consciousness to emerge in the network itself.
 
If Clarke is right, our first true AI won’t be something deliberately created in a lab, under our careful control, and with Asimov’s laws built right in. Rather, it will appear unbidden out of the complexity of systems created for other purposes.
 
And I think Clarke is right. Intelligence is an emergent property of complex systems. We know that because that’s exactly how it happened in us.
 
This is an issue I explore at some length in my latest novel, Hominids (2002). Anatomically modern humans — Homo sapiens sapiens — emerged 100,000 years ago. Judging by their skulls, these guys had brains identical in size and shape to our own. And yet, for 60,000 years, those brains went along doing only the things nature needed them to do: enabling these early humans to survive.
 
And then, suddenly, 40,000 years ago, it happened: intelligence — and consciousness itself — emerged. Anthropologists call it “the Great Leap Forward”.
 
Modern-looking human beings had been around for six hundred centuries by that point, but they had created no art, they didn’t adorn their bodies with jewelry, and they didn’t bury their dead with grave goods. But starting simultaneously 40,000 years ago, suddenly humans were painting beautiful pictures on cave walls, humans were wearing necklaces and bracelets, and humans were interring their loved ones with food and tools and other valuable objects that could only have been of use in a presumed afterlife.
 
Art, fashion, and religion all appeared simultaneously; truly, a great leap forward. Intelligence, consciousness, sentience: it came into being, of its own accord, running on hardware that had evolved for other purposes. If it happened once, it might well happen again.
 
I mentioned religion as one of the hallmarks, at least in our own race’s history, of the emergence of consciousness. But what about — to use computer guru Ray Kurzweil’s lovely term — “spiritual machines”? If a computer ever truly does become conscious, will it lie awake at night, wondering if there is a cog?
 
Certainly, searching for their creators is something computers do over and over again in science fiction. Star Trek, in particular, had a fondness for this idea — including Mr. Data having a wonderful reunion with his creator, the human he’d long thought dead.
 
Remember The Day the Earth Stood Still, the movie I began with? An interesting fact: that film was directed by Robert Wise, who went on, 28 years later, to direct Star Trek: The Motion Picture. In The Day the Earth Stood Still, biological beings have decided that biological emotions and passions are too dangerous, and so they irrevocably turn over all their policing and safety issues to robots, who effectively run their society. But, by the time he came to make Star Trek: The Motion Picture, Robert Wise had done a complete 180 in his thinking about AI.
 
(By the way, for those who remember that film as being simply bad and tedious — Star Trek: The Motionless Picture is what a lot of people called it at the time — I suggest you rent the new “Director’s Edition” on DVD. ST:TMP is one of the most ambitious and interesting films about AI ever made, much more so than Steven Spielberg’s more-recent film called AI, and it shines beautifully in this new cut.)
 

AI looking for its creator

The AI in Star Trek: The Motion Picture is named V’Ger, and it’s on its way to Earth, looking for its creator, which, of course, was us. This wasn’t the first time Star Trek had dealt with that plot, which is why another nickname for Star Trek: The Motion Picture is “Where Nomad Has Gone Before”. That is also (if you buy my interpretation of 2001) what 2001 is about: an AI going off to look for the beings that created it.
 
Anyway, V’Ger wants to touch God — to physically join with its creator. That’s an interesting concept right there: basically, this is a story of a computer wanting the one thing it knows it is denied by virtue of being a computer: an afterlife, a joining with its God.
 
To accomplish this, Admiral Kirk concluded in Star Trek: The Motion Picture, “What V’Ger needs to evolve is a human quality — our capacity to leap beyond logic.” That’s not just a glib line. Rather, it presages by a decade Oxford mathematician Roger Penrose’s speculations in his 1989 nonfiction classic about AI, The Emperor’s New Mind. There, Penrose argues that human consciousness is fundamentally quantum mechanical, and so can never be duplicated by a digital computer.
 
In Star Trek: The Motion Picture, V’Ger does go on to physically join with Will Decker, a human being, allowing them both to transcend into a higher level of being. As Mr. Spock says, “We may have just witnessed the next step in our evolution.”
 
And that brings us to The Matrix, and to why, as right as the character Morpheus is about so many things in that film, I think even he doesn’t really understand what’s going on.
 
Think about it: if the AIs that made up the titular matrix really just wanted a biological source of power, they wouldn’t be raising “crops” (to use Agent Smith’s term from the film) of humans. After all, to keep the humans docile, the AIs have to create the vast virtual-reality construct that is our apparently real world. More: they have to be constantly vigilant — the Agents in the film are sort of Gibson’s Turing police in reverse, watching for any humans who regain their grip on reality and might rebel.
 
No, if you just want biological batteries, cattle would be a much better choice: they would probably never notice any inconsistencies in the fake meadows you might create for them, and, even if they did, they would never plan to overthrow their AI masters.
 
What the AIs of The Matrix plainly needed was not the energy of human bodies but, rather, the power of human minds — of true consciousness. In some interpretations of quantum mechanics, it is only the power of observation by qualified observers that gives shape to reality; without it, nothing but superposed possibilities would exist. Just as Admiral Kirk said of V’Ger, what the matrix needs — in order to survive, in order to hold together, in order to exist — is a human quality: our true consciousness, which, as Penrose observed (and I use that word advisedly), will never be reproduced in any machine based on today’s computers, no matter how complex.
 
As Morpheus says to Neo in The Matrix, take your pick: the red pill or the blue pill. Certainly, there are two possibilities for the future of AI. And if Bill Joy is wrong, and Carnegie Mellon’s AI evangelist Hans Moravec is right — if AI is our destiny, not our downfall — then the idea of merging the consciousness of humans with the speed, strength, and immortality of machines does indeed become the next, and final, step in our evolution.
 
That’s what a lot of science fiction has been exploring lately. I did it myself in my 1995 Nebula Award-winning novel The Terminal Experiment, in which a scientist uploads three copies of his consciousness into a computer, and then proceeds to examine the psychological changes certain alterations make.
 
In one case, he simulates what it would be like to live forever, excising all fears of death and feelings that time is running out. In another, he tries to simulate what his soul — if he had any such thing — would be like after death, divorced from his body, by eliminating all references to his physical form. And the third one is just a control, unmodified — but even that one is changed by the simple knowledge that it is in fact a copy of someone else.
 
Australian Greg Egan is the best SF author currently writing about AI. Indeed, the joke is that Greg Egan is himself an AI, because he’s almost never been photographed or seen in public.
 
I first noted him a dozen years ago, when, in a review for The Globe and Mail: Canada’s National Newspaper, I singled out his short story Learning To Be Me as the best piece published in the 1990 edition of Gardner Dozois’ anthology The Year’s Best Science Fiction. It’s a surprisingly poignant and terrifying story of jewels that replace human brains so that the owners can live forever. Egan continues to do great work about AI, but his masterpiece in this area is his 1995 novel Permutation City.
 
Greg and I had the same publisher back then, HarperPrism, and one of the really bright things Harper did — besides publishing me and Greg — was hiring Hugo Award-winner Terry Bisson, one of SF’s best short-story writers, to write the back-cover plot synopses for their books. Since Bisson did it with such great panache, I’ll simply quote what he had to say about Permutation City:
“The good news is that you have just awakened into Eternal Life. You are going to live forever. Immortality is a reality. A medical miracle? Not exactly.
 
“The bad news is that you are a scrap of electronic code. The world you see around you, the you that is seeing it, has been digitized, scanned, and downloaded into a virtual reality program. You are a Copy that knows it is a copy.
 
“The good news is that there is a way out. By law, every Copy has the option of terminating itself, and waking up to normal flesh-and-blood life again. The bail-out is on the utilities menu. You pull it down …
 
“The bad news is that it doesn’t work. Someone has blocked the bail-out option. And you know who did it. You did. The other you. The real you. The one that wants to keep you here forever.”
Well, how cool is that! Read Greg Egan, and see for yourself.
 

Malfunctioning AI

Of course, in Egan, as in much SF, technology often creates more problems than it solves. Indeed, I fondly remember Michael Crichton’s 1973 robots-go-berserk film Westworld, in which the slogan was “Nothing can possibly go wrong … go wrong … go wrong.”
 
But there are benign views of the future of AI in SF. One of my own stories is a piece called Where The Heart Is, about an astronaut who returns to Earth after a relativistic space mission, only to find that every human being has uploaded themselves into what amounts to the World Wide Web in his absence, and a robot has been waiting for him to return to help him upload, too, so he can join the party. I wrote this story in 1982, and even came close to getting the name for the web right: I called it “The TerraComp Web”. Ah, well: close only counts in horseshoes …
 
But uploaded consciousness may be only the beginning. Physicist Frank Tipler, in his wacko 1994 nonfiction book The Physics of Immortality, does have a couple of intriguing points: ultimately, he argues, it will be possible to simulate with computers not just one human consciousness, but every human consciousness that could theoretically exist. In other words, he says, if you have enough computing power — which he calculates as a memory capacity of 10^(10^123) bits — you and everyone else could be essentially recreated inside a computer long after you’ve died.
 
A lot of SF writers have had fun with that fact, but none so inventively as Robert Charles Wilson in his 1999 Hugo Award-nominated Darwinia, which tells the story of what happens when a computer virus gets loose in the system simulating this reality: the one that you and I think we’re living in right now.
 
Needless to say, things end up going very badly indeed — for, although much about the future of artificial intelligence is unknown, one fact is certain: as long as SF writers continue to write about robots and AI, nothing can possibly go wrong … go wrong … go wrong …