Special Report

Preparing for our Posthuman Future of Artificial Intelligence

by Lifeboat Foundation Scientific Advisory Board member David Brin, Ph.D. First published in 2016.
 

Our posthuman future

 

By exploring recent books on the dilemmas of AI and human augmentation, how can we better prepare for (and understand) the posthuman future?

“Each generation imagines itself to be more intelligent than the one that went before it, and wiser than the one that comes after it.” – George Orwell

What will happen as we enter the era of human augmentation, artificial intelligence and government-by-algorithm? James Barrat, author of Our Final Invention, said: “Coexisting safely and ethically with intelligent machines is the central challenge of the twenty-first century.”

A lot of folks are earnestly exploring the topic. “Will scientists soon be able to create supercomputers that can read a newspaper with understanding, or write a news story, or create novels, or even formulate laws?” asks J. Storrs Hall in Beyond AI: Creating the Conscience of the Machine (2007). “And if machine intelligence advances beyond human intelligence, will we need to start talking about a computer’s intentions?” Sharing this concern, SpaceX/Tesla entrepreneur Elon Musk has joined with Y Combinator founder Sam Altman to establish OpenAI, an endeavor that aims to keep artificial intelligence research – and its products – accountable by maximizing transparency and openness.

Among the most worried is Swiss author Gerd Leonhard, whose new book Technology vs. Humanity: The Coming Clash Between Man and Machine coins an interesting term, “androrithm”, to contrast with the algorithms that are implemented in every digital calculating engine or computer. Some foresee algorithms ruling the world with the inexorable automaticity of reflex, and Leonhard asks: “Will we live in a world where data and algorithms triumph over androrithms… i.e., all that stuff that makes us human?”



Will we see the explosive or exponential transitions predicted by Vernor Vinge, who gave “singularity” its modern meaning, and championed by Google director of engineering Ray Kurzweil? Day in and day out, we are only somewhat aware of rapid change, since we swim along inside its current. But Leonhard illustrates how swiftly a singularity crisis may come on, by referring to a line from Ernest Hemingway’s The Sun Also Rises:

“How did you go bankrupt?”

“Two ways. Gradually and then suddenly.”

Comments Leonhard:

“Exponentiality and the ‘gradually then suddenly’ phenomenon are essential to understand when creating our future… Increasingly, we will see humble beginnings of a huge opportunity or threat. And then, all of a sudden, it is either gone and forgotten or it is here, now, and much bigger than imagined. Think of solar energy, digital currencies, or autonomous vehicles: All took a long time to play out, but all of a sudden, they’re here and they’re roaring. Those who adapt too slowly or fail to foresee the pivot points will suffer the consequences.”

He adds: “Wait and see is very likely going to mean waiting to become irrelevant.”
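Leonhard’s point can be made concrete with a little arithmetic. Below is a minimal sketch in Python (the doubling rate and the “noticeable” threshold are illustrative assumptions for this article, not numbers from his book) showing how an exponentially growing capability stays invisible for most of its history, then arrives all at once:

```python
# Illustrative sketch only: the doubling time and thresholds below are
# assumptions chosen for demonstration, not figures from Leonhard's book.
level = 0.000001  # start at one millionth of "transformative" scale

for year in range(1, 31):
    level *= 2  # exponential growth: the capability doubles each year
    if 0.01 <= level < 0.02:
        print(f"Year {year}: barely noticeable ({level:.0%} of full scale)")
    elif level >= 1.0:
        print(f"Year {year}: fully transformative ({level:.0%} of full scale)")
        break
```

Run it and the quantity takes fourteen simulated years to reach even two percent of full scale, then overshoots completely just six years later. Gradually, and then suddenly.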

Leonhard expresses urgency for civilization to apply humanist values to the coming transition. Unlike Francis Fukuyama, whose Our Posthuman Future exudes loathing for tech-driven disruption of old ways and urges renunciation, Leonhard accepts that major changes are inevitable and won’t be all bad. He is friendly to many in the “humanity-plus” community and shows an awareness of science fiction (SF) as a medium for scenario exploration.

(I do find it troubling that so many pundits give nods toward SF, yet seem to have read nothing since William Gibson’s Neuromancer, whose simplistic preachings and redolent cynicism now seem rather quaint, unhelpful and long in the tooth. That perennial citation is starting to seem perfunctory, even discrediting.)

Nevertheless, after a very interesting first portion, Technology vs. Humanity devolves into the kind of repetitious proselytization that can be distilled into two sentences:

  • We should all try to retain mastery over mechanisms that cannot ever have any ethical constraints of their own.
  • All that we hold dear will be doomed unless we consistently, forcefully, and perpetually apply to our tools the moral standards that have served humanity to this point.

That is quite a double-barreled onus! A prospective task that seems — peering ahead across future generations — rather exhausting.

About the book: Artificial intelligence. Cognitive computing. The Singularity. Digital obesity. Printed food. The Internet of Things. The death of privacy. The end of work-as-we-know-it, and radical longevity: The imminent clash between technology and humanity is already rushing towards us. What moral values are you prepared to stand up for — before being human alters its meaning forever? Before it’s too late, we must stop and ask the big questions: How do we embrace technology without becoming it? When it happens — gradually, then suddenly — the machine era will create the greatest watershed in human life on Earth.

Covering analogous territory (and equipped with a very similar cover), Heartificial Intelligence by John C. Havens likewise explores the looming prospect of all-controlling algorithms and smart machines, diving into questions and proposals that overlap with Leonhard’s. “We need to create ethical standards for the artificial intelligence usurping our lives and allow individuals to control their identity, based on their values,” Havens writes.

Mark Anderson of the Strategic News Service pondered the onrush of devices that might meddle in our minds and hearts:

“Frank Lloyd Wright is rumored to have once boasted that he could design a house which…could lead the inhabitants to fall in love, or to get divorced. If this was even partly true of building architecture…then what of the architecture of those who will be holding, and reacting to, our innermost secrets? How will a new user know that she is using a bot with bad performance statistics? Should there be different levels of ethical certification for bots involved with selling shoes on Amazon, compared to counseling or doing Watson-like medical diagnoses?”

Making a virtue of the hand we Homo sapiens are dealt, Havens maintains: “Our frailty is one of the key factors that distinguish us from machines.”

Which seems intuitive till you recall that almost no mechanism in history has ever worked for as long, as resiliently, or as consistently as a healthy 70-year-old human being has, recovering from countless shocks and adapting to innumerable surprising changes. Still, he makes a strong (if obvious) point that “the future of happiness is dependent on teaching our machines what we value most.”

 

The Optimists Strike Back!

In sharp contrast to those worriers is Ray Kurzweil’s The Age of Spiritual Machines: When Computers Exceed Human Intelligence, which posits that our cybernetic children will be as capable as our biological ones at one key and central aptitude — learning from both parental instruction and experience how to play well with others.

This will be especially likely if (as I posit in Existence) AI researchers come to a too-long-delayed realization: that we know of only one way that intelligence ever actually came about in this universe — through upbringing in human homes. Through interfacing with the world relentlessly in the physical, personal, and cultural feedback loops of childhood. Indeed — and here’s an irony — this is the only scenario under which the urgings of Leonhard and Havens and so many others have even a remote chance of coming true.

Well, there is one other way, elucidated in Robin Hanson’s new book: The Age of Em. In that startlingly original and well-thought-out tome, Hanson wagers that AI can only happen in the near term by emulating the brain activity and working minds of actual, living humans. Such doppeled copies — a little like e-versions of my dittos, in Kiln People — might proliferate in “matrix”-style software worlds, spawning billions, trillions, and even quadrillions of copies, all of them based upon a selection of original human beings. Originals whose own versions of human morality and spirituality become templates to pass down the line. Hence — according to Hanson — such cyber-emulated descendants would be inherently capable of ethics, since they are based on us… though they might later veer into new cultures as different from ours as Shogun-era Japan was from the Yanomamo, or Aztecs, or Tibetans, or attendees at Burning Man.

 

A Failed Prescription


Alicia Vikander in Ex Machina

Gerd Leonhard seems aware, at least superficially, that culture makes a difference. Moreover, he sniffs, scenting danger in optimism:

"…To me, it is clear that technological determinism and a global version of the “California ideology” (as in “Why don’t we just invent our way out of this, have fun, make lots of money while improving the lives of billions of people with these amazing new technologies?”) could prove to be just as lazy — and dangerous — as Luddism."

A former resident of Silicon Valley, Leonhard is welcome to his opinion. Though I also find it ironic. For example, he preaches that STEM education should be accompanied by exposure to humanities and ethics and all that, in order to generate innovators who are also grounded in history and values…

…while appearing to ignore the plain fact that this is exactly what happens in Californian schools, and especially that state’s glorious universities, far more than anywhere else on the planet. Indeed, it is only in North America that all universities fully implement a fourth year in their baccalaureate programs, consisting of “breadth requirements”, so that science and engineering types must take a full year of humanistic courses… while arts, humanities, or other “soft” majors must imbibe enough science survey classes to foster at least marginally aware citizens.

(Proof of this? The U.S. almost always scores among the top three in “adult science literacy” and often number one. I explain this elsewhere, so don’t let your head explode with cognitive dissonance.)

In his book Machines of Loving Grace, John Markoff writes, “The best way to answer the hard questions about control in a world full of smart machines is by understanding the values of those who are actually building these systems.” It is an open question whether the yin or the yang side of Silicon Valley culture… or else the new, state-controlled tech centers in China, for example… will take this obligation down paths of responsibility.

Gerd Leonhard coins another term, “Exponential Humanism”: “Through this philosophy, I believe we can find a balanced way forward that will allow us to both embrace technology but not become technology in the process.” Nor do I disagree with the general desideratum. The conversation he calls for is essential!

Alas, Leonhard then goes on to present checklists, then more checklists, of things we ought to do and/or not do, in order to retain our humanity, control, and values. Take this agenda as a sample:

I propose that we devise a test that gauges all new scientific and technological breakthroughs according to questions such as:

  • Does this idea violate the human rights of anyone involved?
  • Does this idea substitute human relationships with machine relationships?
  • Does this idea put efficiency over humanity?
  • Does this idea put economics and profits over the most basic human ethics?
  • Does this idea automate something that should not be automated?

I don’t mind checklists, and these certainly contain wisdom. But Leonhard offers no details about how such rules might be adopted. By worldwide consensus among those who read Technology vs. Humanity? By legislation? Orwellian fiat? Nor does he speak of enforcement: what is to be done about dissenters, or those who reject renunciation?

 

A Method That Is Truly Human


Does technology have ethics?

Again and again, from techno-skeptics like Leonhard and Havens and so many others, we hear that “technology has no ethics.”

Well, I am not so sure about that. Nor is Kurzweil, whose Age of Spiritual Machines suggests otherwise. Nor Kevin Kelly, whose What Technology Wants and The Inevitable propose simple process solutions to the dilemma of encouraging decent outcomes and behavior. Nor Peter Diamandis, whose Abundance impudently forecasts a post-scarcity future, when spectacularly wealthy citizens can partner with cyber entities and explore values together. Nor Isaac Asimov, who foresaw robots caring deeply about moral issues, over the long stretch of time.

But let’s go along with Havens and Leonhard and accept the premise that “technology has no ethics.” In that case, the answer is simple.

Then don’t rely on ethics! Certainly evangelization has not, in the past, had the desired effect of fostering good and decent behavior where it matters most. Seriously, I will give a cookie to the first modern pundit I come across who ponders human history, taking perspective from the long ages of brutal, feudal darkness endured by our ancestors.

Across all of those harsh millennia, people could sense that something was wrong. Cruelty and savagery, tyranny and unfairness vastly amplified the already unsupportable misery of disease and grinding poverty. Hence, well-meaning men and women donned priestly robes and… preached!

They lectured and chided. They threatened damnation and offered heavenly rewards. Their intellectual cream concocted incantations of either faith or reason, or moral suasion. From Hindu and Buddhist sutras to polytheistic pantheons to Judeo-Christian-Muslim laws and rituals, we have been urged to behave better by sincere finger-waggers since time immemorial. Until finally, a couple of hundred years ago, some bright guys turned to all the priests and prescribers and asked a simple question:

“How’s that working out for you?”

In fact, while moralistic lecturing might sway normal people a bit toward better behavior, it never affects the worst human predators, parasites, and abusers — just as it won’t divert the most malignant machines. Indeed, moralizing often empowers them, offering ways to rationalize exploiting others. Even Asimov’s fabled robots — driven and constrained by his checklist of unbendingly benevolent, humano-centric Three Laws — eventually get smart enough to become lawyers. Whereupon they proceed to interpret the embedded ethical codes however they want. (See how I resolve this in Foundation’s Triumph.)

And yet, preachers never stopped. Nor should they; ethics are important! But more as a metric tool, revealing to us how we’re doing. How we change. For decent people, ethics are the mirror in which we evaluate ourselves and hold ourselves accountable. And that realization was what led to a new technique. Something enlightenment pragmatists decided to try, a couple of centuries ago. A trick, a method, that enabled us at last to rise above a mire of kings and priests and scolds. The secret sauce of our success is —

— accountability. Creating a civilization that is flat and open and free enough – empowering so many – that predators and parasites may be confronted by the entities who care most about stopping predation: their victims. One in which politicians and elites see their potential range of actions limited by law and by the scrutiny of citizens.

Does this newer method work as well as it should? Hell no!

Does it work better than every single other system ever tried, including those filled to overflowing with moralizers? Better than all of them combined? By light years?

Yes, indeed.


Will robots learn to think morally?

We may not be, by nature, highly moral creatures. But we do know how to be persnickety. Suspicious. Judgmental. Accusatory. Demanding. Those we do with spectacular skill and passion. And while these traits often wrought vileness in the hierarchies of old, we have since harnessed them into arenas wherein positive-sum, win-win outcomes pour forth, catching and staunching many evils. Detecting and amplifying so many good things.

Moreover, this may be the proper way to deal with ethics-deficient technology. As citizens and users, we need to stay judgmental, applying accountability via markets, democracy, science, and courts — and public opinion — upon those companies and cyber entities who behave in ways we find unethical. Or inhuman. The specifics of implementation will change, with time. (We’ll need new, technological tools for applying accountability.) But this is the way that Ray Kurzweil’s vaunted singularity machines will learn to be “spiritual”. The kind and friendly ones will do better than their unethical competitors… because the good guy machines will have us — the Olde Race — as allies against the meanie-bots. And yes, it might boil down to just that.

Alas, the glory of our era — this technique that underlies our positive-sum games — seems so poorly understood that many of our best minds never grasp the method in its essence, believing instead that we’ll cross the minefield ahead by chiding.

Gerd Leonhard, in Technology vs. Humanity, offers us a Hegelian dialectic of sorts. Between two dismal theses — the blithe techno-transcendentalism of Ray Kurzweil and the renunciatory nostalgia of Francis Fukuyama — Leonhard rightly pleads for caution, for a middle-ground synthesis, though leaning a bit toward Fukuyama. Leonhard frets over plans to embrace and incorporate tech-prosthetics into human existence. “Because it would be a reduction, not an expansion, of who we are, it would no longer be empowerment but enslavement…”

To which I must reply: how the heck do you know that?

 

Conclusion

All of these authors, spanning the current spectrum of discourse from Kurzweil and Peter Diamandis to Leonhard and Havens all the way to Fukuyama and religious fundamentalists, seem bent on making grand declarations. Yet those who would lay down lists of demands and prescriptions make a shared assumption, the same one proclaimed by Plato and so many other dogmatists: that they know the way of things better than our descendants will!

Recall the quotation from George Orwell that opened this article: “Each generation imagines itself to be more intelligent than the one that went before it, and wiser than the one that comes after it.” Shall we then demand that our children and grandchildren — perhaps a bit augmented and smarter than us, but certainly vastly more knowledgeable — follow blueprints that we lay down? Like Cro-Magnon hunters telling us never to forget rituals for propitiating the mammoth spirits? Or Bronze Age herdsmen telling us how to make love?

Ben Franklin and his apprentices led a conspiracy against kings and priests, crafting systems of accountability not in order to tell their descendants how to live, but in order to leave those later citizens the widest range of options. It is that flexibility — wrought by free speech, open inquiry, due process and above all reciprocal accountability — that lent us our most precious sovereign power. To learn from mistakes and try new things, innovating along a positive-sum flow called progress.

We did not need specifics from the Founders; indeed, it proved desperately important for later generations to toss out many of their crude biases! Nor will our heirs need or benefit from explicit lists and prescriptions laid down by well-meaning authors in 2016. Because they will be both smarter and wiser than us, or we’ll have failed.

Will they be smarter and wiser in part because of technology? That seems likely. Might they have solved many of the quandaries that fret us… only to encounter others that we cannot imagine? Also very likely.

Might some of our practical and moral decisions right now either aid or impede that growth? Of course. That is why I bother to engage this topic and read all these earnestly sincere tomes about the future!

But our job is not to delineate or prescribe. It is to find enough of the errors and calamities in advance, cancel those we can, and build enough virtuous cycles so that our children may stand on our shoulders, doing and achieving and pondering and making ethical decisions for their own time. Doing all of that both clumsily and brilliantly. And then yammering too much advice at their own heirs.