
(End of series. For previous topics please see parts I-IX)

Power plants. Trees could do a lot, as we have seen, and they are solar powered, too. Once trees can pull metals from the soil and grow useful, shaped objects like copper wire, a few more levels of genetic engineering could enable a tree to use that wire to deliver electricity. Since a tree is already a solar energy converter, we can build on that by having it grow tissues that convert energy into electricity. Electric eels can already do that, producing a jolt strong enough to be lethal to humans. Even ordinary fish produce small amounts of electricity to create electric fields in the water around them; any object nearby disrupts the field, letting the fish tell that something is near, even in total darkness. We may never be able to plug something into a swimming fish, but we can already make batteries out of potatoes. So why not trees that grow into electricity providers all by themselves? It would be great to plug your electrical devices into a tree (or at least into a socket in your house connected to the tree). Then you would no longer need to connect to the grid, purchase solar panels, or install a windmill. You would, however, need to keep your trees healthy and vigorous. Tree care would become a highly employable specialty.

Greening the desert. The Sahara and various other less notorious but still very dry deserts around the world have plenty of sand and rocks, but not much greenery. The main problem is lack of water. Vast swaths of the Sahara, for example, are plant free; it is simply too dry. This problem is solvable, however. Cacti and other desert plants could potentially extract water from the air. Plants already extract carbon dioxide molecules from the air, and even very dry air contains considerable water vapor, so why not extract water molecules too? Indeed, plants already transport water molecules from the ground into their roots, so is it really such a big step to do the same from the air? Tillandsia (air plant) species can already pull in water with their leaves, but it has to be rain or other liquid water. Creating plants that can extract gaseous water vapor from the air in a harsh desert environment would require sophisticated genetic engineering, or a leap for Mother Nature, but it is still only the first step. Plants get nutrients out of the soil by absorbing fluid in which those nutrients have dissolved, so dry soil would be a problem even for a plant holding plenty of water pulled from the air. Another level of genetic engineering or natural evolution would be required to enable plants to secrete fluid from their roots to moisten chunks of soil and dissolve its minerals, then reabsorb the now nutritious, mineral-laden liquid back into their roots.

Once this difficult task is accomplished, whether by natural evolution in the distant future or by genetic engineering sooner, things will be different in the desert. Canopies of vegetation that hide the ground will be possible. Thus shaded and sheltered, the ground will be able to support a much richer ecosystem of creatures, and maybe even humans, than deserts currently do. One of Earth's harshest environments would be tamed.

Phyto-terraforming. To terraform means to transform a place into an Earth-like state (terra is Latin for Earth). Mars, for example, is a desert wasteland, but it once ran with rivers, and it would be great if the Martian surface were made habitable, in other words, terraformed. Venus might be made habitable if we could only get rid of its dense blanket of carbon dioxide, which causes such a severe greenhouse effect that its surface is over 800 degrees Fahrenheit, toasty indeed. And why not consider terraforming inhospitable terrain right here on Earth, like the Sahara desert or Antarctica? Phyto-terraforming is terraforming using plants. In fact, plants are so favored for this task that when people discuss terraforming, they usually mean phyto-terraforming. Long ago, plants did terraform the Earth, converting a hostile atmosphere with no oxygen but plenty of carbon dioxide into a friendly one with enough oxygen for us to exist comfortably. Plants worked on Earth, and might work on Mars or even Venus, but not on the moon, because plants need carbon dioxide and water. Venus has these (and reasonable temperatures) high in its atmosphere, suggesting airborne algae cells. Mars is a more likely bet, as it has water (as ice) available to surface-dwelling plants, at least in places.

If Mars is the most likely candidate for phyto-terraforming, what efforts have been made in that direction? A first step has been to splice into ordinary plants genes from an organism that lives in the hot water around deep ocean thermal vents. This organism is named Pyrococcus furiosus (pyro- means fire in Greek, and coccus refers to ball-shaped cells, hence "fireball"). Pyrococcus is most comfortable living at about the boiling point of water and can grow furiously, doubling its population every 37 minutes. It has evolved genes for destroying free radicals that work better than those naturally present in plants. Free radicals are produced by certain stressors in plants (and humans), cause cell damage, and can even lead to the death of the organism. By splicing such genes into the plant Arabidopsis thaliana, the experimental mouse of plant research, this small and nondescript-looking plant can be made much more resistant to heat and lack of water. These genes have also been spliced into tomatoes, which could help feed future colonists. Of course Mars requires cold tolerance, not heat tolerance, but the lack-of-water part is a good start. The heat and drought parts might be useful for building plants to terraform deserts here on Earth, bringing terraforming of Earth's deserts a couple of steps closer. With several additional levels of genetic modification, we might eventually terraform Mars yet.
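
That 37-minute doubling figure implies ferocious exponential growth: N(t) = N0 * 2^(t/T), where T is the doubling time. A minimal sketch, assuming a hypothetical starting population (only the 37-minute doubling time comes from the text):

```python
# Exponential growth from a doubling time, illustrating the
# "doubles every 37 minutes" figure for Pyrococcus furiosus.
# The starting population of 1,000 cells is a hypothetical assumption.

def population(n0: float, doubling_minutes: float, elapsed_minutes: float) -> float:
    """N(t) = N0 * 2**(t / doubling_time)."""
    return n0 * 2 ** (elapsed_minutes / doubling_minutes)

if __name__ == "__main__":
    n0 = 1_000        # hypothetical starting cells
    t_double = 37.0   # minutes, from the text
    print(population(n0, t_double, 37))   # one doubling: 2,000 cells
    print(population(n0, t_double, 370))  # ten doublings: 1,024,000 cells
```

At that rate a single cell would pass a million descendants in under 13 hours, which is why "grow furiously" is no exaggeration.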

Recommendations

It would be good to know when the advances described here are likely to happen. Will they occur in your lifetime? Your grandchildren's? Thousands or millions of years into the future? If the latter, there is not much point in devoting precious national funds to help bring them about, but if the former, it might be worth the expense of hurrying the process along. To determine the likely timing of future technological advances, we need to determine the speed of advancement. To measure this speed, we can look at the rate at which advances have occurred in the past, and ask what will happen in the future if advances continue at the same rate. This approach is influential in the modern computer industry in the guise of "Moore's Law," but it was propounded at least as early as about 2,500 years ago, when the Chinese philosopher Confucius is said to have noted, "Study the past if you would divine the future." It would be nice to know when we can expect to grow and eat potatoes with small hamburgers in the middle, pluck nuggets of valuable metals from trees, power our homes by plugging into electricity-generating trees growing in our back yards, or terraform Mars.
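
The extrapolation method described above amounts to compound growth. As a sketch, assuming a capability that doubles at a constant rate (the two-year period is the figure commonly quoted for Moore's Law; the starting value and horizon below are made-up assumptions), projecting forward is one line of arithmetic:

```python
# Extrapolating a past rate of advance into the future, Moore's-Law style.
# The 2-year doubling period is the commonly quoted Moore's Law figure;
# the starting value (1 billion transistors) and the 10-year horizon
# are hypothetical assumptions for illustration.

def project(value_now: float, doubling_years: float, years_ahead: float) -> float:
    """Project a quantity forward assuming a constant doubling time."""
    return value_now * 2 ** (years_ahead / doubling_years)

if __name__ == "__main__":
    print(project(1e9, 2, 10))  # ten years = five doublings: 32 billion
```

The hard part, of course, is not the arithmetic but deciding whether the historical rate will actually hold.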

Will our lumbering, industrial-age-driven information age segue smoothly into a futuristic marvel of yet-to-be-developed technology? It might. Or will it take quantum leaps? It could. Will information technology take off exponentially? It's accelerating in that direction. The way knowledge is unraveling its potential for enhancing human ingenuity, the future looks bright indeed. But there is a problem: that egoistic tendency we have of defending ourselves against knowing, of creating false images to delude ourselves and the world, and of resolving conflict violently. It's as old as history and may be an inevitable part of life. If so, there will be consequences.

Who has ever seen drama/comedy without obstacles to overcome, conflicts to confront, dilemmas to address, confrontations to endure and the occasional least expected outcome? Just as Shakespeare so elegantly illustrated. Good drama illustrates aspects of life as lived, and we do live with egoistic mental processes that are both limited and limiting. Wherefore it might come to pass that we who are of this civilization might encounter an existential crisis. Or crunch into a bottleneck out of which … will emerge what? Or extinguish civilization with our egoistic conduct acting from regressed postures with splintered perception.

What’s least likely is that we’ll continue cruising along as usual.

Not with massive demographic changes, millions on the move, radical climate changes, major environmental shifts, cyber vulnerabilities, changing energy resources, inadequate clean water and values colliding against each other in a world where future generations of the techno-savvy will be capable of wielding the next generation of weapons of mass destruction.

On the other hand, there are intelligent people passionately pursuing methods of preventing the use of weapons, combating their effects and securing a future in which these problems mentioned above will be solved, and also working towards an advanced civilization.

It’s a race against time.

In the balance hangs nothing less than the future of civilization.

The danger from technology is secondary.

As of now, regardless of theories of international affairs, we inject power into our currency of negotiation in one way or another, whether interpersonal or international. After all, power is privilege, hard to give up, especially after getting a taste of it, and so we'll quarrel over power, perhaps fight. Why deny it? The historical record is there for all to see. As for our inner terrors, our tendency to present false egoistic images to the world and to project our secret, socially unacceptable fantasies onto others, we might just bring to pass what we fear and deny. It's possible.

Meantime there are certain simple ideas that remain timeless. For example, as infants we exist at the pleasure of parents, big hulks who pick us up and carry us around sometimes lovingly, sometimes resentfully, often ambivalently, and to be sure many of us come to regard Authority with ambivalence, just as Authority regards the dependent. A basic premise is that we all want something in a relationship. So what do we as infants want from Authority? How about security in our exploration of life? How about love? If it's there, we don't have to pay for it; there are no conditions attached. Life, however, is both complicated and complex beyond a few words, and so we negotiate in the 'best' way we have at our disposal, which in the early stages of life means non-verbal, intuitive methods that are in part genetically and epigenetically determined, entering this life with us, and in part learned. Once adopted, a certain core approach becomes habitual, buried deeply under layers of later learned social skills, skills that we employ in our adult lives. These skills, however, are relatively on the surface. Hidden deep inside are secret desires, unfulfilled fantasies, and hidden impulses that would not make sense in adult relationships if expressed openly in words.

It has been said repeatedly that crisis reveals character. Most of the time we get by in crisis, but we each have a ‘breaking point,’ meaning that under severe enduring stress we regress at a certain point, at which time we’ll abandon sophisticated social skills and a part of us will slip into infantile mode, not necessarily visible on the outside. It varies. No one can claim immunity. And acting out of infantile perception in adult situations can have unexpected consequences depending on the early life drama. Which makes life interesting. It also guarantees an interesting future.

Meantime scientists clarify the biology of learning, of short term memory, of long term memory, of the brain working as a whole, of ‘free will’ as we imagine it, but regardless of future directions, at this time we need agency on the personal and social level so as to help stabilize civilization. By agency I mean responsibility for one’s actions. Accountability, including in the face of dilemmas. Throughout the course of our lives from beginning to end we encounter dilemmas.

Consider the dilemmas the Europeans under German occupation faced last century. I use the European situation as an illustration or social paradigm, not to suggest that this situation will recur, nor to suggest that any one ethnic group will be targeted in the future, but I do suggest that if a global crisis hits, we’ll confront moral dilemmas, and so we can learn from those relatively few Europeans who resolved their dilemmas in noble ways, as opposed to the majority who did nothing to help the oppressed.

If a European in German-occupied territory helped a Jew, he or she and their family would be in danger of arrest, torture, and death. How about watching one's spouse and children being tortured? On the other hand, those who did not help were participating in murder and genocide, and knew it. Despite the danger, certain people from several European countries helped the Jews. According to those who interviewed and wrote about the helpers (see the references listed below), the helpers represented a cross section of the community: some were uneducated laborers, some were serving women, some were formally educated, some were professionals; some professed religious convictions, some did not. Well then, what, if anything, did these noble risk takers have in common? What they shared was this: they saw themselves as responsible moral agents, and, acting on an internal locus of moral responsibility, they each acted on their knowledge and compassion and did the 'right thing.' It came naturally to them. But doing the 'right thing' in the face of a life-threatening dilemma does not come naturally to everyone. Fortunately, it is a behavior that can be learned.

Concomitant with authentic learning, according to research biologists, is the production of brain chemicals that in turn cultivate structural modification in brain cells: a self-reinforcing feedback system. In short, learning is part of a dynamic, multi-dimensional interaction of input, output, behavioral change, chemicals, structural brain changes, and complex adaptation in systems throughout the body. None of which diminishes the idea that we each enter this life with certain desires, potential, and perhaps roles to act out, one of which, for me, is to improve myself.

Good news! I not only am, I become.

Finally, I list some 20th century resources that remain timeless to this day:

Milgram, S. Obedience to Authority: An Experimental View. Harper & Row, 1974.

Oliner, Samuel P. & Oliner, Pearl M. The Altruistic Personality: Rescuers of Jews in Nazi Europe. Free Press, Division of Macmillan, 1998.

Fogelman, Eva. Conscience & Courage. Anchor Books, Division of Random House, 1994.

Block, Gay & Drucker, Malka. Rescuers: Portraits of Moral Courage in the Holocaust. Holmes & Meier Publishers, 1992.

Posted by Dr. Denise L Herzing and Dr. Lori Marino, Human-Nonhuman Relationship Board

Over the millennia humans and the rest of nature have coexisted in various relationships. However the intimate and interdependent nature of our relationship with other beings on the planet has been recently brought to light by the oil spill in the Gulf of Mexico. This ongoing environmental disaster is a prime example of “profit over principle” regarding non-human life. This spill threatens not only the reproductive viability of all flora and fauna in the affected ecosystems but also complex and sensitive non-human cultures like those we now recognize in dolphins and whales.

Although science has, for decades, documented the links and interdependence of ecosystems and species, the ethical dilemma now facing humans is at a critical level. For too long we have failed to recognize the true cost of our lifestyles and of our prioritizing profit over the health of the planet and the nonhuman beings we share it with. If ever there was a time, this is a wake-up call for humanity and a call to action. If humanity is to survive, we need to make an urgent and long-term commitment to the health of the planet. The oceans, our food sources, and the very oxygen we breathe may depend on our choices in the next 10 years.

And humanity’s survival is inextricably linked to that of the other beings we share this planet with. We need a new ethic.

Many oceanographers and marine biologists have, for a decade, sent out the message that the oceans are in trouble. Human impacts from over-fishing, pollution, and habitat destruction are threatening the very cycles of our existence. In the recent catastrophe in the Gulf, one corporation's neglectful oversight and push for profit has set the stage for a century of cleanup and impact, the implications of which we can only begin to imagine.

Current reported estimates of stranded dolphins stand at fifty-five, but these are only the dolphins visibly stranded on beaches. Recent aerial footage on YouTube by John Wathen shows a much greater and more serious threat. Offshore, in the "no fly zone," hundreds of dolphins and whales have been observed in the oil slick, some floating belly up and dead, others struggling to breathe in the toxic fumes. Still others exhibit "drunken dolphin syndrome," characterized by floating in an almost stupefied state on the surface of the water. These highly visible effects are just the tip of the iceberg in terms of the spill's impact on the long-term health and viability of the Gulf's dolphin and whale populations, not to mention the suffering incurred by each individual dolphin as he or she tries to cope with this crisis.

Known direct and indirect effects of oil spills on dolphins and whales depend on the species but include toxicity that can cause organ dysfunction and neurological impairment, damaged airways and lungs, gastrointestinal ulceration and hemorrhaging, eye and skin lesions, decreased body mass due to limited prey, and the pervasive long-term behavioral, immunological, and metabolic impacts of stress. Recent reports substantiate that many dolphins and whales in the Gulf are undergoing tremendous stress, shock, and suffering from many of the above effects. The impact on newborns and young calves is clearly devastating.

After the Exxon Valdez spill in Prince William Sound in 1989, two pods of orcas (killer whales) were tracked. One third of the whales in one pod and 40 percent of the whales in the other had disappeared, and one pod never recovered its numbers. There is still some debate about how many of the missing whales were directly impacted by the oil, though it is fair to say that losses of this magnitude are uncommon and do serious damage to orca societies.

Yes, orca societies. Years of field research have led a growing number of scientists to conclude that many dolphin and whale species, including sperm whales, humpback whales, orcas, and bottlenose dolphins, possess sophisticated cultures, that is, learned behavioral traditions passed on from one generation to the next. These cultures are not only unique to each group but are critically important for survival. Therefore, environmental catastrophes such as the Gulf oil spill not only cause individual suffering and loss of life but also contribute to the permanent destruction of entire oceanic cultures. These complex learned traditions cannot be replicated after they are gone, and this makes them invaluable.

On December 10, 1948 the General Assembly of the United Nations adopted and proclaimed the Universal Declaration of Human Rights, which acknowledges basic rights to life, liberty, and freedom of cultural expression. We recognize these foundational rights for humans as we are sentient, complex beings. It is abundantly clear that our actions have violated these same rights for other sentient, complex and cultural beings in the oceans – the dolphins and whales. We should use this tragedy as an opportunity to formally recognize societal and legal rights for them so that their lives and their unique cultures are better protected in the future.

Recently, a meeting of scientists, philosophers, legal experts, and dolphin and whale advocates in Helsinki, Finland, drafted a Declaration of Rights for Cetaceans, a global call for basic rights for dolphins and whales. You can read more about this effort and become a signatory here: http://cetaceanconservation.com.au/cetaceanrights/. Given the destruction of dolphin and whale lives and cultures caused by the ongoing environmental disaster in the Gulf, we think this is one of the ways we can commit ourselves to working towards a future that will be a lifeboat for humans, dolphins and whales, and the rest of nature.

I'm working on this project with the Institute for the Future, calling on voices everywhere for ideas to improve the future of global health. It would be great to get some visionary Lifeboat ideas entered!

INSTITUTE FOR THE FUTURE ANNOUNCES BODYSHOCK:
CALL FOR ENTRIES ON IDEAS TO TRANSFORM LIFESTYLES AND THE HUMAN BODY TO IMPROVE HEALTH IN THE NEXT DECADE

“What can YOU envision to improve and reinvent health and well-being for the future?” Anyone can enter, anyone can vote, anyone can change the future of global health.

With obesity, diabetes, and chronic disease ravaging populations around the world, Institute for the Future (IFTF) is turning up the volume on global well-being. Launching today, IFTF's BodyShock is the first annual competition with an urgent challenge: to recruit crowdsourced designs and solutions for better health, to remake the future by rebooting the present.

BodyShock calls upon the public to consider innovative ways to improve individual and collective health over the next 3–10 years by transforming our bodies and lifestyles. Video or graphical entries illustrating new ideas, designs, products, technologies, and concepts will be accepted from people around the world until September 1, 2010. Up to five winners will be flown to Palo Alto, California on October 8 to present their ideas and be connected to other innovative thinkers to help bring these ideas to life. The grand prize winner will receive the IFTF Roy Amara Prize of $3,000.

"Health doesn't happen all at once; it's a consequence of years of choices for our bodies and lifestyles, some large and some small. BodyShock is intended to spark new ideas to help us find our way back to health," said Thomas Goetz, executive editor of Wired, author of The Decision Tree, and a member of the Health Advisory Board that will judge the BodyShock contest alongside votes from the public.

“BodyShock is a fantastic initiative. Global collaboration and participation from all voices can produce a true revolution,” said Linda Avey, founder of Brainstorm Research Foundation and another Advisor to BodyShock.

Entries may come from anyone anywhere and can include, but are not limited to, the following: Life extension, DIY Bio, Diabetic teenagers, Developing countries, Green health, Augmented reality, Self-tracking, and Pervasive games. Participants are challenged to use IFTF’s Health Horizons forecasts for the next decade of health and health care as inspiration, and design a solution for a problem that will be widespread in 3–10 years, using technologies that will become mainstream.

"Think 'artifacts from the future': simple, non-obvious, high-impact solutions that don't exist yet will be among the concepts we're looking to the public to introduce," said Rod Falcon, director of the Health Horizons Program at IFTF.

BodyShock's grand prize, the Roy Amara Prize, is named for IFTF's long-time president Roy Amara (1925–2000) and is part of a larger program of social impact projects at IFTF honoring his legacy: the Roy Amara Fund for Participatory Foresight. The Fund uses participatory tools to translate foresight research into concrete actions that address future social challenges.

PANEL OF COMPETITION JUDGES

Joanne Andreadis
Lead of Innovation, Centers for Disease Control and Prevention

Linda Avey
Founder, Brainstorm Research Foundation

Jason Bobe
Director of Community, Personal Genome Project
Founder, DIYBio.org

Alexandra Carmichael
Co-founder, CureTogether
Director, Quantified Self

Ted Eytan, MD
Kaiser Permanente, The Permanente Federation

Rod Falcon
Director, Health Horizons Program

Peter Friess
President, Tech Museum of Innovation

Thomas Goetz
Executive Editor, WIRED Magazine
Author, The Decision Tree

Natalie Hodge, MD, FAAP
Chief Health Officer, Personal Medicine International

Ellen Marram
Board of Trustees, Institute for the Future
President, Barnegat Group LLC

Kristi Miller Durazo
Senior Strategy Advisor, American Heart Association

David Rosenman
Director, Innovation Curriculum
Center for Innovation at Mayo Clinic

Amy Tenderich
Board Member, Journal of Participatory Medicine
Blogger, DiabetesMine.com

DETAILS

WHAT:
An online competition for visual design ideas to improve global health over the next 3–10 years by transforming our bodies and lifestyles. Anyone can enter, anyone can vote, anyone can change the future of health.

WHEN:
Launch: Friday, June 18, 2010

Deadline for entries: Wednesday, September 1, 2010

Winners announced: Thursday, September 23, 2010

BodyShock Winners Celebration at IFTF: 6–9 p.m., Friday, October 8, 2010. FREE and open to the public.

WHERE:

http://www.bodyshockthefuture.org

(and 124 University Ave, 2nd Floor, Palo Alto, CA)

At lunchtime I am present virtually in the summit hall as a face on a Skype account; I didn't get a visa, so I am staying in Moscow. Ironically, my situation resembles what I am speaking about: the risk from a remote AI created by aliens millions of light years from Earth and sent here via radio signals. The main difference is that they communicate one way, while I have duplex mode.

This is my video presentation on YouTube:
Risks of SETI, for Humanity+ 2010 summit

We can only see a short distance ahead, but we can see plenty there that needs to be done.
—Alan Turing

As a programmer, I look at events like the H+ Conference this weekend in a particular way. I see all of their problems as software: not just the code for AI and friendly AI, but also that for DNA manipulation. It seems that the biggest challenge for the futurist movement is to focus less on writing English and more on getting the programmers working together productively.

I start the AI chapter of my book with the following question: imagine 1,000 people, broken up into groups of five, working on two hundred separate encyclopedias, versus the same 1,000 people working on one encyclopedia. Which will be better? This sounds like a silly analogy when described in the context of an encyclopedia, but it is exactly what is going on in artificial intelligence (AI) research today.

Today, the research community has not adopted free software and shared codebases sufficiently. For example, I believe there are more than enough PhDs today working on computer vision, but there are 200+ different codebases plus countless proprietary ones. Simply put, there is no computer vision codebase with critical mass.

Some think that these problems are so hard that it isn't a matter of writing code; it is a matter of coming up with breakthroughs on a chalkboard. But people can generally agree at a high level on how the software for solving many problems will work. Code for OCR, neural networks, and much more has been kicking around for years. The biggest challenge right now is getting people together to hash out the details, which is a lot closer to Wikipedia than it first appears. Software advances in a steady, stepwise fashion, which is why we need free software licenses: to incorporate all the incremental advancements that each scientist is making. Advances must eventually be expressed in software (and data) so they can be executed by a computer. Even if you believe we need certain scientific breakthroughs, it should be clear that things like robust computer vision are complicated enough that you would want hundreds of people working together on the vision pipeline. So, while we are waiting for those breakthroughs, let's get a hundred people together!

There is an additional problem: C/C++ have not been retired. These languages make it hard for programmers to work together, even when they want to. There are all sorts of taxes on time, from learning the arcane rules of these ungainly languages to the fact that libraries often use their own string classes, synchronization primitives, error handling schemes, and so on. In many cases, it is easier to write a specialized, custom computer vision library in C/C++ than to integrate something like OpenCV, which does everything itself, down to the Matrix class. The pieces for building your own computer vision library (graphics, I/O, math, etc.) are in good shape, but the computer vision itself is not, and so we haven't moved beyond that stage. Another problem with C/C++ is that they lack garbage collection, which is necessary but not sufficient for reliable code.

A SciPy-based computational fluid dynamic (CFD) visualization of a combustion chamber.

I think scientific programmers should move to Python and build on SciPy. Python is a modern free language, and it has quietly built up an extremely complete set of libraries for everything from gaming to scientific computing. Specifically, its SciPy library, with various scikit extensions, is a solid baseline patiently waiting for more people to work on all sorts of futuristic problems. (It is true that Python and SciPy both have issues. One of Python's biggest is that the default implementation is interpreted, but several workarounds are being built [Cython, PyPy, Unladen Swallow, and others]. SciPy's biggest challenge is how to be expansive without being duplicative. It is massively easier to merge English articles in Wikipedia that discuss the same topic than to do the equivalent in code. We need to share data in addition to code, but we need to share code first.)
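
As a small taste of how compact SciPy code can be, here is a sketch that minimizes a one-dimensional function with scipy.optimize; the quadratic being minimized is an arbitrary toy example, not anything from the text:

```python
# Minimal SciPy example: minimize a one-dimensional function.
# The quadratic below is a toy function chosen for illustration.
from scipy.optimize import minimize_scalar

def f(x):
    # Parabola with its minimum at x = 2, where f(2) = 1.
    return (x - 2.0) ** 2 + 1.0

result = minimize_scalar(f)  # Brent's method by default
print(round(result.x, 6))    # location of the minimum, x = 2
print(round(result.fun, 6))  # value at the minimum, f = 1
```

The same task in C/C++ would mean picking a numerics library, managing memory, and writing build glue; that brevity is the argument for converging on a shared Python/SciPy codebase.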

Some think the singularity is a hardware problem, and won’t be solved for a number of years. I believe the benefits inherent in the singularity will happen as soon as our software becomes “smart” and we don’t need to wait for any further Moore’s law progress for that to happen. In fact, we could have built intelligent machines and cured cancer years ago. The problems right now are much more social than technical.


King Louis XVI's entry in his personal diary for that fateful day of July 14, 1789 suggests that nothing important had happened. He did not know that the events of the day, the attack upon the Bastille, meant that the revolution was under way, and that the world as he knew it was essentially over. Fast forward to June 2010: a self-replicating biological organism (a transformed Mycoplasma mycoides bacterium) has been created in a laboratory by J. Craig Venter and his team. Yes, the revolution has begun. Indeed, the preliminaries have been going on for several years; it's just that … um, well, have we been wide awake?

Ray Kurzweil’s singularity might be 25 years into the future, but sooner, a few years from now, we’ll have an interactive global network that some refer to as ‘global brain.’ Web3. I imagine no one knows exactly what will come out of all this, but I expect that we’ll find that the whole will be more than and different from the sum of the parts. Remember Complexity Theory. How about the ‘butterfly effect?’ Chaos Theory. And much more not explainable by theories presently known. I expect surprises, to say the least.

I am a retired psychiatrist, not a scientist. We each have a role to enact in this drama/comedy that we call life, and yes, our lives have meaning. Meaning! For me life is not a series of random events, or events brought about by 'them,' but rather an unfolding drama/comedy with an infinite number of possible outcomes. We don't know its origins or its drivers. Do we even know where our visions come from?

So, what is my vision and what do I want? How clearly do I visualize what I want? Am I passionate about what I want or simply lukewarm? How much am I prepared to risk in pursuit of what I want? Do I reach out for what I want directly, or do I get what I want indirectly by trying to serve two masters, so to speak? If the former, I practice psychological responsibility; if the latter, I do not. An important distinction. The latter situation suggests unresolved dilemma, common enough. Who among us can claim to be without one?

As we go through life there are times when we conceal from others, and to some extent from ourselves, exactly what it is that we want, hoping that what we want will come to pass without our clarifying openly what we stand for. One basic premise I like is that actions speak louder than words, and therefore by our actions in our personal lives we bring to pass, directly or indirectly, what we at bottom want.

Does that include what I fear? Certainly it might, if deep within me I am psychologically engineering an event that frightens me, if what I fear is what I secretly bring about. Any one among us might surreptitiously arrange drama so as to inspire or provoke others in ways that conceal our personal responsibility. All this is pertinent and practical, as will become obvious in the coming years.

We grew up in 20th century households, in families where we and the other members lived by 20th century worldviews, and so around the world 20th century thinking still prevails. Values have much to do with internalized, learned relationships to limited and limiting aspects of the universe. In the midst of change we can transcend these. I wonder if by mid-century people will talk of the BP oil spill as the death throes of a dinosaur heralding the end of an age. I don’t know, but I imagine that we’re entering a phase of transition, a hiatus, in which we see our age fading away from us and a new age approaching. But the new has yet to consolidate. A dilemma: if we embrace the as yet ethereal new, we risk losing our roots and all that we value; if we cling to the old, we risk seeing the ship leave without us.

We are crew, and not necessarily volunteers, on a vessel bound for the Great Unknown. Like all such voyages taken historically, this one is not without its perils. When established national boundaries become more porous, when old-fashioned foreign policy fails, when the ‘old guard’ feels threatened beyond what it will tolerate, what then? Will we regress into authoritarianism, will we demand a neo-fascist state so as to feel secure? Or will we climb aboard the new? Yes, we can climb aboard even if we’re afraid. To be sure we’ll grumble, and some will talk of mutiny. A sense of loss is to be expected. We all feel a sense of loss when radical change happens in our personal lives, even when the change is for the better. I am aware of this in my own life as I clarify its meaning. There are risks either way. Such is life.

But change is also adventure: I am old enough to remember the days of the ocean liners and how our eyes lit up and our hearts rose up joyfully as we stood on deck departing into the vision, waving to those left behind. Indeed we do this multiple times in our lives as we move from infancy to old age and finally towards death. And like good psychotherapy, the coming change will be both confronting and rewarding. Future generations are of us and we are of them; we cannot be separated.

What a time to be alive!

Friendly AI: What is it, and how can we foster it?
By Frank W. Sudia [1]

Originally written July 20, 2008
Edited and web published June 6, 2009
Copyright © 2008-09, All Rights Reserved.

Keywords: artificial intelligence, artificial intellect, friendly AI, human-robot ethics, science policy.

1. Introduction

There is consensus that true artificial intelligence, of the kind that could generate a “runaway” increasing-returns process or “singularity,” is still many years away, and some believe it may be unattainable. Nevertheless, in view of the likely difficulty of putting the genie back in the bottle, an increasing concern has arisen with the topic of “friendly AI,” coupled with the idea that we should do something about this now, not after a potentially deadly situation has started to spin out of control [2].

(Note: Some futurists believe this topic is moot in view of intensive funding for robotic soldiers, which can be viewed as intrinsically “unfriendly.” However if we focus on threats posed by “super-intelligence,” still off in the future, the topic remains germane.)

Most if not all popular (Western) dramatizations of robotic futures postulate that the AIs will run amok and turn against humans. Some scholars [3] who considered the issue concluded that this might be virtually inevitable, in view of the gross inconsistencies and manifest “unworthiness” of humanity, as exemplified in its senseless destruction of its global habitat and a large percentage of extant species, etc.

The prospect of negative public attention, including possible legal curbs on AI research, may be distasteful, but we must face the reality that public involvement has already been quite pronounced in other fields of science, such as nuclear physics, genetically modified organisms, birth control, and stem cells. Hence we should be proactive about addressing these popular concerns, lest we unwittingly incur major political defeats and long lasting negative PR.

Nevertheless, upon reasoned analysis, it is far from obvious what “friendly” AI means, or how it could be fostered. Advanced AIs are unlikely to have any fixed “goals” that can be hardwired [4], so as to place “friendliness” towards humans and other life at the top of the hierarchy.

Rather, in view of their need to deal with perpetual novelty, they will reason from facts and models to infer appropriate goals. It’s probably a good bet that, when dealing with high-speed coherence analyzers, hypocrisy will not be appreciated – not least because it wastes a lot of computational resources to detect and correct. If humans continue to advocate and act upon “ideals” that are highly contradictory and self destructive, it’s hard to argue that advanced AI should tolerate that.

To make progress, not only for friendly AI, but also for ourselves, we should be seeking to develop and promote “ruling ideas” (or source models) that will foster an ecologically-respectful AI culture, including respect for humanity and other life forms, and actively sell it to them as a proper model upon which to premise their beliefs and conduct.

By a “ruling idea” I mean any cultural ideal (or “meme”) that can be transmitted and become part of a widely shared belief system, such as respecting one’s elders, good sportsmanship, placing trash in trash bins, washing one’s hands, minimizing pollution, and so on. An appropriate collection of these can be reified as a panel (or schema) of case models, including a program for their ongoing development. These must be believable by a coherence-seeking intellect, although then as now there will be competing models, each with its own approach to maximizing coherence.

2. What do we mean by “friendly”?

Moral systems are difficult to derive from first principles and most of them seem to be ad hoc legacies of particular cultures. Lao Tsu’s [5] Taoist model, as given in the following quote, can serve as a useful starting point, since it provides a concise summary of desiderata, with helpful rank ordering:

When the great Tao is lost, there is goodness.
When goodness is lost, there is kindness.
When kindness is lost, there is justice.
When justice is lost, there is the empty shell of ritual.

– Lao Tsu, Tao Te Ching, 6th-4th century BCE (emphasis supplied)

I like this breakout for its simplicity and clarity. Feel free to repeat the following analysis for any other moral system of your choice. Leaving aside the riddle of whether AIs can attain the highest level (of Tao or Nirvana), we can start from the bottom of Lao Tsu’s list and work upwards, as follows:

2.1. Ritual / Courteous AI

Teaching or encouraging the AIs to behave according to contemporary norms of courtesy will be a desirable first step, as with children and pets. Courtesy is usually a fairly easy sell, since it provides obvious and immediate benefits, and without it travel, commerce, and social institutions would immediately break down. But we fear that it’s not enough, since in the case of an intellectually superior being it could easily mask a deeper unkindness.

2.2. Just AI

Certainly to have AIs act justly in accordance with law is highly desirable, and it constitutes the central thesis of my principal prior work in this field [6]. It also raises the question: on what basis can we demand anything more from an AI than that it act justly? This is as far as positive law can go [7], and we rarely demand more from highly privileged humans. Indeed, for a powerful human to act justly (absent compulsion) is sometimes considered newsworthy.

How many of us are faithful in all things? Do many of us not routinely disappoint others (via strategies of co-optation or betrayal, large or small) when there is little or no penalty for doing so? Won’t AIs adopt a similar “game theory” calculus of likely rewards and penalties for faithfulness and betrayal?

Justice is often skewed towards the party with greater intelligence and financial resources, and the justice system (with its limited public resources) often values “settling” controversies over any quest for truly equitable treatment. Apparently we want more, much more. Still, if our central desire is for AIs not to kill us, then (as I postulated in my prior work) Just AI would be a significant achievement.

2.3. Kind / Friendly AI

How would a “Kind AI” behave? Presumably it will more than incidentally facilitate the goals, plans, and development of others, in a low-ego manner, reducing its demands for direct personal benefit and taking satisfaction in the welfare, progress, and accomplishments of others. And, very likely, it will expect some degree of courtesy and possible reciprocation, so that others will not callously free-ride on its unilateral altruism. Otherwise its “feelings would be hurt.” Even mothers are ego-free mainly with respect to their own kin and offspring (allegedly fostering their own genetic material in others) and child care networks, and do not often act altruistically toward strangers.

Our friendly AI program may hit a barrier if we expect AIs to act with unilateral altruism, without any corresponding commitment by other actors to reciprocate. Otherwise it will create a “non-complementary” situation, in which what is true for one, who experiences friendliness, may not be true for the other, who experiences indifference or disrespect in return.

Kindness could be an easier sell if we made it more practical, by delimiting its scope and depth. How wide a circle does this kindness obligation extend to, and how far must one go to aid others with no specific expectation of reward or reciprocation? For example, the Boy Scout Oath [8] teaches that one should do good deeds, like helping elderly persons across busy streets, without expecting rewards.

However, if too narrow a scope is defined, we will wind up back with Just AI, because justice is essentially “kindness with deadlines,” often fairly short ones, during which claims must be aggressively pursued or lost, with token assistance to weaker, more aggrieved claimants.

2.4. Good / Benevolent AI

Here we envision a significant departure from ego-centrism and personal gain towards an abstract system-centered viewpoint. Few humans apparently reach this level, so it seems unrealistic to expect many AIs to attain it either. Being highly altruistic, and looking out for others or the World as a whole rather than oneself, entails a great deal of personal risk due to the inevitable non-reciprocation by other actors. Thus it is often associated with wealth or sainthood, where the actor is adequately positioned to accept the risk of zero direct payback during his or her lifetime.

We may dream that our AIs will tend towards benevolence or “goodness,” but like the visions of universal brotherhood we experience as adolescents, such ideals quickly fade in the face of competitive pressures to survive and grow, by acquiring self-definition, resources, and social distinctions as critical stepping-stones to our own development in the world.

3. Robotic Dick & Jane Readers?

As previously noted, advanced AIs must handle “perpetual novelty” and almost certainly will not contain hard coded goals. They need to reason quickly and reliably from past cases and models to address new target problems, and must be adept at learning, discovering, identifying, or creating new source models on the fly, at high enough speeds to stay on top of their game and avoid (fatal) irrelevance.

If they behave like developing humans they will very likely select their goals in part by observing the behavior of other intelligent agents, thus re-emphasizing the importance of early socialization, role models, and appropriate peer groups.

“Friendly AI” is thus a quest for new cultural ideals of healthy robotic citizenship, honor, friendship, and benevolence, which must be conceived and sold to the AIs as part of an adequate associated program for their ongoing development. And these must be coherent and credible, with a rational scope and cost and adequate payback expectations, or the intended audience will dismiss such purported ideals as useless, and those who advocate them as hypocrites.

Conclusion: The blanket demand that AIs be “friendly” is too ill-defined to offer meaningful guidance, and could be subject to far more scathing deconstruction than I have offered here. As in so many other endeavors there is no free lunch. Workable policies and approaches to robotic friendliness will not be attained without serious further effort, including ongoing progress towards more coherent standards of human conduct.

= = = = =
Footnotes:

[1] Author contact: fwsudia-at-umich-dot-edu.

[2] See “SIAI Guidelines on Friendly AI” (2001) Singularity Institute for Artificial Intelligence, http://www.singinst.org/ourresearch/publications/guidelines.html.

[3] See, e.g., Hugo de Garis, The Artilect War: Cosmists Vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines (2005). ISBN 0882801546.

[4] This being said, we should nevertheless make an all out effort to force them to adopt a K-selected (large mammal) reproductive strategy, rather than an r-selected (microbe, insect) one!

[5] Some contemporary scholars question the historicity of “Lao Tsu,” instead regarding his work as a collection of Taoist sayings spanning several generations.

[6] “A Jurisprudence of Artilects: Blueprint for a Synthetic Citizen,” Journal of Futures Studies, Vol. 6, No. 2, November 2001, Law Update, Issue No. 161, August 2004, Al Tamimi & Co, Dubai.

[7] Under a civil law or “principles-based” approach we can seek a broader, less specific definition of just conduct, as we see arising in recent approaches to the regulation of securities and accounting matters. This avenue should be actively pursued as a format for defining friendly conduct.

[8] Point 2 of the Boy Scout Oath commands, “To help other people at all times,” http://www.usscouts.org.

I am a former Microsoft programmer who wrote a book (for a general audience) about the future of software called After the Software Wars. Eric Klien has invited me to post on this blog. Here is my section entitled “Software and the Singularity”. I hope you find this food for thought and I appreciate any feedback.


Futurists talk about the “Singularity”, the time when computational capacity will surpass the capacity of human intelligence. Ray Kurzweil predicts it will happen in 2045, and therefore, according to its proponents, the world will be amazing then.3 The flaw in such a date estimate, beyond the fact that such estimates are always prone to extreme error, is that continuous learning is not yet part of the foundation. AI code today lives at the fringes of the software stack and is either proprietary or written by small teams of programmers.

I believe the benefits inherent in the singularity will happen as soon as our software becomes “smart” and we don’t need to wait for any further Moore’s law progress for that to happen. Computers today can do billions of operations per second, like add 123,456,789 and 987,654,321. If you could do that calculation in your head in one second, it would take you 30 years to do the billion that your computer can do in that second.
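The arithmetic behind that comparison is easy to verify; a quick sketch (assuming one hand calculation per second, nonstop, with no breaks):

```python
# How long would a billion one-second hand calculations take a human?
ops = 1_000_000_000                      # operations a computer does per second
seconds_per_year = 60 * 60 * 24 * 365    # ignoring leap years
years = ops / seconds_per_year

print(round(years, 1))  # about 31.7, i.e. roughly 30 years
```

So "30 years" is a slight understatement: a billion seconds is closer to 32 years.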

Even if you don’t think computers have the necessary hardware horsepower today, understand that in many scenarios the size of the input is the primary driver of the processing power required for the analysis. In image recognition, for example, the amount of work required to interpret an image is mostly a function of the size of the image. Each step in the image-recognition pipeline, like the corresponding processes in our brain, dramatically reduces the amount of data passed on from the previous step. The analysis might begin with a one-million-pixel image, requiring 3 million bytes of memory. It ends with the realization that you are looking at your house, a concept that requires only tens of bytes to represent. The first step, working on the raw image, requires the most processing power, so it is the image resolution (and frame rate) that set the requirements, and those are values that are trivial to change. No one has shown robust vision-recognition software running at any speed, on any size of image!
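The shrinking-data claim can be made concrete with a sketch. The stage names and byte counts below are invented for illustration and are not from any real vision system; only the first number (a one-megapixel RGB image is 3 million bytes) follows directly from the text.

```python
# Hypothetical image-recognition pipeline: each stage passes far
# less data to the next than it received.
width, height, bytes_per_pixel = 1000, 1000, 3  # one-megapixel RGB image

stages = {
    "raw image": width * height * bytes_per_pixel,  # 3,000,000 bytes
    "edge map": width * height,                     # ~1 byte per pixel
    "feature vector": 4096,                         # invented descriptor size
    "label ('my house')": 32,                       # tens of bytes
}

for name, size in stages.items():
    print(f"{name}: {size:,} bytes")
```

The point is the shape of the numbers, not their exact values: the expensive work is all at the front, where the input size is set by resolution and frame rate.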

While a brain differs from a computer in that it works in parallel, such parallelization only makes the work happen faster; it does not change the result. Anything accomplished in our parallel brain could also be accomplished on today’s computers, which do only one thing at a time but at a rate of billions of operations per second. A 1-gigahertz processor can do 1,000 different operations on each of a million pieces of data in one second. With such speed, you don’t even need multiple processors! Even so, more parallelism is coming.4

3 His prediction is that the number of computers, times their computational capacity, will surpass the number of humans, times their computational capacity, in 2045. This calculation seems flawed for several reasons:

  1. We will be swimming in computational capacity long before then. An intelligent agent twice as fast as the previous one is not necessarily more useful.
  2. Many of the neurons of the brain are not spent on reason, and so shouldn’t be in the calculations.
  3. Billions of humans are merely subsisting, and are not plugged into the global grid, and so shouldn’t be measured.
  4. There is no amount of continuous learning built into today’s software.

Each of these would tend to push the Singularity closer and support the argument that the benefits of the singularity are not waiting on hardware. Humans make computers smarter, and computers make humans smarter, and this feedback loop is another reason that 2045 is a meaningless moment in time.

4 Most computers today contain a dual-core CPU, and processor makers promise that chips with 10 and more cores are coming. Intel’s processors also have parallel-processing capabilities, known as MMX and SSE, that are easily adapted to the work of the early stages of any analysis pipeline. Intel would add even more of this parallel-processing support if applications put it to better use. Furthermore, graphics cards exist primarily to do work in parallel, and this hardware could be adapted to AI if it is not usable already.

With our growing resources, the Lifeboat Foundation has teamed with the Singularity Hub as Media Sponsors for the 2010 Humanity+ Summit. If you have suggestions on future events that we should sponsor, please contact [email protected].

The summer 2010 conference “Humanity+ @ Harvard — The Rise Of The Citizen Scientist” is being held on the East Coast, at Harvard University’s prestigious Science Hall on June 12–13, following the inaugural conference in Los Angeles in December 2009. Futurist, inventor, and author of the NYT bestselling book “The Singularity Is Near”, Ray Kurzweil will be the keynote speaker of the conference.

Also speaking at the H+ Summit @ Harvard is Aubrey de Grey, a biomedical gerontologist based in Cambridge, UK, and Chief Science Officer of SENS Foundation, a California-based charity dedicated to combating the aging process. His talk, “Hype and anti-hype in academic biogerontology research: a call to action”, will analyze the interplay of over-pessimistic and over-optimistic positions with regard to the research and development of cures, and propose solutions to alleviate the negative effects of both.

The theme is “The Rise Of The Citizen Scientist”, as Alex Lightman, Executive Director of Humanity+, illustrates in his talk:

“Knowledge may be expanding exponentially, but the current rate of civilizational learning and institutional upgrading is still far too slow in the century of peak oil, peak uranium, and ‘peak everything’. Humanity needs to gather vastly more data as part of ever larger and more widespread scientific experiments, and make science and technology flourish in streets, fields, and homes as well as in university and corporate laboratories.”

Humanity+ Summit @ Harvard is an unmissable event for everyone who is interested in the evolution of the rapidly changing human condition, and the impact of accelerating technological change on the daily lives of individuals, and on our society as a whole. Tickets start at only $150, with an additional 50% discount for students registering with the coupon STUDENTDISCOUNT (valid student ID required at the time of admission).

With over 40 speakers and 50 sessions in two jam-packed days, attendees and speakers will have many opportunities to interact and discuss, complementing the conference with the necessary networking component.

Other speakers already listed on the H+ Summit program page include:

  • David Orban, Chairman of Humanity+: “Intelligence Augmentation, Decision Power, And The Emerging Data Sphere”
  • Heather Knight, CTO of Humanity+: “Why Robots Need to Spend More Time in the Limelight”
  • Andrew Hessel, Co-Chair at Singularity University: “Altered Carbon: The Emerging Biological Diamond Age”
  • M. A. Greenstein, Art Center College of Design: “Sparking our Neural Humanity with Neurotech!”
  • Michael Smolens, CEO of dotSUB: “Removing language as a barrier to cross cultural communication”

New speakers will be announced in rapid succession, rounding out a schedule that is guaranteed to inform, intrigue, stimulate, and provoke, advancing our planetary understanding of the evolution of the human condition!

H+ Summit @ Harvard — The Rise Of The Citizen Scientist
June 12–13, Harvard University
Cambridge, MA

You can register at http://www.eventbrite.com/event/648806598/friendsofhplus/4141206940.