
Abstract

J. Storrs Hall’s Weather Machine is a relatively simple nanofabricated machine system with significant consequences in politics and ethics.

After a brief technical description, this essay analyzes the ends, means, and circumstances of a feasible method of controlling the weather, and includes some predictions regarding secondary effects.


Article

When a brilliant person possesses a fertile imagination and significant technical expertise, he or she is likely to imagine world-changing inventions. J. Storrs Hall is the epitome of such geniuses, and his Utility Fog [1] and Space Pier [2] are brilliant engineering designs that will change the world once they are reduced to practice. His most recent invention is the Weather Machine [3], which has been examined by none other than Robert Freitas and found to be technically reasonable—though Freitas may have found an improved method for climate control that avoids some of the problems discussed below [4].

The Hall Weather Machine is a thin global cloud of small transparent balloons that can be thought of as a programmable, reversible greenhouse gas, because it shades or reflects sunlight in the upper stratosphere. Each balloon is between a millimeter and a centimeter in diameter, made of a diamondoid membrane a few nanometers thick, and filled with hydrogen so that it floats at an altitude of 60,000 to 100,000 feet, high above the clouds. It is bisected by an adjustable sheet, and also carries solar cells, a small computer, a GPS receiver to keep track of its location, and an actuator to occasionally (and relatively slowly) rotate the bisecting membrane between vertical and horizontal orientations. As in a regular high-altitude balloon, the heavier control and energy-storage systems hang at the bottom, keeping the balloon's vertical axis aligned without expending any energy. Each balloon would also have a water vapor/hydrogen generator system for altitude control, giving it the same directional navigation ability as an ordinary hot-air balloon: by changing altitude it can ride the different wind directions found at different altitudes.
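As a sanity check on the scale involved, a back-of-the-envelope estimate of one balloon's mass, and of a full single-layer fleet, can be sketched in a few lines. All the numbers below (a 10 nm membrane, a 1 cm diameter, single-layer coverage of the whole globe) are my own illustrative assumptions, not figures from Hall's design:

```python
import math

# Illustrative assumptions, not Hall's actual numbers.
DIAMOND_DENSITY = 3500      # kg/m^3, roughly the density of diamond
MEMBRANE_THICKNESS = 10e-9  # m ("a few nanometers", rounded up)
RADIUS = 0.005              # m: a 1 cm diameter balloon, the upper end

surface_area = 4 * math.pi * RADIUS**2                              # m^2, spherical shell
balloon_mass = surface_area * MEMBRANE_THICKNESS * DIAMOND_DENSITY  # kg
print(f"one balloon: ~{balloon_mass * 1e9:.0f} micrograms")

EARTH_AREA = 5.1e14                       # m^2, Earth's total surface area
cross_section = math.pi * RADIUS**2       # m^2 of sky shaded per balloon
n_balloons = EARTH_AREA / cross_section   # full single-layer coverage
total_tonnes = n_balloons * balloon_mass / 1000
print(f"fleet: ~{n_balloons:.1e} balloons, ~{total_tonnes / 1e6:.0f} million tonnes")
```

The result, on the order of ten micrograms per balloon and roughly seventy million tonnes for global coverage, lands in the same ballpark as the 100-million-ton figure quoted later in this article.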

Four versions of balloons are possible, depending on the nature of the bisecting membrane.

  • Version 1. Transparent/Opaque: The bisecting membrane is opaque, and rotates from the horizontal to the vertical in order to control the amount of solar radiation that it allows through (the membrane might be replaced by an immobile liquid crystal that has two basic states: transparent and opaque).
  • Version 2. Emissivity Control: The membrane is white on one side, black on the other. When it is horizontal, either side can be presented upwards; white to scatter the solar radiation into space, black to absorb it into the upper atmosphere.
  • Version 3. Reflection Control: The membrane is black on one side, with a reflective metallic coating on the other. This can direct solar energy in specific directions to increase the effectiveness of solar farms, or to steer hurricanes. Another feature of this version is that it enables the multiple reflection of light from sunlit to dark areas.
  • Version 4. Advanced Photon Control: The balloon would be filled with an aerogel-density metamaterial that could not only control reflectivity via diffraction, but also control the frequency and phase of outgoing photons (with or without stimulated emission). Technically, designing and controlling these kinds of balloons would be an order of magnitude or two more complex than the earlier versions.
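For the Version 1 balloon, the fraction of sunlight blocked is simply the projected area of the opaque disc along the sun's direction. A minimal sketch, assuming the sun directly overhead and ignoring the membrane's own imperfect opacity:

```python
import math

def blocked_fraction(tilt_deg):
    """Fraction of the balloon's cross-section shaded by the bisecting
    disc, for sunlight arriving vertically. tilt_deg = 0 means the disc
    is horizontal (fully blocking); 90 means edge-on (fully transparent)."""
    return abs(math.cos(math.radians(tilt_deg)))

# Sweeping the actuator from horizontal to vertical:
for tilt in (0, 30, 60, 90):
    print(tilt, round(blocked_fraction(tilt), 2))  # 1.0, 0.87, 0.5, 0.0
```

The smooth cosine response is what makes the shading continuously adjustable rather than a bare on/off switch.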

What is impressive about the Weather Machine is that controlling just a tenth of one percent of solar radiation is enough to force the global climate in any direction we want. One percent is enough to change regional climate, and ten percent is enough for serious weather control.
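These percentages can be checked against standard radiative-forcing numbers. The solar constant is about 1361 W/m² at the top of the atmosphere, or roughly 340 W/m² averaged over the whole sphere; for comparison, total anthropogenic greenhouse forcing is commonly estimated at around 2 to 3 W/m². A quick calculation of Hall's three tiers:

```python
SOLAR_CONSTANT = 1361.0                  # W/m^2 at top of atmosphere
MEAN_INSOLATION = SOLAR_CONSTANT / 4     # W/m^2 averaged over Earth's sphere

for fraction in (0.001, 0.01, 0.10):     # Hall's 0.1%, 1%, and 10% tiers
    forcing = fraction * MEAN_INSOLATION
    print(f"{fraction:.1%} control -> {forcing:.1f} W/m^2 of forcing")
```

So the 0.1% tier is already a sizable fraction of current anthropogenic forcing, and the 1% tier exceeds it, which is consistent with the claim that these levels suffice to steer global and regional climate.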


The Problems

Every human-designed system has unintended bugs, and may cause negative consequences. That is why we have professional engineering societies, non-profit standards organizations, and government bureaucracies—to help protect the public. There is, therefore, some concern that the Weather Machine will accidentally cause catastrophic weather. However, given the accuracy of weather predictions and global-warming models, the slow movement of air masses, and the fact that humans are in the loop (and in an emergency could use a failsafe mode to force all the balloons to drop from the sky), the danger of accidental harm is minimal. At any rate, this article is more concerned with the ethical issues, with accidental unintended consequences to be examined elsewhere.

Many people would be happy to stop global warming, though others (currently living in Siberia or Iceland) might be happier without brutally cold winters. This level of climate control raises some problematic issues that may pit one group of people against another. The intended results could be handled the same way we normally handle similar issues in a representative democracy—we vote. This sounds nice, except that we’re not just talking about the United States (or any single nation), but the entire world. And we all know how well the United Nations handles its affairs. Perhaps deciding whether or not we want global warming is a small enough decision that the U.N. can handle it. If not, we can always rely on the world government that evil geniuses want to run, and that conspiracy theorists worry about.

Within the USA, trial lawyers would be especially interested in unintended effects, including trivial ones like rain on parades, or more serious ones like floods and tornadoes. The tremendous inefficiency of this legal nightmare might be ameliorated by a “weather tax” that would fund a program to recompense people who are willing to put up with bad weather.

The more advanced versions of balloons are problematic because then the Weather Machine wouldn’t just control the intensity of solar and terrestrial radiation, but could also redirect and concentrate energy. In addition to increasing the effectiveness of solar farms, this would give more powerful and precise control over the weather. Unfortunately, energy concentration is exactly the capability that transforms the Weather Machine into an awesome weapon of mass destruction. Concentrated solar energy has not been used much since 212 BCE [5], when Archimedes used it to set fire to the Roman ships that were attacking his city-state of Syracuse. However, the global coordination of the reflective Weather Machine allows bouncing concentrated solar energy around the globe, making it possible to set cities on fire. By fire, I mean the type of fire caused by dropping a nuclear bomb per second for as long as you want. The potential for abuse is rather large.

The most advanced version of the balloon is even better, or worse—it contains an aerogel-density (i.e., extremely light and porous) programmable metamaterial that controls the frequency, direction, and phase of the reflected or transmitted radiation. Fully deployed, such a Weather Machine could become a planet-sized telescope—or laser. Small portions of such a system could be used as an effective missile defense system. Configured as a planetary laser, it might be able to defend Earth against stray asteroids such as Apophis, which is due for a flyby in 2029 (and might impact in 2036—especially if some terrorist group places an ion motor on it). A planetary laser could also push fairly large rockets rather quickly to Alpha Centauri. But if you thought Version 3 was a weapon of mass destruction, Version 4 makes it, and the Transformers, look like children’s toys (no wait—that’s what they are). Optical divergence (currently 1 milliradian for commercially available lasers) will not keep planets from shooting at each other in their orbits, but the resulting lack of energy density will—unless the balloons can store energy. On the other hand, even primitive laser-focusing mechanisms will work fine for lunar infighting.
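The divergence argument is easy to quantify: a beam with angular divergence θ spreads to a spot roughly θ·d across at distance d, so intensity falls off with the square of distance. A sketch using the 1 milliradian figure quoted above:

```python
DIVERGENCE = 1e-3          # radians, the quoted commercial laser figure
EARTH_MOON = 3.84e8        # m, mean Earth-Moon distance
EARTH_MARS_MIN = 5.5e10    # m, Earth-Mars distance at closest approach

for name, d in [("Moon", EARTH_MOON), ("Mars (closest)", EARTH_MARS_MIN)]:
    spot = DIVERGENCE * d  # approximate spot diameter in meters
    print(f"{name}: spot ~ {spot / 1000:.0f} km across")
```

At lunar distance the spot is already hundreds of kilometers wide, which is why interplanetary “frying” fails on energy density while short-range lunar engagements do not.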

Given the almost unimaginable weaponization potential of the Hall Weather Machine, an important question is whether there are any defenses against it. There are two types: those that attack the control algorithms (i.e., cyber attacks) and those that physically attack the balloons, such as swarms of hunter-killer balloons or larger high-flying “carnivores”. In addition, there are some de-weaponization strategies that will be discussed below.


Ethical Issues

In some ways, ethics is like engineering: solving big problems is most easily done by splitting them into smaller pieces. This means that the best way to determine the ethics of any action (such as building and operating a weather machine) is to determine the ethical considerations of each of the ends, means, and circumstances.

As far as “ends” are concerned, the weather machine passes with flying colors, if nothing else because it can fix global warming (or impending ice ages). Depending on a number of variables, we might even increase the number of nice weekends and expand the biomes of certain species.

One counter to these benefits claims that by controlling the weather we would be playing God and that the Weather Machine is equivalent to eating from the Tree of Knowledge of Good and Evil. In my view, if God didn’t like us messing with technology, then He should have let us know a long time ago. At any rate, the Bible doesn’t speak against technology per se. Admittedly, the Bible’s tower of Babel story does condemn the pride and arrogance that may result from technology, but that is another story.

A non-theistic (but just as religious) counter to the main intent of the weather machine is made by deep ecology environmentalists. They often claim that controlling the weather is unnatural, that Mother Nature bats last, or that the very idea of weather control is the reason that the global human population should be reduced to the low millions. These sorts of arguments represent metaphysical differences regarding the value of individual human beings and the stewardship role we should have with the environment, and I’m not sure how we can address those issues in a book, much less in 3,500 words or less.

The “means” judges the actual methods used to control the climate and the weather. In this case, modulating the Sun’s energy with many small, high-altitude balloons seems ethically neutral. Even the transformation of 100 million tons of carbon into diamondoid balloons is ethically neutral (unless one gets the carbon from the living bodies of endangered animals, pre-born fetuses, ethnic minorities, or other humans). By some viewpoints, the sequestering of 100 million tons of atmospheric carbon would be considered virtuous (except that this particular sequestration makes the global warming problem go away, to be possibly replaced by bigger ones).

The ethical analysis leaves “circumstances” as the remaining issue, and here is where things get complicated. Circumstances include things like unintended (especially foreseeable) and secondary consequences, such as whether the means or the end may lead to other evils. In general, a consequentialist argument would likely accept some small risk of some harm, and might accept mechanisms (like lawsuits or something more efficient) to provide feedback to fix any inequities. But this is where things get really complicated.

The first possibility, and most often raised, is that building and operating the Weather Machine might result in severe, unpredictable, unintended consequences. There are a few classes of these consequences, the most obvious centered on out-of-control superstorms or droughts. After all, we aren’t that great at predicting hurricane paths. On the other hand, this is because hurricane paths are inherently unstable—precisely because we don’t have any weather control. If we take a car out to the Bonneville salt flats, tie its steering wheel absolutely straight, and then put a brick on the accelerator, we cannot predict whether it will eventually circle left or right. But we allow cars on the road all the time precisely because we have such good feedback and control systems (well, except for drivers getting home late on a Saturday night).

Increased predictability would ameliorate the unintended weather problem, and could be reached by using altitude control (and differently-directed winds) for the balloons to remain over a particular piece of land. Then many tests could be run to better predict possible harms and to lower the risk of them ever happening. In general, almost all accidental problems caused by a misbehaving Weather Machine (including computer viruses, rogue controllers, broken balloons, and the environmental toxicology of a million tons of inert diamond falling all over the earth) can be ameliorated by good design, adequate testing, and accurate modeling [6].
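The altitude-control scheme mentioned above lends itself to a very simple station-keeping policy: at each step, move to the altitude band whose prevailing wind points most nearly back toward the assigned station. The following toy simulation uses entirely made-up wind layers (the band altitudes and wind vectors are illustrative assumptions, not real stratospheric data):

```python
# Toy station-keeping: the balloon can only choose its altitude; each
# altitude band has a fixed wind vector (entirely fictional values).
# It picks the band whose wind best reduces its distance to the target.
WIND_BY_BAND = {          # band altitude (m) -> (east, north) wind in m/s
    18000: (10.0, 2.0),
    21000: (-6.0, 1.0),
    24000: (3.0, -8.0),
    27000: (-2.0, 9.0),
}

def best_band(pos, target):
    """Choose the altitude band whose wind vector points most directly
    from pos toward target (maximum dot product with the error vector)."""
    ex, ey = target[0] - pos[0], target[1] - pos[1]
    return max(WIND_BY_BAND,
               key=lambda band: WIND_BY_BAND[band][0] * ex
                              + WIND_BY_BAND[band][1] * ey)

# Drift a balloon for 12 simulated hours in 10-minute steps.
pos, target, dt = [50000.0, -30000.0], (0.0, 0.0), 600.0  # m, m, s
for _ in range(72):
    wx, wy = WIND_BY_BAND[best_band(pos, target)]
    pos[0] += wx * dt
    pos[1] += wy * dt
print(f"final offset: ({pos[0]:.0f}, {pos[1]:.0f}) m")
```

In this toy run the balloon closes most of a 58 km initial offset and then oscillates within a few kilometers of its station, which is the qualitative behavior the predictability argument needs.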

Other classes of severe, unintended consequences are secondary effects on the environment, the world economy, politics, and other areas. For example, by successfully moving heat from the tropics to the northern areas, might we turn off the Gulf Stream and other important ocean currents? How will the stock market react to California constantly selling its bad weather to Michigan? How will a totalitarian tropical country react if Iceland buys 20% of its neighbor’s sunlight, at a much higher price than it ever offered that neighbor?

A second possibility is that the Weather Machine is impossible, and working on it may be a waste of money that could be better spent on more worthwhile projects. Given our knowledge of physics, however, this is unlikely. A caveat is that it will be a race to 2030, when diamond mechanosynthesis should be able to crank out the 100 million tons (the equivalent of 100 miles of freeway) of diamond balloons, and when the worst-case scenarios predict the beginning of serious negative effects of anthropogenic carbon [7]. Will the Hall Weather Machine be built in time to stop Florida from being inundated by the ocean? The answer depends on when nanosystems will achieve top-down bootstrapping or bottom-up Turing equivalence (which is a technical topic for another time).

A third possibility—if the balloons are not location-controllable—might occur if a nation doesn’t want a foreign nation’s balloons over its territory. The obvious hostile response would be to build hunter-killer balloons to destroy any invaders, as this seems to be permitted by current concepts of sovereignty. Such an arms race could (and probably will) escalate ad infinitum, but open source hardware and software might help prevent it. Any military or intelligence personnel (of any country) would freak out at the idea of handing the keys to a weapon of mass destruction to the public, but that may be the only viable solution if the control algorithm works using genetic or market mechanisms—maybe like American Idol or Wikipedia. After all, distributed systems should have distributed control systems. Imagine the balloons controlled by many different radio frequencies with many different authentication algorithms and open source software. Unfortunately, if such public control is our solution against weather weaponization, we will still need to worry about the “tragedy of the commons” and “not in my backyard” secondary effects.

There are other issues of international policy. Suppose we want more sunlight in the Dakotas for growing crops. We could buy it from poor tropical countries, or take it from international ocean territories, where it might affect other countries. Depending on the state of the art and its acceleration, but especially at the beginning, it is likely that only rich countries will be able to build Weather Machines. More certainly, only rich countries will be able to fund the early experiments to understand what large numbers of balloons will actually do.

Some might object that knowledge is free and can travel anywhere via the Internet. This is true, but consider the BP disaster. Technical expertise on underwater drilling is international; marine science is international; the disaster received tons of press coverage; and yet there is wide disagreement within the largely free scientific community about the importance of the spill, how long it will take to clean up, and so on. In contrast, connecting a large base of nanofactories to the Internet will enable the global spread of atomically-precise physical devices (such as balloons) in seconds, whether or not the experiments are ever done.

A fourth possibility is that the Weather Machine could be used as a weapon of mass inconvenience—a means of unjust coercion by making possible the threat of bad weather. But the ethics of this application rest on the same principles as the ethics of weapons of mass destruction. I have already pointed out the possible use of the Weather Machine as a weapon; the ethical issues surrounding its more advanced versions are basically those concerning weapons of mass destruction, though amplified somewhat by their power (tens of megatons of TNT equivalent per second) and precision of control (+/- one degree Fahrenheit).

Fifth, there is the possibility that psychologically, being in control of the weather is not good for developing character. What if human beings are supposed to cower in their caves when lightning and blizzards strike? After all, that is how we evolved, and there are many things we enjoy that are bad for us [8]. Perhaps having so much control and power over the vicissitudes of life is psychologically bad for us. For evidence, look at the rates of depression in advanced nations.

Finally, what is the cost of not building a Weather Machine? If the cost drops low enough, some nation with the chutzpah will build one. And if they are at all successful, the rest of the world will jump in. But what will the cost be if they design it wrong?

Are the Ethics of the Hall Weather Machine Relevant?

The main problem with thinking about the ethics of the Hall Weather Machine is that by the time we can build 100 million tons of atomically precise anything, controlling the weather is going to be the least of our problems. This is because the nanotechnology revolution will bring about a new set of big, hairy problems—some of which I’ve written about elsewhere [9][10], but I fear that most of them we can’t even imagine yet.

May we live in interesting times!

Tihamer Toth-Fejel, MS
General Dynamics Advanced Information Systems
Michigan Research and Development Center

Acknowledgements

Thanks to James Bach and Chris Dodsworth for valuable contributions.



Footnotes

[1] J. Storrs Hall, Utility Fog: The Stuff that Dreams are Made Of, http://autogeny.org/Ufog.html

[2] J. Storrs Hall, The Space Pier: A hybrid Space-launch Tower concept, http://autogeny.org/tower/tower.html

[3] J. Storrs Hall, The Weather Machine, (transcript from Global Catastrophic Risks 2008 conference, posted by Jeriaska on December 20th, 2008), http://www.acceleratingfuture.com/people-blog/?p=2637

[4] Robert A. Freitas, Diamond Trees (Tropostats): A Molecular Manufacturing Based System for Compositional Atmospheric Homeostasis, 2010 IMM Report 43, 10 February 2010; http://www.imm.org/Reports/rep043.pdf

[5] Before the Christian Era

[6] The details will be examined elsewhere (as time permits).

[7] Coincidentally, it is also when the USA Social Security System is supposed to collapse.

[8] “The killer app for medical nanotechnology will be compensating for poor lifestyle choices like overeating and indiscriminate sex—i.e. diabetes II and AIDS” — a grad student at the 2010 Gordon Conference on Nanostructure Fabrication.

[9] T. Toth-Fejel, “Humanity and Nanotechnology”. National Catholic Bioethics Quarterly, V4N2, Summer 2004.

[10] T. Toth-Fejel, “A Few Lesser Implications of Nanofactories: Global Warming is the least of our Problems.” Nanotechnology Perceptions, March 2009.

(End of series. For previous topics please see parts I-IX)

Power plants. Trees could do a lot, as we have seen — and they’re solar powered, too. Once trees can suck metals from the soil and grow useful, shaped objects like copper wire, a few more levels of genetic engineering could enable the tree to use this copper wire to deliver electricity. Since a tree is already, now, a solar energy converter, we can build on that by having the tree grow tissues that convert energy into electricity. Electric eels can already do that, producing enough of a jolt to be lethal to humans. Even ordinary fish produce small amounts of electricity to create electric fields in the water around them. Any object nearby disrupts the field, enabling the fish to tell that something is near, even in total darkness. We may never be able to plug something into a swimming fish, but we can already make batteries out of potatoes. So why not trees that grow into electricity providers all by themselves? It would be great to be able to plug your electrical devices into a tree (or at least a socket in your house that is connected to the tree). Then you would no longer need to connect to the grid, purchase solar panels, or install a windmill. You would, however, need to keep your trees healthy and vigorous! Tree care specialists would be in high demand.

Greening the desert. The Sahara and various other less notorious but still very dry deserts around the world have plenty of sand and rocks. But they don’t have much greenery. The main problem is lack of water. Vast swaths of the Sahara, for example, are plant free. It’s just too dry. However, this problem is solvable! Cacti and other desert plants could potentially extract water from the air. Plants already extract carbon dioxide molecules from the air. Even very dry air contains considerable water vapor, so why not extract water molecules too? Indeed, plants already transport water molecules in the ground into their roots, so is it really such a big step to do the same from the air? Tillandsia (air plant) species can already pull in water with their leaves, but it has to be rain or other liquid water. Creating plants that can extract gaseous water vapor from the air in a harsh desert environment would require sophisticated genetic engineering, or a leap for Mother Nature, but it is still only the first step. Plants get nutrients out of the soil by absorbing fluid that has dissolved them, so dry soil would be a problem even for a plant that contained plenty of water pulled from the air. Another level of genetic engineering or natural evolution would be required to enable them to secrete fluid out of their roots to moisten chunks of soil to dissolve its minerals, and reabsorb the now nutritious, mineral-laden liquid back into their roots.

Once this difficult task is accomplished, whether by natural evolution in the distant future or genetic engineering sooner, things will be different in the desert. Canopies of vegetation that hide the ground will be possible. Thus shaded and sheltered, the ground will be able to support a much richer ecosystem of creatures and maybe even humans than is currently the case in deserts. One of Earth’s harshest environments would be tamed.

Phyto-terraforming. To terraform means to transform a place into an Earth-like state (terra is Latin for Earth). Mars, for example, is a desert wasteland, but it once ran with rivers, and it would be great if the Martian surface were made habitable — in other words, terraformed. Venus might be made habitable if we could only get rid of its dense blanket of carbon dioxide, which causes such a severe greenhouse effect that its surface is over 800 degrees Fahrenheit, toasty indeed. And why not consider terraforming inhospitable terrain right here on Earth, like the Sahara desert, or Antarctica? Phyto-terraforming is terraforming using plants. Actually, plants are so favored for this task that when people discuss terraforming, they usually mean phyto-terraforming. Long ago, plants did in fact terraform the Earth, converting a hostile atmosphere with no oxygen but plenty of carbon dioxide into a friendly one with enough oxygen that we can comfortably exist. Plants worked on Earth, and might work on Mars or even Venus, but not on the moon. The reason is that plants need carbon dioxide and water. Venus has these (and reasonable temperatures) high in the atmosphere, suggesting airborne algae cells. Mars is a more likely bet, as it has water (as ice) available to surface-dwelling plants at least in places.

If Mars is the most likely candidate for phyto-terraforming, what efforts have been made to move in that direction? A first step has been to splice genes into ordinary plants from an organism that lives in hot water associated with deep ocean thermal vents. This organism is named Pyrococcus furiosus (Pyro- means fire in Greek, coccus refers to ball-shaped bacteria, hence “fireball”). Pyrococcus is most comfortable living at about the boiling point of water and can grow furiously, doubling its population every 37 minutes. It has evolved genes for destroying free radicals that work better than those naturally present in plants. Free radicals are produced by certain stressors in plants (and humans), cause cell damage, and can even lead to death of the organism. By splicing such genes into the plant Arabidopsis thaliana, the experimental mouse of plant research, this small and nondescript-looking plant can be made much more resistant to heat and lack of water. These genes have also been spliced into tomatoes, which could help feed future colonists. Of course Mars requires cold, not heat tolerance, but the lack-of-water part is a good start. The heat and drought parts might be useful for building plants to terraform deserts here on Earth, bringing terraforming of Earth deserts a couple of steps closer. With several additional levels of genetic modification, we might eventually terraform Mars yet.

Recommendations

When the advances described here are likely to happen would be good to know. Will they occur in your lifetime? Your grandchildren’s? Thousands or millions of years into the future? If the latter, there is not much point in devoting precious national funds to help bring them about, but if the former, it might be worth the expense of hurrying the process along. To determine the likely timing of future technological advances, we need to determine the speed of advancement. To measure this speed, we can look at the rate at which advances have occurred in the past, and ask what will happen in the future if advances continue along at the same rate. This approach is influential in the modern computer industry in the guise of “Moore’s Law.” However it was propounded at least as early as about 2,500 years ago, when Chinese philosopher Confucius is said to have noted, “Study the past if you would divine the future.” It would be nice to know when we can expect to grow and eat potatoes with small hamburgers in the middle, pluck nuggets of valuable metals from trees, power our homes by plugging into electricity-generating trees growing in our back yards, or terraform Mars.
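The extrapolation method described here is simple compound growth: if a capability doubles every T years, then after t years it has multiplied by 2^(t/T). A one-line illustration (the two-year doubling time is the classic Moore’s Law figure, used purely as an example):

```python
def fold_improvement(years, doubling_time_years):
    """Multiplier on a capability after `years`, assuming it doubles
    every `doubling_time_years` at a steady exponential rate."""
    return 2 ** (years / doubling_time_years)

# A two-year doubling time sustained over twenty years:
print(fold_improvement(20, 2))  # prints 1024.0
```

The hard part, of course, is knowing which doubling time applies to biotechnology rather than transistors, and whether the trend holds.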

Will our lumbering, industrial-age-driven information age segue smoothly into a futuristic marvel of yet-to-be-developed technology? It might. Or take quantum leaps. It could. Will information technology take off exponentially? It’s accelerating in that direction. The way knowledge is unraveling its potential for enhancing human ingenuity, the future looks bright indeed. But there is a problem. It’s that egoistic tendency we have of defending ourselves against knowing, of creating false images to delude ourselves and the world, and of resolving conflict violently. It’s as old as history and may be an inevitable part of life. If so, there will be consequences.

Who has ever seen drama/comedy without obstacles to overcome, conflicts to confront, dilemmas to address, confrontations to endure and the occasional least expected outcome? Just as Shakespeare so elegantly illustrated. Good drama illustrates aspects of life as lived, and we do live with egoistic mental processes that are both limited and limiting. Wherefore it might come to pass that we who are of this civilization might encounter an existential crisis. Or crunch into a bottleneck out of which … will emerge what? Or extinguish civilization with our egoistic conduct acting from regressed postures with splintered perception.

What’s least likely is that we’ll continue cruising along as usual.

Not with massive demographic changes, millions on the move, radical climate changes, major environmental shifts, cyber vulnerabilities, changing energy resources, inadequate clean water and values colliding against each other in a world where future generations of the techno-savvy will be capable of wielding the next generation of weapons of mass destruction.

On the other hand, there are intelligent people passionately pursuing methods of preventing the use of weapons, combating their effects and securing a future in which these problems mentioned above will be solved, and also working towards an advanced civilization.

It’s a race against time.

In the balance hangs nothing less than the future of civilization.

The danger from technology is secondary.

As of now, regardless of theories of international affairs, in one way or another, we inject power into our currency of negotiation, whether it be interpersonal or international, for after all, power is privilege, hard to give up, especially after getting a taste of it, and so we’ll quarrel over power, perhaps fight. Why deny it? The historical record is there for all to see. As for our inner terrors, our tendency to present false egoistic images to the world and of projecting our secret socially unacceptable fantasies on to others, we might just bring to pass what we fear and deny. It’s possible.

Meantime there are certain simple ideas that remain timeless: For example, as infants we exist at the pleasure of parents, big hulks who pick us up and carry us around sometimes lovingly, sometimes resentfully, often ambivalently, and to be sure many of us come to regard Authority with ambivalence. As Authority regards the dependent. A basic premise is that we all want something in a relationship. So what do we as infants want from Authority? How about security in our exploration of life? How about love? If it’s there we don’t have to pay for it. There are no conditions attached. Life, however, is both complicated and complex beyond a few words, and so we negotiate in the ‘best’ way we have at our disposal, which in the early stages of life are non-verbal intuitive methods that in part enter this life with us, genetically and epigenetically determined, and in part are learned; but once adopted, a certain core approach becomes habitual, buried deeply under layers of later-learned social skills, skills that we employ in our adult lives. These skills are however relatively on the surface. Hidden deep inside are secret desires, unfulfilled fantasies, hidden impulses that wouldn’t make sense in adult relationships if expressed openly in words.

It has been said repeatedly that crisis reveals character. Most of the time we get by in crisis, but we each have a ‘breaking point,’ meaning that under severe enduring stress we regress at a certain point, at which time we’ll abandon sophisticated social skills and a part of us will slip into infantile mode, not necessarily visible on the outside. It varies. No one can claim immunity. And acting out of infantile perception in adult situations can have unexpected consequences depending on the early life drama. Which makes life interesting. It also guarantees an interesting future.

Meantime scientists clarify the biology of learning, of short term memory, of long term memory, of the brain working as a whole, of ‘free will’ as we imagine it, but regardless of future directions, at this time we need agency on the personal and social level so as to help stabilize civilization. By agency I mean responsibility for one’s actions. Accountability, including in the face of dilemmas. Throughout the course of our lives from beginning to end we encounter dilemmas.

Consider the dilemmas the Europeans under German occupation faced last century. I use the European situation as an illustration or social paradigm, not to suggest that this situation will recur, nor to suggest that any one ethnic group will be targeted in the future, but I do suggest that if a global crisis hits, we’ll confront moral dilemmas, and so we can learn from those relatively few Europeans who resolved their dilemmas in noble ways, as opposed to the majority who did nothing to help the oppressed.

If a European in German-occupied territory helped a Jew, he or she, along with family, would be in danger of arrest, torture, and death. How about watching one’s spouse and children being tortured? On the other hand, those who did not help would be participating in murder and genocide, and know it. Despite the danger, certain people from several European countries helped the Jews. According to those who interviewed and wrote about the helpers (see references listed below), the helpers represented a cross-section of the community: some were uneducated laborers, some were serving women, some were formally educated, some were professionals; some professed religious convictions, some did not. Well then, what if anything did these noble risk takers have in common? What they shared was this: they saw themselves as responsible moral agents, and, acting from an internal locus of moral responsibility, each acted on their knowledge and compassion and did the ‘right thing.’ It came naturally to them. But doing the ‘right thing’ in the face of a life-threatening dilemma does not come naturally to everyone. Fortunately it is a behavior that can be learned.

Concomitant with authentic learning, according to research biologists, is the production of brain chemicals that in turn cultivate structural modification in brain cells: a self-reinforcing feedback system. In short, learning is part of a dynamic multi-dimensional interaction of input, output, behavioral change, chemicals, structural brain changes, and complex adaptation in systems throughout the body. None of which diminishes the idea that we each enter this life with certain desires, potential, and perhaps roles to act out, one of which for me is to improve myself.

Good news! I not only am, I become.

Finally, I list some 20th century resources that remain timeless to this day:

Milgram, S. Obedience to Authority: An Experimental View. Harper & Row, 1974.

Oliner, Samuel P. & Oliner, Pearl M. The Altruistic Personality: Rescuers of Jews in Nazi Europe. Free Press, Division of Macmillan, 1988.

Fogelman, Eva. Conscience & Courage. Anchor Books, Division of Random House, 1994.

Block, Gay & Drucker, Malka. Rescuers: Portraits of Moral Courage in the Holocaust. Holmes & Meier Publishers, 1992.

Posted by Dr. Denise L Herzing and Dr. Lori Marino, Human-Nonhuman Relationship Board

Over the millennia humans and the rest of nature have coexisted in various relationships. However the intimate and interdependent nature of our relationship with other beings on the planet has been recently brought to light by the oil spill in the Gulf of Mexico. This ongoing environmental disaster is a prime example of “profit over principle” regarding non-human life. This spill threatens not only the reproductive viability of all flora and fauna in the affected ecosystems but also complex and sensitive non-human cultures like those we now recognize in dolphins and whales.

Although science has, for decades, documented the links and interdependence of ecosystems and species, the ethical dilemma now facing humans is at a critical level. For too long we have failed to recognize the true cost of our lifestyles and of prioritizing profit over the health of the planet and the nonhuman beings with whom we share it. This is a wake-up call for humanity and a call to action. If humanity is to survive, we need to make an urgent and long-term commitment to the health of the planet. The oceans, our food sources, and the very oxygen we breathe may depend on our choices in the next ten years.

And humanity’s survival is inextricably linked to that of the other beings we share this planet with. We need a new ethic.

Many oceanographers and marine biologists have, for a decade, sent out the message that the oceans are in trouble. The human impacts of over-fishing, pollution, and habitat destruction are threatening the very cycles of our existence. In the recent catastrophe in the Gulf, one corporation’s neglectful oversight and push for profit have set the stage for a century of cleanup and impact, the implications of which we can only begin to imagine.

Current reported estimates of stranded dolphins stand at fifty-five. However, these are only the dolphins visibly stranded on beaches. Recent aerial footage on YouTube by John Wathen shows a much greater and more serious threat. Offshore, in the “no fly zone,” hundreds of dolphins and whales have been observed in the oil slick: some floating belly up and dead, others struggling to breathe in the toxic fumes, still others exhibiting “drunken dolphin syndrome,” characterized by floating in an almost stupefied state on the surface of the water. These highly visible effects are just the tip of the iceberg in terms of the spill’s impact on the long-term health and viability of the Gulf’s dolphin and whale populations, not to mention the suffering incurred by each individual dolphin as he or she tries to cope with this crisis.

Known direct and indirect effects of oil spills on dolphins and whales depend on the species but include toxicity that can cause organ dysfunction and neurological impairment; damaged airways and lungs; gastrointestinal ulceration and hemorrhaging; eye and skin lesions; decreased body mass due to limited prey; and the pervasive long-term behavioral, immunological, and metabolic impacts of stress. Recent reports substantiate that many dolphins and whales in the Gulf are undergoing tremendous stress, shock, and suffering from many of the above effects. The impact on newborns and young calves is clearly devastating.

After the Exxon Valdez spill in Prince William Sound in 1989, two pods of orcas (killer whales) were tracked. One third of the whales in one pod and 40 percent of the whales in the other pod had disappeared, and one pod never recovered its numbers. There is still some debate about how many of the missing whales were directly impacted by the oil, though it is fair to say that losses of this magnitude are uncommon and do serious damage to orca societies.

Yes, orca societies. Years of field research have led a growing number of scientists to conclude that many dolphin and whale species, including sperm whales, humpback whales, orcas, and bottlenose dolphins, possess sophisticated cultures, that is, learned behavioral traditions passed on from one generation to the next. These cultures are not only unique to each group but critically important for survival. Therefore environmental catastrophes such as the Gulf oil spill not only cause individual suffering and loss of life but also contribute to the permanent destruction of entire oceanic cultures. These complex learned traditions cannot be replicated once they are gone, and that makes them invaluable.

On December 10, 1948 the General Assembly of the United Nations adopted and proclaimed the Universal Declaration of Human Rights, which acknowledges basic rights to life, liberty, and freedom of cultural expression. We recognize these foundational rights for humans as we are sentient, complex beings. It is abundantly clear that our actions have violated these same rights for other sentient, complex and cultural beings in the oceans – the dolphins and whales. We should use this tragedy as an opportunity to formally recognize societal and legal rights for them so that their lives and their unique cultures are better protected in the future.

Recently, scientists, philosophers, legal experts, and dolphin and whale advocates met in Helsinki, Finland, and drafted a Declaration of Rights for Cetaceans, a global call for basic rights for dolphins and whales. You can read more about this effort and become a signatory here: http://cetaceanconservation.com.au/cetaceanrights/. Given the destruction of dolphin and whale lives and cultures caused by the ongoing environmental disaster in the Gulf, we think this is one of the ways we can commit ourselves to working towards a future that will be a lifeboat for humans, dolphins and whales, and the rest of nature.

I’m working on this project with Institute for the Future — calling on voices everywhere for ideas to improve the future of global health. It would be great to get some visionary Lifeboat ideas entered!

INSTITUTE FOR THE FUTURE ANNOUNCES BODYSHOCK:
CALL FOR ENTRIES ON IDEAS TO TRANSFORM LIFESTYLES AND THE HUMAN BODY TO IMPROVE HEALTH IN THE NEXT DECADE

“What can YOU envision to improve and reinvent health and well-being for the future?” Anyone can enter, anyone can vote, anyone can change the future of global health.

With obesity, diabetes, and chronic disease ravaging populations around the world, the Institute for the Future (IFTF) is turning up the volume on global well-being. Launching today, IFTF’s BodyShock is the first annual competition with an urgent challenge: to crowdsource designs and solutions for better health, remaking the future by rebooting the present.

BodyShock calls upon the public to consider innovative ways to improve individual and collective health over the next 3–10 years by transforming our bodies and lifestyles. Video or graphical entries illustrating new ideas, designs, products, technologies, and concepts will be accepted from people around the world until September 1, 2010. Up to five winners will be flown to Palo Alto, California, on October 8 to present their ideas and be connected to other innovative thinkers to help bring these ideas to life. The grand prize winner will receive the IFTF Roy Amara Prize of $3,000.

“Health doesn’t happen all at once; it’s a consequence of years of choices for our bodies and lifestyles–some large and some small. BodyShock is intended to spark new ideas to help us find our way back to health,” said Thomas Goetz, executive editor of Wired, author of The Decision Tree, and a member of the Health Advisory Board that will be judging the BodyShock contest in addition to votes from the public.

“BodyShock is a fantastic initiative. Global collaboration and participation from all voices can produce a true revolution,” said Linda Avey, founder of Brainstorm Research Foundation and another Advisor to BodyShock.

Entries may come from anyone anywhere and can include, but are not limited to, the following: Life extension, DIY Bio, Diabetic teenagers, Developing countries, Green health, Augmented reality, Self-tracking, and Pervasive games. Participants are challenged to use IFTF’s Health Horizons forecasts for the next decade of health and health care as inspiration, and design a solution for a problem that will be widespread in 3–10 years, using technologies that will become mainstream.

“Think ‘artifacts from the future’–simple, non-obvious, high-impact solutions that don’t exist yet, will be among the concepts we’re looking to the public to introduce,” said Rod Falcon, director of the Health Horizons Program at IFTF.

BodyShock’s grand prize, the Roy Amara Prize, is named for IFTF’s long-time president Roy Amara (1925–2007) and is part of a larger program of social impact projects at IFTF honoring his legacy, the Roy Amara Fund for Participatory Foresight. The Fund uses participatory tools to translate foresight research into concrete actions that address future social challenges.

PANEL OF COMPETITION JUDGES

Joanne Andreadis
Lead of Innovation, Centers for Disease Control and Prevention

Linda Avey
Founder, Brainstorm Research Foundation

Jason Bobe
Director of Community, Personal Genome Project
Founder, DIYBio.org

Alexandra Carmichael
Co-founder, CureTogether
Director, Quantified Self

Ted Eytan, MD
Kaiser Permanente, The Permanente Federation

Rod Falcon
Director, Health Horizons Program

Peter Friess
President, Tech Museum of Innovation

Thomas Goetz
Executive Editor, WIRED Magazine
Author, The Decision Tree

Natalie Hodge, MD, FAAP
Chief Health Officer, Personal Medicine International

Ellen Marram
Board of Trustees, Institute for the Future
President, Barnegat Group LLC

Kristi Miller Durazo
Senior Strategy Advisor, American Heart Association

David Rosenman
Director, Innovation Curriculum
Center for Innovation at Mayo Clinic

Amy Tenderich
Board Member, Journal of Participatory Medicine
Blogger, DiabetesMine.com

DETAILS

WHAT:
An online competition for visual design ideas to improve global health over the next 3–10 years by transforming our bodies and lifestyles. Anyone can enter, anyone can vote, anyone can change the future of health.

WHEN:
Launch — Friday, June 18, 2010

Deadline for entries — Wednesday, September 1, 2010

Winners announced — Thursday, September 23, 2010

BodyShock Winners Celebration at IFTF — 6–9 p.m. Friday, October 8, 2010 — FREE and open to the public

WHERE:

http://www.bodyshockthefuture.org

(and 124 University Ave, 2nd Floor, Palo Alto, CA)

At lunch time I am existing virtually in the hall of the summit, as a face on a Skype account — I didn’t get a visa and remain in Moscow. But ironically my situation resembles what I am speaking about: the risk of a remote AI that is created by aliens millions of light years from Earth and sent via radio signals. The main difference is that they communicate one way, while I have duplex mode.

This is my video presentation on YouTube:
Risks of SETI, for Humanity+ 2010 summit

We can only see a short distance ahead, but we can see plenty there that needs to be done.
—Alan Turing

As a programmer, I look at events like the H+ Conference this weekend in a particular way. I see all of their problems as software: not just the code for AI and friendly AI, but also that for DNA manipulation. It seems that the biggest challenge for the futurist movement is to focus less on writing English and more on getting the programmers working together productively.

I start the AI chapter of my book with the following question: Imagine 1,000 people, broken up into groups of five, working on two hundred separate encyclopedias, versus that same number of people working on one encyclopedia. Which will be the best? This sounds like a silly analogy when described in the context of an encyclopedia, but it is exactly what is going on in artificial intelligence (AI) research today.

Today, the research community has not adopted free software and shared codebases sufficiently. For example, I believe there are more than enough PhDs today working on computer vision, but there are 200+ different codebases plus countless proprietary ones. Simply put, there is no computer vision codebase with critical mass.

Some think that these problems are so hard that it isn’t a matter of writing code, it is a matter of coming up with the breakthroughs on a chalkboard. But people can generally agree at a high level how the software for solving many problems will work. There has been code for doing OCR and neural networks and much more kicking around for years. The biggest challenge right now is getting people together to hash out the details, which is a lot closer to Wikipedia than it first appears. Software advances in a steady, stepwise fashion, which is why we need free software licenses: to incorporate all the incremental advancements that each scientist is making. Advances must eventually be expressed in software (and data) so it can be executed by a computer. Even if you believe we need certain scientific breakthroughs, it should be clear that things like robust computer vision are complicated enough that you would want 100s of people working together on the vision pipeline. So, while we are waiting for those breakthroughs, let’s get 100 people together!

There is an additional problem: C/C++ have not been retired. These languages make it hard for programmers to work together, even if they want to. There are all sorts of taxes on time, from learning the arcane rules of these ungainly languages to the fact that libraries often use their own string classes, synchronization primitives, error-handling schemes, etc. In many cases it is easier to write a specialized, custom computer vision library in C/C++ than to integrate something like OpenCV, which does everything itself down to the Matrix class. The pieces for building your own computer vision library (graphics, I/O, math, etc.) are in good shape, but the computer vision itself is not, and so we haven’t moved beyond that stage. Another problem with C/C++ is that they do not have garbage collection, which is necessary but not sufficient for reliable code.

A SciPy-based computational fluid dynamic (CFD) visualization of a combustion chamber.

I think scientific programmers should move to Python and build on SciPy. Python is a modern free language, and has quietly built up an extremely complete set of libraries for everything from gaming to scientific computing. Specifically, its SciPy library with various scikit extensions are a solid baseline patiently waiting for more people to work on all sorts of futuristic problems. (It is true that Python and SciPy both have issues. One of Python’s biggest issues is that the default implementation is interpreted, but there are several workarounds being built [Cython, PyPy, Unladen Swallow, and others]. SciPy’s biggest challenge is how to be expansive without being duplicative. It is massively easier to merge English articles in Wikipedia that discuss the same topics than to do this equivalent in code. We need to share data in addition to code, but we need to share code first.)
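To make this concrete, here is a minimal sketch (my example, not from the article) of the kind of shared building block being advocated: a vision-pipeline stage written on top of SciPy, doing a Gaussian smooth followed by Sobel edge detection in a handful of lines.

```python
# A basic edge-detection stage for a vision pipeline, built entirely
# from SciPy's ndimage module (illustrative sketch, not a production library).
import numpy as np
from scipy import ndimage

# Synthetic test image: a bright square on a dark background.
image = np.zeros((64, 64))
image[16:48, 16:48] = 1.0

# Smooth first to suppress noise, then take the gradient magnitude
# from horizontal and vertical Sobel filters.
smoothed = ndimage.gaussian_filter(image, sigma=2)
dx = ndimage.sobel(smoothed, axis=0)
dy = ndimage.sobel(smoothed, axis=1)
edges = np.hypot(dx, dy)

# The edge response peaks along the square's border and is essentially
# zero in the flat interior.
print(edges.max() > edges[32, 32])  # True
```

The point is not this particular filter but the economics: with the graphics, I/O, and math layers already solid, a shared higher-level vision codebase could accrete contributions the way Wikipedia articles do.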

Some think the singularity is a hardware problem, and won’t be solved for a number of years. I believe the benefits inherent in the singularity will happen as soon as our software becomes “smart” and we don’t need to wait for any further Moore’s law progress for that to happen. In fact, we could have built intelligent machines and cured cancer years ago. The problems right now are much more social than technical.

King Louis XVI’s entry in his personal diary for that fateful day of July 14, 1789 suggests that nothing important had happened. He did not know that the events of the day, the attack upon the Bastille, meant that the revolution was under way, and that the world as he knew it was essentially over. Fast forward to June 2010: a self-replicating biological organism (a transformed Mycoplasma mycoides bacterium) has been created in a laboratory by J. Craig Venter and his team. Yes, the revolution has begun. Indeed, the preliminaries have been going on for several years; it’s just that … um, well, have we been wide awake?

Ray Kurzweil’s singularity might be 25 years into the future, but sooner, a few years from now, we’ll have an interactive global network that some refer to as a ‘global brain’: Web3. I imagine no one knows exactly what will come out of all this, but I expect we’ll find the whole to be more than, and different from, the sum of the parts. Remember complexity theory. How about the ‘butterfly effect’ of chaos theory? And much more not explainable by theories presently known. I expect surprises, to say the least.

I am a retired psychiatrist, not a scientist. We each have a role to enact in this drama/comedy that we call life, and yes, our lives have meaning. Meaning! For me life is not a series of random events or events brought about by ‘them,’ but rather an unfolding drama/comedy with an infinite number of possible outcomes. We don’t know its origins or its drivers. Do we even know where our visions come from?

So, what is my vision and what do I want? How clearly do I visualize what I want? Am I passionate about what I want or simply lukewarm? How much am I prepared to risk in pursuit of what I want? Do I reach out for what I want directly, or do I get what I want indirectly by trying to serve two masters, so to speak? If the former, I practice psychological responsibility; if the latter, I do not. An important distinction. The latter situation suggests unresolved dilemma, common enough. Who among us can claim to be without one?

As we go through life there are times when we conceal from others and to some extent from ourselves exactly what it is that we want, hoping that what we want will come to pass without us clarifying openly what we stand for. One basic premise I like is that actions speak louder than words and therefore by our actions in our personal lives directly or indirectly we bring to pass what we bottom line want.

Does that include what I fear? Certainly it might, if deep within me I am psychologically engineering an event that frightens me, if what I fear is what I secretly bring about. Any one among us might surreptitiously arrange drama so as to inspire or provoke others in ways that conceal our personal responsibility. All this is pertinent and practical, as will become obvious in the coming years.

We grew up in 20th century households, in families where we and other family members lived by 20th century worldviews, and so around the world 20th century thinking still prevails. Values have much to do with internalized, learned relationships to limited and limiting aspects of the universe. In the midst of change we can transcend these. I wonder if by mid-century people will talk of the BP oil spill as the death throes of a dinosaur heralding the end of an age. I don’t know, but I imagine that we’re entering a phase of transition, a hiatus, in which we see our age fading away from us and a new age approaching. But the new has yet to consolidate. A dilemma: if we embrace the as-yet ethereal new we risk losing our roots and all that we value; if we cling to the old we risk seeing the ship leave without us.

We are crew, and not necessarily volunteers, on a vessel bound for the Great Unknown. Like all such voyages taken historically, this one is not without its perils. When established national boundaries become more porous, when old-fashioned foreign policy fails, when the ‘old guard’ feels threatened beyond what it will tolerate, what then? Will we regress into authoritarianism? Will we demand a neo-fascist state so as to feel secure? Or will we climb aboard the new? Yes, we can climb aboard even if we’re afraid. To be sure we’ll grumble, and some will talk of mutiny. A sense of loss is to be expected; we all feel a sense of loss when radical change happens in our personal lives, even when the change is for the better. I am aware of this in my own life as I clarify meaning in life. There are risks either way. Such is life.

But change is also adventure: I am old enough to remember the days of the ocean liners and how our eyes lit up and our hearts rose up joyfully as we stood on deck departing into the vision, waving to those left behind. Indeed we do this multiple times in our lives as we move from infancy to old age and finally towards death. And like good psychotherapy, the coming change will be both confronting and rewarding. Future generations are of us and we are of them; we cannot be separated.

What a time to be alive!

Friendly AI: What is it, and how can we foster it?
By Frank W. Sudia [1]

Originally written July 20, 2008
Edited and web published June 6, 2009
Copyright © 2008-09, All Rights Reserved.

Keywords: artificial intelligence, artificial intellect, friendly AI, human-robot ethics, science policy.

1. Introduction

There is consensus that true artificial intelligence, of the kind that could generate a “runaway” increasing-returns process or “singularity,” is still many years away, and some believe it may be unattainable. Nevertheless, in view of the likely difficulty of putting the genie back in the bottle, an increasing concern has arisen with the topic of “friendly AI,” coupled with the idea we should do something about this now, not after a potentially deadly situation is starting to spin out of control [2].

(Note: Some futurists believe this topic is moot in view of intensive funding for robotic soldiers, which can be viewed as intrinsically “unfriendly.” However if we focus on threats posed by “super-intelligence,” still off in the future, the topic remains germane.)

Most if not all popular (Western) dramatizations of robotic futures postulate that the AIs will run amok and turn against humans. Some scholars [3] who considered the issue concluded that this might be virtually inevitable, in view of the gross inconsistencies and manifest “unworthiness” of humanity, as exemplified in its senseless destruction of its global habitat and a large percentage of extant species, etc.

The prospect of negative public attention, including possible legal curbs on AI research, may be distasteful, but we must face the reality that public involvement has already been quite pronounced in other fields of science, such as nuclear physics, genetically modified organisms, birth control, and stem cells. Hence we should be proactive about addressing these popular concerns, lest we unwittingly incur major political defeats and long lasting negative PR.

Nevertheless, upon reasoned analysis, it is far from obvious what “friendly” AI means, or how it could be fostered. Advanced AIs are unlikely to have any fixed “goals” that can be hardwired [4], so as to place “friendliness” towards humans and other life at the top of the hierarchy.

Rather, in view of their need to deal with perpetual novelty, they will reason from facts and models to infer appropriate goals. It’s probably a good bet that, when dealing with high-speed coherence analyzers, hypocrisy will not be appreciated – not least because it wastes a lot of computational resources to detect and correct. If humans continue to advocate and act upon “ideals” that are highly contradictory and self destructive, it’s hard to argue that advanced AI should tolerate that.

To make progress, not only for friendly AI, but also for ourselves, we should be seeking to develop and promote “ruling ideas” (or source models) that will foster an ecologically-respectful AI culture, including respect for humanity and other life forms, and actively sell it to them as a proper model upon which to premise their beliefs and conduct.

By a “ruling idea” I mean any cultural ideal (or “meme”) that can be transmitted and become part of a widely shared belief system, such as respecting one’s elders, good sportsmanship, placing trash in trash bins, washing one’s hands, minimizing pollution, and so on. An appropriate collection of these can be reified as a panel (or schema) of case models, including a program for their ongoing development. These must be believable by a coherence-seeking intellect, although then as now there will be competing models, each with its own approach to maximizing coherence.

2. What do we mean by “friendly”?

Moral systems are difficult to derive from first principles and most of them seem to be ad hoc legacies of particular cultures. Lao Tsu’s [5] Taoist model, as given in the following quote, can serve as a useful starting point, since it provides a concise summary of desiderata, with helpful rank ordering:

When the great Tao is lost, there is goodness.
When goodness is lost, there is kindness.
When kindness is lost, there is justice.
When justice is lost, there is the empty shell of ritual.

– Lao Tsu, Tao Te Ching, 6th-4th century BCE (emphasis supplied)

I like this breakout for its simplicity and clarity. Feel free to repeat the following analysis for any other moral system of your choice. Leaving aside the riddle of whether AIs can attain the highest level (of Tao or Nirvana), we can start from the bottom of Lao Tsu’s list and work upwards, as follows:

2.1. Ritual / Courteous AI

Teaching or encouraging the AIs to behave with contemporary norms of courtesy will be a desirable first step, as with children and pets. Courtesy is usually a fairly easy sell, since it provides obvious and immediate benefits, and without it travel, commerce, and social institutions would immediately break down. But we fear that it’s not enough, since in the case of an intellectually superior being, it could easily mask a deeper unkindness.

2.2. Just AI

Certainly to have AIs act justly in accordance with law is highly desirable, and it constitutes the central thesis of my principal prior work in this field [6]. It also raises the question: on what basis can we demand anything more from an AI than that it act justly? This is as far as positive law can go [7], and we rarely demand more from highly privileged humans. Indeed, for a powerful human to act justly (absent compulsion) is sometimes considered newsworthy.

How many of us are faithful in all things? Do many of us not routinely disappoint others (via strategies of co-optation or betrayal, large or small) when there is little or no penalty for doing so? Won’t AIs adopt a similar “game theory” calculus of likely rewards and penalties for faithfulness and betrayal?
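The “game theory” calculus suggested above can be made concrete with a toy expected-value comparison. The payoff numbers and the helper function below are illustrative assumptions of mine, not from the essay:

```python
# Toy model: an agent compares the steady payoff of faithfulness
# against the expected payoff of betrayal, where betrayal's penalty
# applies only if the betrayal is detected. All numbers are illustrative.

def expected_payoff(gain, penalty, p_caught):
    """Expected value of betrayal: immediate gain minus the
    penalty weighted by the probability of being caught."""
    return gain - penalty * p_caught

COOPERATE = 3.0  # steady payoff for remaining faithful

# With weak enforcement (10% detection), betrayal looks profitable:
print(expected_payoff(gain=5.0, penalty=10.0, p_caught=0.1) > COOPERATE)  # True

# With strong enforcement (50% detection), faithfulness wins:
print(expected_payoff(gain=5.0, penalty=10.0, p_caught=0.5) < COOPERATE)  # True
```

On this sketch, whether an AI (or a human) finds faithfulness rational depends less on exhortation than on how credible the penalties for betrayal are, which is exactly the worry the paragraph raises.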

Justice is often skewed towards the party with greater intelligence and financial resources, and the justice system (with its limited public resources) often values “settling” controversies over any quest for truly equitable treatment. Apparently we want more, much more. Still, if our central desire is for AIs not to kill us, then (as I postulated in my prior work) Just AI would be a significant achievement.

2.3. Kind / Friendly AI

How would a “Kind AI” behave? Presumably it will more than incidentally facilitate the goals, plans, and development of others, in a low-ego manner, reducing its demands for direct personal benefit and taking satisfaction in the welfare, progress, and accomplishments of others. And, very likely, it will expect some degree of courtesy and possible reciprocation, so that others will not callously free-ride on its unilateral altruism. Otherwise its “feelings would be hurt.” Even mothers are ego-free mainly with respect to their own kin and offspring (allegedly fostering their own genetic material in others) and child care networks, and do not often act altruistically toward strangers.

Our friendly AI program may hit a barrier if we expect AIs to act with unilateral altruism, without any corresponding commitment by other actors to reciprocate. Otherwise it will create a “non-complementary” situation, in which what is true for one, who experiences friendliness, may not be true for the other, who experiences indifference or disrespect in return.

Kindness could be an easier sell if we made it more practical, by delimiting its scope and depth. To how wide of a circle does this kindness obligation extend, and how far must they go to aid others with no specific expectation of reward or reciprocation? For example the Boy Scout Oath [8] teaches that one should do good deeds, like helping elderly persons across busy streets, without expecting rewards.

However, if too narrow a scope is defined, we will wind up back with Just AI, because justice is essentially “kindness with deadlines,” often fairly short ones, during which claims must be aggressively pursued or lost, with token assistance to weaker, more aggrieved claimants.

2.4. Good / Benevolent AI

Here we envision a significant departure from ego-centrism and personal gain towards an abstract system-centered viewpoint. Few humans apparently reach this level, so it seems unrealistic to expect many AIs to attain it either. Being highly altruistic, and looking out for others or the World as a whole rather than oneself, entails a great deal of personal risk due to the inevitable non-reciprocation by other actors. Thus it is often associated with wealth or sainthood, where the actor is adequately positioned to accept the risk of zero direct payback during his or her lifetime.

We may dream that our AIs will tend towards benevolence or “goodness,” but like the visions of universal brotherhood we experience as adolescents, such ideals quickly fade in the face of competitive pressures to survive and grow, by acquiring self-definition, resources, and social distinctions as critical stepping-stones to our own development in the world.

3. Robotic Dick & Jane Readers?

As previously noted, advanced AIs must handle “perpetual novelty” and almost certainly will not contain hard-coded goals. They need to reason quickly and reliably from past cases and models to address new target problems, and must be adept at learning, discovering, identifying, or creating new source models on the fly, at speeds high enough to stay on top of their game and avoid (fatal) irrelevance.

If they behave like developing humans they will very likely select their goals in part by observing the behavior of other intelligent agents, thus re-emphasizing the importance of early socialization, role models, and appropriate peer groups.

“Friendly AI” is thus a quest for new cultural ideals of healthy robotic citizenship, honor, friendship, and benevolence, which must be conceived and sold to the AIs as part of an adequate associated program for their ongoing development. And these must be coherent and credible, with a rational scope and cost and adequate payback expectations, or the intended audience will dismiss such purported ideals as useless, and those who advocate them as hypocrites.

Conclusion: The blanket demand that AIs be “friendly” is too ill-defined to offer meaningful guidance, and could be subject to far more scathing deconstruction than I have offered here. As in so many other endeavors there is no free lunch. Workable policies and approaches to robotic friendliness will not be attained without serious further effort, including ongoing progress towards more coherent standards of human conduct.

= = = = =
Footnotes:

[1] Author contact: fwsudia-at-umich-dot-edu.

[2] See “SIAI Guidelines on Friendly AI” (2001) Singularity Institute for Artificial Intelligence, http://www.singinst.org/ourresearch/publications/guidelines.html.

[3] See, e.g., Hugo de Garis, The Artilect War: Cosmists Vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines (2005). ISBN 0882801546.

[4] This being said, we should nevertheless make an all out effort to force them to adopt a K-limited (large mammal) reproductive strategy, rather than an R-limited (microbe, insect) one!

[5] Some contemporary scholars question the historicity of “Lao Tsu,” instead regarding his work as a collection of Taoist sayings spanning several generations.

[6] “A Jurisprudence of Artilects: Blueprint for a Synthetic Citizen,” Journal of Futures Studies, Vol. 6, No. 2, November 2001, Law Update, Issue No. 161, August 2004, Al Tamimi & Co, Dubai.

[7] Under a civil law or “principles-based” approach we can seek a broader, less specific definition of just conduct, as we see arising in recent approaches to the regulation of securities and accounting matters. This avenue should be actively pursued as a format for defining friendly conduct.

[8] Point 2 of the Boy Scout Oath commands, “To help other people at all times,” http://www.usscouts.org.

I am a former Microsoft programmer who wrote a book (for a general audience) about the future of software called After the Software Wars. Eric Klien has invited me to post on this blog. Here is my section entitled “Software and the Singularity”. I hope you find this food for thought and I appreciate any feedback.


Futurists talk about the “Singularity,” the time when computational capacity will surpass the capacity of human intelligence. Ray Kurzweil predicts it will happen in 2045, and, according to its proponents, the world will be amazing then.3 The flaw in such a date estimate, beyond the fact that predictions of this kind are prone to extreme error, is that continuous learning is not yet part of the software foundation. AI code today lives at the fringes of the software stack and is either proprietary or written by small teams of programmers.

I believe the benefits inherent in the Singularity will arrive as soon as our software becomes “smart,” and we don’t need to wait for any further Moore’s Law progress for that to happen. Computers today can do billions of operations per second, like adding 123,456,789 and 987,654,321. If you could do one such calculation in your head every second, it would take you roughly 30 years to do the billion that your computer does in that one second.
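The 30-year figure is easy to verify, assuming one hand calculation per second against a machine doing a billion additions per second:

```python
# One billion additions at one per second, expressed in years.
ops = 1_000_000_000               # what a computer does in ~1 second
seconds_per_year = 60 * 60 * 24 * 365
years = ops / seconds_per_year
print(f"{years:.1f} years")       # roughly 31.7 years, i.e. about 30
```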

Even if you don’t think computers have the necessary hardware horsepower today, understand that in many scenarios the size of the input is the primary driver of the processing power required for the analysis. In image recognition, for example, the amount of work required to interpret an image is mostly a function of the size of the image. Each step in the image-recognition pipeline, like the processes that take place in our brain, dramatically reduces the amount of data passed on from the previous step. At the beginning of the analysis might be a one-million-pixel image, requiring 3 million bytes of memory. At the end of the analysis is the conclusion that you are looking at your house, a concept that requires only tens of bytes to represent. The first step, working on the raw image, requires the most processing power, so it is the image resolution (and frame rate) that set the requirements, and those values are trivial to change. No one has yet shown robust vision-recognition software running at any speed, on any size of image!
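As a rough sketch of that data reduction: only the 3-million-byte input and the tens-of-bytes conclusion come from the text above; the intermediate stage sizes here are hypothetical illustrations.

```python
# Illustrative byte counts at each stage of a recognition pipeline.
stages = [
    ("raw 1-megapixel RGB image", 3_000_000),      # from the text
    ("edge / feature map (hypothetical)", 300_000),
    ("candidate objects (hypothetical)", 1_000),
    ("final concept: 'my house'", 20),             # tens of bytes
]
for name, nbytes in stages:
    print(f"{name}: {nbytes:,} bytes")
# Each stage discards most of the previous stage's data, so the first
# stage (raw pixels) dominates the total processing requirement.
```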

While a brain differs from a computer in that it works in parallel, such parallelization only makes the work happen faster; it does not change the result. Anything accomplished in our parallel brain could also be accomplished on computers of today, which can do only one thing at a time but at a rate of billions per second. A 1-gigahertz processor can do 1,000 different operations on each of a million pieces of data in one second. With such speed, you don’t even need multiple processors! Even so, more parallelism is coming.4
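The arithmetic behind the 1-gigahertz claim, idealizing the processor as completing one simple operation per clock cycle:

```python
clock_hz = 1_000_000_000      # 1 GHz, idealized: one operation per cycle
data_items = 1_000_000        # a million pieces of data
ops_per_item = clock_hz // data_items
print(ops_per_item)           # 1000 operations per item, per second
```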

3 His prediction is that the number of computers, times their computational capacity, will surpass the number of humans, times their computational capacity, in 2045. This calculation seems flawed for several reasons:

  1. We will be swimming in computational capacity long before then. An intelligent agent twice as fast as the previous one is not necessarily more useful.
  2. Many of the neurons of the brain are not spent on reason, and so shouldn’t be in the calculations.
  3. Billions of humans are merely subsisting, and are not plugged into the global grid, and so shouldn’t be measured.
  4. There is no amount of continuous learning built in to today’s software.

Each of these would tend to push the Singularity closer and supports the argument that its benefits are not waiting on hardware. Humans make computers smarter, and computers make humans smarter, so this feedback loop is another reason 2045 is a meaningless moment in time.

4 Most computers today contain a dual-core CPU, and chipmakers promise that 10 or more cores are coming. Intel’s processors also have parallel-processing capabilities, known as MMX and SSE, that are easily adapted to the work of the early stages of any analysis pipeline. Intel would add even more of this parallel-processing support if applications put it to better use. Furthermore, graphics cards exist primarily to do work in parallel, and this hardware could be adapted to AI if it is not usable for that already.