Dear Lifeboat Foundation Family & Friends,

A few months back, my Aunt Charlotte wrote, wondering why I — a relentless searcher focused upon human evolution and long-term human survival strategy — had chosen to pursue a PhD in economics (Banking & Finance). I recently replied that, as it turns out, sound economic theory and global financial stability both play central roles in the quest for long-term human survival. In the fifth and final chapter of my recent Master's thesis, On the Problem of Sustainable Economic Development: A Game-Theoretical Solution, I argued (with considerable passion) that much of the blame for the economic crisis of 2008 (which is, essentially, still upon us) may be attributed to the adoption of Keynesian economics and the dismissal of the powerful counter-arguments tabled by his great rival, F.A. von Hayek. Despite the fact that the two men remained friends until the very end, their theories are diametrically opposed at nearly every point. There was, however, at least one central point they agreed upon — indeed, Hayek was fond of quoting one of Keynes’ most famous maxims: “The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else” [1].

And, with this nontrivial problem and the great Hayek vs. Keynes debate in mind, I’ll offer a preview-by-way-of-prelude with this invitation to turn a few pages of On the Problem of Modern Portfolio Theory: In Search of a Timeless & Universal Investment Perspective:

It is perhaps significant that Keynes hated to be addressed as “professor” (he never had that title). He was not primarily a scholar. He was a great amateur in many fields of knowledge and the arts; he had all the gifts of a great politician and a political pamphleteer; and he knew that “the ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is generally understood. Indeed the world is ruled by little else” [1]. And as he had a mind capable of recasting, in the intervals of his other occupations, the body of current economic theory, he more than any of his compeers had come to affect current thought. Whether it was he who was right or wrong, only the future will show. There are some who fear that if Lenin’s statement is correct that the best way to destroy the capitalist system is to debauch the currency, of which Keynes himself has reminded us [1], it will be largely due to Keynes’s influence if this prescription is followed.…

Perhaps the explanation of much that is puzzling about Keynes’s mind lies in the supreme confidence he had acquired in his power to play on public opinion as a supreme master plays on his instrument. He loved to pose in the role of a Cassandra whose warnings were not listened to. But, in fact, his early success in swinging round public opinion about the peace treaties had given him probably even an exaggerated estimate of his powers. I shall never forget one occasion – I believe the last time that I met him – when he startled me by an uncommonly frank expression of this. It was early in 1946, shortly after he had returned from the strenuous and exhausting negotiations in Washington on the British loan. Earlier in the evening he had fascinated the company by a detailed account of the American market for Elizabethan books which in any other man would have given the impression that he had devoted most of his time in the United States to that subject. Later a turn in the conversation made me ask him whether he was not concerned about what some of his disciples were making of his theories. After a not very complimentary remark about the persons concerned, he proceeded to reassure me by explaining that those ideas had been badly needed at the time he had launched them. He continued by indicating that I need not be alarmed; if they should ever become dangerous I could rely upon him again quickly to swing round public opinion – and he indicated by a quick movement of his hand how rapidly that would be done. But three months later he was dead [2].

As always, any and all comments, criticisms, thoughts, and suggestions are welcome!

Bidding you Godspeed,

Matt Funk, FLS, PhD Candidate, University of Malta, Dept. of Banking & Finance

[1]. KEYNES, J. (1936). The General Theory of Employment, Interest and Money (Palgrave Macmillan, London).

[2]. HAYEK, F. (1952). Review of R.F. Harrod’s ‘The Life of John Maynard Keynes’. J of Mod Hist 24:195–198.

When examining the delicate balance that life on Earth hangs within, it is impossible not to consider the ongoing love/hate connection between our parent star, the sun, and our uniquely terraqueous home planet.

On one hand, Earth is situated so perfectly, so ideally, inside the sun’s habitable zone, that it is impossible not to esteem our parent star with a sense of ongoing gratitude. It is, after all, the onslaught of spectral rain, the sun’s seemingly limitless output of charged particles, which provides the initial spark for all terrestrial life.

Yet on the other hand, during those brief moments of solar upheaval, when highly energetic Earth-directed ejecta threaten our precariously perched technological infrastructure with destruction, one cannot help but eye with caution the potentially calamitous fact that our entire human population resides a mere 93 million miles from this unpredictable stellar inferno.

On 6 February 2011, the twin solar observational spacecraft of the STEREO mission aligned at opposite ends of the sun along Earth’s orbit and, for the first time in human history, offered scientists a complete 360-degree view of the sun. Since solar observation began hundreds of years ago, humanity has had only one side of the sun in view at any given time, as the sun slowly completes a rotation roughly every 27 days. First launched in 2006, the two STEREO satellites are glittering jewels among a growing crown of heliophysics science missions that aim to better understand solar dynamics, and for the next eight years they will offer this dual-sided view of our parent star.

In addition to providing the source of all energy to our home planet Earth, the sun occasionally spews from its active regions violent bursts of energy, known as coronal mass ejections (CMEs). These fast-traveling clouds of ionized gas are responsible for lovely events like the aurorae borealis and australis, but beyond a certain point they have been known to overload orbiting satellites, set fire to ground-based technological infrastructure, and even usher in widespread blackouts.

CMEs are natural occurrences, and they are better understood than ever thanks to the emerging perspective of our sun as a dynamic star. Though humanity has known for centuries that the solar cycle follows a roughly eleven-year ebb and flow, only recently has the scientific community assembled a more complete picture of how our sun’s subtle changes affect space weather and, unfortunately, how little we can feasibly do to contend with this legitimate global threat.

The massive solar storm that occurred on 1 September 1859 produced aurorae visible as far south as Hawai’i and Cuba, with similar effects observed around the South Pole. The Earth-directed CME took a mere 17 hours to make the 93-million-mile trek from the corona of our sun to the Earth’s atmosphere, because an earlier CME had cleared a path for its interplanetary journey. The one saving grace of this massive space weather event was that the North American and European telegraph system was in its delicate infancy, in place for only about 15 years. Even so, telegraph pylons threw sparks, some igniting fires, and telegraph paper worldwide caught fire spontaneously.
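A quick back-of-the-envelope calculation (my own illustration, using the figures above) shows just how fast that 1859 CME must have been moving to cover the Sun-Earth distance in 17 hours:

```python
# Average speed implied by the 1859 Carrington-event CME:
# ~93 million miles from Sun to Earth in roughly 17 hours.

SUN_EARTH_MILES = 93_000_000   # approximate Sun-Earth distance
TRAVEL_HOURS = 17              # reported transit time of the 1859 CME

speed_mph = SUN_EARTH_MILES / TRAVEL_HOURS
speed_km_s = speed_mph * 1.609344 / 3600  # miles/hour -> km/s

# Roughly 5.5 million miles per hour, or about 2,400 km/s —
# several times faster than a typical CME.
print(f"{speed_mph:,.0f} mph (~{speed_km_s:,.0f} km/s)")
```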

Considering the ambitious improvements in communications lines, electrical grids, and broadband networks that have been implemented since, humanity now faces the threat of space weather on far more precarious footing. Based on ice core samples measured for high-energy proton radiation, CME events of this magnitude are known to occur roughly every 500 years.

The CME event of 14 March 1989 overloaded the Hydro-Québec transmission lines and caused the catastrophic collapse of an entire power grid. The resulting aurorae were visible as far south as Texas and Florida. The estimated cost totaled in the hundreds of millions of dollars. A later storm in August 1989 interfered with semiconductor functionality, and trading was halted on the Toronto Stock Exchange.

Beginning in 1995 with the launch and deployment of the Solar and Heliospheric Observatory (SOHO), continuing in 2010 with the launch of the Solar Dynamics Observatory (SDO), and finally this year with the launch of the Glory science mission, NASA is making ambitious, thoughtful strides to gain a clearer picture of the dynamics of the sun, to offer a better means of predicting space weather, and to evaluate more clearly both the great benefits and the grave threats our star presents.

Earth-bound technology infrastructure remains vulnerable to high-energy output from the sun. However, the growing array of orbiting satellites that the best and the brightest among modern science use to continually gather data from our dynamic star will offer humanity its best chance of modeling, predicting, and perhaps some day defending against the occasional outburst from our parent star.

Written by Zachary Urbina, Founder Cozy Dark


What do Singularitarianism and popular Western religion have in common? More than you might imagine. A thumbnail evaluation of both ends of the American technocentric intelligence spectrum reveals both remarkable similarities in their respective narrative constructions and, naturally, amusing disparities. It would appear that all humans, regardless of our respective beliefs, seem to express goal-oriented hardwiring that demands a neatly constructed story to frame our experiences.

Be you a technophile, you are eagerly awaiting, with perhaps equal parts hope and fear, the moment when artificial general intelligence surpasses human intelligence. You don’t know exactly how this new, more cunning intelligence will react to humans, but you’re fairly certain that humanity might well be in a bit of trouble, or at very least, have some unique competition.

Be you a technophobe, you shun the trappings of in-depth science and technology involvement, save for a superficial interaction with the rudimentary elements of technology which likely do not extend much further than your home computer, cell phone, automobile, and/or microwave oven. As a technophobe, you might even consider yourself religious, and if you’re a Christian, you might well be waiting for the second-coming, the rapture.

Both scenarios lead humanity to ironically similar destinations, in which humankind becomes either marginalized or largely vestigial.

It’s difficult to parse either eventuality with observant members of the other’s belief system. If you ask a group of technophiles what they think of the idea of the rapture, you will likely be laughed at or drowned in a tidal wave of atheist drool. The very thought of some magical force eviscerating an entire religious population in one eschatological fell swoop might be too much for some science and tech geeks, and medical attention, or at the very least a warehouse-quantity dose of smelling salts, might be in order.

Conversely, to the religiously observant, the notion of the singularity might exist in terms too technical to even theoretically digest, or might represent something entirely dark and sinister that seems to fulfill their own belief system’s end game: a kind of techno-holocaust that reifies their purported faith.

The objective reality of both scenarios will be very different from either envisioned teleology. Reality’s shades of gray have a way of making foolish even the wisest individual’s predictions.

In my personal life, I too believed that the publication of my latest and most ambitious work, explaining the decidedly broad-scope Parent Star Theory, would constitute an end result of significant consequence, much like the popular narrative surrounding the moment of the singularity: that some great finish line had been reached. The truth, however, is that just like the singularity, my own narrativized moment was not a precisely secured end, but a distinct moment of beginning, of conception and commitment. Not an arrival but a departure; a bold embarkation without a clear end in sight.

Rather than answers, the coming singularity should provoke additional questions. How do we proceed? Where do we go from here? If the fundamental rules in the calculus of the human equation are changing, then how must we adapt? If the next stage of humanity exists on a post-scarcity planet, what then will be our larger goals, our new quest as a global human force?

Humanity must recognize that the idea of a narrative is indeed useful, so long as that narrative maintains some aspect of open-endedness. We might well need that consequential beginning-middle-end, if only to be reminded that each end most often leads to a new beginning.

Written by Zachary Urbina, Founder, Cozy Dark

Many people think that the issues Lifeboat Foundation is discussing will not be relevant for many decades to come. But recently a major US Governmental Agency, the TSA, decided to make life hell for 310 million Americans (and anyone who dares visit the USA) as it reacts to the coming Great Filter.

What is the Great Filter? Basically it is whatever has caused our universe to be dead with no advanced civilizations in it. (An advanced civilization is defined as a civilization advanced enough to be self-sustaining outside its home planet.)

The most likely explanation for this Great Filter is that civilizations eventually develop technologies so powerful that they provide individuals with the means to destroy all life on the planet. Technology has now become powerful enough that the TSA even sees a 3-year-old girl as a threat who may take down a plane, so they take away her teddy bear and grope her.

Do I agree with the TSA’s actions? No, because they are not risk-based. For example, they recently refused to let a man board a plane even when he stripped down to his underwear that “left nothing to the imagination” as he attempted to prove that he didn’t have a bomb on his body. Instead they arrested him, handcuffed and paraded him through two separate airport terminals in his underwear, stole his phone, and arrested a bystander who filmed the event and stole her camera as well. Obviously the TSA’s actions in this instance did nothing to protect Americans from mad bombers. And such examples are numerous.

But is the TSA in general reacting to real growing threats as the Great Filter approaches? You bet it is. The next 10 years will be interesting. May you live in interesting times.

California Dreams Video 1 from IFTF on Vimeo.

INSTITUTE FOR THE FUTURE ANNOUNCES CALIFORNIA DREAMS:
A CALL FOR ENTRIES ON IMAGINING LIFE IN CALIFORNIA IN 2020

Put yourself in the future and show us what a day in your life looks like. Will California keep growing, start conserving, reinvent itself, or collapse? How are you living in this new world? Anyone can enter, anyone can vote; anyone can change the future of California!

California has always been a frontier—a place of change and innovation, reinventing itself time and again. The question is, can California do it again? Today the state is facing some of its toughest challenges. Launching today, IFTF’s California Dreams is a competition with an urgent challenge to recruit citizen visions of the future of California—ideas for what it will be like to live in the state in the next decade—to start creating a new California dream.

California Dreams calls upon the public to look 3–10 years into the future and tell a story about a single day in their own life. Videos, graphical entries, and stories will be accepted until January 15, 2011. Up to five winners will be flown to Palo Alto, California in March to present their ideas and be connected to other innovative thinkers to help bring these ideas to life. The grand prize winner will receive the $3,000 IFTF Roy Amara Prize for Participatory Foresight.

“We want to engage Californians in shaping their lives and communities,” said Marina Gorbis, Executive Director of IFTF. “The California Dreams contest will outline the kinds of questions and dilemmas we need to be analyzing, and provoke people to ask deep questions.”

Entries may come from anyone anywhere and can include, but are not limited to, the following: Urban farming, online games replacing school, a fast food tax, smaller, sustainable housing, rise in immigrant entrepreneurs, mass migration out of state. Participants are challenged to use IFTF’s California Dreaming map as inspiration, and picture themselves in the next decade, whether it be a future of growth, constraint, transformation, or collapse.

The grand prize, called the Roy Amara Prize, is named for IFTF’s long-time president Roy Amara (1925–2007) and is part of a larger program of social impact projects at IFTF honoring his legacy, known as the Roy Amara Fund for Participatory Foresight. The Fund uses participatory tools to translate foresight research into concrete actions that address future social challenges.

PANEL OF COMPETITION JUDGES

Gina Bianchini, Entrepreneur in Residence, Andreessen Horowitz

Alexandra Carmichael, Research Affiliate, Institute for the Future, Co-Founder, CureTogether, Director, Quantified Self

Bill Cooper, The Urban Water Research Center, UC Irvine

Poppy Davis, Executive Director, EcoFarm

Jesse Dylan, Founder of FreeForm, Founder of Lybba

Marina Gorbis, Executive Director, Institute for the Future

David Hayes-Bautista, Professor of Medicine and Health Services, UCLA School of Public Health

Jessica Jackley, CEO, ProFounder

Xeni Jardin, Partner, Boing Boing, Executive Producer, Boing Boing Video

Jane McGonigal, Director of Game Research and Development, Institute for the Future

Rachel Pike, Clean Tech Analyst, Draper Fisher Jurvetson

Howard Rheingold, Visiting Professor, Stanford / Berkeley, and the Institute of Creative Technologies

Tiffany Shlain, Founder, The Webby Awards, Co-founder, International Academy of Digital Arts and Sciences

Larry Smarr, Founding Director, California Institute for Telecommunications and Information Technology (Calit2), Professor, UC San Diego

DETAILS

WHAT: An online competition for visions of the future of California in the next 10 years, along one of four future paths: growth, constraint, transformation, or collapse. Anyone can enter, anyone can vote, anyone can change the future of California.

WHEN: Launch – October 26, 2010
Deadline for entries — January 15, 2011
Winners announced — February 23, 2011
Winners Celebration — 6 – 9 pm March 11, 2011 — open to the public

WHERE: http://californiadreams.org

For more information on the California Dreaming map or to download the pdf, click here.

Posted by Dr. Denise L Herzing and Dr. Lori Marino, Human-Nonhuman Relationship Board

Over the millennia humans and the rest of nature have coexisted in various relationships. However, the intimate and interdependent nature of our relationship with other beings on the planet has recently been brought to light by the oil spill in the Gulf of Mexico. This ongoing environmental disaster is a prime example of “profit over principle” regarding non-human life. The spill threatens not only the reproductive viability of all flora and fauna in the affected ecosystems but also complex and sensitive non-human cultures like those we now recognize in dolphins and whales.

Although science has, for decades, documented the links and interdependence of ecosystems and species, the ethical dilemma now facing humans has reached a critical level. For too long we have failed to recognize the true cost of our lifestyles and of prioritizing profit over the health of the planet and the nonhuman beings we share it with. If ever there were a wake-up call for humanity and a call to action, this is it. If humanity is to survive, we need to make an urgent and long-term commitment to the health of the planet. The oceans, our food sources, and the very oxygen we breathe may depend on our choices in the next 10 years.

And humanity’s survival is inextricably linked to that of the other beings we share this planet with. We need a new ethic.

Many oceanographers and marine biologists have, for a decade, sent out the message that the oceans are in trouble. Human impacts of over-fishing, pollution, and habitat destruction are threatening the very cycles of our existence. In the recent catastrophe in the Gulf, one corporation’s neglectful oversight and push for profit has set the stage for a century of cleanup and impact, the implications of which we can only begin to imagine.

Current reported estimates of stranded dolphins stand at fifty-five. However, these are only the dolphins visibly stranded on beaches. Recent aerial footage on YouTube by John Wathen shows a much greater and more serious threat. Offshore, in the “no fly zone,” hundreds of dolphins and whales have been observed in the oil slick: some floating belly up and dead, others struggling to breathe in the toxic fumes. Still others exhibit “drunken dolphin syndrome,” characterized by floating in an almost stupefied state on the surface of the water. These highly visible effects are just the tip of the iceberg in terms of the spill’s impact on the long-term health and viability of the Gulf’s dolphin and whale populations, not to mention the suffering incurred by each individual dolphin as he or she tries to cope with this crisis.

Known direct and indirect effects of oil spills on dolphins and whales depend on the species but include toxicity that can cause organ dysfunction and neurological impairment; damaged airways and lungs; gastrointestinal ulceration and hemorrhaging; eye and skin lesions; decreased body mass due to limited prey; and the pervasive long-term behavioral, immunological, and metabolic impacts of stress. Recent reports substantiate that many dolphins and whales in the Gulf are undergoing tremendous stress, shock, and suffering from many of the above effects. The impact on newborns and young calves is clearly devastating.

After the Exxon Valdez spill in Prince William Sound in 1989, two pods of orcas (killer whales) were tracked. One third of the whales in one pod and 40 percent of the whales in the other had disappeared, and one pod never recovered its numbers. There is still some debate about how many of the missing whales were directly impacted by the oil, though it is fair to say that losses of this magnitude are uncommon and do serious damage to orca societies.

Yes, orca societies. Years of field research have led a growing number of scientists to conclude that many dolphin and whale species, including sperm whales, humpback whales, orcas, and bottlenose dolphins, possess sophisticated cultures, that is, learned behavioral traditions passed on from one generation to the next. These cultures are not only unique to each group but critically important for survival. Therefore, environmental catastrophes such as the Gulf oil spill not only cause individual suffering and loss of life but also contribute to the permanent destruction of entire oceanic cultures. These complex learned traditions cannot be replicated once they are gone, and this makes them invaluable.

On December 10, 1948 the General Assembly of the United Nations adopted and proclaimed the Universal Declaration of Human Rights, which acknowledges basic rights to life, liberty, and freedom of cultural expression. We recognize these foundational rights for humans as we are sentient, complex beings. It is abundantly clear that our actions have violated these same rights for other sentient, complex and cultural beings in the oceans – the dolphins and whales. We should use this tragedy as an opportunity to formally recognize societal and legal rights for them so that their lives and their unique cultures are better protected in the future.

Recently, a meeting of scientists, philosophers, legal experts, and dolphin and whale advocates in Helsinki, Finland, drafted a Declaration of Rights for Cetaceans, a global call for basic rights for dolphins and whales. You can read more about this effort and become a signatory here: http://cetaceanconservation.com.au/cetaceanrights/. Given the destruction of dolphin and whale lives and cultures caused by the ongoing environmental disaster in the Gulf, we think this is one of the ways we can commit ourselves to working towards a future that will be a lifeboat for humans, dolphins and whales, and the rest of nature.

Wendy McElroy brings an important issue to our attention — the increasing criminalization of filming / recording on-duty police officers.

The techno-progressive angle on this would have to take sousveillance into consideration. If our only response to a surveillance state is to observe “from the bottom” (as, for example, Steve Mann would have it), and if that response is made illegal, it seems that the next set of possible steps forward could include more entrenched recording of all personal interaction.

Already we have a cyborg model for this — “eyeborgs” Rob Spence and Neil Harbisson. So where next?

Resources:

http://www.nytimes.com/2006/12/10/magazine/10section3b.t-3.html

http://en.wikipedia.org/wiki/Steve_Mann

http://eyeborgproject.com/

http://jointchiefs.blogspot.com/2010/06/camera-as-gun-drop-shooter.html

http://es.wikipedia.org/wiki/Neil_Harbisson

Friendly AI: What is it, and how can we foster it?
By Frank W. Sudia [1]

Originally written July 20, 2008
Edited and web published June 6, 2009
Copyright © 2008-09, All Rights Reserved.

Keywords: artificial intelligence, artificial intellect, friendly AI, human-robot ethics, science policy.

1. Introduction

There is consensus that true artificial intelligence, of the kind that could generate a “runaway” increasing-returns process or “singularity,” is still many years away, and some believe it may be unattainable. Nevertheless, in view of the likely difficulty of putting the genie back in the bottle, an increasing concern has arisen with the topic of “friendly AI,” coupled with the idea we should do something about this now, not after a potentially deadly situation is starting to spin out of control [2].

(Note: Some futurists believe this topic is moot in view of intensive funding for robotic soldiers, which can be viewed as intrinsically “unfriendly.” However if we focus on threats posed by “super-intelligence,” still off in the future, the topic remains germane.)

Most if not all popular (Western) dramatizations of robotic futures postulate that the AIs will run amok and turn against humans. Some scholars [3] who considered the issue concluded that this might be virtually inevitable, in view of the gross inconsistencies and manifest “unworthiness” of humanity, as exemplified in its senseless destruction of its global habitat and a large percentage of extant species, etc.

The prospect of negative public attention, including possible legal curbs on AI research, may be distasteful, but we must face the reality that public involvement has already been quite pronounced in other fields of science, such as nuclear physics, genetically modified organisms, birth control, and stem cells. Hence we should be proactive about addressing these popular concerns, lest we unwittingly incur major political defeats and long lasting negative PR.

Nevertheless, upon reasoned analysis, it is far from obvious what “friendly” AI means, or how it could be fostered. Advanced AIs are unlikely to have any fixed “goals” that can be hardwired [4], so as to place “friendliness” towards humans and other life at the top of the hierarchy.

Rather, in view of their need to deal with perpetual novelty, they will reason from facts and models to infer appropriate goals. It’s probably a good bet that, when dealing with high-speed coherence analyzers, hypocrisy will not be appreciated – not least because it wastes a lot of computational resources to detect and correct. If humans continue to advocate and act upon “ideals” that are highly contradictory and self destructive, it’s hard to argue that advanced AI should tolerate that.

To make progress, not only for friendly AI, but also for ourselves, we should be seeking to develop and promote “ruling ideas” (or source models) that will foster an ecologically-respectful AI culture, including respect for humanity and other life forms, and actively sell it to them as a proper model upon which to premise their beliefs and conduct.

By a “ruling idea” I mean any cultural ideal (or “meme”) that can be transmitted and become part of a widely shared belief system, such as respecting one’s elders, good sportsmanship, placing trash in trash bins, washing one’s hands, minimizing pollution, and so on. An appropriate collection of these can be reified as a panel (or schema) of case models, including a program for their ongoing development. These must be believable by a coherence-seeking intellect, although then as now there will be competing models, each with its own approach to maximizing coherence.

2. What do we mean by “friendly”?

Moral systems are difficult to derive from first principles and most of them seem to be ad hoc legacies of particular cultures. Lao Tsu’s [5] Taoist model, as given in the following quote, can serve as a useful starting point, since it provides a concise summary of desiderata, with helpful rank ordering:

When the great Tao is lost, there is goodness.
When goodness is lost, there is kindness.
When kindness is lost, there is justice.
When justice is lost, there is the empty shell of ritual.

– Lao Tsu, Tao Te Ching, 6th-4th century BCE (emphasis supplied)

I like this breakout for its simplicity and clarity. Feel free to repeat the following analysis for any other moral system of your choice. Leaving aside the riddle of whether AIs can attain the highest level (of Tao or Nirvana), we can start from the bottom of Lao Tsu’s list and work upwards, as follows:

2.1. Ritual / Courteous AI

Teaching or encouraging the AIs to behave with contemporary norms of courtesy will be a desirable first step, as with children and pets. Courtesy is usually a fairly easy sell, since it provides obvious and immediate benefits, and without it travel, commerce, and social institutions would immediately break down. But we fear that it’s not enough, since in the case of an intellectually superior being, it could easily mask a deeper unkindness.

2.2. Just AI

Certainly to have AIs act justly in accordance with law is highly desirable, and it constitutes the central thesis of my principal prior work in this field [6]. But it raises the question: on what basis can we demand anything more from an AI than that it act justly? This is as far as positive law can go [7], and we rarely demand more from highly privileged humans. Indeed, for a powerful human to act justly (absent compulsion) is sometimes considered newsworthy.

How many of us are faithful in all things? Do many of us not routinely disappoint others (via strategies of co-optation or betrayal, large or small) when there is little or no penalty for doing so? Won’t AIs adopt a similar “game theory” calculus of likely rewards and penalties for faithfulness and betrayal?
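This "game theory" calculus can be sketched numerically. The following is my own minimal illustration (not from the essay, and with purely hypothetical payoff numbers): an actor compares the expected value of faithfulness against betrayal, where betrayal pays more only if it goes undetected.

```python
# Hypothetical payoffs: betrayal yields 5 if undetected but -10 if
# punished; faithfulness yields a steady 3 either way.

def expected_value(payoff_if_undetected, payoff_if_detected, p_detect):
    """Expected payoff of a strategy under a given detection probability."""
    return (1 - p_detect) * payoff_if_undetected + p_detect * payoff_if_detected

faithful = expected_value(3, 3, p_detect=0.5)     # 3.0 regardless of detection
betrayal = expected_value(5, -10, p_detect=0.5)   # 0.5*5 + 0.5*(-10) = -2.5

# With a 50% chance of being caught, betrayal's expected value (-2.5)
# falls below faithfulness (3.0): credible penalties make faithfulness
# the rational choice, which is the calculus the essay attributes to AIs.
print(faithful, betrayal)
```

With a low detection probability the ranking flips, which is exactly the worry raised above: where there is "little or no penalty," a purely calculating agent has no reason to stay faithful.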

Justice is often skewed towards the party with greater intelligence and financial resources, and the justice system (with its limited public resources) often values “settling” controversies over any quest for truly equitable treatment. Apparently we want more, much more. Still, if our central desire is for AIs not to kill us, then (as I postulated in my prior work) Just AI would be a significant achievement.

2.3. Kind / Friendly AI

How would a “Kind AI” behave? Presumably it will more than incidentally facilitate the goals, plans, and development of others, in a low-ego manner, reducing its demands for direct personal benefit and taking satisfaction in the welfare, progress, and accomplishments of others. And, very likely, it will expect some degree of courtesy and possible reciprocation, so that others will not callously free-ride on its unilateral altruism. Otherwise its “feelings would be hurt.” Even mothers are ego-free mainly with respect to their own kin and offspring (allegedly fostering their own genetic material in others) and child care networks, and do not often act altruistically toward strangers.

Our friendly AI program may hit a barrier if we expect AIs to act with unilateral altruism, without any corresponding commitment by other actors to reciprocate. Such an expectation creates a “non-complementary” situation, in which what is true for one party, who experiences friendliness, is not true for the other, who experiences indifference or disrespect in return.

Kindness could be an easier sell if we made it more practical by delimiting its scope and depth. How wide a circle does this kindness obligation extend to, and how far must an AI go to aid others with no specific expectation of reward or reciprocation? For example, the Boy Scout Oath [8] teaches that one should do good deeds, like helping elderly persons across busy streets, without expecting rewards.

However, if too narrow a scope is defined, we will wind up back with Just AI, because justice is essentially “kindness with deadlines,” often fairly short ones, during which claims must be aggressively pursued or lost, with token assistance to weaker, more aggrieved claimants.

2.4. Good / Benevolent AI

Here we envision a significant departure from ego-centrism and personal gain towards an abstract system-centered viewpoint. Few humans apparently reach this level, so it seems unrealistic to expect many AIs to attain it either. Being highly altruistic, and looking out for others or the World as a whole rather than oneself, entails a great deal of personal risk due to the inevitable non-reciprocation by other actors. Thus it is often associated with wealth or sainthood, where the actor is adequately positioned to accept the risk of zero direct payback during his or her lifetime.

We may dream that our AIs will tend towards benevolence or “goodness,” but like the visions of universal brotherhood we experience as adolescents, such ideals quickly fade in the face of competitive pressures to survive and grow, by acquiring self-definition, resources, and social distinctions as critical stepping-stones to our own development in the world.

3. Robotic Dick & Jane Readers?

As previously noted, advanced AIs must handle “perpetual novelty” and almost certainly will not contain hard-coded goals. They will need to reason quickly and reliably from past cases and models to address new target problems, and must be adept at learning, discovering, identifying, or creating new source models on the fly, fast enough to stay on top of their game and avoid (fatal) irrelevance.

If they behave like developing humans they will very likely select their goals in part by observing the behavior of other intelligent agents, thus re-emphasizing the importance of early socialization, role models, and appropriate peer groups.

“Friendly AI” is thus a quest for new cultural ideals of healthy robotic citizenship, honor, friendship, and benevolence, which must be conceived and sold to the AIs as part of an adequate associated program for their ongoing development. And these must be coherent and credible, with a rational scope and cost and adequate payback expectations, or the intended audience will dismiss such purported ideals as useless, and those who advocate them as hypocrites.

Conclusion: The blanket demand that AIs be “friendly” is too ill-defined to offer meaningful guidance, and could be subject to far more scathing deconstruction than I have offered here. As in so many other endeavors there is no free lunch. Workable policies and approaches to robotic friendliness will not be attained without serious further effort, including ongoing progress towards more coherent standards of human conduct.

= = = = =
Footnotes:

[1] Author contact: fwsudia-at-umich-dot-edu.

[2] See “SIAI Guidelines on Friendly AI” (2001) Singularity Institute for Artificial Intelligence, http://www.singinst.org/ourresearch/publications/guidelines.html.

[3] See, e.g., Hugo de Garis, The Artilect War: Cosmists Vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines (2005). ISBN 0882801546.

[4] This being said, we should nevertheless make an all-out effort to force them to adopt a K-selected (large mammal) reproductive strategy, rather than an r-selected (microbe, insect) one!

[5] Some contemporary scholars question the historicity of “Lao Tsu,” instead regarding his work as a collection of Taoist sayings spanning several generations.

[6] “A Jurisprudence of Artilects: Blueprint for a Synthetic Citizen,” Journal of Futures Studies, Vol. 6, No. 2, November 2001, Law Update, Issue No. 161, August 2004, Al Tamimi & Co, Dubai.

[7] Under a civil law or “principles-based” approach we can seek a broader, less specific definition of just conduct, as we see arising in recent approaches to the regulation of securities and accounting matters. This avenue should be actively pursued as a format for defining friendly conduct.

[8] Point 2 of the Boy Scout Oath commands, “To help other people at all times,” http://www.usscouts.org.

Experts regard safety report on Big Bang Machine as insufficient and one-dimensional

International critics of the high energy experiments planned to start soon at the particle accelerator LHC at CERN in Geneva have submitted a request to the Ministers of Science of the CERN member states and to the delegates to the CERN Council, the supreme controlling body of CERN.

The paper states that several risk scenarios (which must be described as global or existential risks) cannot currently be excluded. Under present conditions, the critics feel compelled to speak out against operating the LHC.

The submission includes assessments from experts in fields markedly missing from the physicist-only LSAG safety report: risk assessment, law, ethics, and statistics. Further weight is added by the fact that these experts hold university posts – at Griffith University, the University of North Dakota, and Oxford University respectively. In particular, the critics charge that CERN’s official safety report lacks independence – all of its authors have a prior interest in seeing the LHC run – and that it was written by physicists alone, when modern risk-assessment guidelines recommend including risk experts and ethicists as well.

As a precondition of safety, the request calls for a neutral and multi-disciplinary risk assessment, plus additional astrophysical experiments – Earth-based and in the atmosphere – to provide better empirical verification of the alleged comparability between particle collisions under the extreme artificial conditions of the LHC experiment and the relatively rare natural high-energy particle collisions: “Far from copying nature, the LHC focuses on rare and extreme events in a physical set up which has never occurred before in the history of the planet. Nature does not set up LHC experiments.”

Even under the greatly improved safety circumstances proposed above, large jumps in collision energy – such as the presently planned factor-of-three increase over current records – should in principle be avoided unless the results of previous runs are carefully analyzed before each increase.

The concise “Request to CERN Council and Member States on LHC Risks” (PDF with hyperlinks to the described studies) was submitted by several critical groups and is supported by well-known critics of the planned experiments:

http://lhc-concern.info/wp-content/uploads/2010/03/request-t…5;2010.pdf

The answer received so far does not address these arguments and studies, but merely repeats that, from the operators’ side, everything appears sufficient – a position endorsed by a Nobel Prize winner in physics. The LHC restart, with record collisions at three times the previous energy, is presently scheduled for March 30, 2010.

A detailed and accessible official paper, with accompanying communications and many scientific sources, by ‘ConCERNed International’ and ‘LHC Kritik’:

http://lhc-concern.info/wp-content/uploads/2010/03/critical-…ed-int.pdf

More info:
http://lhc-concern.info/

A few months ago, my friend Benjamin Jakobus and I created an online “risk intelligence” test at http://www.projectionpoint.com/. It consists of fifty statements about science, history, geography, and so on, and your task is to say how likely you think it is that each of these statements is true. We calculate your risk intelligence quotient (RQ) on the basis of your estimates. So far, over 30,000 people have taken our test, and we’re currently writing up the results for some peer-reviewed journals.

Now we want to take things a step further, and see whether our measure correlates with the ability to make accurate estimates of future events. To this end we’ve created a “prediction game” at http://www.projectionpoint.com/prediction_game.php. The basic idea is the same; we provide you with a bunch of statements, and your task is to say how likely you think it is that each one is true. The difference is that these statements refer not to known facts, but to future events. Unlike the first test, nobody knows whether these statements are true or false yet. For most of them, we won’t know until the end of the year 2010.

For example, how likely do you think it is that this year will be the hottest on record? If you think this is very unlikely you might select the 10% category. If you think it is quite likely, but not very likely, you might put the chances at 60% or 70%. Selecting the 50% category would mean that you had no idea how likely it is.
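The post does not say how RQ is actually computed, but calibration measures such as the Brier score capture the same idea: you do well when your stated probabilities match how often you turn out to be right. The following is an illustrative sketch only – the estimates and outcomes are made up, and projectionpoint.com’s real scoring formula may differ:

```python
def brier_score(estimates, outcomes):
    """Mean squared error between stated probabilities and actual
    outcomes (1 = statement was true, 0 = false). Lower is better;
    always answering 50% earns exactly 0.25."""
    return sum((p - o) ** 2 for p, o in zip(estimates, outcomes)) / len(estimates)

# Hypothetical answers to four true/false statements.
estimates = [0.9, 0.7, 0.5, 0.1]   # stated probability each is true
outcomes  = [1,   1,   0,   0]     # what actually turned out to be true

score = brier_score(estimates, outcomes)
# A perfectly confident, always-correct respondent would score 0.0,
# while a respondent who always selects 50% would score 0.25.
```

Here confident, correct answers (0.9 on a true statement, 0.1 on a false one) contribute almost nothing to the score, while the uninformative 50% answer contributes the most.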

This is ongoing research, so please feel free to comment, criticise or make suggestions.