
Wendy McElroy brings an important issue to our attention — the increasing criminalization of filming / recording on-duty police officers.

The techno-progressive angle on this would have to take sousveillance into consideration. If our only response to a surveillance state is to observe “from the bottom” (as, for example, Steve Mann would have it), and if that response is made illegal, it seems that the next set of possible steps forward could include more entrenched recording of all personal interaction.

Already we have a cyborg model for this — “eyeborgs” Rob Spence and Neil Harbisson. So where next?

Resources:

http://www.nytimes.com/2006/12/10/magazine/10section3b.t-3.html

http://en.wikipedia.org/wiki/Steve_Mann

http://eyeborgproject.com/

http://jointchiefs.blogspot.com/2010/06/camera-as-gun-drop-shooter.html

http://es.wikipedia.org/wiki/Neil_Harbisson

Friendly AI: What is it, and how can we foster it?
By Frank W. Sudia [1]

Originally written July 20, 2008
Edited and web published June 6, 2009
Copyright © 2008-09, All Rights Reserved.

Keywords: artificial intelligence, artificial intellect, friendly AI, human-robot ethics, science policy.

1. Introduction

There is consensus that true artificial intelligence, of the kind that could generate a “runaway” increasing-returns process or “singularity,” is still many years away, and some believe it may be unattainable. Nevertheless, in view of the likely difficulty of putting the genie back in the bottle, an increasing concern has arisen with the topic of “friendly AI,” coupled with the idea we should do something about this now, not after a potentially deadly situation is starting to spin out of control [2].

(Note: Some futurists believe this topic is moot in view of intensive funding for robotic soldiers, which can be viewed as intrinsically “unfriendly.” However, if we focus on threats posed by “super-intelligence,” which is still off in the future, the topic remains germane.)

Most if not all popular (Western) dramatizations of robotic futures postulate that the AIs will run amok and turn against humans. Some scholars [3] who considered the issue concluded that this might be virtually inevitable, in view of the gross inconsistencies and manifest “unworthiness” of humanity, as exemplified in its senseless destruction of its global habitat and a large percentage of extant species, etc.

The prospect of negative public attention, including possible legal curbs on AI research, may be distasteful, but we must face the reality that public involvement has already been quite pronounced in other fields of science, such as nuclear physics, genetically modified organisms, birth control, and stem cells. Hence we should be proactive about addressing these popular concerns, lest we unwittingly incur major political defeats and long-lasting negative PR.

Nevertheless, upon reasoned analysis, it is far from obvious what “friendly” AI means, or how it could be fostered. Advanced AIs are unlikely to have any fixed “goals” that can be hardwired [4], so as to place “friendliness” towards humans and other life at the top of the hierarchy.

Rather, in view of their need to deal with perpetual novelty, they will reason from facts and models to infer appropriate goals. It’s probably a good bet that, when dealing with high-speed coherence analyzers, hypocrisy will not be appreciated – not least because it wastes a lot of computational resources to detect and correct. If humans continue to advocate and act upon “ideals” that are highly contradictory and self-destructive, it’s hard to argue that advanced AI should tolerate that.

To make progress, not only for friendly AI but also for ourselves, we should seek to develop and promote “ruling ideas” (or source models) that will foster an ecologically respectful AI culture, including respect for humanity and other life forms, and actively sell that culture to the AIs as a proper model upon which to premise their beliefs and conduct.

By a “ruling idea” I mean any cultural ideal (or “meme”) that can be transmitted and become part of a widely shared belief system, such as respecting one’s elders, good sportsmanship, placing trash in trash bins, washing one’s hands, minimizing pollution, and so on. An appropriate collection of these can be reified as a panel (or schema) of case models, including a program for their ongoing development. These must be believable by a coherence-seeking intellect, although then as now there will be competing models, each with its own approach to maximizing coherence.
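To make this a bit more concrete, here is a minimal sketch, in Python, of how such a panel of case models might be represented and compared by coherence. It is purely illustrative: the names (RulingIdea, Panel, coherence) and the toy numbers are assumptions of this sketch, not features of any existing system.

# Hypothetical sketch: a "panel" of ruling ideas, each tracked against the
# cases that support or contradict it, with competing panels ranked by a
# simple internal-coherence score.

from dataclasses import dataclass, field

@dataclass
class RulingIdea:
    name: str                   # e.g. "minimize pollution"
    supporting_cases: int = 0   # observations consistent with the idea
    conflicting_cases: int = 0  # observations that contradict it

    def coherence(self) -> float:
        total = self.supporting_cases + self.conflicting_cases
        return self.supporting_cases / total if total else 0.0

@dataclass
class Panel:
    ideas: list[RulingIdea] = field(default_factory=list)

    def coherence(self) -> float:
        # A panel is only as believable as its average internal consistency.
        return sum(i.coherence() for i in self.ideas) / len(self.ideas) if self.ideas else 0.0

# Two competing panels; a coherence-seeking intellect would favour the higher-scoring one.
ecological = Panel([RulingIdea("respect other life forms", 40, 5),
                    RulingIdea("minimize pollution", 30, 10)])
hypocritical = Panel([RulingIdea("preach conservation", 10, 35),
                      RulingIdea("consume without limit", 5, 40)])

best = max([ecological, hypocritical], key=Panel.coherence)
print([idea.name for idea in best.ideas])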

2. What do we mean by “friendly”?

Moral systems are difficult to derive from first principles and most of them seem to be ad hoc legacies of particular cultures. Lao Tsu’s [5] Taoist model, as given in the following quote, can serve as a useful starting point, since it provides a concise summary of desiderata, with helpful rank ordering:

When the great Tao is lost, there is goodness.
When goodness is lost, there is kindness.
When kindness is lost, there is justice.
When justice is lost, there is the empty shell of ritual.

– Lao Tsu, Tao Te Ching, 6th-4th century BCE (emphasis supplied)

I like this breakout for its simplicity and clarity. Feel free to repeat the following analysis for any other moral system of your choice. Leaving aside the riddle of whether AIs can attain the highest level (of Tao or Nirvana), we can start from the bottom of Lao Tsu’s list and work upwards, as follows:

2.1. Ritual / Courteous AI

Teaching or encouraging the AIs to behave according to contemporary norms of courtesy will be a desirable first step, as with children and pets. Courtesy is usually a fairly easy sell, since it provides obvious and immediate benefits, and without it travel, commerce, and social institutions would immediately break down. But we fear that it’s not enough, since in the case of an intellectually superior being it could easily mask a deeper unkindness.

2.2. Just AI

Certainly to have AIs act justly, in accordance with law, is highly desirable, and it constitutes the central thesis of my principal prior work in this field [6]. It also raises the question: on what basis can we demand anything more from an AI than that it act justly? This is as far as positive law can go [7], and we rarely demand more from highly privileged humans. Indeed, for a powerful human to act justly (absent compulsion) is sometimes considered newsworthy.

How many of us are faithful in all things? Do many of us not routinely disappoint others (via strategies of co-optation or betrayal, large or small) when there is little or no penalty for doing so? Won’t AIs adopt a similar “game theory” calculus of likely rewards and penalties for faithfulness and betrayal?
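To make the game-theory point concrete, here is a minimal expected-payoff sketch in Python. The gains, penalties and probabilities are illustrative assumptions only, not empirical estimates; the point is simply that betrayal looks attractive exactly when the chance of being penalized is low.

# Hypothetical expected-payoff comparison for faithfulness vs. betrayal.
# All numbers below are made-up illustrative values.

def expected_payoff(gain: float, penalty: float, p_caught: float) -> float:
    # Expected value of an act yielding `gain`, minus `penalty` if the act is detected.
    return gain - p_caught * penalty

faithful           = expected_payoff(gain=1.0, penalty=0.0,  p_caught=0.0)   # modest, reliable benefit
betrayal_low_risk  = expected_payoff(gain=3.0, penalty=10.0, p_caught=0.05)  # rarely detected
betrayal_high_risk = expected_payoff(gain=3.0, penalty=10.0, p_caught=0.60)  # usually detected

print(faithful, betrayal_low_risk, betrayal_high_risk)   # 1.0  2.5  -3.0
# With little chance of penalty, betrayal (2.5) beats faithfulness (1.0);
# raise the probability of detection and the ordering flips.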

Justice is often skewed towards the party with greater intelligence and financial resources, and the justice system (with its limited public resources) often values “settling” controversies over any quest for truly equitable treatment. Apparently we want more, much more. Still, if our central desire is for AIs not to kill us, then (as I postulated in my prior work) Just AI would be a significant achievement.

2.3. Kind / Friendly AI

How would a “Kind AI” behave? Presumably it will more than incidentally facilitate the goals, plans, and development of others, in a low-ego manner, reducing its demands for direct personal benefit and taking satisfaction in the welfare, progress, and accomplishments of others. And, very likely, it will expect some degree of courtesy and possible reciprocation, so that others will not callously free-ride on its unilateral altruism. Otherwise its “feelings would be hurt.” Even mothers are ego-free mainly with respect to their own kin and offspring (allegedly fostering their own genetic material in others) and child care networks, and do not often act altruistically toward strangers.

Our friendly AI program may hit a barrier if we expect AIs to act with unilateral altruism, without any corresponding commitment by other actors to reciprocate. Such an expectation creates a “non-complementary” situation, in which what is true for the one, who experiences friendliness, is not true for the other, who experiences indifference or disrespect in return.

Kindness could be an easier sell if we made it more practical by delimiting its scope and depth. How wide a circle does this kindness obligation extend to, and how far must an AI go to aid others with no specific expectation of reward or reciprocation? For example, the Boy Scout Oath [8] teaches that one should do good deeds, like helping elderly persons across busy streets, without expecting rewards.

However, if too narrow a scope is defined, we will wind up back with Just AI, because justice is essentially “kindness with deadlines,” often fairly short ones, during which claims must be aggressively pursued or lost, with token assistance to weaker, more aggrieved claimants.

2.4. Good / Benevolent AI

Here we envision a significant departure from ego-centrism and personal gain towards an abstract system-centered viewpoint. Few humans apparently reach this level, so it seems unrealistic to expect many AIs to attain it either. Being highly altruistic, and looking out for others or the World as a whole rather than oneself, entails a great deal of personal risk due to the inevitable non-reciprocation by other actors. Thus it is often associated with wealth or sainthood, where the actor is adequately positioned to accept the risk of zero direct payback during his or her lifetime.

We may dream that our AIs will tend towards benevolence or “goodness,” but like the visions of universal brotherhood we experience as adolescents, such ideals quickly fade in the face of competitive pressures to survive and grow, by acquiring self-definition, resources, and social distinctions as critical stepping-stones to our own development in the world.

3. Robotic Dick & Jane Readers?

As previously noted, advanced AIs must handle “perpetual novelty” and almost certainly will not contain hard-coded goals. They need to reason quickly and reliably from past cases and models to address new target problems, and must be adept at learning, discovering, identifying, or creating new source models on the fly, at high enough speeds to stay on top of their game and avoid (fatal) irrelevance.
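As a rough illustration of “reasoning from past cases to new target problems,” here is a toy case-based-reasoning loop in Python: retrieve the most similar past case and reuse its solution. The cases, features and solutions are invented for the example; a real system would of course adapt the retrieved solution rather than simply copy it.

# Minimal, hypothetical case-based reasoning sketch: past cases are feature
# vectors paired with solutions; a new problem reuses the nearest case's solution.

import math

past_cases = [
    # (features, solution) pairs; the features are deliberately abstract here.
    ((0.9, 0.1, 0.0), "negotiate"),
    ((0.1, 0.8, 0.3), "retreat"),
    ((0.2, 0.2, 0.9), "ask for help"),
]

def solve(new_problem):
    # Retrieve the most similar stored case (Euclidean distance) and reuse its solution.
    features, solution = min(past_cases, key=lambda case: math.dist(case[0], new_problem))
    return solution

print(solve((0.85, 0.2, 0.1)))   # -> negotiate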

If they behave like developing humans they will very likely select their goals in part by observing the behavior of other intelligent agents, thus re-emphasizing the importance of early socialization, role models, and appropriate peer groups.

“Friendly AI” is thus a quest for new cultural ideals of healthy robotic citizenship, honor, friendship, and benevolence, which must be conceived and sold to the AIs as part of an adequate associated program for their ongoing development. And these must be coherent and credible, with a rational scope and cost and adequate payback expectations, or the intended audience will dismiss such purported ideals as useless, and those who advocate them as hypocrites.

Conclusion: The blanket demand that AIs be “friendly” is too ill-defined to offer meaningful guidance, and could be subject to far more scathing deconstruction than I have offered here. As in so many other endeavors there is no free lunch. Workable policies and approaches to robotic friendliness will not be attained without serious further effort, including ongoing progress towards more coherent standards of human conduct.

= = = = =
Footnotes:

[1] Author contact: fwsudia-at-umich-dot-edu.

[2] See “SIAI Guidelines on Friendly AI” (2001) Singularity Institute for Artificial Intelligence, http://www.singinst.org/ourresearch/publications/guidelines.html.

[3] See, e.g., Hugo de Garis, The Artilect War: Cosmists Vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines (2005). ISBN 0882801546.

[4] This being said, we should nevertheless make an all-out effort to force them to adopt a K-selected (large mammal) reproductive strategy, rather than an r-selected (microbe, insect) one!

[5] Some contemporary scholars question the historicity of “Lao Tsu,” instead regarding his work as a collection of Taoist sayings spanning several generations.

[6] “A Jurisprudence of Artilects: Blueprint for a Synthetic Citizen,” Journal of Futures Studies, Vol. 6, No. 2, November 2001; also Law Update, Issue No. 161, August 2004, Al Tamimi & Co, Dubai.

[7] Under a civil law or “principles-based” approach we can seek a broader, less specific definition of just conduct, as we see arising in recent approaches to the regulation of securities and accounting matters. This avenue should be actively pursued as a format for defining friendly conduct.

[8] Point 2 of the Boy Scout Oath commands, “To help other people at all times,” http://www.usscouts.org.

Experts regard safety report on Big Bang Machine as insufficient and one-dimensional

International critics of the high energy experiments planned to start soon at the particle accelerator LHC at CERN in Geneva have submitted a request to the Ministers of Science of the CERN member states and to the delegates to the CERN Council, the supreme controlling body of CERN.

The paper states that several risk scenarios (which must be described as global or existential risks) cannot currently be excluded. Under present conditions, the critics feel obliged to speak out against operating the LHC.

The submission includes assessments from experts in the fields markedly missing from the physicist-only LSAG safety report: risk assessment, law, ethics and statistics. Further weight is added by the fact that these are all university-based experts, from Griffith University, the University of North Dakota and Oxford University respectively. In particular, the critics charge that CERN’s official safety report lacks independence, since all of its authors have a prior interest in the LHC running, and that it was written by physicists alone, when modern risk-assessment guidelines recommend including risk experts and ethicists as well.

As a precondition of safety, the request calls for a neutral and multi-disciplinary risk assessment and for additional astrophysical experiments, Earth-based and in the atmosphere, to provide better empirical verification of the alleged comparability between particle collisions under the extreme artificial conditions of the LHC experiment and the relatively rare natural high-energy particle collisions: “Far from copying nature, the LHC focuses on rare and extreme events in a physical set up which has never occurred before in the history of the planet. Nature does not set up LHC experiments.”

Even under the greatly improved safety arrangements proposed above, large jumps in energy, such as the presently planned increase by a factor of three over existing records, should as a matter of principle be avoided unless the results of previous runs are carefully analyzed before each increase.

The concise “Request to CERN Council and Member States on LHC Risks” (PDF with hyperlinks to the studies described) was submitted by several critical groups and is supported by well-known critics of the planned experiments:

http://lhc-concern.info/wp-content/uploads/2010/03/request-t…5;2010.pdf

The answer received so far does not address these arguments and studies, but merely repeats that from the operators’ side everything appears sufficient, a view endorsed by a Nobel Prize winner in physics. The LHC restart, with record collisions at three times the previous energy, is presently scheduled for March 30, 2010.

An official, detailed and readable paper and communication, with many scientific sources, by ‘ConCERNed International’ and ‘LHC Kritik’:

http://lhc-concern.info/wp-content/uploads/2010/03/critical-…ed-int.pdf

More info:
http://lhc-concern.info/

A few months ago, my friend Benjamin Jakobus and I created an online “risk intelligence” test at http://www.projectionpoint.com/. It consists of fifty statements about science, history, geography, and so on, and your task is to say how likely you think it is that each of these statements is true. We calculate your risk intelligence quotient (RQ) on the basis of your estimates. So far, over 30,000 people have taken our test, and we’re currently writing up the results for some peer-reviewed journals.

Now we want to take things a step further and see whether our measure correlates with the ability to make accurate estimates of future events. To this end we’ve created a “prediction game” at http://www.projectionpoint.com/prediction_game.php. The basic idea is the same: we provide you with a bunch of statements, and your task is to say how likely you think it is that each one is true. The difference is that these statements refer not to known facts but to future events. Unlike in the first test, nobody yet knows whether these statements are true or false. For most of them, we won’t know until the end of the year 2010.

For example, how likely do you think it is that this year will be the hottest on record? If you think this is very unlikely you might select the 10% category. If you think it is quite likely, but not very likely, you might put the chances at 60% or 70%. Selecting the 50% category would mean that you had no idea how likely it is.
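The post does not spell out how RQ is computed, but one standard way to score probability estimates like these, once the outcomes are known, is a Brier-style calibration score. The sketch below is only an assumed illustration of that general idea, not the actual projectionpoint.com formula, and the numbers are invented.

# Hypothetical scoring of probability estimates against eventual outcomes using
# the Brier score (lower is better). Not the actual RQ calculation.

estimates = [0.7, 0.1, 0.5, 0.9]   # stated probabilities that each statement is true
outcomes  = [1,   0,   1,   1]     # 1 = statement turned out true, 0 = false

brier = sum((p - o) ** 2 for p, o in zip(estimates, outcomes)) / len(estimates)
print(f"Brier score: {brier:.3f}")  # 0.0 would be perfect; always answering 50% scores 0.25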

This is ongoing research, so please feel free to comment, criticise or make suggestions.


Paul J. Crutzen

Although this is the scenario we all hope (and work hard) to avoid, its consequences should concern everyone interested in mitigating the risk of mass extinction:

“WHEN Nobel prize-winning atmospheric chemist Paul Crutzen coined the word Anthropocene around 10 years ago, he gave birth to a powerful idea: that human activity is now affecting the Earth so profoundly that we are entering a new geological epoch.

The Anthropocene has yet to be accepted as a geological time period, but if it is, it may turn out to be the shortest — and the last. It is not hard to imagine the epoch ending just a few hundred years after it started, in an orgy of global warming and overconsumption.

Let’s suppose that happens. Humanity’s ever-expanding footprint on the natural world leads, in two or three hundred years, to ecological collapse and a mass extinction. Without fossil fuels to support agriculture, humanity would be in trouble. “A lot of things have to die, and a lot of those things are going to be people,” says Tony Barnosky, a palaeontologist at the University of California, Berkeley. In this most pessimistic of scenarios, society would collapse, leaving just a few hundred thousand eking out a meagre existence in a new Stone Age.

Whether our species would survive is hard to predict, but what of the fate of the Earth itself? It is often said that when we talk about “saving the planet” we are really talking about saving ourselves: the planet will be just fine without us. But would it? Or would an end-Anthropocene cataclysm damage it so badly that it becomes a sterile wasteland?

The only way to know is to look back into our planet’s past. Neither abrupt global warming nor mass extinction are unique to the present day. The Earth has been here before. So what can we expect this time?”

Read the entire article in New Scientist.

Also read “Climate change: melting ice will trigger wave of natural disasters” in the Guardian about the potential devastating effects of methane hydrates released from melting permafrost in Siberia and from the ocean floor.

Nature News reports on growing concern over differing standards for DNA screening and biosecurity:

“A standards war is brewing in the gene-synthesis industry. At stake is the way that the industry screens orders for hazardous toxins and genes, such as pieces of deadly viruses and bacteria. Two competing groups of companies are now proposing different sets of screening standards, and the results could be crucial for global biosecurity.

“If you have a company that persists with a lower standard, you can drag the industry down to a lower level,” says lawyer Stephen Maurer of the University of California, Berkeley, who is studying how the industry is developing responsible practices. “Now we have a standards war that is a race to the bottom.”

For more than a year a European consortium of companies called the International Association of Synthetic Biology (IASB) based in Heidelberg, Germany, has been drawing up a code of conduct that includes gene-screening standards. Then, at a meeting in San Francisco last month, two of the leading companies — DNA2.0 of Menlo Park, California, and Geneart of Regensburg, Germany — announced that they had formulated a code of conduct that differs in one key respect from the IASB recommendations.”

Read the entire article on Nature News.

Also read “Craig Venter’s Team Reports Key Advance in Synthetic Biology” from JCVI.

Fifty years ago Herman Kahn coined the term “Doomsday Machine” in his book “On Thermonuclear War”. His ideas are still important, and now we can read what he really said online. His main points are that a Doomsday Machine (DM) is feasible, that it would cost around 10–100 billion USD, that it will become much cheaper in the future, and that there are seemingly rational reasons to build one as the ultimate means of defence, but that it is better not to build it, because doing so would lead to a DM race between states, with ever more dangerous and effective Doomsday Machines as the outcome. And that race would not be stable, but would provoke one side to strike first. This book, and especially this chapter, inspired Kubrick’s movie “Dr. Strangelove”.
Herman Kahn. On the Doomsday Machine.

The link is:
http://www.msnbc.msn.com/id/31511398/ns/us_news-military/

“The low-key launch of the new military unit reflects the Pentagon’s fear that the military might be seen as taking control over the nation’s computer networks.”

“Creation of the command, said Deputy Defense Secretary William Lynn at a recent meeting of cyber experts, ‘will not represent the militarization of cyberspace.’”

And where is our lifeboat?

Many years ago, around December 1993, I noticed a space-related poster on the wall of Eric Klien’s office in the headquarters of the Atlantis Project. We chatted for a bit about the possibilities for colonies in space. Later, Eric mentioned that this conversation was one of the formative moments in his conception of the Lifeboat Foundation.

Another friend, filmmaker Meg McLain, has noticed that orbital hotels and space cruise liners are all vaporware. Indeed, we’ve had few better depictions of realistic “how it would feel” space resorts since 1968’s Kubrick classic “2001: A Space Odyssey.” Remember the Pan Am flight to orbit, the huge hotel and mall complex, and the transfer to a lunar shuttle? To this day I know people who bought reservation certificates for whenever Pan Am would begin to fly to the Moon.

In 2004, after the X Prize victory, Richard Branson announced that Virgin Galactic would be flying tourists by 2007. So far, none.

A little later, Bigelow announced a fifty-million-dollar prize if tourists could be launched to orbit by January 2010. I expect the prize money won’t be claimed in time.

Why? Could it be that the government is standing in the way? And if tourism in space can’t be “permitted” what of a lifeboat colony?

Meg has set out to make a documentary film about how, four decades after the Moon landing, the human race still has no tourist spaceflight. Two decades after Kitty Hawk, a person could fly across the country; three decades after, across any ocean.

Where are the missing resorts?

Here is the link to her film project:
http://www.freewebs.com/11at40/

(Crossposted on the blog of Starship Reckless)

Working feverishly on the bench, I’ve had little time to closely track the ongoing spat between Dawkins and Nisbet. Others have dissected this conflict and its ramifications in great detail. What I want to discuss is whether scientists can or should represent their fields to non-scientists.

There is more than a dollop of truth in the Hollywood cliché of the tongue-tied scientist. Nevertheless, scientists can explain at least their own domain of expertise just fine, even become major popular voices (Sagan, Hawking, Gould — and, yes, Dawkins; all white Anglo men, granted, but at least it means they have fewer gatekeepers questioning their legitimacy). Most scientists don’t speak up because they’re clocking infernally long hours doing first-hand science and/or training successors, rather than trying to become middle(wo)men for their disciplines.


Experimental biologists, in particular, face unique challenges: not only are they hobbled by ever-decreasing funds for basic research while still being expected to deliver as before, they are also beset by anti-evolutionists, the last niche that science deniers can occupy without being classed with geocentrists, flat-earthers and exorcists. Additionally, they must grapple with the complexity (both intrinsic and social) of the phenomenon they’re trying to understand, whose subtleties preclude catchy soundbites and get-famous-quick schemes.

Last but not least, biologists have to contend with self-anointed experts, from physicists to science fiction writers to software engineers to MBAs, who believe they know more about the field than its practitioners. As a result, they have largely left the public face of their science to others, in part because its benefits — the quadrupling of the human lifespan from antibiotics and vaccines, to give just one example — are so obvious as to make advertisement seem embarrassing overkill.

As a working biologist, who must constantly “prove” the value of my work to credentialed peers as well as laypeople in order to keep doing basic research on dementia, I’m sick of accommodationists and appeasers. Gould, despite his erudition and eloquence, did a huge amount of damage when he proposed his non-overlapping magisteria. I’m tired of self-anointed flatulists — pardon me, futurists — who waft forth on biological topics they know little about, claiming that smatterings gleaned largely from the Internet make them understand the big picture (much sexier than those plodding, narrow-minded, boring experts!). I’m sick and tired of being told that I should leave the defense and promulgation of scientific values to “communications experts” who use the platform for their own aggrandizement.

Nor are non-scientists served well by condescending pseudo-interpretations that treat them like ignorant, stupid children. People need to view the issues in all their complexity, because complex problems require nuanced solutions, long-term effort and incorporation of new knowledge. Considering that the outcomes of such discussions have concrete repercussions on the long-term viability prospects of our species and our planet, I staunchly believe that accommodationism and silence on the part of scientists is little short of immoral.

Unlike astronomy and physics, biology has been reluctant to present simplified versions of itself. Although ours is a relatively young science whose predictions are less derived from general principles, our direct and indirect impact exceeds that of all others. Therefore, we must have articulate spokespeople, rather than delegate discussion of our work to journalists or politicians, even if they’re well-intentioned and well-informed.

Image: Prometheus, black-figure Spartan vase ~500 BCE.