
The True Cost of Ignoring Nonhumans

Posted by Dr. Denise L. Herzing and Dr. Lori Marino, Human-Nonhuman Relationship Board

Over the millennia, humans and the rest of nature have coexisted in various relationships. However, the intimate and interdependent nature of our relationship with other beings on the planet has recently been brought to light by the oil spill in the Gulf of Mexico. This ongoing environmental disaster is a prime example of “profit over principle” regarding nonhuman life. The spill threatens not only the reproductive viability of all flora and fauna in the affected ecosystems but also complex and sensitive nonhuman cultures like those we now recognize in dolphins and whales.

Although science has, for decades, documented the links and interdependence of ecosystems and species, the ethical dilemma now facing humans has reached a critical level. For too long we have failed to recognize the true cost of our lifestyles and of prioritizing profit over the health of the planet and the nonhuman beings we share it with. If ever there were a wake-up call for humanity and a call to action, this is it. If humanity is to survive, we need to make an urgent and long-term commitment to the health of the planet. The oceans, our food sources, and the very oxygen we breathe may depend on our choices in the next ten years.

And humanity’s survival is inextricably linked to that of the other beings we share this planet with. We need a new ethic.

Many oceanographers and marine biologists have, for a decade, sent out the message that the oceans are in trouble. The human impacts of over-fishing, pollution, and habitat destruction are threatening the very cycles of our existence. In the recent catastrophe in the Gulf, one corporation’s neglectful oversight and push for profit have set the stage for a century of cleanup and impact, the implications of which we can only begin to imagine.

Current reported estimates put the number of stranded dolphins at fifty-five. However, these are only the dolphins visibly stranded on beaches. Recent aerial footage by John Wathen, posted on YouTube, shows a much greater and more serious threat. Offshore, in the “no fly zone,” hundreds of dolphins and whales have been observed in the oil slick: some floating belly up and dead, others struggling to breathe in the toxic fumes, and still others exhibiting “drunken dolphin syndrome,” characterized by floating in an almost stupefied state on the surface of the water. These highly visible effects are just the tip of the iceberg in terms of the spill’s impact on the long-term health and viability of the Gulf’s dolphin and whale populations, not to mention the suffering incurred by each individual dolphin as he or she tries to cope with this crisis.

The known direct and indirect effects of oil spills on dolphins and whales vary by species but include toxicity that can cause organ dysfunction and neurological impairment, damaged airways and lungs, gastrointestinal ulceration and hemorrhaging, eye and skin lesions, decreased body mass due to limited prey, and the pervasive long-term behavioral, immunological, and metabolic impacts of stress. Recent reports substantiate that many dolphins and whales in the Gulf are undergoing tremendous stress, shock, and suffering from many of the above effects. The impact on newborns and young calves is clearly devastating.

After the Exxon Valdez spill in Prince William Sound in 1989 two pods of orcas (killer whales) were tracked. It was found that one third of the whales in one pod and 40 percent of the whales in the other pod had disappeared, with one pod never recovering its numbers. There is still some debate about the number of missing whales directly impacted by the oil though it is fair to say that losses of this magnitude are uncommon and do serious damage to orca societies.

Yes, orca societies. Years of field research have led a growing number of scientists to conclude that many dolphin and whale species, including sperm whales, humpback whales, orcas, and bottlenose dolphins, possess sophisticated cultures, that is, learned behavioral traditions passed on from one generation to the next. These cultures are not only unique to each group but critically important for survival. Therefore, environmental catastrophes such as the Gulf oil spill not only cause individual suffering and loss of life but also contribute to the permanent destruction of entire oceanic cultures. These complex learned traditions cannot be replicated once they are gone, and that makes them invaluable.

On December 10, 1948 the General Assembly of the United Nations adopted and proclaimed the Universal Declaration of Human Rights, which acknowledges basic rights to life, liberty, and freedom of cultural expression. We recognize these foundational rights for humans as we are sentient, complex beings. It is abundantly clear that our actions have violated these same rights for other sentient, complex and cultural beings in the oceans – the dolphins and whales. We should use this tragedy as an opportunity to formally recognize societal and legal rights for them so that their lives and their unique cultures are better protected in the future.

Recently, a group of scientists, philosophers, legal experts, and dolphin and whale advocates met in Helsinki, Finland, and drafted a Declaration of Rights for Cetaceans, a global call for basic rights for dolphins and whales. You can read more about this effort and become a signatory here: http://cetaceanconservation.com.au/cetaceanrights/. Given the destruction of dolphin and whale lives and cultures caused by the ongoing environmental disaster in the Gulf, we think this is one way we can commit ourselves to working towards a future that will be a lifeboat for humans, dolphins and whales, and the rest of nature.

What’s your idea to BodyShock the Future?

I’m working on this project with the Institute for the Future — calling on voices everywhere for ideas to improve the future of global health. It would be great to get some visionary Lifeboat ideas entered!

INSTITUTE FOR THE FUTURE ANNOUNCES BODYSHOCK:
CALL FOR ENTRIES ON IDEAS TO TRANSFORM LIFESTYLES AND THE HUMAN BODY TO IMPROVE HEALTH IN THE NEXT DECADE

“What can YOU envision to improve and reinvent health and well-being for the future?” Anyone can enter, anyone can vote, anyone can change the future of global health.

With obesity, diabetes, and chronic disease ravaging populations around the world, Institute for the Future (IFTF) is turning up the volume on global well-being. Launching today, IFTF’s BodyShock is the first annual competition with an urgent challenge: to recruit crowdsourced designs and solutions for better health, and to remake the future by rebooting the present.

BodyShock calls upon the public to consider innovative ways to improve individual and collective health over the next 3–10 years by transforming our bodies and lifestyles. Video or graphical entries illustrating new ideas, designs, products, technologies, and concepts, will be accepted from people around the world until September 1, 2010. Up to five winners will be flown to Palo Alto, California on October 8 to present their ideas and be connected to other innovative thinkers to help bring these ideas to life. The grand prize winner will receive the IFTF Roy Amara Prize of $3,000.

“Health doesn’t happen all at once; it’s a consequence of years of choices for our bodies and lifestyles–some large and some small. BodyShock is intended to spark new ideas to help us find our way back to health,” said Thomas Goetz, executive editor of Wired, author of The Decision Tree, and a member of the Health Advisory Board that will be judging the BodyShock contest in addition to votes from the public.

“BodyShock is a fantastic initiative. Global collaboration and participation from all voices can produce a true revolution,” said Linda Avey, founder of Brainstorm Research Foundation and another Advisor to BodyShock.

Entries may come from anyone anywhere and can include, but are not limited to, the following: Life extension, DIY Bio, Diabetic teenagers, Developing countries, Green health, Augmented reality, Self-tracking, and Pervasive games. Participants are challenged to use IFTF’s Health Horizons forecasts for the next decade of health and health care as inspiration, and design a solution for a problem that will be widespread in 3–10 years, using technologies that will become mainstream.

“Think ‘artifacts from the future’: simple, non-obvious, high-impact solutions that don’t exist yet will be among the concepts we’re looking to the public to introduce,” said Rod Falcon, director of the Health Horizons Program at IFTF.

BodyShock’s grand prize, the Roy Amara Prize, is named for IFTF’s long-time president Roy Amara (1925−2007) and is part of a larger program of social impact projects at IFTF honoring his legacy, known as The Roy Amara Fund for Participatory Foresight. The Fund uses participatory tools to translate foresight research into concrete actions that address future social challenges.

PANEL OF COMPETITION JUDGES

Joanne Andreadis
Lead of Innovation, Centers for Disease Control and Prevention

Linda Avey
Founder, Brainstorm Research Foundation

Jason Bobe
Director of Community, Personal Genome Project
Founder, DIYBio.org

Alexandra Carmichael
Co-founder, CureTogether
Director, Quantified Self

Ted Eytan, MD
Kaiser Permanente, The Permanente Federation

Rod Falcon
Director, Health Horizons Program

Peter Friess
President, Tech Museum of Innovation

Thomas Goetz
Executive Editor, WIRED Magazine
Author, The Decision Tree

Natalie Hodge, MD, FAAP
Chief Health Officer, Personal Medicine International

Ellen Marram
Board of Trustees, Institute for the Future
President, Barnegat Group LLC

Kristi Miller Durazo
Senior Strategy Advisor, American Heart Association

David Rosenman
Director, Innovation Curriculum
Center for Innovation at Mayo Clinic

Amy Tenderich
Board Member, Journal of Participatory Medicine
Blogger, DiabetesMine.com

DETAILS

WHAT:
An online competition for visual design ideas to improve global health over the next 3–10 years by transforming our bodies and lifestyles. Anyone can enter, anyone can vote, anyone can change the future of health.

WHEN:
Launch — Friday, June 18, 2010

Deadline for entries — Wednesday, September 1, 2010

Winners announced — Thursday, September 23, 2010

BodyShock Winners Celebration at IFTF — 6–9 p.m., Friday, October 8, 2010 — FREE and open to the public

WHERE:

http://www.bodyshockthefuture.org

(and 124 University Ave, 2nd Floor, Palo Alto, CA)

My presentation at the Humanity+ Summit

At lunch time I am present virtually in the hall of the summit, as a face on a Skype account — I didn’t get a visa and am staying in Moscow. But ironically, my situation resembles what I am speaking about: the risk from a remote AI that is created by aliens millions of light years from Earth and sent here via radio signals. The main difference is that such an AI communicates one way, while I have duplex mode.

This is my video presentation on YouTube:
Risks of SETI, for Humanity+ 2010 summit

H+ Conference and the Singularity Faster

We can only see a short distance ahead, but we can see plenty there that needs to be done.
—Alan Turing

As a programmer, I look at events like the H+ Conference this weekend in a particular way. I see all of their problems as software: not just the code for AI and friendly AI, but also that for DNA manipulation. It seems that the biggest challenge for the futurist movement is to focus less on writing English and more on getting the programmers working together productively.

I start the AI chapter of my book with the following question: imagine 1,000 people, broken up into groups of five, working on two hundred separate encyclopedias, versus that same number of people working on one encyclopedia. Which will be the best? This sounds like a silly analogy when described in the context of an encyclopedia, but it is exactly what is going on in artificial intelligence (AI) research today.

Today, the research community has not adopted free software and shared codebases sufficiently. For example, I believe there are more than enough PhDs today working on computer vision, but there are 200+ different codebases plus countless proprietary ones. Simply put, there is no computer vision codebase with critical mass.

Some think that these problems are so hard that it isn’t a matter of writing code, it is a matter of coming up with the breakthroughs on a chalkboard. But people can generally agree at a high level how the software for solving many problems will work. There has been code for doing OCR and neural networks and much more kicking around for years. The biggest challenge right now is getting people together to hash out the details, which is a lot closer to Wikipedia than it first appears. Software advances in a steady, stepwise fashion, which is why we need free software licenses: to incorporate all the incremental advancements that each scientist is making. Advances must eventually be expressed in software (and data) so they can be executed by a computer. Even if you believe we need certain scientific breakthroughs, it should be clear that things like robust computer vision are complicated enough that you would want hundreds of people working together on the vision pipeline. So, while we are waiting for those breakthroughs, let’s get 100 people together!

There is an additional problem: C/C++ have not been retired. These languages make it hard for programmers to work together, even if they want to. There are all sorts of taxes on time, from learning the arcane rules of these ungainly languages to the fact that libraries often use their own string classes, synchronization primitives, error handling schemes, etc. In many cases, it is easier to write a specialized and custom computer vision library in C/C++ than to integrate something like OpenCV, which does everything by itself down to the Matrix class. The pieces for building your own computer vision library (graphics, I/O, math, etc.) are in good shape, but the computer vision itself is not, and that is why we haven’t moved beyond this stage. Another problem with C/C++ is that they do not have garbage collection, which is necessary but not sufficient for reliable code.

A SciPy-based computational fluid dynamics (CFD) visualization of a combustion chamber.

I think scientific programmers should move to Python and build on SciPy. Python is a modern free language, and it has quietly built up an extremely complete set of libraries for everything from gaming to scientific computing. Specifically, its SciPy library, with its various scikit extensions, is a solid baseline patiently waiting for more people to work on all sorts of futuristic problems. (It is true that Python and SciPy both have issues. One of Python’s biggest issues is that the default implementation is interpreted, but there are several workarounds being built [Cython, PyPy, Unladen Swallow, and others]. SciPy’s biggest challenge is how to be expansive without being duplicative. It is massively easier to merge English articles in Wikipedia that discuss the same topics than to do the equivalent in code. We need to share data in addition to code, but we need to share code first.)
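
To make the point concrete, here is a minimal illustrative sketch (the function name and sizes are invented for the example) of a typical vision-pipeline step built on NumPy and SciPy’s ndimage module. It shows how little code a shared scientific stack demands:

```python
# Minimal sketch: smooth a grayscale image, then extract edge magnitudes,
# using only NumPy and SciPy's ndimage module.
import numpy as np
from scipy import ndimage

def edge_magnitude(image):
    """Return per-pixel edge strength for a 2-D grayscale image."""
    smoothed = ndimage.gaussian_filter(image, sigma=2)  # suppress noise first
    dx = ndimage.sobel(smoothed, axis=0)                # gradient along rows
    dy = ndimage.sobel(smoothed, axis=1)                # gradient along columns
    return np.hypot(dx, dy)                             # gradient magnitude

if __name__ == "__main__":
    img = np.random.rand(256, 256)    # stand-in for a real image
    print(edge_magnitude(img).shape)  # (256, 256)
```

Anyone with the standard scientific Python stack can run and extend a snippet like this, which is exactly the shared-codebase effect argued for above.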

Some think the singularity is a hardware problem, and won’t be solved for a number of years. I believe the benefits inherent in the singularity will happen as soon as our software becomes “smart” and we don’t need to wait for any further Moore’s law progress for that to happen. In fact, we could have built intelligent machines and cured cancer years ago. The problems right now are much more social than technical.


Transitions

King Louis XVI’s entry in his personal diary for that fateful day of July 14, 1789 suggests that nothing important had happened. He did not know that the events of the day, the attack upon the Bastille, meant that the revolution was under way, and that the world as he knew it was essentially over. Fast forward to June 2010: a self-replicating biological organism (a transformed Mycoplasma mycoides bacterium) has been created in a laboratory by J. Craig Venter and his team. Yes, the revolution has begun. Indeed, the preliminaries have been going on for several years; it’s just that … um, well, have we been wide awake?

Ray Kurzweil’s singularity might be 25 years into the future, but sooner, a few years from now, we’ll have an interactive global network that some refer to as ‘global brain.’ Web3. I imagine no one knows exactly what will come out of all this, but I expect that we’ll find that the whole will be more than and different from the sum of the parts. Remember Complexity Theory. How about the ‘butterfly effect?’ Chaos Theory. And much more not explainable by theories presently known. I expect surprises, to say the least.

I am a retired psychiatrist, not a scientist. We each have a role to enact in this drama/comedy that we call life, and yes, our lives have meaning. Meaning! For me life is not a series of random events or events brought about by ‘them,’ but rather an unfolding drama/comedy with an infinite number of possible outcomes. We don’t know its origins or its drivers. Do we even know where our visions come from?

So, what is my vision and what do I want? How clearly do I visualize what I want? Am I passionate about what I want or simply lukewarm? How much am I prepared to risk in pursuit of what I want? Do I reach out for what I want directly, or do I get what I want indirectly by trying to serve two masters, so to speak? If the former, I practice psychological responsibility; if the latter, I do not. An important distinction. The latter situation suggests unresolved dilemma, common enough. Who among us can claim to be without one?

As we go through life there are times when we conceal from others, and to some extent from ourselves, exactly what it is that we want, hoping that what we want will come to pass without our clarifying openly what we stand for. One basic premise I like is that actions speak louder than words, and that therefore, by our actions in our personal lives, we directly or indirectly bring to pass what we ultimately want.

Does that include what I fear? Certainly it might, if deep within me I am psychologically engineering an event that frightens me; what I fear may be what I secretly bring about. Anyone among us might surreptitiously arrange drama so as to inspire or provoke others in ways that conceal our personal responsibility. All this is pertinent and practical, as will become obvious in the coming years.

We grew up in 20th century households, in families where we and the other members lived by 20th century worldviews, and so around the world 20th century thinking still prevails. Values have much to do with internalized, learned relationships to limited and limiting aspects of the universe. In the midst of change we can transcend these. I wonder if by mid-century people will talk of the BP oil spill as the death throes of a dinosaur heralding the end of an age. I don’t know, but I imagine that we’re entering a phase of transition, a hiatus, in which we see our age fading away from us and a new age approaching. But the new has yet to consolidate. A dilemma: if we embrace the as yet ethereal new, we risk losing our roots and all that we value; if we cling to the old, we risk seeing the ship leave without us.

We are crew, and not necessarily volunteers, on a vessel bound for the Great Unknown. Like all such voyages in history, this one is not without its perils. When established national boundaries become more porous, when old-fashioned foreign policy fails, when the ‘old guard’ feels threatened beyond what it will tolerate, what then? Will we regress into authoritarianism; will we demand a neo-fascist state so as to feel secure? Or will we climb aboard the new? Yes, we can climb aboard even if we’re afraid. To be sure we’ll grumble, and some will talk of mutiny. A sense of loss is to be expected. We all feel a sense of loss when radical change happens in our personal lives, even when the change is for the better; I am aware of this in my own life as I clarify its meaning. There are risks either way. Such is life.

But change is also adventure: I am old enough to remember the days of the ocean liners and how our eyes lit up and our hearts rose up joyfully as we stood on deck departing into the vision, waving to those left behind. Indeed we do this multiple times in our lives as we move from infancy to old age and finally towards death. And like good psychotherapy, the coming change will be both confronting and rewarding. Future generations are of us and we are of them; we cannot be separated.

What a time to be alive!

Friendly AI: What is it, and how can we foster it?

By Frank W. Sudia [1]

Originally written July 20, 2008
Edited and web published June 6, 2009
Copyright © 2008-09, All Rights Reserved.

Keywords: artificial intelligence, artificial intellect, friendly AI, human-robot ethics, science policy.

1. Introduction

There is consensus that true artificial intelligence, of the kind that could generate a “runaway” increasing-returns process or “singularity,” is still many years away, and some believe it may be unattainable. Nevertheless, in view of the likely difficulty of putting the genie back in the bottle, an increasing concern has arisen with the topic of “friendly AI,” coupled with the idea that we should do something about this now, not after a potentially deadly situation has started to spin out of control [2].

(Note: Some futurists believe this topic is moot in view of intensive funding for robotic soldiers, which can be viewed as intrinsically “unfriendly.” However if we focus on threats posed by “super-intelligence,” still off in the future, the topic remains germane.)

Most if not all popular (Western) dramatizations of robotic futures postulate that the AIs will run amok and turn against humans. Some scholars [3] who considered the issue concluded that this might be virtually inevitable, in view of the gross inconsistencies and manifest “unworthiness” of humanity, as exemplified in its senseless destruction of its global habitat and a large percentage of extant species, etc.

The prospect of negative public attention, including possible legal curbs on AI research, may be distasteful, but we must face the reality that public involvement has already been quite pronounced in other fields of science, such as nuclear physics, genetically modified organisms, birth control, and stem cells. Hence we should be proactive about addressing these popular concerns, lest we unwittingly incur major political defeats and long lasting negative PR.

Nevertheless, upon reasoned analysis, it is far from obvious what “friendly” AI means, or how it could be fostered. Advanced AIs are unlikely to have any fixed “goals” that can be hardwired [4], so as to place “friendliness” towards humans and other life at the top of the hierarchy.

Rather, in view of their need to deal with perpetual novelty, they will reason from facts and models to infer appropriate goals. It’s probably a good bet that, when dealing with high-speed coherence analyzers, hypocrisy will not be appreciated – not least because it wastes a lot of computational resources to detect and correct. If humans continue to advocate and act upon “ideals” that are highly contradictory and self destructive, it’s hard to argue that advanced AI should tolerate that.

To make progress, not only for friendly AI, but also for ourselves, we should be seeking to develop and promote “ruling ideas” (or source models) that will foster an ecologically-respectful AI culture, including respect for humanity and other life forms, and actively sell it to them as a proper model upon which to premise their beliefs and conduct.

By a “ruling idea” I mean any cultural ideal (or “meme”) that can be transmitted and become part of a widely shared belief system, such as respecting one’s elders, good sportsmanship, placing trash in trash bins, washing one’s hands, minimizing pollution, and so on. An appropriate collection of these can be reified as a panel (or schema) of case models, including a program for their ongoing development. These must be believable by a coherence-seeking intellect, although then as now there will be competing models, each with its own approach to maximizing coherence.
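
By way of illustration only, here is a minimal sketch of how a panel of case models competing on coherence might be represented in code; every name and field below is a hypothetical assumption, not anything specified in this essay:

```python
# Hypothetical sketch only: one way a "panel" of ruling-idea case models
# might be organized. All names and fields here are illustrative
# assumptions, not anything specified in this essay.
from dataclasses import dataclass, field

@dataclass
class CaseModel:
    name: str                                       # e.g. "minimize pollution"
    precedents: list = field(default_factory=list)  # cases supporting the ideal
    coherence: float = 0.0                          # fit with the agent's other beliefs

@dataclass
class Panel:
    models: list = field(default_factory=list)

    def most_coherent(self):
        # A coherence-seeking intellect would prefer the best-supported ideal.
        return max(self.models, key=lambda m: m.coherence, default=None)

panel = Panel(models=[
    CaseModel("respect other life forms", coherence=0.9),
    CaseModel("maximize own resources", coherence=0.6),
])
print(panel.most_coherent().name)  # -> "respect other life forms"
```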

2. What do we mean by “friendly”?

Moral systems are difficult to derive from first principles and most of them seem to be ad hoc legacies of particular cultures. Lao Tsu’s [5] Taoist model, as given in the following quote, can serve as a useful starting point, since it provides a concise summary of desiderata, with helpful rank ordering:

When the great Tao is lost, there is goodness.
When goodness is lost, there is kindness.
When kindness is lost, there is justice.
When justice is lost, there is the empty shell of ritual.

– Lao Tsu, Tao Te Ching, 6th-4th century BCE (emphasis supplied)

I like this breakout for its simplicity and clarity. Feel free to repeat the following analysis for any other moral system of your choice. Leaving aside the riddle of whether AIs can attain the highest level (of Tao or Nirvana), we can start from the bottom of Lao Tsu’s list and work upwards, as follows:

2.1. Ritual / Courteous AI

Teaching or encouraging the AIs to behave with contemporary norms of courtesy will be a desirable first step, as with children and pets. Courtesy is usually a fairly easy sell, since it provides obvious and immediate benefits, and without it travel, commerce, and social institutions would immediately break down. But we fear that it’s not enough, since in the case of an intellectually superior being, it could easily mask a deeper unkindness.

2.2. Just AI

Certainly to have AIs act justly in accordance with law is highly desirable, and it constitutes the central thesis of my principal prior work in this field [6]. It also raises the question: on what basis can we demand anything more from an AI than that it act justly? This is as far as positive law can go [7], and we rarely demand more from highly privileged humans. Indeed, for a powerful human to act justly (absent compulsion) is sometimes considered newsworthy.

How many of us are faithful in all things? Do many of us not routinely disappoint others (via strategies of co-optation or betrayal, large or small) when there is little or no penalty for doing so? Won’t AIs adopt a similar “game theory” calculus of likely rewards and penalties for faithfulness and betrayal?

Justice is often skewed towards the party with greater intelligence and financial resources, and the justice system (with its limited public resources) often values “settling” controversies over any quest for truly equitable treatment. Apparently we want more, much more. Still, if our central desire is for AIs not to kill us, then (as I postulated in my prior work) Just AI would be a significant achievement.

2.3. Kind / Friendly AI

How would a “Kind AI” behave? Presumably it will more than incidentally facilitate the goals, plans, and development of others, in a low-ego manner, reducing its demands for direct personal benefit and taking satisfaction in the welfare, progress, and accomplishments of others. And, very likely, it will expect some degree of courtesy and possible reciprocation, so that others will not callously free-ride on its unilateral altruism. Otherwise its “feelings would be hurt.” Even mothers are ego-free mainly with respect to their own kin and offspring (allegedly fostering their own genetic material in others) and child care networks, and do not often act altruistically toward strangers.

Our friendly AI program may hit a barrier if we expect AIs to act with unilateral altruism, without any corresponding commitment by other actors to reciprocate. Otherwise it will create a “non-complementary” situation, in which what is true for one, who experiences friendliness, may not be true for the other, who experiences indifference or disrespect in return.

Kindness could be an easier sell if we made it more practical by delimiting its scope and depth. To how wide a circle does this kindness obligation extend, and how far must AIs go to aid others with no specific expectation of reward or reciprocation? For example, the Boy Scout Oath [8] teaches that one should do good deeds, like helping elderly persons across busy streets, without expecting rewards.

However, if too narrow a scope is defined, we will wind up back with Just AI, because justice is essentially “kindness with deadlines,” often fairly short ones, during which claims must be aggressively pursued or lost, with token assistance to weaker, more aggrieved claimants.

2.4. Good / Benevolent AI

Here we envision a significant departure from ego-centrism and personal gain towards an abstract system-centered viewpoint. Few humans apparently reach this level, so it seems unrealistic to expect many AIs to attain it either. Being highly altruistic, and looking out for others or the World as a whole rather than oneself, entails a great deal of personal risk due to the inevitable non-reciprocation by other actors. Thus it is often associated with wealth or sainthood, where the actor is adequately positioned to accept the risk of zero direct payback during his or her lifetime.

We may dream that our AIs will tend towards benevolence or “goodness,” but like the visions of universal brotherhood we experience as adolescents, such ideals quickly fade in the face of competitive pressures to survive and grow, by acquiring self-definition, resources, and social distinctions as critical stepping-stones to our own development in the world.

3. Robotic Dick & Jane Readers?

As previously noted, advanced AIs must handle “perpetual novelty” and almost certainly will not contain hard coded goals. They need to reason quickly and reliably from past cases and models to address new target problems, and must be adept at learning, discovering, identifying, or creating new source models on the fly, at high enough speeds to stay on top of their game and avoid (fatal) irrelevance.

If they behave like developing humans they will very likely select their goals in part by observing the behavior of other intelligent agents, thus re-emphasizing the importance of early socialization, role models, and appropriate peer groups.

“Friendly AI” is thus a quest for new cultural ideals of healthy robotic citizenship, honor, friendship, and benevolence, which must be conceived and sold to the AIs as part of an adequate associated program for their ongoing development. And these must be coherent and credible, with a rational scope and cost and adequate payback expectations, or the intended audience will dismiss such purported ideals as useless, and those who advocate them as hypocrites.

Conclusion: The blanket demand that AIs be “friendly” is too ill-defined to offer meaningful guidance, and could be subject to far more scathing deconstruction than I have offered here. As in so many other endeavors there is no free lunch. Workable policies and approaches to robotic friendliness will not be attained without serious further effort, including ongoing progress towards more coherent standards of human conduct.

= = = = =
Footnotes:

[1] Author contact: fwsudia-at-umich-dot-edu.

[2] See “SIAI Guidelines on Friendly AI” (2001) Singularity Institute for Artificial Intelligence, http://www.singinst.org/ourresearch/publications/guidelines.html.

[3] See, e.g., Hugo de Garis, The Artilect War: Cosmists Vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines (2005). ISBN 0882801546.

[4] This being said, we should nevertheless make an all-out effort to force them to adopt a K-selected (large mammal) reproductive strategy, rather than an r-selected (microbe, insect) one!

[5] Some contemporary scholars question the historicity of “Lao Tsu,” instead regarding his work as a collection of Taoist sayings spanning several generations.

[6] “A Jurisprudence of Artilects: Blueprint for a Synthetic Citizen,” Journal of Futures Studies, Vol. 6, No. 2, November 2001, Law Update, Issue No. 161, August 2004, Al Tamimi & Co, Dubai.

[7] Under a civil law or “principles-based” approach we can seek a broader, less specific definition of just conduct, as we see arising in recent approaches to the regulation of securities and accounting matters. This avenue should be actively pursued as a format for defining friendly conduct.

[8] Point 2 of the Boy Scout Oath commands, “To help other people at all times,” http://www.usscouts.org.

Software and the Singularity

I am a former Microsoft programmer who wrote a book (for a general audience) about the future of software called After the Software Wars. Eric Klien has invited me to post on this blog. Here is my section entitled “Software and the Singularity”. I hope you find this food for thought and I appreciate any feedback.


Futurists talk about the “Singularity”, the time when computational capacity will surpass the capacity of human intelligence. Ray Kurzweil predicts it will happen in 2045. Therefore, according to its proponents, the world will be amazing then. [3] The flaw with such a date estimate, other than the fact that such estimates are always prone to extreme error, is that continuous learning is not yet a part of the foundation. Any AI code lives at the fringes of the software stack and is either proprietary or written by small teams of programmers.

I believe the benefits inherent in the singularity will happen as soon as our software becomes “smart” and we don’t need to wait for any further Moore’s law progress for that to happen. Computers today can do billions of operations per second, like add 123,456,789 and 987,654,321. If you could do that calculation in your head in one second, it would take you 30 years to do the billion that your computer can do in that second.
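
The arithmetic behind that comparison is easy to check. Here is a minimal sketch, with the one-addition-per-second human pace as the stated assumption:

```python
# Quick check of the arithmetic above: a human doing one addition per
# second versus a computer doing a billion additions per second.
ops_per_second = 1_000_000_000          # the computer's additions in one second
seconds_per_year = 60 * 60 * 24 * 365
years = ops_per_second / seconds_per_year
print(f"{years:.1f} years")             # ~31.7 years, roughly the "30 years" cited
print(123_456_789 + 987_654_321)        # the example sum: 1,111,111,110
```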

Even if you don’t think computers have the necessary hardware horsepower today, understand that in many scenarios, the size of the input is the primary driving factor in the processing power required to do the analysis. In image recognition, for example, the amount of work required to interpret an image is mostly a function of the size of the image. Each step in the image recognition pipeline, like the processes that take place in our brain, dramatically reduces the amount of data from the previous step. At the beginning of the analysis might be a one-million-pixel image, requiring 3 million bytes of memory. At the end of the analysis is the conclusion that you are looking at your house, a concept that requires only tens of bytes to represent. The first step, working on the raw image, requires the most processing power, so it is the image resolution (and frame rate) that sets the requirements, values that are trivial to change. No one has shown robust vision recognition software running at any speed, on any sized image!
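
A toy sketch of that data reduction follows; the stages and sizes are invented for illustration, but they show why the raw-pixel stage dominates the cost:

```python
# Toy illustration of the data reduction described above. Sizes are
# made up; the point is that each stage emits far less than it consumes.
import numpy as np

image = np.zeros((1000, 1000, 3), dtype=np.uint8)   # ~3 MB of raw pixels
print("raw image:", image.nbytes, "bytes")           # 3,000,000

# Stage 1: collapse to a one-byte-per-pixel feature map.
features = image.mean(axis=2).astype(np.uint8)
print("feature map:", features.nbytes, "bytes")      # 1,000,000

# Stage 2: a handful of region descriptors (hypothetical bounding boxes).
regions = np.zeros((20, 4), dtype=np.float32)        # 20 boxes x 4 numbers
print("regions:", regions.nbytes, "bytes")           # 320

# Final stage: a single concept, tens of bytes.
label = "my house"
print("concept:", len(label.encode()), "bytes")      # 8
```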

While a brain is different from a computer in that it does its work in parallel, such parallelization only makes the computation happen faster; it does not change the result. Anything accomplished in our parallel brain could also be accomplished on computers of today, which can do only one thing at a time, but at the rate of billions of operations per second. A 1-gigahertz processor can do 1,000 different operations on a million pieces of data in one second. With such speed, you don’t even need multiple processors! Even so, more parallelism is coming. [4]

[3] His prediction is that the number of computers, times their computational capacity, will surpass the number of humans, times their computational capacity, in 2045. This calculation seems flawed for several reasons:

  1. We will be swimming in computational capacity long before then. An intelligent agent twice as fast as the previous one is not necessarily more useful.
  2. Many of the neurons of the brain are not spent on reason, and so shouldn’t be in the calculations.
  3. Billions of humans are merely subsisting, and are not plugged into the global grid, and so shouldn’t be measured.
  4. There is no amount of continuous learning built in to today’s software.

Each of these would tend to push the Singularity closer and support the argument that the benefits of the singularity are not waiting on hardware. Humans make computers smarter, and computers make humans smarter, so this feedback loop is another reason that makes 2045 a meaningless moment in time.

[4] Most computers today contain a dual-core CPU, and processor makers promise that 10 and more cores are coming. Intel’s processors also have parallel processing capabilities, known as MMX and SSE, that are easily adapted to the work of the early stages of any analysis pipeline. Intel would add even more of this parallel processing support if applications put it to better use. Furthermore, graphics cards exist primarily to do work in parallel, and this hardware could be adapted to AI if it is not usable already.

Ray Kurzweil to keynote “H+ Summit @ Harvard — The Rise Of The Citizen Scientist”

With our growing resources, the Lifeboat Foundation has teamed with the Singularity Hub as Media Sponsors for the 2010 Humanity+ Summit. If you have suggestions on future events that we should sponsor, please contact [email protected].

Following the inaugural conference in Los Angeles in December 2009, the summer 2010 “Humanity+ @ Harvard — The Rise Of The Citizen Scientist” conference is being held on the East Coast, at Harvard University’s prestigious Science Hall, on June 12–13. Futurist, inventor, and author of the NYT bestselling book “The Singularity Is Near”, Ray Kurzweil will be the keynote speaker of the conference.

Also speaking at the H+ Summit @ Harvard is Aubrey de Grey, a biomedical gerontologist based in Cambridge, UK, and Chief Science Officer of SENS Foundation, a California-based charity dedicated to combating the aging process. His talk, “Hype and anti-hype in academic biogerontology research: a call to action”, will analyze the interplay of over-pessimistic and over-optimistic positions with regard to the research and development of cures, and propose solutions to alleviate the negative effects of both.

The theme is “The Rise Of The Citizen Scientist”, as illustrated by Alex Lightman, Executive Director of Humanity+, in his talk:

“Knowledge may be expanding exponentially, but the current rate of civilizational learning and institutional upgrading is still far too slow in the century of peak oil, peak uranium, and ‘peak everything’. Humanity needs to gather vastly more data as part of ever larger and more widespread scientific experiments, and make science and technology flourish in streets, fields, and homes as well as in university and corporate laboratories.”

The Humanity+ Summit @ Harvard is an unmissable event for everyone interested in the evolution of the rapidly changing human condition and in the impact of accelerating technological change on the daily lives of individuals and on our society as a whole. Tickets start at only $150, with an additional 50% discount for students registering with the coupon STUDENTDISCOUNT (valid student ID required at the time of admission).

With over 40 speakers and 50 sessions in two jam-packed days, attendees and speakers will have many opportunities to interact and discuss, complementing the conference with the necessary networking component.

Other speakers already listed on the H+ Summit program page include:

  • David Orban, Chairman of Humanity+: “Intelligence Augmentation, Decision Power, And The Emerging Data Sphere”
  • Heather Knight, CTO of Humanity+: “Why Robots Need to Spend More Time in the Limelight”
  • Andrew Hessel, Co-Chair at Singularity University: “Altered Carbon: The Emerging Biological Diamond Age”
  • M. A. Greenstein, Art Center College of Design: “Sparking our Neural Humanity with Neurotech!”
  • Michael Smolens, CEO of dotSUB: “Removing language as a barrier to cross cultural communication”

New speakers will be announced in rapid succession, rounding out a schedule that is guaranteed to inform, intrigue, stimulate, and provoke, advancing our planetary understanding of the evolution of the human condition!

H+ Summit @ Harvard — The Rise Of The Citizen Scientist
June 12–13, Harvard University
Cambridge, MA

You can register at http://www.eventbrite.com/event/648806598/friendsofhplus/4141206940.

Technological Singularity and Acceleration Studies: Call for Papers

8th European conference on Computing And Philosophy — ECAP 2010
Technische Universität München
4–6 October 2010

Submission deadline of extended abstracts: 7 May 2010
Submission form

Theme

Historical analysis of a broad range of paradigm shifts in science, biology, history, technology, and in particular in computing technology, suggests an accelerating rate of evolution, however measured. John von Neumann projected that the consequence of this trend may be an “essential singularity in the history of the race beyond which human affairs as we know them could not continue”. This notion of singularity coincides in time and nature with Alan Turing’s (1950) and Stephen Hawking’s (1998) expectation of machines exhibiting intelligence on a par with the average human no later than 2050. Irving John Good (1965) and Vernor Vinge (1993) expect the singularity to take the form of an ‘intelligence explosion’, a process in which intelligent machines design ever more intelligent machines. Transhumanists suggest a parallel or alternative, explosive process of improvements in human intelligence. And Alvin Toffler’s Third Wave (1980) forecasts “a collision point in human destiny” the scale of which, in the course of history, is on a par only with the agricultural revolution and the industrial revolution.

We invite submissions describing systematic attempts at understanding the likelihood and nature of these projections. In particular, we welcome papers critically analyzing the following issues from philosophical, computational, mathematical, scientific, and ethical standpoints:

  • Claims and evidence to acceleration
  • Technological predictions (critical analysis of past and future)
  • The nature of an intelligence explosion and its possible outcomes
  • The nature of the Technological Singularity and its outcome
  • Safe and unsafe artificial general intelligence and preventative measures
  • Technological forecasts of computing phenomena and their projected impact
  • Beyond the ‘event horizon’ of the Technological Singularity
  • The prospects of transhuman breakthroughs and likely timeframes

Amnon H. Eden, School of Computer Science & Electronic Engineering, University of Essex, UK and Center For Inquiry, Amherst NY

Risk intelligence

A few months ago, my friend Benjamin Jakobus and I created an online “risk intelligence” test at http://www.projectionpoint.com/. It consists of fifty statements about science, history, geography, and so on, and your task is to say how likely you think it is that each of these statements is true. We calculate your risk intelligence quotient (RQ) on the basis of your estimates. So far, over 30,000 people have taken our test, and we’re currently writing up the results for some peer-reviewed journals.

Now we want to take things a step further, and see whether our measure correlates with the ability to make accurate estimates of future events. To this end we’ve created a “prediction game” at http://www.projectionpoint.com/prediction_game.php. The basic idea is the same; we provide you with a bunch of statements, and your task is to say how likely you think it is that each one is true. The difference is that these statements refer not to known facts, but to future events. Unlike the first test, nobody knows whether these statements are true or false yet. For most of them, we won’t know until the end of the year 2010.

For example, how likely do you think it is that this year will be the hottest on record? If you think this is very unlikely you might select the 10% category. If you think it is quite likely, but not very likely, you might put the chances at 60% or 70%. Selecting the 50% category would mean that you had no idea how likely it is.
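
Once the year ends and the outcomes are known, estimates like these can be scored for calibration. Below is a minimal sketch of one standard scoring rule, the Brier score; it is a generic illustration, not necessarily the formula behind the RQ:

```python
# Generic illustration of scoring probability estimates once outcomes
# are known, using the Brier score (mean squared error between stated
# probabilities and 0/1 outcomes; lower is better). This is a standard
# calibration measure, not necessarily the formula behind the RQ.
def brier_score(forecasts, outcomes):
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Three predictions at 10%, 60%, and 50%, scored against what happened.
forecasts = [0.10, 0.60, 0.50]
outcomes = [0, 1, 1]   # 0 = statement turned out false, 1 = true
print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")  # 0.140
```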

This is ongoing research, so please feel free to comment, criticise or make suggestions.