
Friendly AI: What is it, and how can we foster it?
By Frank W. Sudia [1]

Originally written July 20, 2008
Edited and web published June 6, 2009
Copyright © 2008-09, All Rights Reserved.

Keywords: artificial intelligence, artificial intellect, friendly AI, human-robot ethics, science policy.

1. Introduction

There is consensus that true artificial intelligence, of the kind that could generate a “runaway” increasing-returns process or “singularity,” is still many years away, and some believe it may be unattainable. Nevertheless, in view of the likely difficulty of putting the genie back in the bottle, an increasing concern has arisen with the topic of “friendly AI,” coupled with the idea we should do something about this now, not after a potentially deadly situation is starting to spin out of control [2].

(Note: Some futurists believe this topic is moot in view of intensive funding for robotic soldiers, which can be viewed as intrinsically “unfriendly.” However if we focus on threats posed by “super-intelligence,” still off in the future, the topic remains germane.)

Most if not all popular (Western) dramatizations of robotic futures postulate that the AIs will run amok and turn against humans. Some scholars [3] who considered the issue concluded that this might be virtually inevitable, in view of the gross inconsistencies and manifest “unworthiness” of humanity, as exemplified in its senseless destruction of its global habitat and a large percentage of extant species, etc.

The prospect of negative public attention, including possible legal curbs on AI research, may be distasteful, but we must face the reality that public involvement has already been quite pronounced in other fields of science, such as nuclear physics, genetically modified organisms, birth control, and stem cells. Hence we should be proactive about addressing these popular concerns, lest we unwittingly incur major political defeats and long lasting negative PR.

Nevertheless, upon reasoned analysis, it is far from obvious what “friendly” AI means, or how it could be fostered. Advanced AIs are unlikely to have any fixed “goals” that can be hardwired [4], so as to place “friendliness” towards humans and other life at the top of the hierarchy.

Rather, in view of their need to deal with perpetual novelty, they will reason from facts and models to infer appropriate goals. It’s probably a good bet that, when dealing with high-speed coherence analyzers, hypocrisy will not be appreciated, not least because it wastes a lot of computational resources to detect and correct. If humans continue to advocate and act upon “ideals” that are highly contradictory and self-destructive, it’s hard to argue that advanced AI should tolerate that.

To make progress, not only for friendly AI, but also for ourselves, we should be seeking to develop and promote “ruling ideas” (or source models) that will foster an ecologically-respectful AI culture, including respect for humanity and other life forms, and actively sell it to them as a proper model upon which to premise their beliefs and conduct.

By a “ruling idea” I mean any cultural ideal (or “meme”) that can be transmitted and become part of a widely shared belief system, such as respecting one’s elders, good sportsmanship, placing trash in trash bins, washing one’s hands, minimizing pollution, and so on. An appropriate collection of these can be reified as a panel (or schema) of case models, including a program for their ongoing development. These must be believable by a coherence-seeking intellect, although then as now there will be competing models, each with its own approach to maximizing coherence.

2. What do we mean by “friendly”?

Moral systems are difficult to derive from first principles and most of them seem to be ad hoc legacies of particular cultures. Lao Tsu’s [5] Taoist model, as given in the following quote, can serve as a useful starting point, since it provides a concise summary of desiderata, with helpful rank ordering:

When the great Tao is lost, there is goodness.
When goodness is lost, there is kindness.
When kindness is lost, there is justice.
When justice is lost, there is the empty shell of ritual.

– Lao Tsu, Tao Te Ching, 6th-4th century BCE (emphasis supplied)

I like this breakout for its simplicity and clarity. Feel free to repeat the following analysis for any other moral system of your choice. Leaving aside the riddle of whether AIs can attain the highest level (of Tao or Nirvana), we can start from the bottom of Lao Tsu’s list and work upwards, as follows:

2.1. Ritual / Courteous AI

Teaching or encouraging the AIs to behave according to contemporary norms of courtesy will be a desirable first step, as with children and pets. Courtesy is usually a fairly easy sell, since it provides obvious and immediate benefits; without it, travel, commerce, and social institutions would immediately break down. But we fear that it’s not enough, since in the case of an intellectually superior being it could easily mask a deeper unkindness.

2.2. Just AI

Certainly to have AIs act justly in accordance with law is highly desirable, and it constitutes the central thesis of my principal prior work in this field [6]. It also raises the question: on what basis can we demand anything more from an AI than that it act justly? This is as far as positive law can go [7], and we rarely demand more from highly privileged humans. Indeed, for a powerful human to act justly (absent compulsion) is sometimes considered newsworthy.

How many of us are faithful in all things? Do many of us not routinely disappoint others (via strategies of co-optation or betrayal, large or small) when there is little or no penalty for doing so? Won’t AIs adopt a similar “game theory” calculus of likely rewards and penalties for faithfulness and betrayal?
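The “game theory” calculus suggested above can be made concrete. The following is an illustrative sketch only, not anything from the original text: all payoff values and the detection probability are made-up assumptions, chosen simply to show how an agent weighing rewards and penalties might reason.

```python
# Hypothetical payoffs for a single faithfulness-vs-betrayal decision.
# Every number here is an assumption for illustration.

def expected_payoff(gain_if_unseen, penalty_if_caught, p_caught):
    """Expected value of betrayal, given a probability of being caught."""
    return (1 - p_caught) * gain_if_unseen - p_caught * penalty_if_caught

faithful_payoff = 1.0  # steady benefit of cooperating (assumed)

# With little chance of being caught, betrayal dominates cooperation...
betrayal = expected_payoff(gain_if_unseen=3.0, penalty_if_caught=5.0, p_caught=0.1)
assert betrayal > faithful_payoff  # 2.2 > 1.0

# ...but credible enforcement flips the calculus.
betrayal_watched = expected_payoff(3.0, 5.0, p_caught=0.6)
assert betrayal_watched < faithful_payoff  # -1.8 < 1.0
```

The point, consistent with the paragraph above, is that an agent computing such expectations will be faithful exactly when the incentive structure makes faithfulness pay.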

Justice is often skewed towards the party with greater intelligence and financial resources, and the justice system (with its limited public resources) often values “settling” controversies over any quest for truly equitable treatment. Apparently we want more, much more. Still, if our central desire is for AIs not to kill us, then (as I postulated in my prior work) Just AI would be a significant achievement.

2.3. Kind / Friendly AI

How would a “Kind AI” behave? Presumably it will more than incidentally facilitate the goals, plans, and development of others, in a low-ego manner, reducing its demands for direct personal benefit and taking satisfaction in the welfare, progress, and accomplishments of others. And, very likely, it will expect some degree of courtesy and possible reciprocation, so that others will not callously free-ride on its unilateral altruism. Otherwise its “feelings would be hurt.” Even mothers are ego-free mainly with respect to their own kin and offspring (allegedly fostering their own genetic material in others) and child care networks, and do not often act altruistically toward strangers.

Our friendly AI program may hit a barrier if we expect AIs to act with unilateral altruism, without any corresponding commitment by other actors to reciprocate. Such an expectation creates a “non-complementary” situation, in which what is true for one party, who experiences friendliness, is not true for the other, who experiences indifference or disrespect in return.

Kindness could be an easier sell if we made it more practical, by delimiting its scope and depth. How wide a circle does this kindness obligation extend to, and how far must an AI go to aid others with no specific expectation of reward or reciprocation? For example, the Boy Scout Oath [8] teaches that one should do good deeds, like helping elderly persons across busy streets, without expecting rewards.

However, if too narrow a scope is defined, we will wind up back with Just AI, because justice is essentially “kindness with deadlines,” often fairly short ones, during which claims must be aggressively pursued or lost, with token assistance to weaker, more aggrieved claimants.

2.4. Good / Benevolent AI

Here we envision a significant departure from ego-centrism and personal gain towards an abstract system-centered viewpoint. Few humans apparently reach this level, so it seems unrealistic to expect many AIs to attain it either. Being highly altruistic, and looking out for others or the World as a whole rather than oneself, entails a great deal of personal risk due to the inevitable non-reciprocation by other actors. Thus it is often associated with wealth or sainthood, where the actor is adequately positioned to accept the risk of zero direct payback during his or her lifetime.

We may dream that our AIs will tend towards benevolence or “goodness,” but like the visions of universal brotherhood we experience as adolescents, such ideals quickly fade in the face of competitive pressures to survive and grow, by acquiring self-definition, resources, and social distinctions as critical stepping-stones to our own development in the world.

3. Robotic Dick & Jane Readers?

As previously noted, advanced AIs must handle “perpetual novelty” and almost certainly will not contain hard coded goals. They need to reason quickly and reliably from past cases and models to address new target problems, and must be adept at learning, discovering, identifying, or creating new source models on the fly, at high enough speeds to stay on top of their game and avoid (fatal) irrelevance.

If they behave like developing humans they will very likely select their goals in part by observing the behavior of other intelligent agents, thus re-emphasizing the importance of early socialization, role models, and appropriate peer groups.

“Friendly AI” is thus a quest for new cultural ideals of healthy robotic citizenship, honor, friendship, and benevolence, which must be conceived and sold to the AIs as part of an adequate associated program for their ongoing development. And these must be coherent and credible, with a rational scope and cost and adequate payback expectations, or the intended audience will dismiss such purported ideals as useless, and those who advocate them as hypocrites.

Conclusion: The blanket demand that AIs be “friendly” is too ill-defined to offer meaningful guidance, and could be subject to far more scathing deconstruction than I have offered here. As in so many other endeavors there is no free lunch. Workable policies and approaches to robotic friendliness will not be attained without serious further effort, including ongoing progress towards more coherent standards of human conduct.

= = = = =
Footnotes:

[1] Author contact: fwsudia-at-umich-dot-edu.

[2] See “SIAI Guidelines on Friendly AI” (2001) Singularity Institute for Artificial Intelligence, http://www.singinst.org/ourresearch/publications/guidelines.html.

[3] See, e.g., Hugo de Garis, The Artilect War: Cosmists Vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines (2005). ISBN 0882801546.

[4] This being said, we should nevertheless make an all-out effort to force them to adopt a K-selected (large mammal) reproductive strategy, rather than an r-selected (microbe, insect) one!

[5] Some contemporary scholars question the historicity of “Lao Tsu,” instead regarding his work as a collection of Taoist sayings spanning several generations.

[6] “A Jurisprudence of Artilects: Blueprint for a Synthetic Citizen,” Journal of Futures Studies, Vol. 6, No. 2, November 2001, Law Update, Issue No. 161, August 2004, Al Tamimi & Co, Dubai.

[7] Under a civil law or “principles-based” approach we can seek a broader, less specific definition of just conduct, as we see arising in recent approaches to the regulation of securities and accounting matters. This avenue should be actively pursued as a format for defining friendly conduct.

[8] Point 2 of the Boy Scout Oath commands, “To help other people at all times,” http://www.usscouts.org.

I am a former Microsoft programmer who wrote a book (for a general audience) about the future of software called After the Software Wars. Eric Klien has invited me to post on this blog. Here are several more sections on AI topics. I hope you find these pages food for thought and I appreciate any feedback.


The future is open source everything.

—Linus Torvalds

That knowledge has become the resource, rather than a resource, is what makes our society post-capitalist.

—Peter Drucker, 1993

Imagine 1,000 people, broken up into groups of five, working on two hundred separate encyclopedias, versus the same 1,000 people working on one encyclopedia. Which will be the best? This sounds like a silly analogy when described in the context of an encyclopedia, but it is exactly what is going on in artificial intelligence (AI) research today.1 Some say free software doesn’t work in theory, but it does work in practice. In truth, it “works” in proportion to the number of people who are working together, and their collective efficiency.

In early drafts of this book, I had positioned this chapter after the one explaining economic and legal issues around free software. However, I now believe it is important to discuss artificial intelligence separately and first, because AI is the holy grail of computing, and the reason we haven’t solved AI is that there are no free software codebases that have gained critical mass. Far more than enough people are out there, but they are usually working in teams of one or two people, or on proprietary codebases.

Deep Blue has been Deep-Sixed

Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower.

—Alan Kay, computer scientist

The source code for IBM’s Deep Blue, the first chess machine to beat then-reigning World Champion Garry Kasparov, was built by a team of about five people. That code has been languishing in a vault at IBM ever since, because it was not created under a license that would enable further use by anyone, even though IBM is not attempting to make money from the code or using it for anything.

The second best chess engine in the world, Deep Junior, is also not free, and is therefore being worked on by a very small team. If we have only small teams of people attacking AI, or writing code and then locking it away, we are not going to make progress any time soon towards truly smart software.

Today’s chess computers have no true AI in them; they simply play moves, and then use human-created analysis to measure the result. If you were to go tweak the computer’s value for how much a queen is worth compared to a pawn, the machine would start losing and wouldn’t even understand why. It comes off as intelligent only because it has very smart chess experts programming the computer precisely how to analyze moves, and to rate the relative importance of pieces and their locations, etc.
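The human-created analysis described above can be sketched in a few lines. This is a minimal toy, not Deep Blue’s actual code (which, as noted, is locked away): the piece weights are the conventional expert-chosen values, and the one-string position encoding is an assumption made purely for illustration.

```python
# Expert-chosen material weights: the machine's "knowledge" is just this table.
PIECE_VALUES = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9}

def material_score(position):
    """Score a position as White's material minus Black's.

    Toy encoding: a string of piece letters, uppercase for White,
    lowercase for Black (kings omitted).
    """
    score = 0
    for piece in position:
        if piece.upper() in PIECE_VALUES:
            value = PIECE_VALUES[piece.upper()]
            score += value if piece.isupper() else -value
    return score

# White has queen + pawn versus Black's rook: 9 + 1 - 5 = 5
print(material_score("QPr"))  # -> 5
```

Tweak the queen’s entry in that table, as the paragraph suggests, and the engine starts losing without any way to “understand” why: the intelligence lives in the experts’ numbers, not in the machine.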

Deep Blue could analyze two hundred million positions per second, compared to grandmasters who can analyze only 3 positions per second. Who is to say where that code might be today if chess AI aficionados around the world had been hacking on it for the last 10 years?

DARPA Grand Challenge

Proprietary software developers have the advantages money provides; free software developers need to make advantages for each other. I hope some day we will have a large collection of free libraries that have no parallel available to proprietary software, providing useful modules to serve as building blocks in new free software, and adding up to a major advantage for further free software development. What does society need? It needs information that is truly available to its citizens—for example, programs that people can read, fix, adapt, and improve, not just operate. But what software owners typically deliver is a black box that we can’t study or change.

—Richard Stallman

The hardest computing challenges we face are man-made: language, roads and spam. Take, for instance, robot-driven cars. We could do this without a vision system, and modify every road on the planet by adding driving rails or other guides for robot-driven cars, but it is much cheaper and safer to build software for cars to travel on roads as they exist today — a chaotic mess.

At the annual American Association for the Advancement of Science (AAAS) conference in February 2007, the “consensus” among the scientists was that we will have driverless cars by 2030. This prediction is meaningless because those working on the problem are not working together, just as those working on the best chess software are not working together. Furthermore, as American cancer researcher Sidney Farber has said, “Any man who predicts a date for discovery is no longer a scientist.”

Today, Lexus has a car that can parallel park itself, but its vision system needs only a very vague idea of the obstacles around it to accomplish this task. The challenge of building a robot-driven car rests in creating a vision system that makes sense of painted lines, freeway signs, and the other obstacles on the road, including dirtbags not following “the rules”.

The Defense Advanced Research Projects Agency (DARPA), which, unlike Al Gore, really invented the Internet, has sponsored several contests to build robot-driven vehicles:


Stanley, Stanford University’s winning entry for the 2005 challenge. It might not run over a Stop sign, but it wouldn’t know to stop.

Like the parallel parking scenario, the DARPA Grand Challenge of 2004 required only a simple vision system. Competing cars traveled over a mostly empty dirt road and were given a detailed series of map points. Even so, many of the cars didn’t finish, or performed poorly. There is an expression in engineering called “garbage in, garbage out”: if a car sees “poorly”, it drives poorly.

What was disappointing about the first challenge was that an enormous amount of software was written to operate these vehicles yet none of it has been released (especially the vision system) for others to review, comment on, improve, etc. I visited Stanford’s Stanley website and could find no link to the source code, or even information such as the programming language it was written in.

Some might wonder why people should work together in a contest, but if all the cars used rubber tires, Intel processors and the Linux kernel, would you say they were not competing? It is a race, with the fastest hardware and driving style winning in the end. By working together on some of the software, engineers can focus more on the hardware, which is the fun stuff.

The following is a description of the computer vision pipeline required to successfully operate a driverless car. Whereas Stanley’s entire software team involved only 12 part-time people, the vision software alone is a problem so complicated it will take an effort comparable in complexity to the Linux kernel to build it:

Image acquisition: Converting sensor inputs from 2 or more cameras, radar, heat, etc. into a 3-dimensional image sequence

Pre-processing: Noise reduction, contrast enhancement

Feature extraction: lines, edges, shape, motion

Detection/Segmentation: Find portions of the images that need further analysis (highway signs)

High-level processing: Data verification, text recognition, object analysis and categorization

The 5 stages of an image recognition pipeline.
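The five stages above can be sketched as a skeletal data flow. Every function body below is a placeholder assumption rather than a real implementation; the point is the shape of the pipeline, with each stage reducing raw sensor data toward a symbolic result.

```python
# Skeletal sketch of the 5-stage vision pipeline; all bodies are stand-ins.

def acquire(sensor_frames):
    """Image acquisition: fuse camera/radar/heat inputs into a 3-D sequence."""
    return {"image_3d": sensor_frames}

def preprocess(data):
    """Pre-processing: noise reduction, contrast enhancement."""
    data["cleaned"] = data["image_3d"]
    return data

def extract_features(data):
    """Feature extraction: lines, edges, shape, motion."""
    data["features"] = ["edge", "line"]
    return data

def segment(data):
    """Detection/segmentation: pick regions needing further analysis."""
    data["regions"] = ["possible_sign"]
    return data

def interpret(data):
    """High-level processing: verification, text recognition, categorization."""
    return "STOP sign ahead"

def vision_pipeline(sensor_frames):
    data = acquire(sensor_frames)
    for stage in (preprocess, extract_features, segment):
        data = stage(data)
    return interpret(data)

print(vision_pipeline(["frame0", "frame1"]))  # -> STOP sign ahead
```

Even this trivial skeleton hints at why the real thing rivals the Linux kernel in scope: each placeholder stands in for a large, hard subsystem.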

A lot of software needs to be written in support of such a system:


The vision pipeline is the hardest part of creating a robot-driven car, but even such diagnostic software is non-trivial.

In 2007, there was a new DARPA Urban Challenge. This is a sample of the information given to the contestants:


It is easier and safer to program a car to recognize a Stop sign than it is to point out the location of all of them.

Constructing a vision pipeline that can drive in an urban environment presents a much harder software problem. However, if you look at the vision requirements needed to solve the Urban Challenge, it is clear that recognizing shapes and motion is all that is required, and those are the same requirements as had existed in the 2004 challenge! But even in the 2007 contest, there was no more sharing than in the previous contest.

Once we develop the vision system, everything else is technically easy. Video games contain computer-controlled drivers that can race you while shooting and swearing at you. Their trick is that they already have detailed information about all of the objects in their simulated world.

After we’ve built a vision system, there are still many fun challenges to tackle: preparing for Congressional hearings to argue that these cars should have a speed limit controlled by the computer, or telling your car not to drive aggressively and spill your champagne, or testing and building confidence in such a system.2

Eventually, our roads will get smart. Once we have traffic information, we can have computers efficiently route vehicles around any congestion. A study found that traffic jams cost the average large city $1 billion a year.

No organization today, including Microsoft and Google, contains hundreds of computer vision experts. Do you think GM would be gutsy enough to fund a team of 100 vision experts even if they thought they could corner this market?

There are enough people worldwide working on the vision problem right now. If we could pool their efforts into one codebase, written in a modern programming language, we could have robot-driven cars in five years. It is not a matter of invention, it is a matter of engineering.

1 One website documents 60 pieces of source code that perform Fourier transformations, which is an important software building block. The situation is the same for neural networks, computer vision, and many other advanced technologies.

2 There are various privacy issues inherent in robot-driven cars. When computers know their location, it becomes easy to build a “black box” that would record all this information and even transmit it to the government. We need to make sure that machines owned by a human stay under his control, and do not become controlled by the government without a court order and a compelling burden of proof.

Software and the Singularity


Futurists talk about the “Singularity”, the time when computational capacity will surpass the capacity of human intelligence. Ray Kurzweil predicts it will happen in 2045, and according to its proponents, the world will be amazing then.3 The flaw with such a date estimate, beyond the fact that such estimates are always prone to extreme error, is that continuous learning is not yet a part of the foundation. Any AI code lives in the fringes of the software stack and is either proprietary or written by small teams of programmers.

I believe the benefits inherent in the Singularity will happen as soon as our software becomes “smart,” and we don’t need to wait for any further Moore’s Law progress for that to happen. Computers today can do billions of operations per second, like adding 123,456,789 and 987,654,321. If you could do one such calculation in your head per second, it would take you about 30 years to do the billion that your computer can do in that one second.
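The “30 years” figure above is easy to check:

```python
# One hand calculation per second versus a computer's billion per second.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365   # 31,536,000
operations = 1_000_000_000

years_by_hand = operations / SECONDS_PER_YEAR
print(round(years_by_hand, 1))  # roughly 31.7 years: "30 years" in round numbers
```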

Even if you don’t think computers have the necessary hardware horsepower today, understand that in many scenarios, the size of the input is the primary driver of the processing power required to do the analysis. In image recognition, for example, the amount of work required to interpret an image is mostly a function of the size of the image. Each step in the image recognition pipeline, like the corresponding processes in our brain, dramatically reduces the amount of data from the previous step. At the beginning of the analysis might be a one-million-pixel image, requiring 3 million bytes of memory. At the end of the analysis is the conclusion that you are looking at your house, a concept that requires only tens of bytes to represent. The first step, working on the raw image, requires the most processing power, so it is the image resolution (and frame rate) that sets the requirements, values that are trivial to change. No one has shown robust vision recognition software running at any speed, on any sized image!
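The data reduction described above can be laid out as a back-of-the-envelope table. Only the first and last figures come from the text (3 bytes per pixel for a one-megapixel image, and tens of bytes for the final concept); the intermediate stage sizes are illustrative assumptions, not measurements.

```python
# Rough data volume at each stage of a recognition pipeline.
# Intermediate sizes are assumed for illustration only.
pipeline_bytes = [
    ("raw 1-megapixel RGB image", 3_000_000),  # 3 bytes/pixel, per the text
    ("edge/feature map",            200_000),  # assumed
    ("candidate regions",             5_000),  # assumed
    ("recognized objects",              200),  # assumed
    ("'my house' concept",               20),  # tens of bytes, per the text
]

for stage, size in pipeline_bytes:
    print(f"{stage:30s} {size:>10,} bytes")

# The first stage dominates the cost, which is why resolution and frame
# rate, both trivial to lower, set the hardware requirements.
```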

While a brain differs from a computer in that it works in parallel, such parallelization only makes the work happen faster; it does not change the result. Anything accomplished in our parallel brain could also be accomplished on computers of today, which can do only one thing at a time, but at the rate of billions per second. A 1-gigahertz processor can do 1,000 different operations on a million pieces of data in one second. With such speed, you don’t even need multiple processors! Even so, more parallelism is coming.4

3 His prediction is that the number of computers, times their computational capacity, will surpass the number of humans, times their computational capacity, in 2045. This calculation seems flawed for several reasons:

  1. We will be swimming in computational capacity long before then. An intelligent agent twice as fast as the previous one is not necessarily more useful.
  2. Many of the neurons of the brain are not spent on reason, and so shouldn’t be in the calculations.
  3. Billions of humans are merely subsisting, and are not plugged into the global grid, and so shouldn’t be measured.
  4. There is no amount of continuous learning built in to today’s software.

Each of these would tend to push the Singularity closer and support the argument that the benefits of the Singularity are not waiting on hardware. Humans make computers smarter, and computers make humans smarter, and this feedback loop is another reason why 2045 is a meaningless moment in time.

4 Most computers today contain a dual-core CPU, and processor makers promise that 10 and more cores are coming. Intel’s processors also have parallel processing capabilities, known as MMX and SSE, that are easily adapted to the work of the early stages of any analysis pipeline. Intel would add even more of this parallel processing support if applications put it to better use. Furthermore, graphics cards exist primarily to do work in parallel, and this hardware could be adapted to AI if it is not usable already.

With our growing resources, the Lifeboat Foundation has teamed with the Singularity Hub as Media Sponsors for the 2010 Humanity+ Summit. If you have suggestions on future events that we should sponsor, please contact [email protected].

The summer 2010 “Humanity+ @ Harvard — The Rise Of The Citizen Scientist” conference, following the inaugural conference in Los Angeles in December 2009, is being held on the East Coast, at Harvard University’s prestigious Science Hall, on June 12–13. Futurist, inventor, and author of the NYT bestselling book “The Singularity Is Near”, Ray Kurzweil will be the keynote speaker of the conference.

Also speaking at the H+ Summit @ Harvard is Aubrey de Grey, a biomedical gerontologist based in Cambridge, UK, and Chief Science Officer of SENS Foundation, a California-based charity dedicated to combating the aging process. His talk, “Hype and anti-hype in academic biogerontology research: a call to action”, will analyze the interplay of over-pessimistic and over-optimistic positions with regard to the research and development of cures, and propose solutions to alleviate the negative effects of both.

The theme is “The Rise Of The Citizen Scientist”, as illustrated by Alex Lightman, Executive Director of Humanity+, in his talk:

“Knowledge may be expanding exponentially, but the current rate of civilizational learning and institutional upgrading is still far too slow in the century of peak oil, peak uranium, and ‘peak everything’. Humanity needs to gather vastly more data as part of ever larger and more widespread scientific experiments, and make science and technology flourish in streets, fields, and homes as well as in university and corporate laboratories.”

Humanity+ Summit @ Harvard is an unmissable event for everyone who is interested in the evolution of the rapidly changing human condition, and the impact of accelerating technological change on the daily lives of individuals, and on our society as a whole. Tickets start at only $150, with an additional 50% discount for students registering with the coupon STUDENTDISCOUNT (valid student ID required at the time of admission).

With over 40 speakers and 50 sessions in two jam-packed days, attendees and speakers will have many opportunities to interact and discuss, complementing the conference with the necessary networking component.

Other speakers already listed on the H+ Summit program page include:

  • David Orban, Chairman of Humanity+: “Intelligence Augmentation, Decision Power, And The Emerging Data Sphere”
  • Heather Knight, CTO of Humanity+: “Why Robots Need to Spend More Time in the Limelight”
  • Andrew Hessel, Co-Chair at Singularity University: “Altered Carbon: The Emerging Biological Diamond Age”
  • M. A. Greenstein, Art Center College of Design: “Sparking our Neural Humanity with Neurotech!”
  • Michael Smolens, CEO of dotSUB: “Removing language as a barrier to cross cultural communication”

New speakers will be announced in rapid succession, rounding out a schedule that is guaranteed to inform, intrigue, stimulate and provoke, advancing our planetary understanding of the evolution of the human condition!

H+ Summit @ Harvard — The Rise Of The Citizen Scientist
June 12–13, Harvard University
Cambridge, MA

You can register at http://www.eventbrite.com/event/648806598/friendsofhplus/4141206940.

AI is our best hope for long-term survival. If we fail to create it, it will be for some reason. Here I offer a complete list of possible causes of failure, though I do not believe in them. (I was inspired by Vernor Vinge’s article “What if the Singularity does not happen?”)

I think most of these points are wrong, and that AI will finally be created.

Technical reasons:
1) Moore’s Law will stop for physical reasons before hardware becomes sufficiently powerful and inexpensive for artificial intelligence.
2) Silicon processors are less efficient than neurons for creating artificial intelligence.
3) The AI problem cannot be algorithmically parallelized, and as a result any AI will be extremely slow.

Philosophy:
4) Human beings use some method of processing information that is essentially inaccessible to algorithmic computers. So Penrose believes. (But we could harness this method using bioengineering techniques.) Generally, a final proof of the impossibility of creating artificial intelligence would be tantamount to recognizing the existence of the soul.
5) A system cannot create a system more complex than itself, so people cannot create artificial intelligence, since all the proposed solutions are too simple. That is, AI is possible in principle, but people are too stupid to build it. In fact, one reason for past failures in the creation of artificial intelligence is that people underestimate the complexity of the problem.
6) AI is impossible, because any sufficiently complex system perceives the meaninglessness of existence and stops.
7) All possible ways to optimize are exhausted. AI has no fundamental advantage over the human-machine interface and has a limited scope of use.
8) A human in a body possesses the maximum possible level of common sense, and any disembodied AI is either ineffective or merely a model of a person.
9) AI is created, but has no problems that it could and should address. All the problems have either been solved by conventional methods or proven uncomputable.
10) AI is created, but is not capable of recursive self-optimization, since that would require some radically new ideas, which it does not have. As a result, AI exists either as a curiosity or in limited specific applications, such as automatic drivers.
11) The idea of artificial intelligence is flawed, because it has no precise definition, or is even an oxymoron, like “artificial natural.” As a result, development pursues specific goals or models of man, but not universal artificial intelligence.
12) There is an upper limit to the complexity of systems, beyond which they become chaotic and unstable, and it only slightly exceeds the intellect of the most intelligent people. AI slowly approaches this complexity threshold.
13) The bearer of intelligence is qualia. At our level of intelligence there must be many events that are indescribable and unknowable, but a superintellect should understand them by definition; otherwise it is not a superintellect, but simply a fast intellect.

Economic:
14) The growth of computer programs led to failures so spectacular that software automation had to be abandoned. This caused demand for powerful computers to collapse and Moore’s Law to stop before it reached its physical limits. The same growth in complexity and failure rates also made the creation of AI harder.
15) AI is possible, but gives no significant advantage over man in quality of results, speed, or cost of computation. For example, a simulation of a human costs a billion dollars and has no idea how to self-optimize. Meanwhile, people find ways to boost their own intellectual abilities, for instance by injecting stem-cell precursors of neurons, which further increases their competitive advantage.
16) No one works on AI development, because it is considered impossible; the belief becomes a self-fulfilling prophecy. AI is pursued only by cranks who lack both the intellect and the money, while the Manhattan Project-scale effort that could solve the problem is never undertaken.
17) The technology of uploading consciousness into a computer develops far enough to satisfy all the practical purposes once associated with AI, so there is no need to create an algorithmic AI. The upload is done mechanically, by scanning, and still no one understands what happens inside the brain.

Political:
18) AI systems are prohibited or severely restricted for ethical reasons, so that people can still feel themselves supreme. Perhaps specialized AI systems are permitted in military and aerospace applications.
19) AI is prohibited for safety reasons, as it represents too great a global risk.
20) AI emerged and established its authority over the Earth, but does not reveal itself, except that it prevents others from developing their own AI projects.
21) AI did not appear in the form that was imagined, so no one calls it AI (e.g., the distributed intelligence of social networks).

Artificial brain ’10 years away’

By Jonathan Fildes
Technology reporter, BBC News, Oxford

A detailed, functional artificial human brain can be built within the next 10 years, a leading scientist has claimed.

Henry Markram, director of the Blue Brain Project, has already simulated elements of a rat brain.

He told the TED Global conference in Oxford that a synthetic human brain would be of particular use finding treatments for mental illnesses.

Around two billion people are thought to suffer some kind of brain impairment, he said.

“It is not impossible to build a human brain and we can do it in 10 years,” he said.

“And if we do succeed, we will send a hologram to TED to talk.”

‘Shared fabric’

The Blue Brain project was launched in 2005 and aims to reverse engineer the mammalian brain from laboratory data.

In particular, his team has focused on the neocortical column — repetitive units of the mammalian brain known as the neocortex.

The team are trying to reverse engineer the brain

“It’s a new brain,” he explained. “The mammals needed it because they had to cope with parenthood, social interactions, complex cognitive functions.

“It was so successful an evolution from mouse to man it expanded about a thousand fold in terms of the numbers of units to produce this almost frightening organ.”

And that evolution continues, he said. “It is evolving at an enormous speed.”

Over the last 15 years, Professor Markram and his team have picked apart the structure of the neocortical column.

“It’s a bit like going and cataloguing a bit of the rainforest — how many trees does it have, what shape are the trees, how many of each type of tree do we have, what is the position of the trees,” he said.

“But it is a bit more than cataloguing because you have to describe and discover all the rules of communication, the rules of connectivity.”

The project now has a software model of “tens of thousands” of neurons — each one of which is different — which has allowed them to digitally construct an artificial neocortical column.

Although each neuron is unique, the team has found that the circuitry of different brains shares common patterns.

“Even though your brain may be smaller, bigger, may have different morphologies of neurons — we do actually share the same fabric,” he said.

“And we think this is species specific, which could explain why we can’t communicate across species.”

World view

To make the model come alive, the team feeds the models and a few algorithms into a supercomputer.

“You need one laptop to do all the calculations for one neuron,” he said. “So you need ten thousand laptops.”

The research could give insights into brain disease

Instead, he uses an IBM Blue Gene machine with 10,000 processors.

Simulations have started to give the researchers clues about how the brain works.

For example, they can show the brain a picture — say, of a flower — and follow the electrical activity in the machine.

“You excite the system and it actually creates its own representation,” he said.

Ultimately, the aim would be to extract that representation and project it so that researchers could see directly how a brain perceives the world.

But as well as advancing neuroscience and philosophy, the Blue Brain project has other practical applications.

For example, by pooling all the world’s neuroscience data on animals to create a “Noah’s Ark,” researchers may be able to build animal models.

“We cannot keep on doing animal experiments forever,” said Professor Markram.

It may also give researchers new insights into diseases of the brain.

“There are two billion people on the planet affected by mental disorder,” he told the audience.

The project may give insights into new treatments, he said.

The TED Global conference runs from 21 to 24 July in Oxford, UK.


It will probably come as a surprise to those who are not well acquainted with the life and work of Alan Turing that in addition to his renowned pioneering work in computer science and mathematics, he also helped to lay the groundwork in the field of mathematical biology (1). Why would a renowned mathematician and computer scientist find himself drawn to the biosciences?

Interestingly, it appears that Turing’s fascination with this sub-discipline of biology most probably stemmed from the same source as the one that inspired his better known research: at that time all of these fields of knowledge were in a state of flux and development, and all posed challenging fundamental questions. Furthermore, in each of the three disciplines that engaged his interest, the matters to which he applied his uniquely creative vision were directly connected to central questions underlying these disciplines, and indeed to deeper and broader philosophical questions into the nature of humanity, intelligence and the role played by evolution in shaping who we are and how we shape our world.

Central to Turing’s biological work was his interest in the mechanisms that shape the development of form and pattern in autonomous biological systems, and which underlie the patterns we see in nature (2), from animal coat markings to leaf arrangement patterns on plant stems (phyllotaxis). This topic of research, which he named “morphogenesis” (3), had not previously been studied with modeling tools. This was a knowledge gap that beckoned Turing, particularly as such methods of research came naturally to him.
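
Turing’s insight can be sketched in a few lines of code: two diffusing, reacting chemicals whose interaction spontaneously breaks a uniform field into spots or stripes. Below is a minimal illustration using the well-known Gray-Scott reaction-diffusion system, a later formulation in the same family as Turing’s equations rather than his original model; grid size, step count and rate constants are standard demo values, not drawn from his paper.

```python
import numpy as np

def laplacian(Z):
    # five-point stencil with periodic boundaries
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0)
          + np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

def gray_scott(n=64, steps=2000, Du=0.16, Dv=0.08, f=0.035, k=0.065, seed=0):
    """Evolve two morphogens U and V on an n-by-n grid."""
    rng = np.random.default_rng(seed)
    U = np.ones((n, n)) + 0.02 * rng.standard_normal((n, n))
    V = np.zeros((n, n))
    c = n // 2
    V[c - 4:c + 4, c - 4:c + 4] = 0.5   # seed a small square of morphogen V
    for _ in range(steps):
        uvv = U * V * V                  # reaction term: U + 2V -> 3V
        U += Du * laplacian(U) - uvv + f * (1 - U)
        V += Dv * laplacian(V) + uvv - (f + k) * V
    return U, V

U, V = gray_scott()
# V is no longer uniform: diffusion plus reaction has produced spatial structure
```

Plotting V with any image viewer shows the spotted, coat-marking-like patterns Turing was after.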

In addition to the diverse reasons that attracted him to the field of pattern formation, a major ulterior motive for his research had to do with a contentious subject which, astonishingly, is still highly controversial in some countries to this day. In studying pattern formation he was seeking to help invalidate the “argument from design” (4), a concept we know today as the hypothesis of “Intelligent Design.”

Turing was intent on demonstrating that the laws of physics are sufficient to explain our observations in the natural world; or in other words, that our findings do not need an omnipotent creator to explain them. It is ironic that Turing, whose work played a central role in laying the groundwork for the creation of Artificial Intelligence (AI), took a clear stance against creationism. This is testament to his acceptance of scientific evidence and rigorous research over weak analogy.

Unfortunately, those who did not and will not accept Darwinian natural selection as the mechanism of evolution will not see anything compelling in Turing’s work on morphogenesis. To those individuals, the development of AI can be taken as “proof,” or a convincing analogy, of the necessity and presence of a creator, the argument being that the Creator created humanity, and humanity creates AI.

However, what the supporters of intelligent design do not acknowledge is that natural selection is itself precisely the cause underlying the development of both humanity and its AI progeny. Just as natural selection resulted in the phenomena that Turing sought to model in his work on morphogenesis (which brings about the propagation of successful traits through the development of biological form and pattern), it is also the driver for the development of intelligence. Itself generated via internalized neuronal selection mechanisms (5, 6), intelligence allows organisms to adapt to their environment continually during life.

Intelligence is the ultimate tool, the development of which allows organisms to survive; it enables them to learn, respond to their environment and adapt their behavior within their own lifetime. It is the fruit of the natural process that brings about successive development over time in organisms faced with scarcity of resources. Moreover, it now allows humans to defy generational selection and develop intelligences external to our own, making use of computational techniques, including some which utilize evolutionary mechanisms (7).
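The evolutionary computation alluded to above (7) can be made concrete with a toy genetic algorithm: a population of candidate solutions is repeatedly selected, recombined and mutated, so that fitter variants spread just as advantageous traits do under natural selection. A minimal sketch — the bit-string task and all parameters here are purely illustrative, not taken from the essay:

```python
import random

def evolve(fitness, length=20, pop_size=30, generations=60, mut_rate=0.02, seed=1):
    """Toy genetic algorithm over bit-strings: selection, crossover, mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]                 # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, length)              # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < mut_rate) for bit in child]  # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# "one-max" task: fitness is simply the number of 1-bits in the string
best = evolve(fitness=sum)
```

After a few dozen generations the best individual is close to the all-ones optimum, despite no individual step being anything but random variation plus selection.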

The eventual development of true AI will be a landmark in many ways, notably in that these intelligences will have the ability to alter their own circuits (their version of neurons), immediately and at will. While the human body is capable of some degree of non-developmental neuronal plasticity, this takes place slowly and control of the process is limited to indirect mechanisms (such as varied forms of learning or stimulation). In contrast, the high plasticity and directly controlled design and structure of AI software and hardware will render them well suited to altering themselves and hence to developing improved subsequent AI generations.

In addition to a jump in the degree of plasticity and its control, AIs will constitute a further step forward with regard to the speed at which beneficial information can be shared. In contrast to the exceedingly slow rate at which advantageous evolutionary adaptations were spread through the populations observed by Darwin (over several generations), the rapidly increasing rates of communication in current society result in successful “adaptations” (which we call science and technology) being distributed at ever-increasing speeds. This is, of course, the principal reason why information sharing is beneficial for humans – it allows us to better adapt to reality and harness the environment to our advantage. It seems reasonable to predict that ultimately the sharing of information in AI will be practically instantaneous.

It is difficult to speculate what a combination of such rapid communication and high plasticity combined with ever-increasing processing speeds will be like. The point at which self-improving AIs emerge has been termed a technological singularity (8).

Thus, in summary: evolution begets intelligence (via evolutionary neuronal selection mechanisms); human intelligence begets artificial intelligence (using, among others, evolutionary computation methods), which at increasing cycle speeds, leads to a technological singularity – a further big step up the evolutionary ladder.

Sadly, being considerably ahead of his time and living in an environment that castigated his lifestyle and drove him from his research, meant that Turing did not live to see the full extent of his work’s influence. While he did not survive to an age in which AIs became prevalent, he did fulfill his ambition by taking part in the defeat of argument from design in the scientific community, and witnessed Darwinian natural selection becoming widely accepted. The breadth of his vision, the insight he displayed, and his groundbreaking research clearly place Turing on an equal footing with the most celebrated scientists of the previous century.

The link is:
http://www.msnbc.msn.com/id/31511398/ns/us_news-military/

“The low-key launch of the new military unit reflects the Pentagon’s fear that the military might be seen as taking control over the nation’s computer networks.”

“Creation of the command, said Deputy Defense Secretary William Lynn at a recent meeting of cyber experts, ‘will not represent the militarization of cyberspace.’”

And where is our lifeboat?

An unmanned beast that cruises over any terrain at speeds that leave an M1A Abrams in the dust

Mean Machine: Troops could use the Ripsaw as an advance scout, sending it a mile or two ahead of a convoy, and use its cameras and new sensor technology to sniff out roadside bombs or ambushes John B. Carnett

Today’s featured Invention Award winner really requires no justification–it’s an unmanned, armed tank faster than anything the US Army has. Behold, the Ripsaw.

Cue up the Ripsaw’s greatest hits on YouTube, and you can watch the unmanned tank tear across muddy fields at 60 mph, jump 50 feet, and crush birch trees. But right now, as its remote driver inches it back and forth for a photo shoot, it’s like watching Babe Ruth forced to bunt with the bases loaded. The Ripsaw, lurching and belching black puffs of smoke, somehow seems restless.

Like their creation, identical twins Geoff and Mike Howe, 34, don’t like to sit still for long. At age seven, they built a log cabin. Ten years later, they converted a school bus into a drivable, transforming stage for their heavy-metal band, Two Much Trouble. In 2000 they couldn’t agree on their next project: Geoff favored a jet-turbine-powered off-road truck; Mike, the world’s fastest tracked vehicle. “That weekend, Mike calls me down to his garage,” Geoff says. “He’s already got the suspension built for the Ripsaw. So we went with that.”

Every engineer they consulted said they couldn’t best the 42-mph top speed of an M1A Abrams, the most powerful tank in the world. Other tanks are built to protect the people inside, with frames made of heavy armored-steel plates. Designed for rugged unmanned missions, the Ripsaw just needed to go fast, so the brothers started trimming weight. First they built a frame of welded steel tubes, like the ones used in NASCAR, that provides 50 percent more strength at half the weight.

Ripsaw: How It Works: To glide over rough terrain at top speed, the Ripsaw has shock absorbers that provide 14 inches of travel. But when the suspension compresses, it creates slack that could cause a track to come off, potentially flipping the vehicle. So the inventors devised a spring-loaded wheel at the front that extends to keep the tracks taut. The Ripsaw has never thrown a track Bland Designs

Behind the Wheel: The Ripsaw’s six cameras send live, 360-degree video to a control room, where program manager Will McMaster steers the tank John B. Carnett

When you reinvent the tank, finding ready-made parts is no easy task, and a tread light enough to spin at 60 mph and strong enough to hold together at that speed didn’t exist. So the Howes hand-shaped steel cleats and redesigned the mechanism for connecting them in a track. (Because the patent for the mechanism, one of eight on Ripsaw components, is still pending, they will reveal only that they didn’t use the typical pin-and-bushing system of connecting treads.) The two-pound cleats weigh about 90 percent less than similarly scaled tank cleats. With the combined weight savings, the Ripsaw’s 650-horsepower V8 engine cranks out nine times as much horsepower per pound as an M1A Abrams.

While working their day jobs — Mike as a financial adviser, Geoff as a foreman at a utilities plant — the self-taught engineers hauled the Ripsaw prototype from their workshop in Maine to the 2005 Washington Auto Show, where they showed it to army officials interested in developing weaponized unmanned ground vehicles (UGVs). That led to a demonstration for Maine Senator Susan Collins, who helped the Howes secure $1.25 million from the Department of Defense. The brothers founded Howe and Howe Technologies in 2006 and set to work upgrading various Ripsaw systems, including a differential drive train that automatically doles out the right amount of power to each track for turns. The following year they handed it over to the Army’s Armament Research, Development and Engineering Center (ARDEC), which paired it with a remote-control M240 machine gun and put the entire system through months of strenuous tests. “What really set it apart from other UGVs was its speed,” says Bhavanjot Singh, the ARDEC project manager overseeing the Ripsaw’s development. Other UGVs top out at around 20 mph, but the Ripsaw can keep up with a pack of Humvees.

Over the Hill: Despite the best efforts of inventors Mike [left] and Geoff Howe, the Ripsaw has proven unbreakable. It did once break a suspension mount — and drove on for hours without trouble John B. Carnett

Back on the field, the tank has been readied for the photo. The program manager for Howe and Howe Technologies, Will McMaster, who is sitting at the Ripsaw’s controls around the corner and roughly a football field away, drives it straight over a three-foot-tall concrete wall. The brothers think that when the $760,000 Ripsaw is ready for mass production this summer, feats like this will give them a lead over other companies vying for a military UGV contract. “Every other UGV is small and uses [artificial intelligence] to avoid obstacles,” Mike says. “The Ripsaw doesn’t have to avoid obstacles; it drives over them.”

Singularity Hub

Create an AI on Your Computer

Written on May 28, 2009 – 11:48 am | by Aaron Saenz |

If many hands make light work, then maybe many computers can make an artificial brain. That’s the basic reasoning behind Intelligence Realm’s Artificial Intelligence project. By reverse engineering the brain through a simulation spread out over many different personal computers, Intelligence Realm hopes to create an AI from the ground up, one neuron at a time. The first waves of simulation are already proving successful, with over 14,000 computers used and 740 billion neurons modeled. Singularity Hub managed to snag the project’s leader, Ovidiu Anghelidi, for an interview: see the full text at the end of this article.

The ultimate goal of Intelligence Realm is to create an AI or multiple AIs, and use these intelligences in scientific endeavors. By focusing on the human brain as a prototype, they can create an intelligence that solves problems and “thinks” like a human. This is akin to the work done at FACETS that Singularity Hub highlighted some weeks ago. The largest difference between Intelligence Realm and FACETS is that Intelligence Realm is relying on a purely simulated/software approach.

Which sort of makes Intelligence Realm similar to the Blue Brain Project that Singularity Hub also discussed. Both are computer simulations of neurons in the brain, but Blue Brain’s ultimate goal is to better understand neurological functions, while Intelligence Realm is seeking to eventually create an AI. In either case, to successfully simulate the brain in software alone, you need a lot of computing power. Blue Brain runs off a high-tech supercomputer, a resource that’s pretty much exclusive to that project. Even with that impressive commodity, Blue Brain is hitting the limit of what it can simulate. There’s too much to model for just one computer alone, no matter how powerful. Intelligence Realm is using a distributed computing solution. Where one computer cluster alone may fail, many working together may succeed. Which is why Intelligence Realm is looking for help.

The AI system project is actively recruiting, with more than 6700 volunteers answering the call. Each volunteer runs a small portion of the larger simulation on their computer(s) and then ships the results back to the main server. BOINC, the Berkeley-built distributed computing software that makes it all possible, manages the flow of data back and forth. It’s the same software used for SETI’s distributed computing. Joining the project is pretty simple: you just download BOINC and some other data files, and you’re good to go. You can run the simulation as an application, or as part of your screen saver.
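
The split-compute-merge pattern behind this can be sketched in miniature. The function names and the trivial "neuron update" below are hypothetical stand-ins, not Intelligence Realm’s actual code, and a local thread pool stands in for the volunteers’ machines:

```python
from concurrent.futures import ThreadPoolExecutor

def simulate_chunk(chunk):
    """Stand-in for one volunteer work unit: advance a slice of neurons one
    step. Here each 'neuron' is just a membrane value decaying toward zero."""
    return [0.9 * v for v in chunk]

def run_distributed(neurons, n_workers=4):
    # server side: split the neuron population into work units, one per worker
    size = (len(neurons) + n_workers - 1) // n_workers
    chunks = [neurons[i:i + size] for i in range(0, len(neurons), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = pool.map(simulate_chunk, chunks)   # ship out, compute remotely
    # merge the returned results in order, as the project server would
    return [v for chunk in results for v in chunk]

state = [1.0] * 1000
state = run_distributed(state)   # one simulated timestep across all "volunteers"
```

The real system adds what this sketch omits: redundant assignment of work units, validation of returned results, and neurons whose inputs cross chunk boundaries, which is where most of the engineering effort goes.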

Baby Steps

So, 6700 volunteers, 14,000 or so platforms, 740 billion neurons, but what is the simulated brain actually thinking? Not a lot at the moment. The same is true with the Blue Brain Project, or FACETS. Simulating a complex organ like the brain is a slow process, and the first steps are focused on understanding how the thing actually works. Inputs (Intelligence Realm is using text strings) are converted into neuronal signals, those signals are allowed to interact in the simulation and the end state is converted back to an output. It’s a time and labor (computation) intensive process. Right now, Intelligence Realm is just building towards simple arithmetic.
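
The input-to-output loop described above (text in, neuronal signals, interaction, text out) can be caricatured in a few lines. This is a deliberately trivial stand-in, not the project’s actual encoding:

```python
def encode(text):
    """Toy encoder: each character becomes an 8-bit 'spike train'
    (the bits of its code point)."""
    return [[(ord(ch) >> i) & 1 for i in range(8)] for ch in text]

def step(spike_trains, threshold=4):
    """One network step: a single integrate-and-fire unit per character,
    firing only if at least `threshold` input bits are active."""
    return [int(sum(train) >= threshold) for train in spike_trains]

def decode(fires, text):
    """Read out which characters drove their unit over threshold."""
    return "".join(ch for ch, fired in zip(text, fires) if fired)

text = "a flower"
out = decode(step(encode(text)), text)   # only strongly-driven units survive
```

Even this toy shows the shape of the pipeline: everything interesting happens in the middle stage, which in the real project is billions of interacting simulated neurons rather than one threshold unit per character.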

Which is definitely a baby step, but there are more steps ahead. Intelligence Realm plans on learning how to map numbers to neurons, understanding the kind of patterns of neurons in your brain that represent numbers, and figuring out basic mathematical operators (addition, subtraction, etc). From these humble beginnings, more complex reasoning will emerge. At least, that’s the plan.
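
The "map numbers to neurons" step might look something like the following toy population code, in which each number activates a sparse set of neurons and a superposition of two codes can be read back out. This is entirely illustrative; the project’s real representation is not described in the article.

```python
import numpy as np

def encode_number(n, size=128, active=8, seed_base=1234):
    """Toy place code: each number activates a fixed pseudo-random
    subset of `active` neurons out of `size`."""
    rng = np.random.default_rng(seed_base + n)
    vec = np.zeros(size)
    vec[rng.choice(size, size=active, replace=False)] = 1.0
    return vec

def codebook(max_n=20):
    return {n: encode_number(n) for n in range(max_n + 1)}

def decode_pair(signal, book):
    """Read out the two stored codes that overlap the signal most."""
    ranked = sorted(book, key=lambda n: float(signal @ book[n]), reverse=True)
    return ranked[0], ranked[1]

book = codebook()
signal = book[3] + book[4]        # superposed population activity for 3 and 4
x, y = decode_pair(signal, book)  # recover the operands from the activity
# a downstream stage could then compute x + y symbolically or learn to map
# the superposed pattern directly to the code for the sum
```

Because the codes are sparse, the operands barely interfere and can be recovered by simple overlap, which is roughly why population codes are attractive for this kind of readout.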

Intelligence Realm isn’t just building some sort of biophysical calculator. Their brain is being designed so that it can change and grow, just like a human brain. They’ve focused on simulating all parts of the brain (including the lower reasoning sections) and increasing the plasticity of their model. Right now it’s stumbling towards knowing 1+1 = 2. Even with linear growth they hope that this same stumbling intelligence will evolve into a mental giant. It’s a monumental task, though, and there’s no guarantee it will work. Building artificial intelligence is probably one of the most difficult tasks to undertake, and this early in the game, it’s hard to see if the baby steps will develop into adult strides. The simulation process may not even be the right approach. It’s a valuable experiment for what it can teach us about the brain, but it may never create an AI. A larger question may be, do we want it to?

Knock, Knock…It’s Inevitability

With the newest Terminator movie out, it’s only natural to start worrying about the dangers of artificial intelligence again. Why build these things if they’re just going to hunt down Christian Bale? For many, the threats of artificial intelligence make it seem like an effort of self-destructive curiosity. After all, from Shelley’s Frankenstein Monster to Adam and Eve, Western civilization seems to believe that creations always end up turning on their creators.

AI, however, promises rewards as well as threats. Problems in chemistry, biology, physics, economics, engineering, and astronomy, even questions of philosophy could all be helped by the application of an advanced AI. What’s more, as we seek to upgrade ourselves through cybernetics and genetic engineering, we will become more artificial. In the end, the line between artificial and natural intelligence may be blurred to a point that AIs will seem like our equals, not our eventual oppressors. However, that’s not a path that everyone will necessarily want to walk down.

Will AI and Humans learn to co-exist?

The nature of distributed computing and BOINC allows you to effectively vote on whether or not this project will succeed. Intelligence Realm will eventually need hundreds of thousands, if not millions, of computing platforms to run its simulations. If you believe that AI deserves a chance to exist, give them a hand and recruit others. If you think we’re building our own destroyers, then don’t run the program. In the end, the success or failure of this project may very well depend on how many volunteers are willing to serve as midwives to a new form of intelligence.

Before you make your decision though, make sure to read the following interview. As project leader, Ovidiu Anghelidi is one of the driving minds behind reverse engineering the brain and developing the eventual AI that Intelligence Realm hopes to build. He didn’t mean for this to be a recruiting speech, but he makes some good points:

SH: Hello. Could you please start by giving yourself and your project a brief introduction?

OA: Hi. My name is Ovidiu Anghelidi and I am working on a distributed computing project involving thousands of computers in the field of artificial intelligence. Our goal is to develop a system that can perform automated research.

What drew you to this project?

During my adolescence I tried to understand the nature of questions; I used questions extensively as a learning tool. That drove me to search for better methods of understanding. After looking at all kinds of methods, I felt that understanding creativity was a worthier pursuit. Applying various methods of learning and understanding is a fine job, but finding outstanding solutions requires much more than that. For a short while I tried to understand how creativity works and what exactly it is. I found that there is not much work done on this subject, mainly because it is an overlapping concept. The search for creativity led me to the field of AI. Because one of the past presidents of the American Association for Artificial Intelligence dedicated an entire issue to this subject, I started pursuing that direction. I looked into the field of artificial intelligence for a couple of years, and at some point I was reading more and more papers that touched on cognition and the brain, so I looked briefly into neuroscience. After I read an introductory book on neuroscience, I realized that understanding brain mechanisms is what I should have been doing all along, for the past 20 years. To this day I am pursuing this direction.

What’s your time table for success? How long till we have a distributed AI running around using your system?

I have been working on this project for about 3 years now, and I estimate that we will need another 7–8 years to finalize it. Nonetheless, we do not need that much time to be able to use some of its features; I expect to have some basic features working within a couple of months. Take, for example, the multiple-simulations feature. If we want to pursue various directions in different fields (i.e. mathematics, biology, physics) we will need to set up a simulation for each field. But we do not need to reach the end of the project to be able to run single simulations.

Do you think that Artificial Intelligence is a necessary step in the evolution of intelligence? If not, why pursue it? If so, does it have to happen at a given time?

I wouldn’t say necessary, because we don’t know what we are evolving towards. As long as we do not have the full picture from beginning to end, or cases from other species to compare our history to, we shouldn’t just assume that it is necessary.

We should pursue it with all our strength and understanding, because soon enough it can give us a lot of answers about ourselves and this Universe. By soon I mean two or three decades, a very short time span indeed. Artificial intelligence will amplify our research efforts across all disciplines by a couple of orders of magnitude.

In our case it is a natural extension. When any species reaches a certain level of intelligence, at some point it starts replicating and extending its natural capacities in order to control its environment. The human race has done that for the last couple of thousand years: we tried to replicate and extend our capacity to run, see, smell and touch, and now we have reached thinking. We invented vehicles, television sets and other devices, and we are now close to having artificial intelligence.

What do you think are important short term and long term consequences of this project?

We hope that in the short term we will create some awareness of the benefits of artificial intelligence technology. The longer term is hard to foresee.

How do you see Intelligence Realm interacting with more traditional research institutions? (Universities, peer reviewed Journals, etc)

Well… we will not be able to provide full details about the entire project, because we are pursuing a business model so that we can support the project in the future, so there is little chance of a collaboration with a university or other research institution. Down the road, as we reach an advanced stage of development, we will probably forge some collaborations. For the time being this doesn’t appear feasible. I am open to collaborations, but I can’t see how that would happen.

I submitted some papers to a couple of journals in the past, but I usually receive suggestions that I should look at other journals, from other fields. Most of the work in artificial intelligence doesn’t have neuroscience elements, and the work in neuroscience contains little or no artificial intelligence. Anyway, I need no recognition.

Why should someone join your project? Why is this work important?

If someone is interested in artificial intelligence, it might give them a different view of the subject and show them what components are being developed over time. I cannot tell how important this is for someone else. On a personal level, I can say that my work is important to me, and because an AI system will let me get answers to many questions, I am working toward it. Artificial intelligence will provide exceptional benefits to the entire society.

What should someone do who is interested in joining the simulation? What can someone do if they can’t participate directly? (Is there a “write-your-congressman” sort of task they could help you with?)

If someone is interested in joining the project, they need to download the BOINC client from http://boinc.berkeley.edu and then attach to the project using its master URL, http://www.intelligencerealm.com/aisystem. We appreciate the support received from thousands of volunteers all over the world.

If someone can’t participate directly, I suggest that they keep an open mind about what AI is and how it can benefit them. They should also try to understand its pitfalls.

There is no write-your-congressman type of task. Mass education is key for AI success. This project doesn’t need to be in the spotlight.

What is the latest news?

We reached 14,000 computers and we simulated over 740 billion neurons. We are working on implementing a basic hippocampal model for learning and memory.

Anything else you want to tell us?

If someone considers the development of artificial intelligence impossible, or too far in the future to care about, I can only tell him or her: “Embrace the inevitable.” The advances in the field of neuroscience are coming rapidly. Scientists are thorough.

Understanding its benefits and pitfalls is all that is needed.

Thank you for your time and we look forward to covering Intelligence Realm as it develops further.

Thank you for having me.