
Jun 5, 2010

Friendly AI: What is it, and how can we foster it?

Posted by in categories: complex systems, ethics, existential risks, futurism, information science, policy, robotics/AI

Friendly AI: What is it, and how can we foster it?
By Frank W. Sudia [1]

Originally written July 20, 2008
Edited and web published June 6, 2009
Copyright © 2008-09, All Rights Reserved.

Keywords: artificial intelligence, artificial intellect, friendly AI, human-robot ethics, science policy.

1. Introduction

Continue reading “Friendly AI: What is it, and how can we foster it?” »

Jun 2, 2010

New Terrorism: Five days in Manhattan

Posted by in categories: counterterrorism, cybercrime/malcode, defense, finance

Originally posted @ Perspective Intelligence

Two events centered on New York City, separated by five days, demonstrated the end of one phase of terrorism and the pending arrival of the next. The failed car bombing in Times Square and the dizzying stock market crash less than a week later mark the bookends of terrorist eras.

The attempt by Faisal Shahzad to detonate a car bomb in Times Square was notable not just for its failure but also for the severely limited systemic impact a car bomb could have, even when exploding in a crowded urban center. Car bombs, or vehicle-borne IEDs (VBIEDs), have a long history (incidentally, one of the first was the 1920 “cart and horse bomb” on Wall Street, which killed 38 people). VBIEDs remain deadly as a tactic within an insurgency or warfare setting, but with regard to modern urban terrorism the world has moved on. We are now living within a highly virtualized system, and the dizzying stock market crash of May 6, 2010 shows how vulnerable this system is to digital failure. While the NYSE building probably remains a symbolic target for some terrorists, a deadly and capable adversary would ignore this physical manifestation of the financial system and instead disrupt the data centers, software, and routers that make the global financial system tick. Shahzad’s attempted car bomb was from another age and posed no overarching risk to Western societies. The same cannot be said of the vulnerable and highly unstable financial system.

Computer aided crash (proof of concept for future cyber-attack)

Continue reading “New Terrorism: Five days in Manhattan” »

May 31, 2010

A Space Elevator in 7

Posted by in categories: nanotechnology, space

I am a former Microsoft programmer who wrote a book (for a general audience) about the future of software called After the Software Wars. Eric Klien has invited me to post on this blog (see Software and the Singularity and AI and Driverless cars). Here are the sections on the Space Elevator. I hope you find these pages food for thought, and I appreciate any feedback.


A Space Elevator in 7

Midnight, July 20, 1969; a chiaroscuro of harsh contrasts appears on the television screen. One of the shadows moves. It is the leg of astronaut Edwin Aldrin, photographed by Neil Armstrong. Men are walking on the moon. We watch spellbound. The earth watches. Seven hundred million people are riveted to their radios and television screens on that July night in 1969. What can you do with the moon? No one knew. Still, a feeling in the gut told us that this was the greatest moment in the history of life. We were leaving the planet. Our feet had stirred the dust of an alien world.

—Robert Jastrow, Journey to the Stars

Management is doing things right; leadership is doing the right things!

Continue reading “A Space Elevator in 7” »

May 27, 2010

AI and Driverless cars

Posted by in categories: human trajectories, open source, robotics/AI

I am a former Microsoft programmer who wrote a book (for a general audience) about the future of software called After the Software Wars. Eric Klien has invited me to post on this blog. Here are several more sections on AI topics. I hope you find these pages food for thought and I appreciate any feedback.


The future is open source everything.

—Linus Torvalds

That knowledge has become the resource, rather than a resource, is what makes our society post-capitalist.

Continue reading “AI and Driverless cars” »

May 2, 2010

Nuclear Winter and Fire and Reducing Fire Risks to Cities

Posted by in categories: defense, existential risks, lifeboat, military, nuclear weapons

This is a crosspost from Nextbigfuture

I looked at nuclear winter and city firestorms a few months ago; I will summarize the case I made then in the next section. There are significant additions based on my further research and on email exchanges with Prof. Alan Robock and Prof. Brian Toon, who wrote the nuclear winter research.

The steps needed to prove nuclear winter:
1. Prove that enough cities will have firestorms or big enough fires (the claim here is that this does not happen).
2. Prove that when enough cities in a sufficient area have big enough fires, enough smoke and soot gets into the stratosphere (this claim is in trouble because of the Kuwait fires).
3. Prove that the condition persists and affects climate as the models predict (others have questioned this, but the issue is not addressed here).

The nuclear winter case is predicated on getting 150 million tons (the 150-teragram case) of soot and smoke into the stratosphere and having it stay there. The assumption seemed to be that cities will be targeted and will burn in massive firestorms. Alan Robock indicated that they only included fire based on the radius of ignition from the atmospheric blasts. However, in the Scientific American article and in their 2007 paper the stated assumptions are:

Continue reading “Nuclear Winter and Fire and Reducing Fire Risks to Cities” »

Apr 21, 2010

Software and the Singularity

Posted by in categories: futurism, robotics/AI

I am a former Microsoft programmer who wrote a book (for a general audience) about the future of software called After the Software Wars. Eric Klien has invited me to post on this blog. Here is my section entitled “Software and the Singularity”. I hope you find this food for thought and I appreciate any feedback.


Futurists talk about the “Singularity”, the time when computational capacity will surpass the capacity of human intelligence. Ray Kurzweil predicts it will happen in 2045. Therefore, according to its proponents, the world will be amazing then. The flaw with such a date estimate, other than the fact that such estimates are always prone to extreme error, is that continuous learning is not yet a part of the foundation. Any AI code lives in the fringes of the software stack and is either proprietary or written by small teams of programmers.

I believe the benefits inherent in the singularity will happen as soon as our software becomes “smart”, and we don’t need to wait for any further Moore’s law progress for that to happen. Computers today can do billions of operations per second, such as adding 123,456,789 and 987,654,321. If you could do one such calculation in your head per second, it would take you roughly 30 years to do the billion that your computer can do in that second.
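The back-of-envelope arithmetic above is easy to check. The sketch below uses the figures from the text (one billion operations, one human calculation per second); the variable names are illustrative:

```python
# Check the claim: a billion one-per-second calculations takes roughly 30 years.
OPS_PER_SECOND = 1_000_000_000  # a computer's simple additions per second, per the text

seconds_per_year = 60 * 60 * 24 * 365
years_at_one_op_per_second = OPS_PER_SECOND / seconds_per_year
print(f"{years_at_one_op_per_second:.1f} years")  # about 31.7 years
```

So “30 years” is, if anything, a slight understatement.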

Even if you don’t think computers have the necessary hardware horsepower today, understand that in many scenarios the size of the input is the primary driving factor in the processing power required to do the analysis. In image recognition, for example, the amount of work required to interpret an image is mostly a function of the size of the image. Each step in the image recognition pipeline, and the processes that take place in our brain, dramatically reduce the amount of data from the previous step. At the beginning of the analysis might be a one-million-pixel image, requiring 3 million bytes of memory. At the end of the analysis is the conclusion that you are looking at your house, a concept that requires only tens of bytes to represent. The first step, working on the raw image, requires the most processing power, so it is the image resolution (and frame rate) that set the requirements, values that are trivial to change. No one has shown robust vision recognition software running at any speed, on any sized image!
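The data reduction described here can be made concrete. In the sketch below, the stage names and intermediate byte counts are hypothetical placeholders; only the endpoints (a 3-million-byte raw image down to tens of bytes for a recognized concept) come from the text:

```python
# Illustrative data sizes flowing through a hypothetical recognition pipeline.
pipeline = [
    ("raw 1-megapixel RGB image",          3_000_000),
    ("edge/feature map",                     300_000),
    ("candidate object regions",              10_000),
    ("recognized concept (e.g. 'house')",         20),
]

for stage, size in pipeline:
    print(f"{stage:36s} ~{size:>9,} bytes")

reduction = pipeline[0][1] / pipeline[-1][1]
print(f"overall reduction: ~{reduction:,.0f}x")
```

Each stage discards orders of magnitude of data, which is why the first stage dominates the processing cost.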

Continue reading “Software and the Singularity” »

Apr 18, 2010

Ray Kurzweil to keynote “H+ Summit @ Harvard — The Rise Of The Citizen Scientist”

Posted by in categories: biological, biotech/medical, business, complex systems, education, events, existential risks, futurism, geopolitics, human trajectories, information science, media & arts, neuroscience, robotics/AI

With our growing resources, the Lifeboat Foundation has teamed with the Singularity Hub as Media Sponsors for the 2010 Humanity+ Summit. If you have suggestions on future events that we should sponsor, please contact [email protected].

Following the inaugural conference in Los Angeles in December 2009, the summer 2010 “Humanity+ @ Harvard — The Rise Of The Citizen Scientist” conference is being held on the East Coast, at Harvard University’s prestigious Science Hall on June 12–13. Futurist, inventor, and author of the NYT bestselling book “The Singularity Is Near”, Ray Kurzweil will be the keynote speaker of the conference.

Also speaking at the H+ Summit @ Harvard is Aubrey de Grey, a biomedical gerontologist based in Cambridge, UK, and Chief Science Officer of the SENS Foundation, a California-based charity dedicated to combating the aging process. His talk, “Hype and anti-hype in academic biogerontology research: a call to action”, will analyze the interplay of over-pessimistic and over-optimistic positions with regard to the research and development of cures, and propose solutions to alleviate the negative effects of both.

Continue reading “Ray Kurzweil to keynote "H+ Summit @ Harvard — The Rise Of The Citizen Scientist"” »

Apr 14, 2010

Technology Readiness Levels for Non-rocket Space Launch

Posted by in categories: asteroid/comet impacts, engineering, habitats, human trajectories, space

An obvious next step in the effort to dramatically lower the cost of access to low Earth orbit is to explore non-rocket options. A wide variety of ideas have been proposed, but it’s difficult to meaningfully compare them and to get a sense of what’s actually on the technology horizon. The best way to quantitatively assess these technologies is by using Technology Readiness Levels (TRLs). TRLs are used by NASA, the United States military, and many other agencies and companies worldwide. Typically there are nine levels, ranging from speculations on basic principles to full flight-tested status.

The system NASA uses can be summed up as follows:

TRL 1 Basic principles observed and reported
TRL 2 Technology concept and/or application formulated
TRL 3 Analytical and experimental critical function and/or characteristic proof-of-concept
TRL 4 Component and/or breadboard validation in laboratory environment
TRL 5 Component and/or breadboard validation in relevant environment
TRL 6 System/subsystem model or prototype demonstration in a relevant environment (ground or space)
TRL 7 System prototype demonstration in a space environment
TRL 8 Actual system completed and “flight qualified” through test and demonstration (ground or space)
TRL 9 Actual system “flight proven” through successful mission operations.
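The nine-level scale above is simple enough to encode as a lookup table for tagging proposed launch concepts. In this sketch the concept name and level in the usage line are hypothetical placeholders, not an actual assessment from this post:

```python
# NASA's nine Technology Readiness Levels as a lookup table.
TRL = {
    1: "Basic principles observed and reported",
    2: "Technology concept and/or application formulated",
    3: "Analytical and experimental critical function proof-of-concept",
    4: "Component and/or breadboard validation in laboratory environment",
    5: "Component and/or breadboard validation in relevant environment",
    6: "System/subsystem model or prototype demonstration in a relevant environment",
    7: "System prototype demonstration in a space environment",
    8: "Actual system flight qualified through test and demonstration",
    9: "Actual system flight proven through successful mission operations",
}

def describe(concept: str, level: int) -> str:
    """Format a one-line readiness summary for a launch concept."""
    return f"{concept}: TRL {level} - {TRL[level]}"

# Hypothetical example, not an assessment from the post:
print(describe("hypothetical tether concept", 2))
```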

Progress towards achieving a non-rocket space launch will be facilitated by popular understanding of each of these proposed technologies and their readiness level. This can serve to coordinate more work into those methods that are the most promising. I think it is important to distinguish between options with acceleration levels within the range of human safety and those that would be useful only for cargo. Below I have listed some non-rocket space launch methods and my assessment of their technology readiness levels.

Continue reading “Technology Readiness Levels for Non-rocket Space Launch” »

Apr 3, 2010

Natural selection of universes and risks for the parent civilization

Posted by in category: existential risks

Lee Smolin is said to believe (according to a personal communication from Danila Medvedev, who was told about it by John Smart; I tried to reach Smolin for comment, but failed) that global catastrophe is impossible, based on the following reasoning: the multiverse is dominated by those universes that are able to replicate. This self-replication occurs in black holes, and especially in those black holes which are created by civilizations. Thus, the parameters of the universe are selected so that civilizations cannot self-destruct before they create black holes. As a result, all physical processes in which a civilization may self-destruct are closed off or highly unlikely. An early version of Smolin’s argument is here: http://en.wikipedia.org/wiki/Lee_Smolin but this early version was refuted in 2004, and so he (probably) added the existence of civilizations as another condition for cosmic natural selection. Anyway, even if this is not Smolin’s real line of thought, it is quite a possible line of thought.

I think this argument is not persuasive, since selection can operate both in the direction of universes with more viable civilizations and in the direction of universes with a larger number of civilizations, just as biological evolution works toward more robust offspring in some species (mammals) and toward larger numbers of offspring with lower viability in others (plants, for example, the dandelion). Since some parameters for the development of civilizations are extremely difficult to adjust through the basic laws of nature (for example, the chances of nuclear war or a hostile AI), but it is easy to adjust the number of emerging civilizations, it seems to me that the universes, if they replicate with the help of civilizations, will use the strategy of dandelions rather than the strategy of mammals. So they will create many unstable civilizations, and we are most likely one of them (the self-indication assumption also supports this – see the recent post by Katja Grace: http://meteuphoric.wordpress.com/2010/03/23/sia-doomsday-the-filter-is-ahead/)

But still, some pressure could exist for the preservation of civilization. Namely, if an atomic bomb were as easy to create as dynamite – much easier than on Earth (which depends on the quantity of uranium and its chemical and nuclear properties, i.e., is determined by the original basic laws of the universe) – then the chances of the average civilization’s survival would be lower. If Smolin’s hypothesis is correct, then we should encounter insurmountable difficulties in creating nano-robots, in the microelectronics needed for strong AI, in harmful accelerator experiments with strangelets (except those that lead to the creation of black holes and new universes), and in several other potentially dangerous technology trends whose success depends on the basic properties of the universe, which may manifest themselves in the peculiarities of its chemistry.

In addition, Smolin’s evolution of universes implies that a civilization should create a black hole as early as possible in the course of its history, leading to the replication of universes, because the later it happens, the greater the chances that the civilization will self-destruct before it can create black holes. In addition, a civilization is not required to survive after the moment of “replication” (though survival may be useful for replication, if the civilization creates many black holes during its long existence). From these two points, it follows that we may underestimate the risks of black hole creation at the Hadron Collider.

Continue reading “Natural selection of universes and risks for the parent civilization” »

Apr 2, 2010

Technological Singularity and Acceleration Studies: Call for Papers

Posted by in category: futurism

8th European conference on Computing And Philosophy — ECAP 2010
Technische Universität München
4–6 October 2010

Submission deadline of extended abstracts: 7 May 2010
Submission form

Theme

Historical analysis of a broad range of paradigm shifts in science, biology, history, technology, and in particular in computing technology, suggests an accelerating rate of evolution, however measured. John von Neumann projected that the consequence of this trend may be an “essential singularity in the history of the race beyond which human affairs as we know them could not continue”. This notion of singularity coincides in time and nature with Alan Turing’s (1950) and Stephen Hawking’s (1998) expectation of machines exhibiting intelligence on a par with the average human no later than 2050. Irving John Good (1965) and Vernor Vinge (1993) expect the singularity to take the form of an ‘intelligence explosion’, a process in which intelligent machines design ever more intelligent machines. Transhumanists suggest a parallel or alternative, explosive process of improvements in human intelligence. And Alvin Toffler’s Third Wave (1980) forecasts “a collision point in human destiny” the scale of which, in the course of history, is on a par only with the agricultural revolution and the industrial revolution.

We invite submissions describing systematic attempts at understanding the likelihood and nature of these projections. In particular, we welcome papers critically analyzing the following issues from philosophical, computational, mathematical, scientific, and ethical standpoints:

  • Claims of and evidence for acceleration
  • Technological predictions (critical analysis of past and future)
  • The nature of an intelligence explosion and its possible outcomes
  • The nature of the Technological Singularity and its outcome
  • Safe and unsafe artificial general intelligence and preventative measures
  • Technological forecasts of computing phenomena and their projected impact
  • Beyond the ‘event horizon’ of the Technological Singularity
  • The prospects of transhuman breakthroughs and likely timeframes

Amnon H. Eden, School of Computer Science & Electronic Engineering, University of Essex, UK and Center For Inquiry, Amherst NY