
May 2, 2010

Nuclear Winter and Fire and Reducing Fire Risks to Cities

Posted in categories: defense, existential risks, lifeboat, military, nuclear weapons

This is a crosspost from Nextbigfuture

I looked at nuclear winter and city firestorms a few months ago; I will summarize the case I made then in the next section. There are significant additions based on my further research and on email exchanges with Prof. Alan Robock and Prof. Brian Toon, authors of the nuclear winter research.

The steps needed to prove nuclear winter:
1. Prove that enough cities will have firestorms or big enough fires (the claim here is that this does not happen).
2. Prove that when enough cities over a sufficient area have big fires, enough smoke and soot gets into the stratosphere (this claim is in trouble because of the Kuwait fires).
3. Prove that the condition persists and affects climate as the models predict (others have questioned this, but that issue is not addressed here).

The nuclear winter case is predicated on getting 150 million tons (the 150-teragram case) of soot and smoke into the stratosphere and having it stay there. The assumption seemed to be that cities would be targeted and would burn in massive firestorms. Alan Robock indicated that they only included fire based on the radius of ignition from the atmospheric blasts. However, in the Scientific American article and in their 2007 paper, the stated assumptions are:

Continue reading “Nuclear Winter and Fire and Reducing Fire Risks to Cities” »

Apr 21, 2010

Software and the Singularity

Posted in categories: futurism, robotics/AI

I am a former Microsoft programmer who wrote a book (for a general audience) about the future of software called After the Software Wars. Eric Klien has invited me to post on this blog. Here is my section entitled “Software and the Singularity”. I hope you find this food for thought and I appreciate any feedback.


Futurists talk about the “Singularity”, the time when computational capacity will surpass the capacity of human intelligence. Ray Kurzweil predicts it will happen in 2045, and according to its proponents the world will be amazing then. The flaw with such a date estimate, beyond the fact that estimates like it are always prone to extreme error, is that continuous learning is not yet part of the foundation. Any AI code lives in the fringes of the software stack and is either proprietary or written by small teams of programmers.

I believe the benefits inherent in the Singularity will happen as soon as our software becomes “smart”, and we don’t need to wait for any further Moore’s Law progress for that to happen. Computers today can do billions of operations per second, like adding 123,456,789 and 987,654,321. If you could do one such calculation in your head per second, it would take you about 30 years to do the billion that your computer can do in that one second.
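A quick back-of-the-envelope check of that figure (a minimal sketch in Python; the numbers are the ones from the paragraph above):

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365   # 31,536,000 seconds

computer_ops = 10**9   # one billion additions: about one second of computer time
human_rate = 1         # one addition per second in your head

years = computer_ops / human_rate / SECONDS_PER_YEAR
print(f"{years:.1f} years")   # ~31.7 years, i.e. roughly 30 years
```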

Even if you don’t think computers have the necessary hardware horsepower today, understand that in many scenarios the size of the input is the primary driver of the processing power required for the analysis. In image recognition, for example, the amount of work required to interpret an image is mostly a function of the size of the image. Each step in the image recognition pipeline, like the processes that take place in our brain, dramatically reduces the amount of data passed on from the previous step. At the beginning of the analysis might be a one-million-pixel image, requiring 3 million bytes of memory. At the end of the analysis is the conclusion that you are looking at your house, a concept that requires only tens of bytes to represent. The first step, working on the raw image, requires the most processing power, so it is the image resolution (and frame rate) that sets the requirements, and those are values that are trivial to change. Yet no one has shown robust vision recognition software running at any speed, on any size of image!
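To make the data-reduction argument concrete, here is a minimal sketch; only the first figure (a one-megapixel RGB image at 3 bytes per pixel) comes from the text, and the later stages and their byte counts are illustrative assumptions, not a real vision pipeline:

```python
# Hypothetical data volume at each stage of a recognition pipeline.
# Each stage discards most of the data produced by the one before it.
pipeline = [
    ("raw RGB image (1 Mpixel x 3 bytes)", 3_000_000),
    ("edge / feature map",                   300_000),
    ("detected shapes and regions",            5_000),
    ("candidate object labels",                  200),
    ("final concept: 'your house'",               20),
]

for stage, nbytes in pipeline:
    print(f"{stage:40s} {nbytes:>10,} bytes")
```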

Continue reading “Software and the Singularity” »

Apr 18, 2010

Ray Kurzweil to keynote “H+ Summit @ Harvard — The Rise Of The Citizen Scientist”

Posted in categories: biological, biotech/medical, business, complex systems, education, events, existential risks, futurism, geopolitics, human trajectories, information science, media & arts, neuroscience, robotics/AI

With our growing resources, the Lifeboat Foundation has teamed with the Singularity Hub as Media Sponsors for the 2010 Humanity+ Summit. If you have suggestions on future events that we should sponsor, please contact [email protected].

After the inaugural conference in Los Angeles in December 2009, the summer 2010 “Humanity+ @ Harvard — The Rise Of The Citizen Scientist” conference is being held on the East Coast, at Harvard University’s prestigious Science Hall on June 12–13. Ray Kurzweil, futurist, inventor, and author of the NYT bestselling book “The Singularity Is Near”, will be the keynote speaker of the conference.

Also speaking at the H+ Summit @ Harvard is Aubrey de Grey, a biomedical gerontologist based in Cambridge, UK, and Chief Science Officer of SENS Foundation, a California-based charity dedicated to combating the aging process. His talk, “Hype and anti-hype in academic biogerontology research: a call to action”, will analyze the interplay of over-pessimistic and over-optimistic positions with regard to research and development of cures, and propose solutions to alleviate the negative effects of both.

Continue reading “Ray Kurzweil to keynote "H+ Summit @ Harvard — The Rise Of The Citizen Scientist"” »

Apr 14, 2010

Technology Readiness Levels for Non-rocket Space Launch

Posted in categories: asteroid/comet impacts, engineering, habitats, human trajectories, space

An obvious next step in the effort to dramatically lower the cost of access to low Earth orbit is to explore non-rocket options. A wide variety of ideas have been proposed, but it’s difficult to meaningfully compare them and to get a sense of what’s actually on the technology horizon. The best way to quantitatively assess these technologies is by using Technology Readiness Levels (TRLs). TRLs are used by NASA, the United States military, and many other agencies and companies worldwide. Typically there are nine levels, ranging from speculations on basic principles to full flight-tested status.

The system NASA uses can be summed up as follows:

TRL 1 Basic principles observed and reported
TRL 2 Technology concept and/or application formulated
TRL 3 Analytical and experimental critical function and/or characteristic proof-of-concept
TRL 4 Component and/or breadboard validation in laboratory environment
TRL 5 Component and/or breadboard validation in relevant environment
TRL 6 System/subsystem model or prototype demonstration in a relevant environment (ground or space)
TRL 7 System prototype demonstration in a space environment
TRL 8 Actual system completed and “flight qualified” through test and demonstration (ground or space)
TRL 9 Actual system “flight proven” through successful mission operations.

Progress towards achieving non-rocket space launch will be facilitated by popular understanding of each of these proposed technologies and their readiness levels. This can help direct more work toward the most promising methods. I think it is important to distinguish between options with acceleration levels within the range of human safety and those that would be useful only for cargo. Below I have listed some non-rocket space launch methods and my assessment of their technology readiness levels.
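For readers who want to track such assessments in code, here is a minimal sketch encoding the NASA scale summarized above; the enum and the sample assessment at the end are my own illustration, not part of the original list:

```python
from enum import IntEnum

class TRL(IntEnum):
    """NASA Technology Readiness Levels, as summarized above."""
    BASIC_PRINCIPLES   = 1  # basic principles observed and reported
    CONCEPT_FORMULATED = 2  # technology concept and/or application formulated
    PROOF_OF_CONCEPT   = 3  # analytical and experimental proof-of-concept
    LAB_VALIDATION     = 4  # component/breadboard validation in a laboratory
    RELEVANT_ENV       = 5  # component/breadboard validation in a relevant environment
    PROTOTYPE_DEMO     = 6  # system/subsystem prototype in a relevant environment
    SPACE_PROTOTYPE    = 7  # system prototype demonstrated in a space environment
    FLIGHT_QUALIFIED   = 8  # actual system completed and flight qualified
    FLIGHT_PROVEN      = 9  # actual system flight proven in mission operations

# Hypothetical usage: map a launch concept to an assessed level.
assessments = {"hypothetical tether concept": TRL.CONCEPT_FORMULATED}
```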

Continue reading “Technology Readiness Levels for Non-rocket Space Launch” »

Apr 3, 2010

Natural selection of universes and risks for the parent civilization

Posted in category: existential risks

Lee Smolin is said to believe (according to a personal communication from Danila Medvedev, who was told about it by John Smart; I tried to reach Smolin for comment, but failed) that global catastrophe is impossible, based on the following reasoning: the multiverse is dominated by those universes that are able to replicate. This self-replication occurs in black holes, and especially in those black holes that are created by civilizations. Thus the parameters of the universe are selected so that civilizations cannot self-destruct before they create black holes. As a result, all physical processes in which a civilization might self-destruct are closed off or highly unlikely. An early version of Smolin’s argument is here: http://en.wikipedia.org/wiki/Lee_Smolin, but that early version was refuted in 2004, so he (probably) added the existence of civilizations as another condition for cosmological natural selection. In any case, even if this is not Smolin’s actual line of thought, it is a quite possible line of thought.

I think this argument is not persuasive, since selection can operate both in the direction of universes with more viable civilizations and in the direction of universes with a larger number of civilizations, just as biological evolution works toward more robust offspring in some species (mammals) and toward a larger number of offspring with lower viability in others (plants, for example the dandelion). Since some parameters governing the development of civilizations are extremely difficult to adjust through the basic laws of nature (for example, the chances of nuclear war or of a hostile AI), while the number of emerging civilizations is easy to adjust, it seems to me that if universes replicate with the help of civilizations, they will use the strategy of the dandelion, not the strategy of the mammal. So the process would create many unstable civilizations, and we are most likely one of them (the self-indication assumption also supports this conclusion; see the recent post by Katja Grace: http://meteuphoric.wordpress.com/2010/03/23/sia-doomsday-the-filter-is-ahead/).

But some pressure toward the preservation of civilization could still exist. Namely, if an atomic bomb were as easy to create as dynamite (much easier than on Earth, where the difficulty depends on the quantity of uranium and on its chemical and nuclear properties, i.e., is determined by the basic laws of the universe), then the average chances of a civilization’s survival would be lower. If Smolin’s hypothesis is correct, then we should encounter insurmountable difficulties in creating nano-robots, the microelectronics needed for strong AI, and harmful accelerator experiments with strangelets (except those that lead to the creation of black holes and new universes), and in several other potentially dangerous technology trends whose success depends on the basic properties of the universe, which may manifest themselves in the peculiarities of its chemistry.

In addition, Smolin’s evolution of universes implies that a civilization should create black holes as early as possible in its history, because the later this happens, the greater the chance that the civilization will self-destruct before it can create them. Also, the civilization is not required to survive after the moment of “replication” (though survival may be useful for replication, if the civilization creates many black holes over a long existence). From these two points it follows that we may be underestimating the risks of black hole creation at the Hadron Collider.

Continue reading “Natural selection of universes and risks for the parent civilization” »

Apr 2, 2010

Technological Singularity and Acceleration Studies: Call for Papers

Posted in category: futurism

8th European conference on Computing And Philosophy — ECAP 2010
Technische Universität München
4–6 October 2010

Submission deadline for extended abstracts: 7 May 2010
Submission form

Theme

Historical analysis of a broad range of paradigm shifts in science, biology, history, technology, and in particular in computing technology, suggests an accelerating rate of evolution, however measured. John von Neumann projected that the consequence of this trend may be an “essential singularity in the history of the race beyond which human affairs as we know them could not continue”. This notion of singularity coincides in time and nature with Alan Turing’s (1950) and Stephen Hawking’s (1998) expectation of machines that exhibit intelligence on a par with the average human no later than 2050. Irving John Good (1965) and Vernor Vinge (1993) expect the singularity to take the form of an ‘intelligence explosion’, a process in which intelligent machines design ever more intelligent machines. Transhumanists suggest a parallel or alternative, explosive process of improvements in human intelligence. And Alvin Toffler’s Third Wave (1980) forecasts “a collision point in human destiny” whose scale, in the course of history, is on a par only with the agricultural revolution and the industrial revolution.

We invite submissions describing systematic attempts at understanding the likelihood and nature of these projections. In particular, we welcome papers critically analyzing the following issues from philosophical, computational, mathematical, scientific, and ethical standpoints:

  • Claims and evidence for acceleration
  • Technological predictions (critical analysis of past and future)
  • The nature of an intelligence explosion and its possible outcomes
  • The nature of the Technological Singularity and its outcome
  • Safe and unsafe artificial general intelligence and preventative measures
  • Technological forecasts of computing phenomena and their projected impact
  • Beyond the ‘event horizon’ of the Technological Singularity
  • The prospects of transhuman breakthroughs and likely timeframes

Amnon H. Eden, School of Computer Science & Electronic Engineering, University of Essex, UK and Center For Inquiry, Amherst NY

Mar 27, 2010

Critical Request to CERN Council and Member States on LHC Risks

Posted in categories: complex systems, cosmology, engineering, ethics, existential risks, particle physics, policy

Experts regard safety report on Big Bang Machine as insufficient and one-dimensional

International critics of the high energy experiments planned to start soon at the particle accelerator LHC at CERN in Geneva have submitted a request to the Ministers of Science of the CERN member states and to the delegates to the CERN Council, the supreme controlling body of CERN.

The paper states that several risk scenarios (which have to be described as global or existential risks) cannot currently be excluded. Under present conditions, the critics feel compelled to speak out against operating the LHC.

The submission includes assessments from experts in the fields markedly missing from the physicist-only LSAG safety report: risk assessment, law, ethics, and statistics. Further weight is added by the fact that these are all university-level experts, from Griffith University, the University of North Dakota, and Oxford University respectively. In particular, the critics charge that CERN’s official safety report lacks independence (all of its authors have a prior interest in the LHC running) and that it relies solely on physicist authors, when modern risk-assessment guidelines recommend including risk experts and ethicists as well.

Continue reading “Critical Request to CERN Council and Member States on LHC Risks” »

Mar 23, 2010

Risk intelligence

Posted in categories: education, events, futurism, geopolitics, policy, polls

A few months ago, my friend Benjamin Jakobus and I created an online “risk intelligence” test at http://www.projectionpoint.com/. It consists of fifty statements about science, history, geography, and so on, and your task is to say how likely you think it is that each of these statements is true. We calculate your risk intelligence quotient (RQ) on the basis of your estimates. So far, over 30,000 people have taken our test, and we’re currently writing up the results for some peer-reviewed journals.

Now we want to take things a step further and see whether our measure correlates with the ability to make accurate estimates of future events. To this end we’ve created a “prediction game” at http://www.projectionpoint.com/prediction_game.php. The basic idea is the same; we provide you with a bunch of statements, and your task is to say how likely you think it is that each one is true. The difference is that these statements refer not to known facts but to future events. Unlike in the first test, nobody yet knows whether these statements are true or false. For most of them, we won’t know until the end of the year 2010.

For example, how likely do you think it is that this year will be the hottest on record? If you think this is very unlikely you might select the 10% category. If you think it is quite likely, but not very likely, you might put the chances at 60% or 70%. Selecting the 50% category would mean that you had no idea how likely it is.
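The post doesn’t say how RQ is computed, but a standard way to score probability estimates like these once the outcomes are known is a calibration measure such as the Brier score; the sketch below is an illustration under that assumption, not the actual projectionpoint.com formula:

```python
def brier_score(forecasts):
    """Mean squared error between stated probabilities and actual outcomes.

    forecasts: (probability, outcome) pairs, where probability is your
    estimate that a statement is true (0.0 to 1.0) and outcome is 1 if
    it turned out true, 0 if false. Lower is better; always answering
    50% scores exactly 0.25.
    """
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Hypothetical answers to three end-of-2010 statements
answers = [(0.7, 1), (0.1, 0), (0.6, 0)]
print(brier_score(answers))  # ~0.153
```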

This is ongoing research, so please feel free to comment, criticise or make suggestions.

Mar 12, 2010

Reduction of human intelligence as global risk

Posted in categories: existential risks, neuroscience

Another risk is the loss of human rationality while human life is preserved. In any society there are many people with limited cognitive abilities, and most achievements are made by a small number of talented people. Genetic and social degradation, a declining level of education, and the loss of logical skills could lead to a temporary decrease in the intelligence of particular groups of people. But as long as humanity’s population is very large, this is not so bad, because there will always be enough intelligent people. A significant drop in population after a non-global disaster could exacerbate this problem, and the low intelligence of the remaining people would reduce their chances of survival. One can even imagine the absurd situation in which people degrade so far that a new species without full-fledged intelligence arises from us by the evolutionary path, and that this species later evolves reason again and develops a new intelligence.

More dangerous is a decline of intelligence caused by the spread of technological contaminants (or by the use of a certain weapon). For example, consider the constantly growing global contamination by arsenic, which is used in various technological processes. Sergio Dani wrote about this in his article “Gold, coal and oil”: http://sosarsenic.blogspot.com/2009/11/gold-coal-and-oil-reg…is-of.html, http://www.medical-hypotheses.com/article/S0306-9877 (09) 00666–5/abstract

Arsenic released during the mining of gold remains in the biosphere for millennia. Dani links arsenic to Alzheimer’s disease. In another paper he demonstrates that increasing concentrations of arsenic lead to an exponential increase in the incidence of Alzheimer’s disease. He believes that people are particularly vulnerable to arsenic poisoning because they have large brains and long lifespans. If, as Dani suggests, people adapt in the course of evolution to high levels of arsenic, it will lead to a decline in brain size and life expectancy, with the result that human intellect will be lost.

Beyond arsenic, contamination occurs from many other neurotoxic substances: CO, CO2, methane, benzene, dioxin, mercury, lead, etc. Although the level of pollution from each of them separately is below health standards, the sum of their impacts may be larger. One proposed cause of the fall of the Roman Empire was the widespread poisoning of its citizens (though not of the barbarians) by lead from water pipes. Of course, the Romans could not have known about these remote and unforeseen consequences, but we too may not know about many consequences of our own activities.

Dementia can also be caused by alcohol, by most recreational drugs, and by many medications (dementia is listed, for example, as a side effect on the information sheets accompanying some heartburn mixtures), as well as by rigid ideological systems, or memes.

A number of infections, particularly prion infections, also lead to dementia.

Despite all this, the average IQ of people is growing, as is life expectancy.

Mar 10, 2010

Why AI could fail?

Posted in category: robotics/AI

AI is our best hope for long-term survival. If we fail to create it, that failure will have some cause. Here I suggest a complete list of possible causes of failure, though I do not believe in them. (I was inspired by V. Vinge’s article “What if the Singularity does not happen?”)

I think most of these points are wrong and that AI will finally be created.

Technical reasons:
1) Moore’s Law will stop for physical reasons before hardware becomes sufficiently powerful and inexpensive for artificial intelligence.
2) Silicon processors are less efficient than neurons for creating artificial intelligence.
3) The AI problem cannot be algorithmically parallelized, and as a result AI will be extremely slow.

Philosophical reasons:
4) Human beings use some method of processing information that is essentially inaccessible to algorithmic computers. So Penrose believes. (But we could harness this method using bioengineering techniques.) Generally, a final recognition of the impossibility of creating artificial intelligence would be tantamount to recognizing the existence of the soul.
5) A system cannot create a system more complex than itself, so people cannot create artificial intelligence, since all the proposed solutions are too simple. That is, AI is possible in principle, but people are too stupid to build it. In fact, one reason for past failures in the creation of artificial intelligence is that people underestimated the complexity of the problem.
6) AI is impossible, because any sufficiently complex system reveals the meaninglessness of existence and stops.
7) All possible ways to optimize are exhausted. AI would have no fundamental advantage over the human-machine interface and would have only a limited scope of use.
8) A human in a body has the maximum possible level of common sense, and any disembodied AI would be either ineffective or merely a model of a person.
9) AI is created, but has no problems that it could and should address. All the problems have either been solved by conventional methods or proven uncomputable.
10) AI is created, but is not capable of recursive self-optimization, since this would require some radically new ideas which it does not have. As a result, AI exists either as a curiosity or in limited specific applications, such as automatic drivers.
11) The idea of artificial intelligence is flawed, because it has no precise definition, or is even an oxymoron, like “artificial natural”. As a result, specific tools or models of man get developed, but not universal artificial intelligence.
12) There is an upper limit to the complexity of systems beyond which they become chaotic and unstable, and it only slightly exceeds the intellect of the most intelligent people. AI development slowly approaches this threshold of complexity.
13) The bearer of intelligence is qualia. At our level of intelligence there must be many events that are indescribable and unknowable, but a superintellect should understand them by definition; otherwise it is not a superintellect but simply a fast intellect.

Continue reading “Why AI could fail?” »