
Jan 11, 2007

State of Existential Risk in 2007

Posted in category: existential risks

An existential risk is a global catastrophic risk that threatens to exterminate humanity or severely curtail its potential. Existential risks are unique in that current institutions have little incentive to mitigate them, except as a side effect of pursuing other goals; there is little to no financial return in mitigating existential risk. Bostrom (2001) argues that because reductions in existential risk are global public goods, they may be undervalued by the market. And because we have never confronted a major existential risk before, we have little experience to learn from and little impetus to be afraid. For more information, see this reference.

There are three main categories of existential risk: threats from biotechnology, nanotechnology, and AI/robotics. Nuclear proliferation is not quite an existential risk in itself, but the widespread availability of nuclear weapons could greatly exacerbate future risks, serving as a stepping stone to a post-nuclear arms race. We’ll look at nuclear risk first, then go over the others.

Nuclear risk. The risk of nuclear proliferation is currently high. The United States is planning to spend $100 billion on developing new nuclear weapons, and reports suggest that the President is not doing enough to curtail nuclear proliferation, despite the emphasis on the War on Terror. Syria, Qatar, Egypt, and the United Arab Emirates met and announced their desire to develop nuclear technology. North Korea successfully tested a nuclear weapon in October. Iran continues enriching uranium against the will of the United Nations, and an Iranian official has hinted that the country may be seeking nuclear weapons. Last night, President Bush used his most confrontational language yet toward Iran, accusing it of directly providing weapons and funds to combatants killing US soldiers. The geopolitical situation today with respect to nuclear technology is probably the worst it has been since the Cold War.

Biotechnological risk. The risk of biotechnological disaster is currently high. An attempt by synthetic-life researchers at the International Conference on Synthetic Biology to formulate a common set of ethical standards has failed. Within the synthetic biology and biotechnology communities, there is little recognition of the risk of genetically engineered pathogens. President Bush’s plan to spend $7.1 billion on bird flu vaccines was cut to $2.3 billion by Congress. Little federal money is being spent on research to develop blanket countermeasures against unanticipated biotechnological threats. There are still custom DNA synthesis labs that fill orders without first screening them for harmful sequences. Watch-lists of possible bioweapon sequences are out of date and far from comprehensive. The lab equipment necessary to make bioweapons has decreased in cost and increased in performance, putting it within the financial reach of terrorist organizations. Until there is more oversight in this area, the risk will not only remain but increase over time. For more information, see this report.
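
To make concrete what screening synthesis orders for harmful sequences involves, here is a minimal illustrative sketch. The watch-list entries, sequences, and function names are invented for illustration; real screening systems rely on curated databases and homology matching rather than exact substring search.

```python
# Illustrative sketch only: a toy screen that flags DNA synthesis orders
# containing subsequences from a hypothetical, made-up watch-list.

WATCH_LIST = {
    "ATGCGTACGTTAGC": "example flagged sequence A",
    "GGCCTTAAGGCCTA": "example flagged sequence B",
}

def screen_order(order_sequence: str) -> list[str]:
    """Return descriptions of any watch-listed subsequences found in the order."""
    order_sequence = order_sequence.upper()
    return [
        description
        for fragment, description in WATCH_LIST.items()
        if fragment in order_sequence
    ]

if __name__ == "__main__":
    order = "TTATGCGTACGTTAGCAA"  # contains the first flagged fragment
    hits = screen_order(order)
    if hits:
        print("Order flagged for review:", hits)
    else:
        print("No watch-listed sequences found.")
```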

Nanotechnological risk. The risk of nanotechnological disaster is currently low. Although substantial progress has been made with custom machinery at the nanoscale, there is little effort or money going towards the development of molecular manufacturing, the most dangerous (but also most beneficial) branch of nanotechnology. Although the level of risk today is low, once it begins to escalate, it could do so very rapidly due to the self-replicating nature of molecular manufacturing. Nanotechnology researcher Chris Phoenix has published a paper on how it would be technologically feasible to go from a basic self-replicating assembler to a desktop nanofactory in a matter of weeks. His organization projects the development of nanofactories sometime before 2020. Once desktop nanofactories hit the market, it would be extremely difficult to limit their proliferation, as nanofactories could probably be used to create additional nanofactories very quickly. Unrestricted nanofactories, if made available, could be used to synthesize bombs, biological weapons, or synthetic life that is destructive to the biosphere. Important papers on nanoethics have been published by the Nanoethics Group, the Center for Responsible Nanotechnology, and the Lifeboat Foundation.
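
To illustrate why self-replication compresses timescales, here is a back-of-the-envelope sketch; the replication time and target count below are assumed placeholders for illustration, not figures from Phoenix’s paper.

```python
# Back-of-the-envelope sketch of exponential self-replication.
# The replication time and target count are assumed for illustration only.

replication_time_hours = 12     # assumed time for one assembler to copy itself
target_units = 1_000_000_000    # assumed number of assemblers a nanofactory needs

units = 1
hours = 0
while units < target_units:
    units *= 2                  # each existing assembler builds one copy
    hours += replication_time_hours

print(f"{hours / 24:.0f} days of doubling to reach {units:,} assemblers")
# With these assumptions: 30 doublings, about 15 days, i.e. a matter of weeks.
```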

Artificial Intelligence risk. The risk from AI and robotics is currently moderate. Because we know so little about how difficult a problem AI is, we can’t say whether it will be developed in 2010 or 2050. Like nanofactories, AI is a threat that could balloon exponentially if it gets out of hand, going from “negligible risk” to “severe risk” practically overnight. Very little attention is given to the risk of AI and how it should be handled. Some of the only papers published on the topic during 2006 were released by the Singularity Institute for Artificial Intelligence. Just recently, Bill Gates, co-founder of Microsoft, wrote “A Robot in Every Home”, outlining why he thinks robotics will be the next big revolution. There has been increased acceptance, both in academia and among the public, of the possibility of AI of human-surpassing intelligence. However, the concept of seed AI continues to be poorly understood and infrequently discussed in both popular and academic discourse.

1 Comment — comments are now closed.


  1. Matus says:

    Maybe this is just a categorization question, but are natural disasters not considered existential risks? A tremendous caldera volcanic eruption, for instance, could wipe out the vast majority of humanity, or at least thrust it into a new dark age from which it might never recover. A significant asteroid or comet strike? A magnetic field reduction prior to a pole flip? Viruses and diseases absent any biotechnological meddling? Today, for instance, HIV is absorbed by the metabolism of mosquitoes; if it were to mutate so that it was not, these flying, self-replicating used needles could wipe out a major portion of the global human population. As it is, there are already some cases where the source of infection cannot be determined. It is estimated that mosquitoes, the primary vector for 10 of the 12 deadliest diseases mankind can suffer from, are probably responsible for half of all human deaths throughout the history of humanity. The Spanish flu came out of nowhere, killed tens of millions of people in the early 1900s, and then disappeared just as mysteriously.