An excellent article by Bruce Schneier on the psychology of security is available here. It starts as follows:

Security is both a feeling and a reality. And they’re not the same.

The reality of security is mathematical, based on the probability of different risks and the effectiveness of different countermeasures. We can calculate how secure your home is from burglary, based on such factors as the crime rate in the neighborhood you live in and your door-locking habits. We can calculate how likely it is for you to be murdered, either on the streets by a stranger or in your home by a family member. Or how likely you are to be the victim of identity theft. Given a large enough set of statistics on criminal acts, it’s not even hard; insurance companies do it all the time.

We can also calculate how much more secure a burglar alarm will make your home, or how well a credit freeze will protect you from identity theft. Again, given enough data, it’s easy.

But security is also a feeling, based not on probabilities and mathematical calculations, but on your psychological reactions to both risks and countermeasures. You might feel terribly afraid of terrorism, or you might feel like it’s not something worth worrying about. You might feel safer when you see people taking their shoes off at airport metal detectors, or you might not. You might feel that you’re at high risk of burglary, medium risk of murder, and low risk of identity theft. And your neighbor, in the exact same situation, might feel that he’s at high risk of identity theft, medium risk of burglary, and low risk of murder.

The difference between the feeling of security and the reality of security, and the difference between pursuing one or the other, is central to the Lifeboat Foundation’s mission. For example, planetwide risks like synthetic life or unfriendly AI should be analyzed more thoroughly and given more effort than prevention of nuclear proliferation, even if we consider the near-term probability of the former scenarios to be lower, simply because their scope is so much larger. For more on this topic, see Cognitive biases affecting judgement of existential risks.

A valuable paper by Jason Matheny of the University of Maryland is “Reducing the Risk of Human Extinction”. The abstract is as follows:

In this century a number of events could extinguish humanity. The probability of these events may be very low, but the expected value of preventing them could be high, as it represents the value of all future lives. We review the challenges to studying human extinction risks and, by way of example, estimate the cost-effectiveness of preventing extinction-level asteroid impacts.

Continue reading it here.
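
To make the expected-value arithmetic concrete, here is a back-of-envelope sketch in Python. Every number below is an illustrative assumption, not a figure from Matheny’s paper; his central point is that counting future generations, not just people alive today, drives the value of prevention far higher still.

```python
# Back-of-envelope expected-value arithmetic in the spirit of Matheny's
# cost-effectiveness argument. All numbers are illustrative assumptions.

annual_impact_probability = 1e-8   # assumed yearly chance of an extinction-level impact
programme_years = 100              # assumed period over which the programme reduces risk
risk_reduction = 0.5               # assumed fraction of the risk the programme eliminates
programme_cost = 1e9               # assumed total cost in dollars
current_population = 6.5e9         # approximate world population at the time of writing

# Probability of at least one extinction-level impact during the period
p_impact = 1 - (1 - annual_impact_probability) ** programme_years

# Expected lives saved, counting only people alive today
expected_lives_saved = p_impact * risk_reduction * current_population

print(f"Impact probability over {programme_years} years: {p_impact:.2e}")
print(f"Expected lives saved (current people only): {expected_lives_saved:,.0f}")
print(f"Cost per expected life saved: ${programme_cost / expected_lives_saved:,.0f}")
```

Even counting only current lives, these assumptions put the cost per expected life saved in the low hundreds of thousands of dollars, in the same range as many conventional life-saving interventions; include future lives and the case becomes overwhelming.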

NASA estimates that finding at least 90 percent of the 20,000 potentially hazardous asteroids and comets by 2020 would cost about $1 billion, according to a report NASA will release later this week. It would cost about $300 million if an asteroid-locating telescope were piggybacked on another vehicle. The report was previewed Monday at a Planetary Defense Conference in Washington.

The agency is already tracking bigger objects, at least 3,300 feet in diameter, that could wipe out most life on Earth, much like what is theorized to have happened to dinosaurs 65 million years ago. But even that search, which has spotted 769 asteroids and comets — none of which is on course to hit Earth — is behind schedule. It’s supposed to be complete by the end of next year.

A cheaper option, simply piggybacking on other agencies’ telescopes at a cost of about $300 million, was also rejected, Johnson said.

“The decision of the agency is we just can’t do anything about it right now,” he added.

Earth got a scare in 2004, when initial readings suggested that an 885-foot asteroid called 99942 Apophis had a chance of hitting Earth in 2029. But more observations showed that wouldn’t happen. Scientists say there is a 1-in-45,000 chance that it could hit in 2036.

They think it would most likely strike the Pacific Ocean, causing a tsunami on the U.S. West Coast the size of the devastating 2004 Indian Ocean wave.

John Logsdon, space policy director at George Washington University, said a stepped-up search for such asteroids is needed.

“You can’t deflect them if you can’t find them,” Logsdon said. “And we can’t find things that can cause massive damage.”

Lifeboat has an asteroid shield project

Graduate student Blake Anderton (University of Alabama in Huntsville) wrote his master’s thesis on “Application of Mode-locked lasers to asteroid characterization and mitigation.” Undergraduate Gordon Aiken won a prize at a recent student conference for his poster and presentation “Space positioned LIDAR system for characterization and mitigation of Near Earth Objects.” And members of the group are building a laser system “that is the grandfather of the laser that will push the asteroids,” said Richard Fork.

Anderton’s mode-locked lasers could characterize asteroids up to 1 AU away (1.5 × 10¹¹ meters). Arecibo and other radar observatories can only detect objects up to 0.1 AU away, so in theory a laser would represent a vast improvement over radar.

A one-page PowerPoint describes their asteroid detection and deflection approach. About 12 of the 1 AU detection volumes (around the Sun in the asteroid belt) would be needed to cover the main areas for near-Earth asteroids.
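
To see why the range claim matters, a quick geometric sketch helps: a tenfold range advantage is a thousandfold search-volume advantage, and a small number of 1 AU detection spheres can tile the relevant region. The ring radius and station spacing below are assumptions for illustration, not the geometry from the PowerPoint.

```python
import math

laser_range = 1.0    # AU (1 AU = 1.5e11 m), claimed range for laser characterization
radar_range = 0.1    # AU, typical limit for Arecibo-class radar

# A detection "volume" is a sphere of the given radius, so a 10x range
# advantage is a 1000x volume advantage.
volume_ratio = (laser_range / radar_range) ** 3
print(f"Laser vs. radar search volume: {volume_ratio:,.0f}x")

# Rough coverage estimate (an assumed tiling, not the slide's method):
# place stations around a ring of radius ~2 AU so that adjacent 1 AU
# detection spheres just touch.
ring_radius = 2.0                     # AU, assumed
station_spacing = 2 * laser_range     # AU between adjacent stations
stations = math.ceil(2 * math.pi * ring_radius / station_spacing)
print(f"Stations needed for the ring: {stations}")
# ~7 with zero overlap; the slide's figure of 12 presumably buys overlap and margin.
```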

40 kW femtosecond lasers could deflect an asteroid the size of Apophis (320 meters across, an impact of about 880 megatons) given one year of illumination and an early start along its trajectory.
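
Here is a rough momentum budget for that claim. The density and thrust-coupling figures are assumptions for illustration; the actual ablation modeling in the thesis is more detailed.

```python
import math

# Apophis-class target, from the text
diameter = 320.0                 # m
density = 2_600.0                # kg/m^3, assumed stony-asteroid density
radius = diameter / 2
mass = density * (4 / 3) * math.pi * radius**3   # roughly 4.5e10 kg

# Laser parameters from the text
laser_power = 40e3                        # W
illumination_time = 365.25 * 24 * 3600    # one year, in seconds

# Ablation blows off surface material, producing thrust; assume an
# effective coupling of ~80 micronewtons per watt (an assumed figure).
coupling = 80e-6                 # N/W
thrust = laser_power * coupling  # ~3.2 N

delta_v = thrust * illumination_time / mass
drift_10yr = delta_v * 10 * 365.25 * 24 * 3600

print(f"Asteroid mass: {mass:.2e} kg")
print(f"Delta-v after one year: {delta_v * 1000:.1f} mm/s")
print(f"Straight-line drift over the next decade: {drift_10yr / 1000:,.0f} km")
```

A few millimeters per second translates into hundreds of kilometers of displacement per decade, ample for steering an asteroid away from a narrow gravitational keyhole but nowhere near enough on short notice, hence the emphasis on an early start.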

Asteroid shields are a project of the Lifeboat Foundation

There are 67-kilowatt solid-state lasers, and modular laser systems with beam-redirecting mirrors can combine smaller modules to achieve more laser power.

A giant asteroid named Apophis has a one-in-45,000 chance of hitting the Earth in 2036. If it did hit the Earth, it could destroy a city or a region. A slate of new proposals for addressing the asteroid menace was presented at a recent meeting of the American Association for the Advancement of Science in San Francisco.

One of the Lifeboat Foundation projects is an Asteroid Shield, and the issues and points discussed at the meeting are in direct alignment with it; the specific detection and deflection efforts belong to the Lifeboat Asteroid Shield project.

Edward Lu of NASA has proposed a “gravitational tractor”: a spacecraft of up to 20 tons (18 metric tons) that could divert an asteroid’s path just by thrusting its engines in a specific direction while hovering in the asteroid’s vicinity.
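
Since the towline here is the spacecraft’s own gravity, an order-of-magnitude check is easy. Only the 18-metric-ton mass comes from the text; the hover distance is an assumption.

```python
G = 6.674e-11               # gravitational constant, m^3 kg^-1 s^-2
spacecraft_mass = 18_000.0  # kg, the 20-ton (18-metric-ton) craft from the text
hover_distance = 250.0      # m from the asteroid's center, assumed

# The acceleration imparted to the asteroid does not depend on the
# asteroid's mass: a = G * m_spacecraft / d^2
acceleration = G * spacecraft_mass / hover_distance**2

year = 365.25 * 24 * 3600
delta_v = acceleration * year

print(f"Tug acceleration: {acceleration:.2e} m/s^2")
print(f"Delta-v after one year of hovering: {delta_v * 1000:.2f} mm/s")
```

The result, a bit over half a millimeter per second per year of hovering, is tiny, but as the laser sketch above suggests, millimeter-per-second changes applied years in advance are exactly the scale at which deflection becomes possible.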

Scientists also described two massive new survey-telescope projects to detect would-be killer asteroids.

One, dubbed Pan-STARRS, is slated to begin operation later this year. The project will use an array of four 6-foot-wide (1.8-meter-wide) telescopes in Hawaii to scan the skies.

The other program, the Large Synoptic Survey Telescope in Chile, will use a giant 27.5-foot-wide (8.4-meter-wide) telescope to search for killer asteroids. This telescope is scheduled for completion sometime between 2010 and 2015.

David Morrison, an astronomer at NASA’s Ames Research Center, said that “the rate of discoveries is going to ramp up. We’re going to see discoveries being made at 50 to 100 times the current rate.”

“You can expect asteroids like Apophis [to be found] every month.”

Former astronaut Rusty Schweickart thinks the United Nations needs to draft a treaty detailing standardized international measures that would be carried out in response to any asteroid threat.

His group, the Association of Space Explorers, has started building a team of scientists, risk specialists, and policymakers to draft such a treaty, which will be submitted to the UN for consideration in 2009.

Two new reports on global security conclude that the risk of nuclear terrorism is growing, Reuters reported today.

The EastWest Institute and Chatham House, the two think-tanks behind the reports, note that more states are pursuing their own nuclear ambitions and that the materials and engineering effort for a bomb “have all become commodities, more or less available to those determined enough to acquire them”.

The vulnerability of nuclear power plants is also mentioned. This is highly relevant considering all the new power plants being planned or built. Read about the planned terrorist attack on a nuclear power plant in Australia: “Australia nuclear plant plot trial opens in Paris”, Reuters.

But most surprisingly:

Ken Berry, author of the EastWest Institute report, said the rise of environmental militants would bring “an even bigger prospect that scientific personnel from the richest countries will aid eco-terrorist use of nuclear weapons or materials”.

This reminds me of Pentti Linkola, a Finnish eco-philosopher whom many consider an eco-fascist. In a Wall Street Journal interview he expresses the view that World War III would be “a happy occasion for the planet… If there were a button I could press, I would sacrifice myself without hesitating, if it meant millions of people would die.”

Source: Reuters.

Read the reports: “Preventing Nuclear Terrorism” from the EastWest Institute and “The CBRN System: Assessing the threat of terrorist use of chemical, biological, radiological and nuclear weapons in the UK” from Chatham House (The Royal Institute of International Affairs).

An existential risk is a global catastrophic risk that threatens to exterminate humanity or severely curtail its potential. Existential risks are unique because current institutions have little incentive to mitigate them, except as a side effect of pursuing other goals. There is little to no financial return in mitigating existential risk. Bostrom (2001) argues that because reductions in existential risks are global public goods, they may be undervalued by the market. Also, because we have never confronted a major existential risk before, we have no precedent to learn from and little impetus to be afraid. For more information, see this reference.

There are three main categories of existential risk: threats from biotechnology, nanotechnology, and AI/robotics. Nuclear proliferation itself is not quite an existential risk, but widespread availability of nuclear weapons could greatly exacerbate future risks, providing a stepping stone into an arms race in post-nuclear technologies. We’ll look at that first, then go over the others.

Nuclear risk. The risk of nuclear proliferation is currently high. The United States is planning to spend $100 billion on developing new nuclear weapons, and reports suggest that the President is not doing enough to curtail nuclear proliferation, despite the emphasis on the War on Terror. Syria, Qatar, Egypt, and the United Arab Emirates met to announce their desire to develop nuclear technology. North Korea successfully tested a nuclear weapon in October. Iran continues enriching uranium against the will of the United Nations, and an Iranian official hinted that the country may be seeking nuclear weapons. Last night, President Bush used the most confrontational language yet toward Iran, accusing it of directly providing weapons and funds to combatants killing US soldiers. The geopolitical situation today with respect to nuclear technology is probably the worst it has been since the Cold War.

Biotechnological risk. The risk of biotechnological disaster is currently high. An attempt among synthetic life researchers to formulate a common set of ethical standards, at the International Conference on Synthetic Biology, has failed. Among the synthetic biology and biotechnology communities, there is little recognition of the risk of genetically engineered pathogens. President Bush’s plan to spend $7.1 billion on bird flu vaccines was decreased to $2.3 billion by Congress. There is little federal money being spent on research to develop blanket countermeasures against unanticipated biotechnological threats. There are still custom DNA synthesis labs that fill orders without first scanning for harmful sequences. Watch-lists for possible bioweapon sequences are out of date, and far from comprehensive. Lab equipment necessary to make bioweapons has decreased in cost and increased in performance, putting it within the financial reach of terrorist organizations. Until there is more oversight in this area, the risk will not only remain, but increase over time. For more information, see this report.

Nanotechnological risk. The risk of nanotechnological disaster is currently low. Although substantial progress has been made with custom machinery at the nanoscale, there is little effort or money going towards the development of molecular manufacturing, the most dangerous (but also most beneficial) branch of nanotechnology. Although the level of risk today is low, once it begins to escalate, it could do so very rapidly due to the self-replicating nature of molecular manufacturing. Nanotechnology researcher Chris Phoenix has published a paper on how it would be technologically feasible to go from a basic self-replicating assembler to a desktop nanofactory in a matter of weeks. His organization projects the development of nanofactories sometime before 2020. Once desktop nanofactories hit the market, it would be extremely difficult to limit their proliferation, as nanofactories could probably be used to create additional nanofactories very quickly. Unrestricted nanofactories, if made available, could be used to synthesize bombs, biological weapons, or synthetic life that is destructive to the biosphere. Important papers on nanoethics have been published by the Nanoethics Group, the Center for Responsible Nanotechnology, and the Lifeboat Foundation.
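
A toy doubling model shows why the escalation could be so abrupt. The seed mass, target mass, and doubling time below are assumptions for illustration, not parameters from Phoenix’s paper.

```python
import math

seed_mass = 1e-3            # kg, assumed mass of a basic self-replicating assembler system
target_mass = 10.0          # kg, assumed mass of a desktop nanofactory
doubling_time_hours = 12.0  # assumed time for the system to double its own mass

doublings = math.ceil(math.log2(target_mass / seed_mass))
days = doublings * doubling_time_hours / 24

print(f"Doublings needed: {doublings}")                   # 14
print(f"Time to a desktop nanofactory: {days:.0f} days")  # about a week
```

Under these assumptions the jump from a gram-scale seed to a desktop factory takes about a week; halve the doubling time and it takes half that, which is why the risk curve looks flat right up until it is vertical.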

Artificial Intelligence risk. The risk from AI and robotics is currently moderate. Because we know so little about how difficult AI is as a problem, we can’t say whether it will be developed in 2010 or 2050. Like nanofactories, AI is a threat that could balloon exponentially if it gets out of hand, going from “negligible risk” to “severe risk” practically overnight. There is very little attention given to the risk of AI and how it should be handled. Some of the only papers published on the topic during 2006 were released by the Singularity Institute for Artificial Intelligence. Just recently, Bill Gates, co-founder of Microsoft, wrote “A Robot in Every Home”, outlining why he thinks robotics will be the next big revolution. There has been increased acceptance, both in academia and among the public, of the possibility of AI with human-surpassing intelligence. However, the concept of seed AI continues to be poorly understood and infrequently discussed in both popular and academic discourse.