FAQ

What is the Lifeboat Foundation mission statement?

The Lifeboat Foundation is a nonprofit nongovernmental organization dedicated to encouraging scientific advancements while helping humanity survive existential risks and possible misuse of increasingly powerful technologies, including genetic engineering, nanotechnology, and robotics/AI, as we move towards the Singularity.
 
Lifeboat Foundation is pursuing a variety of options, including helping to accelerate the development of technologies to defend humanity such as new methods to combat viruses, effective nanotechnological defensive strategies, and even self-sustaining space colonies in case the other defensive strategies fail.
 
We believe that, in some situations, it might be feasible to relinquish technological capacity in the public interest (for example, we are against the U.S. government posting the recipe for the 1918 flu virus on the internet). We have some of the best minds on the planet working on programs to enable our survival. We invite you to join our cause!
 

Why should we worry about the fate of the human race?

As technology continues to advance, it will vastly increase the power of leading nations, leading corporations, and leading individuals. Soon, a set of emerging technologies — Genetics, Robotics, and Nanotechnology — will make more power available than has ever been known in human history. This power may be used by leaders for our benefit but the same technologies that could raise standards of living and increase healthspans also could enable a small group, or even a single individual, to dominate the world — or destroy it.
 
If that sounds impossible, remember that dinosaurs ruled the planet for 150 million years (far longer than humans have existed!), but they met their doom in a catastrophic extinction. Their end came through natural means — ours may not.
 
Genetics is already dangerous today. A bioengineered virus or bacterium could be unleashed from a small lab and kill tens of millions of people or more.
 
Nanotechnology, when it reaches the advanced stage of molecular manufacturing, could trigger a rapidly escalating arms race that spins out of control and ends in devastating war, possibly threatening the survival of all humanity. This may become possible by 2020, or perhaps even sooner. There will also be the threat of a nano-built self-replicating system (popularly called grey goo) that could in theory consume large amounts of the biosphere.
 
The sad fact is that it would take very few resources to launch a nanotechnology attack, roughly equivalent to the cost of the 2001 anthrax attacks in the U.S. And the FBI believes those attacks were the work of a lone individual. With this in mind, it’s not hard to imagine the disastrous result of nanotechnological weapons falling into the wrong hands.
 
Robotics could become dangerous as early as 2035. Our best solution to this problem is Friendly AI as proposed by the Singularity Institute for Artificial Intelligence.
 
As you can see, there are various means by which all life could be extinguished in the near future. And we’ve probably left out a few.
 
In our time, how much danger do we face from all of these technologies? How high are the extinction risks?
 
The philosopher John Leslie has studied this question and concluded that the risk of human extinction is at least 30 percent, while Ray Kurzweil believes we have “a better than even chance of making it through”, with the caveat that he has “always been accused of being an optimist”.
 
Lord Martin Rees, Royal Society Professor at Cambridge University, a Fellow of Kings College, the U.K.’s Astronomer Royal, and winner of the 2001 Cosmology Prize of the Peter Gruber Foundation, has published a book on why we should worry about the fate of the human race titled Our Final Hour: A Scientist’s Warning: How Terror, Error, and Environmental Disaster Threaten Humankind’s Future In This Century — On Earth and Beyond.
 

Won’t the government protect us?

One government can’t protect the world; many will have to work together. To avoid a nanotech-fueled arms race, all governments would have to accept a strict, enforceable agreement on weapons development, along with full transparency. Obviously, this would not be easy to achieve.
 
An option that could reduce some classes of risk is a single world government (or strong coalition) with absolute authority. However, this would create a frightening potential for oppression and destruction, especially since the other governments probably would not fade away quietly.
 
Moreover, governments might choose not to control the most dangerous technologies. The U.S. government recently posted the recipe for the 1918 flu virus on the Internet. This virus killed 20 to 50 million people the last time it was in the wild, and technologies for recreating the virus are rapidly being developed. It seems possible that a government would post the recipe for grey goo on the Internet if they had it!
 

How will Lifeboat Foundation protect us?

Since it is usually not feasible to slow down the advancement of dangerous technologies, our foundation will do everything possible to speed up the advancement of defensive technologies against all possible threats.
 
We will also support relinquishment when feasible. For example, we are against publishing information on how to create dangerous viruses on the internet. We also support the Cooperative Threat Reduction (CTR) program, which helps the states of the former Soviet Union relinquish some of their dangerous weapons.
 

How can we be protected against advances in Genetics?

We support our BioShield proposal (backed by Bill Frist, Bill Joy, and Ray Kurzweil) for a one hundred billion dollar program to accelerate the development of technologies to combat biological viruses.
 
We support Ray Kurzweil’s proposal for Congress to initiate legislation prohibiting the publication of sensitive data on virulent genomes on publicly accessible U.S. government Internet sites.
 

How can we be protected against advances in Nanotechnology?

We support our NanoShield proposal for an active, distributed sensing and response system: an “immune system” that would detect and stop bad uses of nanotech. This system has risks of its own, including the potential for oppression by the owners of the shield and the potential for the shield itself to turn destructive, whether through a software bug or through hackers gaining control of it. Note that our proposal is more sophisticated than the active nanotechnological shield proposed by Eric Drexler: it includes both the specific immune responses of his proposal and the non-specific immune responses his proposal lacked.
 
We support the creation of self-sustaining space colonies, on the Moon and in deep space, in case of a nanotech (or other) disaster that makes human life difficult or impossible on Earth. We also support the creation of self-contained bunkers on the Earth; these would use much of the same technology as self-sustaining space colonies.
 

How can we be protected against advances in Robotics?

We support the Friendly AI proposal by the Singularity Institute for Artificial Intelligence.
 

How can we be protected against asteroids?

We support our Scientific Advisory Board member Nick Kaiser’s efforts to locate any asteroids that may impact the Earth. He was principal investigator of the $50 million Panoramic Survey Telescope & Rapid Response System (Pan-STARRS) asteroid early-warning system.
 
We support the proposal by the B612 Foundation to significantly alter the orbit of an asteroid in a controlled manner by 2015.
 

How can we be protected against global warming?

We support investigations into climate change such as the Glacsweb project, led by our Scientific Advisory Board member Kirk Martinez, which monitors glacier behavior using sensor networks.
 

How can we be protected against nuclear weapons?

We support the proposal by the Nuclear Threat Initiative co-chaired by Ted Turner and Sam Nunn to speed up implementation of the Cooperative Threat Reduction (CTR) program to remove dangerous weapons from the former Soviet Union.
 

How can we be protected against anti-matter bombs and some high energy particle accelerator mishaps?

Anti-matter bombs, and high-energy particle accelerator mishaps that create small black holes or convert the Earth into a giant strangelet, would threaten all life on Earth, so our proposal is self-sustaining colonies elsewhere.
 

What is nanotechnology?

A basic definition of nanotechnology is: engineering of functional systems at the molecular scale. This definition covers both current work and more advanced concepts. In its original sense, nanotechnology refers to the projected ability to construct items from the bottom up, using techniques and tools being developed today to make complete, high-performance products. This capability is often called molecular manufacturing.
 

What is grey goo?

When nanotechnology-based manufacturing was first proposed in Eric Drexler’s Engines of Creation (1986), a concern arose that tiny self-contained manufacturing systems, “replicating assemblers”, might run amok and “eat” the biosphere, reducing it to copies of themselves. More recent designs by Drexler and others make it clear, though, that it would be rather easy to design manufacturing systems that would not malfunction in this way, so the larger concern is that grey goo would be designed on purpose rather than created by accident.
 
Many people have noted that creating grey goo which simply destroyed the world would have no purpose. However, computer viruses are also useless and destructive, and people have created thousands of them. When grey goo becomes technically feasible, a hobbyist might destroy the world.
 
For more detailed information on the subject of grey goo and its devastating effects, read Some Limits to Global Ecophagy by Biovorous Nanoreplicators, with Public Policy Recommendations by Robert A. Freitas Jr.
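The speed of the grey goo threat comes from exponential self-replication. As a rough back-of-the-envelope illustration (the figures below are illustrative assumptions, not numbers taken from the Freitas paper): a single 1 kg replicator that doubles its population once per hour would need only about 50 doublings, roughly two days, to exceed a biosphere-scale mass.

```python
import math

# Illustrative assumptions (not figures from the Freitas paper):
replicator_mass_kg = 1.0    # mass of the initial replicator
biosphere_mass_kg = 1e15    # assumed order-of-magnitude biosphere mass
doubling_time_hours = 1.0   # assumed replication period

# Doublings needed for total replicator mass to reach the biosphere's mass:
# solve 2**d >= biosphere_mass_kg / replicator_mass_kg for d.
doublings = math.ceil(math.log2(biosphere_mass_kg / replicator_mass_kg))
elapsed_hours = doublings * doubling_time_hours

print(doublings)      # 50 doublings
print(elapsed_hours)  # 50 hours, i.e. about two days
```

The point of the sketch is not the particular numbers but the shape of the curve: because growth is exponential, even large changes in the assumed masses shift the answer by only a handful of doublings.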
 

What is green goo?

Green goo is based on bionanotechnology while grey goo is based on nanotechnology. Bionanotechnology is synthetic technology based on the principles and chemical pathways of living organisms, ranging from genetically engineered microbes to custom-made organic molecules.
 
Green goo and grey goo are similar threats and the same types of precautions and defenses apply to both.
 

What is red goo?

Red goo is deliberately designed and released destructive nanotechnology, as opposed to accidentally created grey goo. We are more concerned about red goo than about grey goo, which is less likely to occur.
 
For simplicity, when we talk about “goo” dangers, we will just mention grey goo as it is a more established term than green goo or red goo.
 

What is blue goo?

Blue goo is a group of nanotechnological machines that would monitor and control other machines to ensure that their replication does not get out of control. In other words, Eric Drexler’s proposed active nanotechnological shield would be blue goo. Bugs in the blue goo “operating system” could turn it into grey goo.
 
We hope it concerns the reader that four types of “goo” have already been named, yet the dangers of self-replicating nanotechnology are not a significant concern to our world leaders.
 

Do we expect to be successful?

With the survival of the human race at stake, we have to be successful.
 
Here is what Carl Sagan had to say about our mission: “This is the first moment in the history of our planet when any species, by its own voluntary actions, has become a danger to itself — as well as to vast numbers of others.
 
“It might be a familiar progression, transpiring on many worlds — a planet, newly formed, placidly revolves around its star; life slowly forms; a kaleidoscopic procession of creatures evolves; intelligence emerges which, at least up to a point, confers enormous survival value; and then technology is invented. It dawns on them that there are such things as laws of Nature, that these laws can be revealed by experiment, and that knowledge of these laws can be made both to save and to take lives, both on unprecedented scales. Science, they recognize, grants immense powers. In a flash, they create world-altering contrivances. Some planetary civilizations see their way through, place limits on what may and what must not be done, and safely pass through the time of perils. Others, not so lucky or so prudent, perish.”
 
It is imperative that the human race safely pass through such a time of perils, but to do so, we will have to be prepared. We will have to create options, and that is exactly why the success of this project is so important.
 

Why don’t we just move away from the current battlegrounds, to the safety of an island?

The problems conceivably facing the future of our planet will leave no place of sanctuary on Earth’s surface! No place to hide. No escape.
 

Why not hide in a deep sea colony?

The single atmosphere of pressure difference that a habitat in outer space must withstand seems inconsequential compared to the hundreds of atmospheres of pressure under the sea.
 
Anyway, the sea and its contents, both indigenous and otherwise, would still be within the Earth’s biosphere and therefore susceptible to grey goo devastation. So the point is moot.
 

I don’t want to wait until 2020 to go into outer space. Can you help me?

All Lifeboat members are eligible for a 5% discount on Space Adventures terrestrial tours, zero-gravity and supersonic jet flights, and sub-orbital space flights, as well as a $200,000 discount on trips to the International Space Station!
 

How do we avoid taking our problems with us into space?

The Israeli government is powerless against individuals who, with the use of advanced technologies, routinely take the lives of innocent civilians. However, it is capable of providing sufficient security in the confined space of an airplane. The same advantage would apply to securing a space station, as opposed to securing the entire planet.
 
The Earth, with a population of roughly six billion people versus perhaps a thousand in a space colony, would need security six million times as strict as a colony’s to have the same chance of survival. Living on Earth with six million times the security of a space colony seems neither realistic nor a pleasant way to live.
 
If one space colony failed to provide sufficient safeguards, only that colony would face the consequences, leaving the other colonies to learn from its mistake.
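The advantage of multiple independent colonies can be made concrete with a toy probability model (the numbers here are illustrative assumptions, not Lifeboat estimates): if each self-sustaining colony independently fails with probability p over some period, the chance that every one of n colonies fails is p to the power n, which shrinks rapidly as colonies are added, whereas with a single shared biosphere one failure is fatal.

```python
# Toy model: survival of humanity with n independent, self-sustaining colonies.
# Illustrative assumption: each colony fails catastrophically with probability
# p over some period, and colony failures are statistically independent.

def extinction_probability(p: float, n: int) -> float:
    """Probability that all n independent colonies fail."""
    return p ** n

p = 0.10  # assumed per-colony failure probability

# With a single habitat (Earth alone, n = 1), extinction probability is p
# itself; each additional colony multiplies the extinction probability by p.
for n in (1, 2, 5, 10):
    print(n, extinction_probability(p, n))
```

The model also shows why independence matters: a threat that hits all habitats at once (a shared biosphere, or colonies with a common single point of failure) gets none of this benefit, which is the reason the colonies must be self-sustaining.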
 

I only have $10 in the bank. Is there a chance I could get on a lifeboat?

In the tradition of Harvard’s admissions policy, we expect lifeboats to not be exclusive to the rich and powerful. We expect that there will be lotteries for spots on lifeboats and there will also be trust funds to provide “lifeboat scholarships”.
 

Help! I still have questions!

Explore our supplemental FAQ.