
Apr 3, 2010

Natural selection of universes and risks for the parent civilization

Posted by in category: existential risks

Lee Smolin is said to believe (according to a personal communication from Danila Medvedev, who was told about it by John Smart; I tried to reach Smolin for comment but failed) that global catastrophe is impossible, based on the following reasoning: the multiverse is dominated by those universes that are able to replicate. This self-replication occurs through black holes, and especially through black holes created by civilizations. Thus the parameters of the universe are selected so that civilizations cannot self-destruct before they create black holes. As a result, all physical processes by which a civilization might self-destruct are closed off or highly unlikely. An early version of Smolin’s argument is here: http://en.wikipedia.org/wiki/Lee_Smolin but this early version was refuted in 2004, and so he (probably) added the existence of civilizations as another condition for cosmological natural selection. In any case, even if this is not Smolin’s actual line of thought, it is a quite possible line of thought.

I do not find this argument persuasive, since selection can operate both in the direction of universes with more viable civilizations and in the direction of universes with a larger number of civilizations, just as biological evolution works toward more robust offspring in some species (mammals) and toward a larger number of offspring with lower viability in others (plants, for example the dandelion). Since some parameters of the development of civilizations are extremely difficult to adjust through the basic laws of nature (for example, the chances of nuclear war or of a hostile AI), while it is easy to adjust the number of emerging civilizations, it seems to me that universes, if they replicate with the help of civilizations, will use the strategy of dandelions rather than the strategy of mammals. So such selection would create many unstable civilizations, and we are most likely one of them (the self-indication assumption also pushes us toward this conclusion; see the recent post by Katja Grace http://meteuphoric.wordpress.com/2010/03/23/sia-doomsday-the-filter-is-ahead/)
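
To make the selection argument concrete, here is a minimal sketch with entirely made-up numbers, comparing the expected number of black-hole-producing civilizations per universe under a "mammal" strategy (few civilizations, each robust) and a "dandelion" strategy (many civilizations, each fragile):

```python
# Toy comparison of two replication strategies for universes, assuming
# (hypothetically) that universes replicate via civilizations that create
# black holes. All numbers are illustrative only.

def expected_replicators(n_civilizations, p_survive_to_black_holes):
    """Expected number of civilizations that live long enough to create black holes."""
    return n_civilizations * p_survive_to_black_holes

# "Mammal" universe: few civilizations, each well protected by its physics.
mammal = expected_replicators(n_civilizations=10, p_survive_to_black_holes=0.9)

# "Dandelion" universe: many civilizations, most of which self-destruct.
dandelion = expected_replicators(n_civilizations=10_000, p_survive_to_black_holes=0.01)

print(f"mammal strategy:    {mammal:.1f} expected replicators")
print(f"dandelion strategy: {dandelion:.1f} expected replicators")
# Under these made-up numbers the dandelion strategy wins by a factor of ~11,
# which is why an observer in a randomly chosen civilization should not expect
# the laws of physics to protect it from self-destruction.
```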

Still, some pressure toward the preservation of civilizations could exist. Namely, if an atomic bomb were as easy to create as dynamite (much easier than on Earth, where the difficulty depends on the quantity of uranium and on its chemical and nuclear properties, i.e. is determined by the basic laws of the universe), then the chances of the average civilization surviving would be lower. If Smolin’s hypothesis is correct, then we should encounter insurmountable difficulties in creating nano-robots, in the microelectronics needed for strong AI, in harmful accelerator experiments with strangelets (except those that lead to the creation of black holes and new universes), and in several other potentially dangerous technology trends whose success depends on the basic properties of the universe, which may manifest themselves in the peculiarities of its chemistry.

In addition, Smolin’s evolution of universes implies that a civilization should create black holes as early as possible in its history, since this leads to replication of universes: the later it happens, the greater the chance that the civilization will self-destruct before it can create black holes. Moreover, the civilization is not required to survive after the moment of “replication” (though survival may be useful for replication, if the civilization creates many black holes during a long existence). From these two points it follows that we may be underestimating the risk that the Hadron Collider will create black holes.

Continue reading “Natural selection of universes and risks for the parent civilization” »

Apr 2, 2010

Technological Singularity and Acceleration Studies: Call for Papers

Posted by in category: futurism

8th European conference on Computing And Philosophy — ECAP 2010
Technische Universität München
4–6 October 2010

Submission deadline of extended abstracts: 7 May 2010
Submission form

Theme

Historical analysis of a broad range of paradigm shifts in science, biology, history, technology, and in particular in computing technology, suggests an accelerating rate of evolution, however measured. John von Neumann projected that the consequence of this trend may be an “essential singularity in the history of the race beyond which human affairs as we know them could not continue”. This notion of singularity coincides in time and nature with Alan Turing’s (1950) and Stephen Hawking’s (1998) expectation of machines that exhibit intelligence on a par with the average human no later than 2050. Irving John Good (1965) and Vernor Vinge (1993) expect the singularity to take the form of an ‘intelligence explosion’, a process in which intelligent machines design ever more intelligent machines. Transhumanists suggest a parallel or alternative, explosive process of improvements in human intelligence. And Alvin Toffler’s Third Wave (1980) forecasts “a collision point in human destiny” the scale of which, in the course of history, is on a par only with the agricultural revolution and the industrial revolution.

We invite submissions describing systematic attempts at understanding the likelihood and nature of these projections. In particular, we welcome papers critically analyzing the following issues from philosophical, computational, mathematical, scientific and ethical standpoints:

  • Claims and evidence for acceleration
  • Technological predictions (critical analysis of past and future)
  • The nature of an intelligence explosion and its possible outcomes
  • The nature of the Technological Singularity and its outcome
  • Safe and unsafe artificial general intelligence and preventative measures
  • Technological forecasts of computing phenomena and their projected impact
  • Beyond the ‘event horizon’ of the Technological Singularity
  • The prospects of transhuman breakthroughs and likely timeframes

Amnon H. Eden, School of Computer Science & Electronic Engineering, University of Essex, UK and Center For Inquiry, Amherst NY

Mar 27, 2010

Critical Request to CERN Council and Member States on LHC Risks

Posted by in categories: complex systems, cosmology, engineering, ethics, existential risks, particle physics, policy

Experts regard safety report on Big Bang Machine as insufficient and one-dimensional

International critics of the high energy experiments planned to start soon at the particle accelerator LHC at CERN in Geneva have submitted a request to the Ministers of Science of the CERN member states and to the delegates to the CERN Council, the supreme controlling body of CERN.

The paper states that several risk scenarios (which have to be described as global or existential risks) cannot currently be excluded. Under present conditions, the critics have to speak out against operation of the LHC.

The submission includes assessments from experts in the fields markedly missing from the physicist-only LSAG safety report: risk assessment, law, ethics and statistics. Further weight is added because these are all university-level experts, from Griffith University, the University of North Dakota and Oxford University respectively. In particular, the critics point out that CERN’s official safety report lacks independence (all its authors have a prior interest in the LHC running) and that the report relies solely on physicist authors, when modern risk-assessment guidelines recommend including risk experts and ethicists as well.

Continue reading “Critical Request to CERN Council and Member States on LHC Risks” »

Mar 23, 2010

Risk intelligence

Posted by in categories: education, events, futurism, geopolitics, policy, polls

A few months ago, my friend Benjamin Jakobus and I created an online “risk intelligence” test at http://www.projectionpoint.com/. It consists of fifty statements about science, history, geography, and so on, and your task is to say how likely you think it is that each of these statements is true. We calculate your risk intelligence quotient (RQ) on the basis of your estimates. So far, over 30,000 people have taken our test, and we’re currently writing up the results for some peer-reviewed journals.

Now we want to take things a step further, and see whether our measure correlates with the ability to make accurate estimates of future events. To this end we’ve created a “prediction game” at http://www.projectionpoint.com/prediction_game.php. The basic idea is the same; we provide you with a bunch of statements, and your task is to say how likely you think it is that each one is true. The difference is that these statements refer not to known facts, but to future events. Unlike the first test, nobody knows whether these statements are true or false yet. For most of them, we won’t know until the end of the year 2010.

For example, how likely do you think it is that this year will be the hottest on record? If you think this is very unlikely you might select the 10% category. If you think it is quite likely, but not very likely, you might put the chances at 60% or 70%. Selecting the 50% category would mean that you had no idea how likely it is.
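
The post does not spell out how RQ is computed, so the following is only a minimal sketch of one standard way to score probability estimates of this kind: a Brier-style calibration score. The formula and numbers below are illustrative assumptions, not the actual projectionpoint.com method.

```python
# Minimal sketch of scoring probability estimates against known outcomes.
# This is NOT the actual RQ formula used by projectionpoint.com, just a
# standard Brier-score illustration with made-up data.

def brier_score(estimates, outcomes):
    """Mean squared error between stated probabilities and actual outcomes (0 or 1).
    0.0 is perfect; always answering 50% yields 0.25."""
    return sum((p - o) ** 2 for p, o in zip(estimates, outcomes)) / len(estimates)

# Stated probabilities that each statement is true, and whether it turned out true.
estimates = [0.9, 0.1, 0.6, 0.5, 0.3]
outcomes  = [1,   0,   1,   0,   0]

print(f"Brier score: {brier_score(estimates, outcomes):.3f}")  # lower is better
```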

This is ongoing research, so please feel free to comment, criticise or make suggestions.

Mar 12, 2010

Reduction of human intelligence as global risk

Posted by in categories: existential risks, neuroscience

Another risk is the loss of human rationality while human life is preserved. In any society there are many people with limited cognitive abilities, and most achievements are made by a small number of talented people. Genetic and social degradation, a reduced level of education, and loss of skills in logic could lead to a temporary decrease in the intelligence of individual groups of people. But as long as humanity’s population is very large, this is not so bad, because there will always be enough intelligent people. A significant drop in population after a non-global disaster may exacerbate this problem, and the low intelligence of the remaining people would reduce their chances of survival. Of course, one can imagine the absurd scenario in which people degrade so far that a new species without full-fledged intelligence evolves from us, and only later does that species develop a new intelligence of its own.
More dangerous is a decline of intelligence caused by the spread of technological contaminants (or the use of a certain weapon). For example, I should mention the constantly growing global contamination by arsenic, which is used in various technological processes. Sergio Dani wrote about this in his article “Gold, coal and oil”: http://sosarsenic.blogspot.com/2009/11/gold-coal-and-oil-reg…is-of.html, http://www.medical-hypotheses.com/article/S0306-9877(09)00666-5/abstract
Arsenic released during the mining of gold remains in the biosphere for millennia. Dani links arsenic to Alzheimer’s disease. In another paper he shows that increasing concentrations of arsenic lead to an exponential increase in the incidence of Alzheimer’s disease. He believes that people are particularly vulnerable to arsenic poisoning because they have large brains and long lifespans. If, however, as Dani argues, people adapt to high levels of arsenic in the course of evolution, this will lead to a decline in brain size and life expectancy, with the result that human intellect will be lost.
In addition to arsenic, contamination by many other neurotoxic substances occurs: CO, CO2, methane, benzene, dioxin, mercury, lead, etc. Although the level of pollution by each of them separately is below health standards, the sum of their impacts may be larger. One proposed reason for the fall of the Roman Empire was the widespread poisoning of its citizens (though not of the barbarians) by lead from water pipes. Of course, the Romans could not have known about these remote and unforeseen consequences, but we likewise may not know about many consequences of our own activities.
Alcohol and most drugs also lead to dementia, as do many medications (for example, dementia is listed as a side effect on the package inserts of some heartburn remedies). So do rigid ideological systems, or memes.
A number of infections, particularly prion infections, also lead to dementia.
Despite all this, the average IQ of people is growing, as is life expectancy.

Mar 10, 2010

Why AI could fail?

Posted by in category: robotics/AI

AI is our best hope for long-term survival. If we fail to create it, that failure will happen for some reason. Here I offer a complete list of possible causes of failure, though I do not believe in them. (I was inspired by V. Vinge’s article “What if the Singularity does not happen?”)

I think most of these points are wrong, and AI will finally be created.

Technical reasons:
1) Moore’s Law will stop for physical reasons before hardware powerful and cheap enough for artificial intelligence can be built.
2) Silicon processors are less efficient than neurons for creating artificial intelligence.
3) The AI problem cannot be algorithmically parallelized, and as a result AI will be extremely slow (see the sketch below this list).
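
Point 3 is essentially an Amdahl's-law worry. The sketch below, with an arbitrary serial fraction chosen only for illustration, shows how even a small non-parallelizable share caps the speedup no matter how much hardware is thrown at the problem:

```python
# Amdahl's law: if a fraction `serial_fraction` of the work cannot be
# parallelized, the speedup from N processors is bounded by 1 / serial_fraction.
# The 5% serial fraction here is arbitrary, purely for illustration.

def amdahl_speedup(serial_fraction, n_processors):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

for n in (10, 1_000, 1_000_000):
    print(f"{n:>9} processors -> speedup {amdahl_speedup(0.05, n):6.1f}x")
# With a 5% serial fraction the speedup never exceeds 20x,
# which is the concern behind point 3.
```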

Philosophy:
4) Human beings use some method of processing information that is essentially inaccessible to algorithmic computers, as Penrose believes. (But we could harness this method using bioengineering techniques.) Generally, a final acknowledgement of the impossibility of creating artificial intelligence would be tantamount to acknowledging the existence of the soul.
5) A system cannot create a system more complex than itself, and so people cannot create artificial intelligence, since all the proposed solutions are too simple. That is, AI is in principle possible, but people are too stupid to build it. In fact, one reason for past failures in creating artificial intelligence is that people underestimated the complexity of the problem.
6) AI is impossible, because any sufficiently complex system reveals the meaninglessness of existence and stops.
7) All possible ways to optimize are exhausted. AI has no fundamental advantage over the human-machine interface and has a limited scope of use.
8) A human in a body has the maximum possible level of common sense, and any incorporeal AI is either ineffective or merely a model of a person.
9) AI is created, but there are no problems it could and should address. All problems have already been solved by conventional methods or proven uncomputable.
10) AI is created, but it is not capable of recursive self-optimization, since this would require radically new ideas that never appear. As a result, AI exists either as a curiosity or in limited specific applications, such as automated drivers.
11) The idea of artificial intelligence is flawed because it has no precise definition, or is even an oxymoron, like “artificial natural”. As a result, specific goal-oriented systems or models of man are developed, but not universal artificial intelligence.
12) There is an upper limit to the complexity of systems beyond which they become chaotic and unstable, and it only slightly exceeds the intellect of the most intelligent people. AI development slowly approaches this threshold of complexity and stalls there.
13) The bearer of intelligence is qualia. At our level of intelligence there must be many events that are indescribable and unknowable, yet a superintellect should understand them by definition; otherwise it is not a superintellect but simply a fast intellect.

Continue reading “Why AI could fail?” »

Mar 6, 2010

Reflections on Avatar

Posted by in category: futurism

I recently watched James Cameron’s Avatar in 3D. It was an enjoyable experience in some ways, but overall I left dismayed on a number of levels.

It was enjoyable to watch the lush three-dimensional animation and motion-capture-controlled graphics. I’m not sure that 3D will take over, as many now expect, until we get rid of the glasses (and there are emerging technologies to do that, albeit the 3D effect is not yet quite as good), but it was visually pleasing.

While I’m being positive, I was pleased to see Cameron’s positive view of science, in that the scientists are the “good” guys (or at least one good gal) with noble intentions of learning the wisdom of the Na’vi natives and of negotiating a diplomatic solution.

The Na’vi were not completely technology-free. They basically used the type of technology that Native Americans used hundreds of years ago – same clothing, domesticated animals, natural medicine, and bows and arrows.

Continue reading “Reflections on Avatar” »

Feb 19, 2010

Small steps that can make a difference on global catastrophes

Posted by in category: existential risks

Danila Medvedev asked me to make a list of actual projects that can reduce the likelihood of global catastrophe.

EDITED: This list reflects only my personal opinion, not the opinion of LF. The suggested ideas are not final; further discussion of them is needed. These ideas are mutually independent.

1. Create the book “Guide to the Restoration of Civilization”, which describes all the knowledge necessary for hunting, industry and mining, and all the warnings about risks, for the case of a collapse of civilization. Test its different sections on volunteers. Print the book on stone/metal/other durable media in many copies throughout the world. Bury caches with tools/books/seeds in different parts of the world. Cost: 1–100 million USD. Reduction of the probability of extinction (assuming the real prior probability is 50% in the XXI century): 0.1%.
2. Collect money for the work of the Singularity Institute on creating a Friendly AI. They need 3 million dollars. This project has the maximum cost-to-impact ratio; that is, it can really increase the chances of survival of humanity by about 1 percent. (This is determined by the product of the estimated probabilities of the relevant events: that AI is possible, that SIAI will solve the problem, that it solves it first, that it solves the friendliness problem, and that the money it has will be enough. A toy version of this product appears in the sketch after this list.)
3. Cryopreserve a few people in the ice of Antarctica (where the temperature is −57 C; in addition, a stable region of lower temperature could be created by pumping in liquid nitrogen for cooling), so that if another advanced civilization arises on Earth, it could revive them. Cost: several million dollars. Another project in this spirit for the preservation of human knowledge is the LongNow Foundation’s proposed titanium discs with recorded information.
4. Send human DNA to the Moon in a stable time capsule. Several tens of millions of dollars. A cryopreserved human brain could also be sent. The idea here is that if mankind perishes, then someday aliens may arrive and revive people based on these data. Cost: 20–50 million dollars; probability of success: 0.001%. Human DNA could also be sent into space in other ways.
5. Accelerated development of universal vaccines. Creation of world reserves of powerful means of decontamination in the event of a global epidemic, and stockpiling of antiviral drugs and vaccines against the majority of known viruses, enough for a large part of humanity. Establishment of virus monitoring and instant diagnosis (test strips). Creation and production of many billions of advanced disinfecting tools such as personal UV lamps, nanotech face masks, gloves, etc. Billions or hundreds of billions of dollars a year. Creation of personal stockpiles of food and water in each house for a month. Development of supply systems that require no contact between people. A switch to slow global transport (ships) in the event of a pandemic. Training of medical personnel and creation of spare beds in hospitals. Creation and testing on real problems of huge factories that in a few weeks could develop and produce billions of doses of vaccines. Improvement of legislation in the field of quarantine. There are also risks. Increases the probability of survival by 2–3 percent.
6. Create a self-contained bunker with a supply of food for several decades and with permanent “crews” able to restore humanity. About $1 billion. Preserve the types of resources that humanity could use for recovery in the post-apocalyptic stage.
7. Creation of a scientific court for the Hadron Collider and other potentially dangerous projects, in which theoretical physicists would be paid large sums of money for discovering potential vulnerabilities.
8. Adaptation of the ISS to function as a bunker in case of disasters on Earth: creation of a series of additional ISS modules that could support the existence of a crew for 10 years. Cost: tens of billions of dollars.
9. Creation of an autonomous, self-sustaining base on the Moon. At the present level of technology, about $1 trillion or more. Proper development of a space-exploration strategy (investment in new types of engines and cheap means of delivery) would make it cheaper. Increases survival by 1 percent. (But there are also new risks.)
10. The same on Mars. Several trillion dollars. Increases survival by 1–2 percent.
11. Creation of a nuclear interstellar Ark ship: tens of trillions of dollars. Increases survival by 1–2 percent.
12. (The following are items for which money alone is not enough; political will is also needed.) Destruction of rogue states and the establishment of a world state. 10 percent increase in survival. However, there are high risks in the process.
13. Creation of a global center for rapid response to global risks, something like Special Forces or a Ministry of Emergency Situations that can be thrown at global risks. Enable it to act instantly, including military action, as well as intelligence gathering. Give it a veto over dangerous experiments. Strengthen civil defense in this field.
14. A ban on private science (in the sense of science in the garage) and the creation of several centers of certified science (science towns with centralized control of security over the work) with a high level of funding for breakthrough research, in the fields of biotechnology, nuclear technology, artificial intelligence and nanotechnology. This would help prevent the dissemination of knowledge of mass destruction without stopping progress. It is possible only after the abolition of nation states. A few percent increase in survival. These science towns could freely exchange technical information among themselves, but would not have the right to release it to the outside world.
15. Legislation requiring the duplication of vital resources and activities, which would make a domino-style collapse of civilization from a failure at one point impossible. A ban on super-complex systems of social organization, whose behavior is unpredictable and too prone to domino effects, replacing them with linear, repetitive production systems; that is, opposition to economic globalization.
16. Certification and licensing of researchers in bio, nano, AI and nuclear technologies. A legislative requirement to check all of their own and others’ inventions for the global risks associated with them, and a commitment to develop means of protection in case their inventions go out of control.
17. A law on raising the intelligence of people: half the population conceived through fertilization from a few hundred fathers who are the best in terms of intelligence, common sense and aversion to risk. (The other half would breed in the usual manner to maintain genetic diversity; the project would be implemented without violence, through cash payments.) Plus education reform, in which school is replaced by a training system that gives an important role to common sense and knowledge of logic.
18. Limitation of capitalist competition as the engine of the economy, because it leads to underestimation of long-term risk.
19. Leading investment in breakthrough fields like nanotechnology at the best and most critical facilities, in order to slip through the dangerous period quickly.
20. Growth of systems of total information control and surveillance, plus certification of the data in them and pattern recognition. Control of the Internet and personal authorization for network logins. Continuous monitoring of all persons who possess potentially dangerous knowledge.
Another step could be creating a global think tank of the best experts on global risks and setting them the objective of developing a positive scenario. It is necessary to understand which way of combining these specialists would be most effective, so that A) they do not eat each other because of different ideas and feelings of self-importance; B) the project does not become a money trough; C) they nevertheless receive money for the work, allowing them to concentrate fully on this issue. That is, it should be something like an edited journal, a wiki, a scientific court or a prediction market. But the way of associating them should not be too exotic, and exotic approaches should be tested first on less important matters.
However, the creation of a model of global risk that is accurate and credible to all would reduce the probability of global catastrophe by at least half. And we are still at the stage of creating such a model. Therefore, how to create such models, and how to make them authoritative, are now the most important questions, though time may already have been lost.
I emphasize that the main problem of global risks lies in the sphere of knowledge rather than the sphere of action. That is, the main problem is that we do not know what we should prepare for, not that we lack instruments of defence. Risks are removed by knowledge and expertise.
Implementation of these measures is technically and economically possible and could, by my estimate, reduce the chance of extinction in the XXI century tenfold.
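
As promised in item 2, here is a toy illustration of how that "product of estimated probabilities" works. The individual numbers below are placeholders, not the figures used in the post; only the multiplicative structure comes from item 2.

```python
# Toy illustration of item 2's "product of estimates".
# The individual probabilities are placeholders chosen for illustration;
# only the multiplicative structure comes from the post.

p_agi_possible       = 0.5   # AI of this kind can be built at all
p_siai_solves_it     = 0.2   # the institute solves the technical problem
p_solves_it_first    = 0.3   # and does so before anyone else
p_friendliness_works = 0.5   # the friendliness part actually holds
p_funding_sufficient = 0.7   # the money raised is enough

p_impact = (p_agi_possible * p_siai_solves_it * p_solves_it_first
            * p_friendliness_works * p_funding_sufficient)

print(f"chance the donation changes the outcome: {p_impact:.3%}")
# With these placeholder numbers the product is about 1%,
# the same order of magnitude as the estimate given in item 2.
```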

Any ideas or missed projects?

Jan 18, 2010

Filling the Gaps in “Global Trends 2025”

Posted by in categories: futurism, geopolitics, nanotechnology

Because of the election cycle, the United States Congress and Presidency have a tendency to be short-sighted. It is therefore a welcome relief when an organization such as the U.S. National Intelligence Council gathers many smart people from around the world to do some serious thinking more than a decade into the future. But while the authors of the NIC report Global Trends 2025: A Transformed World[1] understood the political situations of countries around the world extremely well, their report lacked two things:

1. Sufficient knowledge about technologies (especially productive nanosystems) and their second-order effects.

2. A clear and specific understanding of Islam and the fundamental cause of its problems. More generally, an understanding of the relationship between its theology, technological progress, and cultural success.
These two gaps need to be filled, and this white paper attempts to do so.

Technology
Christine Peterson, the co-founder and vice-president of the Foresight Nanotech Institute, has said, “If you’re looking ahead long-term, and what you see looks like science fiction, it might be wrong. But if it doesn’t look like science fiction, it’s definitely wrong.” None of Global Trends 2025’s predictions looks like science fiction, though perhaps 15 years from now is not long-term (on the other hand, 15 years is not short-term either).

Continue reading “Filling the Gaps in "Global Trends 2025"” »

Dec 30, 2009

Ark-starship – too early or too late?

Posted by in categories: existential risks, lifeboat, space

It is interesting to note that the technical possibility of sending an interstellar Ark appeared in the 1960s, based on Ulam’s concept of the “blast-ship”, which uses the energy of nuclear explosions to move forward. Detailed calculations were carried out under Project Orion. http://en.wikipedia.org/wiki/Project_Orion_(nuclear_propulsion) In 1968 Dyson published the article “Interstellar Transport”, which gives upper and lower bounds for such projects. In the conservative estimate (i.e. assuming no new technical achievements) it would cost 1 U.S. GDP (600 billion U.S. dollars at the time of writing) to launch a spaceship with a mass of 40 million tonnes (of which 5 million tonnes is payload), and its flight time to Alpha Centauri would be 1,200 years. In a more advanced version the price is 0.1 U.S. GDP, the flight time 120 years, and the starting weight 150,000 tonnes (of which 50,000 tonnes is payload). In principle, using a two-stage scheme, more advanced thermonuclear bombs and reflectors, the flight time to the nearest star could be reduced to 40 years.
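
As a quick sanity check on Dyson's figures, the implied average cruise speeds follow directly from the distance to Alpha Centauri (about 4.37 light-years); a minimal calculation:

```python
# Implied average cruise speed of the Orion-style Ark from Dyson's figures.
# Distance to Alpha Centauri is roughly 4.37 light-years.

DISTANCE_LY = 4.37

for label, years in (("conservative design", 1200), ("advanced design", 120)):
    fraction_of_c = DISTANCE_LY / years
    print(f"{label}: {fraction_of_c:.3%} of light speed")
# Roughly 0.4% of c for the 1,200-year ship and 3.6% of c for the 120-year ship.
```
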
Of course, the crew of the spaceship is doomed to extinction if they do not find a habitable planet fit for humans in the nearest star system. Another option is that it colonizes an uninhabited planet. In 1980 R. Freitas proposed lunar exploration using a self-replicating factory with an original mass of 100 tons, though controlling it requires artificial intelligence (“Advanced Automation for Space Missions”, http://www.islandone.org/MMSG/aasm/). Artificial intelligence does not yet exist, but the management of such a factory could be carried out by people. The main question is how much technology and equipment would have to be thrown onto a moonlike uninhabited planet so that people could build a completely self-sustaining and growing civilization on it. It amounts to creating something like an inhabited von Neumann probe. A modern self-sustaining state includes at least a few million people (like Israel), with hundreds of tons of equipment per person, mainly in the form of houses and roads; the weight of machines is much smaller. This gives an upper bound for a human colony able to replicate of about 1 billion tons. The lower estimate is about 100 people, each accounting for approximately 100 tons (mainly food and shelter), i.e. 10,000 tons of mass. A realistic assessment should be somewhere in between, probably in the tens of millions of tons. All this under the assumption that no miraculous nanotechnology has yet been discovered.
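
The upper and lower bounds quoted above follow from simple per-person mass estimates; here is a minimal restatement of that arithmetic, with the per-person figures chosen to be consistent with the "few million people, hundreds of tons each" and "100 people, 100 tons each" assumptions in the paragraph:

```python
# Rough bounds on the mass of a self-replicating human colony,
# restating the arithmetic from the paragraph above.

def colony_mass_tons(people, tons_per_person):
    return people * tons_per_person

upper = colony_mass_tons(people=5_000_000, tons_per_person=200)  # Israel-scale state, hundreds of tons each
lower = colony_mass_tons(people=100, tons_per_person=100)        # minimal crew, mostly food and shelter

print(f"upper bound: {upper:,} tons")   # about 1 billion tons
print(f"lower bound: {lower:,} tons")   # 10,000 tons
```
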
The advantage of a spaceship as an Ark is that it is a non-specific response to a host of different threats with indeterminate probabilities. If there is some specific threat (an asteroid, an epidemic), then it is better to spend the money on removing it.
Thus, if such a decision had been taken in the 1960s, such a ship could already be on its way.
But if we set aside the technical side of the issue, there are several trade-offs among strategies for creating such a spaceship.
1. The sooner such a project is started, the less technically advanced it will be, the lower its chances of success and the higher its cost. But the later it is initiated, the greater the chance that it will not be completed before a global catastrophe.
2. The later the project starts, the greater the chance that it will carry the “diseases” of the mother civilization with it (e.g. the ability to create dangerous viruses).
3. The project to create a spaceship could lead to the development of technologies that threaten civilization itself. The blast-ship uses hundreds of thousands of hydrogen bombs as fuel. Therefore it could either be used as a weapon, or another party may fear it and respond. In addition, the spaceship could turn around and hit the Earth like a star-hammer, or there may be fear that it will. During construction of the spaceship, man-made accidents with enormous consequences could happen, at most equal to the detonation of all the bombs on board. If the project is implemented by one country in a time of war, other countries could try to shoot the spaceship down when it launches.
4. The spaceship is a means of protection against a Doomsday machine, as a strategic response in Kahn’s style. Therefore, the creators of such a Doomsday machine may perceive the Ark as a threat to their power.
5. Should we implement one more expensive project, or a few cheaper ones?
6. Is it sufficient to limit colonization to the Moon, Mars, Jupiter’s moons or objects in the Kuiper belt? At the least these could serve as fallback positions at which the technology of autonomous colonies can be tested.
7. The sooner the spaceship starts, the less we know about exoplanets. How far and how fast should the Ark fly in order to be in relative safety?
8. Could the spaceship hide itself so that Earth does not know where it is, and should it do so? Should the spaceship communicate with Earth, or is there a risk of attack by a hostile AI in that case?
9. Would the creation of such projects not exacerbate the arms race or lead to premature depletion of resources and other undesirable outcomes? The creation of pure hydrogen bombs would simplify the creation of such a spaceship, or at least reduce its cost. But at the same time it would increase global risks, because nuclear non-proliferation would suffer complete failure.
10. Will the Earth in the future compete with its independent colonies or will this lead to Star Wars?
11. If the ship departs slowly enough, is it possible to destroy it from Earth with a self-propelled missile or a radiation beam?
12. Is this mission a real chance for the survival of mankind? Those who fly away are likely to perish, because the chance of success of the mission is no more than 10 percent. Those remaining on Earth may start to behave more riskily, on the logic: “Well, since we have protection against global risks, now we can start risky experiments.” As a result of the project, the total probability of survival decreases.
13. What are the chances that the Ark’s computer network will pick up a virus if it communicates with Earth? And if it does not communicate, that will reduce its chances of success. Competition for nearby stars is possible, and faster ships would win it. After all, there are not many stars within a distance of about 5 light-years (Alpha Centauri, Barnard’s star), and competition for them could begin. The existence of dark lone planets or large asteroids without host stars is also possible; their density in the surrounding space should be 10 times greater than the density of stars, but finding them is extremely difficult. It would also be a problem if the nearest stars had no planets or moons. Some stars, including Barnard’s, are prone to extreme stellar flares, which could kill the expedition.
14. The spaceship will not protect people from a hostile AI that finds a way to catch up with it. Also, in case of war, starships may be prestigious and easily vulnerable targets: an unmanned rocket will always be faster than a spaceship. If arks are sent to several nearby stars, this does not ensure their secrecy, as the destinations will be known in advance. A phase transition of the vacuum, an explosion of the Sun or Jupiter, or another extreme event could also destroy the spaceship. See e.g. A. Bolonkin, “Artificial Explosion of Sun. AB-Criterion for Solar Detonation” http://www.scribd.com/doc/24541542/Artificial-Explosion-of-S…Detonation
15. However, the spaceship is too expensive a protection against many other risks that do not require such distant removal. People could hide from almost any pandemic on well-isolated islands in the ocean. People could hide on the Moon from gray goo, asteroid impact, a supervolcano or irreversible global warming. The Ark-spaceship will carry with it the problems of genetic degradation, the propensity for violence and self-destruction, and the problems associated with limited human outlook and cognitive biases. The spaceship would only aggravate the problems of resource depletion, wars and the arms race. Thus, the set of global risks against which the spaceship is the best protection is quite narrow.
16. And most importantly: does it make sense to begin this project now? In any case, there is no time to finish it before new risks, and new ways of creating spaceships using nanotech, become real.
Of course, it is easy to envision a nano- and AI-based Ark: it would be as small as a grain of sand, carry only one human egg or even just DNA information, and could self-replicate. The main problem with it is that it could be created only AFTER the most dangerous period of human existence, which is the period just before the Singularity.