Oct 8, 2009

Fermi Paradox and global catastrophes

Posted by Alexei Turchin in category: existential risks

The main ways of resolving the Fermi Paradox are:
1) They are already here (at least in the form of their signals).
2) They do not spread through the universe, leave no traces, and send no signals; that is, they never start a shock wave of intelligence.
3) Civilizations are extremely rare.
An additional line of thought is 4): we are a unique civilization because of observation selection.
All of these carry grim implications for global risk:
In the first case, we are under threat of conflict with superior aliens.
1a) If they are already here, we could do something that encourages them to destroy or restrict us: for example, switch off the simulation, or trigger a program of berserker probes. These probes could be nanobots. In fact, they could be something like “space gray goo” with low intelligence but very wide spread; it could even be in my room. Its only goal might be to destroy other nanobots (much as our Nanoshield would do), so we would not see it until we create our own nanobots.
1b) If they arrive in our star system right now and, moreover, are intent on total colonization of all systems, we will also have to fight them and are likely to lose. This is improbable.
1c) A large portion of civilizations may be infected with a SETI-virus and distribute signals specially designed to infect naive civilizations, that is, to encourage them to create a computer with an AI aimed at replicating itself further through SETI channels. This is what I describe in the article “Is SETI dangerous?”: http://www.proza.ru/texts/2008/04/12/55.html
1d) By sending METI signals we attract the attention of a dangerous civilization, which then sends a “beam of death” toward the solar system (possibly what we observe as gamma-ray bursts). This scenario seems unlikely: in the time it takes them to receive the signal and react, we would have time to fly away from the solar system, if they are far away. And if they are close, it is not clear why they are not here already. Still, this risk has been intensely discussed, for example by D. Brin.
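To make the timing argument in 1d concrete, here is a minimal back-of-the-envelope sketch in Python. The distances are my illustrative assumptions, not figures from the post; the point is only that the warning time is at least twice the distance in light-years.
```python
# Toy round-trip timing for the METI scenario in 1d.
# Assumption (illustrative only): the dangerous civilization sits at a
# fixed distance, and both our signal and their "beam of death" travel
# at light speed.

def min_warning_years(distance_ly: float) -> float:
    """Minimum years between our transmission and any response arriving:
    one light-crossing out plus one back, ignoring their reaction time."""
    return 2.0 * distance_ly

for d_ly in (100, 1_000, 10_000):
    print(f"{d_ly:>6} ly away -> at least {min_warning_years(d_ly):>6.0f} years of warning")
```
Even a civilization only 100 light-years away could not strike us sooner than two centuries after our broadcast, which is why distant attackers leave us time to disperse, while nearby ones raise the question of why they are not here already.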
2) They do not spread in space. This means that either:
2a) Civilizations almost inevitably destroy themselves at a very early stage, before they can launch a wave of replicating robots, and we are no exception. This is reinforced by the Doomsday Argument: the fact that I find myself in a young civilization suggests that young civilizations are much more common than old ones (a standard form of this argument is sketched after 2b). However, given the expected pace of development in nanotechnology and artificial intelligence, we could launch a wave of replicators within 10–20 years, and even if we died afterwards, that wave would continue to spread throughout the universe. Given how unevenly civilizations develop, it is hard to believe that none of them managed to launch a wave of replicators before dying. This is possible only if: a) we fail to see an inevitable and universal threat looming directly over us in the near future; b) we significantly underestimate the difficulty of creating artificial intelligence and nanoreplicators; or c) the energy of the inevitable destruction is so great that it destroys all the replicators the civilization launched, that is, something on the order of a supernova explosion.
2b) Every civilization sharply limits itself. This limitation must be very strict and very long-lasting, since launching even a single replicator probe is simple enough. Such a restriction could rest either on a powerful totalitarianism or on extreme depletion of resources. In this case too our prospects are quite unpleasant. But this solution is not very plausible.
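For readers unfamiliar with the Doomsday Argument invoked in 2a, here is its standard “delta-t” form due to J. R. Gott; this is a textbook version, not anything specific to this post.
```latex
% Gott's delta-t form of the Doomsday Argument (a standard result).
% Assumption: my birth rank r is a uniform random draw from the N
% observers who will ever live, so r/N is roughly uniform on (0,1].
\[
  P\!\left(\frac{r}{N} > 0.05\right) = 0.95
  \quad\Longrightarrow\quad
  P\!\left(N < 20\,r\right) = 0.95 .
\]
% With 95% confidence, the total number of observers N is less than
% twenty times the number born so far.
```
Finding ourselves early, with a small birth rank, therefore argues against a long future; this is the independent support referred to in the closing paragraph below.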
3) If civilizations are rare, the universe is a much less friendly place to live than it seems, and we sit on an island of stability that is likely an exception to the rule. This may mean that we underestimate how long the processes important to us (the solar luminosity, the Earth’s crust) will remain stable and, most importantly, how robust those processes are to small influences, that is, their fragility. We could inadvertently push them past their levels of resistance by carrying out geo-engineering activities, complex physics experiments, and the colonization of space. I say more about this in the article “Why the anthropic principle stopped defending us. Observation selection and fragility of our environment”: http://www.scribd.com/doc/8729933/Why-antropic-principle-sto…vironment– See also the works of M. Cirkovic on the same subject.
However, this fragility is not inevitable; it depends on which factors were critical in the Great Filter. Moreover, we would not necessarily press on this fragile spot even if it exists.
4) Observation selection makes us a unique civilization.
4a) We are the first civilization, because any civilization that arises first captures the whole galaxy. Likewise, earthly life is the first life on Earth, because the first life consumes all the pools of nutrient broth in which other life could have appeared. In any case, sooner or later we would run into another “first” civilization (a rough timescale comparison is sketched after 4c).
4b) The vast majority of civilizations are destroyed in the process of colonizing the galaxy, so we can only find ourselves in a civilization that, by chance, has not been destroyed. The obvious risk here is that whoever made this mistake would try to correct it.
4c) We wonder about the absence of contact precisely because we are not in contact. That is, we are in a unique position, which does not permit any conclusions about the nature of the universe. This clearly contradicts the Copernican principle.
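To see why, in 4a, the first civilization plausibly captures the whole galaxy before a second one can appear, compare the galaxy-crossing time of a replicator wave with the age of the galaxy. The numbers below are my illustrative assumptions, not figures from the post.
```python
# Rough timescale comparison for 4a: a self-replicating probe wave
# versus the age of the galaxy. All inputs are illustrative assumptions.

GALAXY_DIAMETER_LY = 100_000    # Milky Way disk diameter, light-years
GALAXY_AGE_YEARS = 1.0e10       # order-of-magnitude age of the galaxy
WAVE_SPEED_FRACTION_C = 0.1     # assumed net expansion speed: 0.1 c

crossing_time_years = GALAXY_DIAMETER_LY / WAVE_SPEED_FRACTION_C
print(f"Galaxy crossing time: {crossing_time_years:.0e} years")
print(f"Fraction of galactic age: {crossing_time_years / GALAXY_AGE_YEARS:.0e}")
```
Even at a tenth of light speed, the wave crosses the galaxy in about a million years, roughly one ten-thousandth of the galaxy’s age, so the window in which a second independent civilization could arise before the first one’s wave reaches it is very narrow.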
The worst variant for us here is 2a, imminent self-destruction, which has independent support from the Doomsday Argument but is undermined by the fact that we do not see alien von Neumann probes. I still believe that the most likely scenario is Rare Earth.


Comments — comments are now closed.


  1. Ian G says:

    My guess is that, firstly, intelligent life like ours, with the potential to spread through the galaxy, is very rare. Then most probably destroy themselves in the gap between developing the technology for self-extinction (which we now have) and developing the tech to survive in space and/or go interstellar (an unknown time in the future; maybe longer than you think).

    I think this may in effect be a sort of selection filter. I would like to be hopeful that we’ll survive this stage, but sadly I fear the odds are not on our side.
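
    The “gap” described above can be turned into a toy survival model. This sketch is purely illustrative; the annual-risk and gap-length numbers are assumptions, not anything stated in the comment.

    ```python
    # Toy model of the "vulnerable gap": a civilization survives only if
    # no self-caused extinction event occurs between acquiring
    # extinction-level technology and going interstellar.
    # All numbers are illustrative assumptions.

    def survival_probability(annual_risk: float, gap_years: int) -> float:
        """Chance of surviving the whole gap, treating years as independent."""
        return (1.0 - annual_risk) ** gap_years

    for risk, gap in [(0.001, 200), (0.005, 500), (0.01, 1000)]:
        p = survival_probability(risk, gap)
        print(f"annual risk {risk:.1%}, gap {gap:>4} yr -> survival {p:.2%}")
    ```

    Even a modest annual risk drives the survival odds down sharply once the gap is measured in centuries, which is the sense in which the odds may not be on our side.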

    For those that do, there could be many reasons why they then don’t expand or choose to limit their expansion. They could have achieved some sort of long-term stable, static society. Or they might consider that those who left would soon develop to be unlike them (their own aliens), and so at best they might lack the motivation. They might fear that the future potential risk of extinction from those colonists would outweigh any benefit, including the reduced risk from solar flares, supernovae, etc. Or they might choose to limit their expansion in order to try to remain ‘hidden’ and reduce the risk of coming into contact with other aliens, even knowing that they themselves might be the first. Or they might have developed some form of unity which limits them from expanding over too large a volume or too high a population.

    It’s possible that some do expand more widely, maybe throughout their whole galaxy and beyond. They or their probes could be observing us now. They might intervene or make contact with us as we become more technologically advanced. Again, there are many reasons why they may not, at least during our current stage of development.

    They may have developed to be non-violent (through the earlier selection filter; their internal unity and even survival may depend on applying that even to us). Related to this, they may not want to interfere with us in any way which might bias how that filter applies to us, one way or the other. Or, being so far ahead of us, it is possible that we can’t ever compete with or threaten them (which needn’t necessarily limit our expansion, as they may be using different resources, or only a limited proportion of them, or leave for some reason). Another possibility is that our greatest value to them is the ability to observe us and our ‘natural’ development without damaging, limiting or ‘polluting’ it. Perhaps they are comparing us to others and to their own distant history.

    That last would be my preferred scenario. Or we might even be first, and can survive our risks of extinction (the greatest currently being from ourselves) and find that we can still develop and expand, and be doing this ourselves.

  2. John Hunt says:

    Even if Rare Earth is true, this does not eliminate the possibility that we’ll nonetheless destroy ourselves with self-replicating technology. So we still need to act as though this might happen, which is why I am glad for the Lifeboat Foundation.

    But it seems as though we are rapidly approaching self-replicating technology on a number of fronts (e.g. biotech, nanotech, AI, and self-replicating chemicals). At the same time we are still very far from interstellar travel. So if our existential event also makes our solar system uninhabitable, then this seems to me to be a plausible explanation for Fermi’s Paradox.

    The EGR mission is a near-term interstellar mission proposal to deal with this scenario:

    http://www.peregrinus-interstellar.net/index.php?option=com_…;Itemid=60

  3. Alexei Turchin says:

    If we create self-replicating technology, it could also replicate in space (not every such technology, but some nanobot AIs could).

  4. Alexei Turchin says:

    And we could find such technologies from other civilizations in space. So self-destruction through self-replicating technologies doesn’t explain the Fermi paradox. And what self-replicating chemical did you mean?

  5. If you led, or advised the leader(s) of, a civilization capable of interstellar travel, would you (recommend) communicating with an advancing civilization which now lacked that capability? Wouldn’t you rather observe and wait? If your civilization lacked interstellar travel but was capable of interstellar observation and communication, wouldn’t you do the same (while racing to develop a defensive capability)? I think Gene (Roddenberry) was right.

  6. John Hunt says:

    > If we create self-replicating technology, it could also replicate in space (not every such technology, but some nanobot AIs could)… And we could find such technologies from other civilizations in space. So self-destruction through self-replicating technologies doesn’t explain the Fermi paradox.

    I’m not talking about self-replicating nanobots. IF all intelligent civilizations accidentally produce self-replicating chemical ecophages well before they would develop a space-surviving, ecophagic nanobot, then this would be consistent with Fermi’s Paradox.

    > And what self-replicating chemical did you mean?

    Obviously this is unknown, since we don’t have a self-replicating ecophagic chemical in existence right now. But if you program a tabletop molecular manufacturing machine to produce chemicals of ever-increasing size, I think there is a possibility that at some point it will produce a self-replicating ecophage.
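
    Purely as an illustration of the enumeration argument above (my framing, not the commenter’s): if each candidate molecule independently had some tiny chance of being a self-replicator, the chance of producing at least one would grow with every molecule synthesized.

    ```python
    # Toy model of blind chemical enumeration (illustrative only).
    # Assumption: each synthesized candidate independently has a tiny
    # probability p of being a self-replicating ecophage.

    def p_at_least_one(p: float, n_candidates: int) -> float:
        """Probability of at least one replicator among n candidates."""
        return 1.0 - (1.0 - p) ** n_candidates

    for n in (10**6, 10**9, 10**12):
        print(f"p=1e-10, {n:.0e} candidates -> {p_at_least_one(1e-10, n):.2%}")
    ```

    Under these made-up numbers the per-molecule risk is negligible, but the cumulative risk approaches certainty once enough candidates are enumerated, which is the shape of the worry expressed above.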