
Archive for the ‘extinction’ tag: Page 3

Nov 13, 2011

D’Nile ain’t just a river in Egypt…

Posted in categories: business, complex systems, cosmology, economics, education, ethics, existential risks, finance, futurism, geopolitics, human trajectories, humor, life extension, lifeboat, media & arts, neuroscience, open access, open source, philosophy, policy, rants, robotics/AI, space, sustainability

Greetings fellow travelers, please allow me to introduce myself: I’m Mike ‘Cyber Shaman’ Kawitzky, independent filmmaker and writer from Cape Town, South Africa, one of your media/art contributors/co-conspirators.

It’s a bit daunting posting to such an illustrious board, so let me try to imagine, with you, how to regard the present with nostalgia while looking forward to the past, knowing that a millisecond away in the future exist thoughts to think; it’s the mode of neural text, reverse causality, non-locality and quantum entanglement, where the traveller is the journey into a world in transition; after 9/11, after the economic meltdown, after the oil spill, after the tsunami, after Fukushima, after 21st Century melancholia upholstered by anti-psychotic drugs helps us forget ‘the good old days’; because it’s business as usual for the 1%; the rest continue downhill with no brakes. Can’t wait to see how it all works out.

Please excuse me, my time machine is waiting…
Post cyberpunk and into Transhumanism

Apr 25, 2011

On the Problem of Modern Portfolio Theory: In Search of a Timeless & Universal Investment Perspective

Posted in categories: complex systems, economics, existential risks, finance, human trajectories, lifeboat, philosophy, policy, sustainability

Dear Lifeboat Foundation Family & Friends,

A few months back, my Aunt Charlotte wrote, wondering why I — a relentless searcher focused upon human evolution and long-term human survival strategy — had chosen to pursue a PhD in economics (Banking & Finance). I recently replied that, as it turns out, sound economic theory and global financial stability both play central roles in the quest for long-term human survival. In the fifth and final chapter of my recent Master’s thesis, On the Problem of Sustainable Economic Development: A Game-Theoretical Solution, I argued (with considerable passion) that much of the blame for the economic crisis of 2008 (which is, essentially, still upon us) may be attributed to the adoption of Keynesian economics and the dismissal of the powerful counter-arguments tabled by his great rival, F.A. von Hayek. Despite the fact that they remained friends until the very end, their theories are diametrically opposed at nearly every point. There was, however, at least one central point they agreed upon — indeed, Hayek was fond of quoting one of Keynes’ most famous maxims: “The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else” [1].

And, with this nontrivial problem and the great Hayek vs. Keynes debate in mind, I’ll offer a preview-by-way-of-prelude with this invitation to turn a few pages of On the Problem of Modern Portfolio Theory: In Search of a Timeless & Universal Investment Perspective:

It is perhaps significant that Keynes hated to be addressed as “professor” (he never had that title). He was not primarily a scholar. He was a great amateur in many fields of knowledge and the arts; he had all the gifts of a great politician and a political pamphleteer; and he knew that “the ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is generally understood. Indeed the world is ruled by little else” [1]. And as he had a mind capable of recasting, in the intervals of his other occupations, the body of current economic theory, he more than any of his compeers had come to affect current thought. Whether it was he who was right or wrong, only the future will show. There are some who fear that if Lenin’s statement is correct that the best way to destroy the capitalist system is to debauch the currency, of which Keynes himself has reminded us [1], it will be largely due to Keynes’s influence if this prescription is followed.…

Continue reading “On the Problem of Modern Portfolio Theory: In Search of a Timeless & Universal Investment Perspective” »

Apr 2, 2011

A (Relatively) Brief Introduction to The Principles of Economics & Evolution: A Survival Guide for the Inhabitants of Small Islands, Including the Inhabitants of the Small Island of Earth

Posted in categories: asteroid/comet impacts, biological, complex systems, cosmology, defense, economics, existential risks, geopolitics, habitats, human trajectories, lifeboat, military, philosophy, sustainability


“Who are you?” A simple question sometimes requires a complex answer. When a Homeric hero is asked who he is…, his answer consists of more than just his name; he provides a list of his ancestors. The history of his family is an essential constituent of his identity. When the city of Aphrodisias… decided to honor a prominent citizen with a public funeral…, the decree in his honor identified him in the following manner:

Hermogenes, son of Hephaistion, the so-called Theodotos, one of the first and most illustrious citizens, a man who has as his ancestors men among the greatest and among those who built together the community and have lived in virtue, love of glory, many promises of benefactions, and the most beautiful deeds for the fatherland; a man who has been himself good and virtuous, a lover of the fatherland, a constructor, a benefactor of the polis, and a savior.
– Angelos Chaniotis, In Search of an Identity: European Discourses and Ancient Paradigms, 2010

I realize many may not have the time to read all of this post — let alone the treatise it introduces — so for those with just a few minutes to spare, consider abandoning the remainder of this introduction and spending a few moments with a brief narrative which distills the very essence of the problem at hand: On the Origin of Mass Extinctions: Darwin’s Nontrivial Error.

Continue reading “A (Relatively) Brief Introduction to The Principles of Economics & Evolution: A Survival Guide for the Inhabitants of Small Islands, Including the Inhabitants of the Small Island of Earth” »

Nov 26, 2010

“Rogue states” as a source of global risk

Posted in categories: existential risks, geopolitics

Some countries are a threat as possible sources of global risk. First of all, we are talking about countries which have developed but poorly controlled military programs, as well as a specific motivation that drives them to create a Doomsday weapon. Usually it is a country that is under threat of attack and total conquest, and in which the system of control rests on a kind of irrational ideology.

The most striking example of such a global risk is North Korea’s effort to weaponize avian influenza (“North Korea trying to weaponize bird flu”, http://www.worldnetdaily.com/news/article.asp?ARTICLE_ID=50093), which could lead to the creation of a virus capable of destroying most of Earth’s population.

It does not really matter which is primary: irrational ideology, increased secrecy, an excess of military research, or a real threat of external aggression. Usually, all these causes go hand in hand.

The result is the appearance of conditions for creating the most exotic defenses. In addition, an excess of military scientists and equipment makes it possible for individual scientists to become, for example, bioterrorists. The high level of secrecy means that the state as a whole does not know what is being done in some of its own labs.

Continue reading “"Rogue states" as a source of global risk” »

Jun 12, 2010

My presentation at the Humanity+ summit

Posted in categories: futurism, robotics/AI

During the lunch break I exist virtually in the hall of the summit as a face on a Skype account — I didn’t get a visa and stayed in Moscow. But ironically my situation resembles what I am speaking about: the risk from a remote AI which is created by aliens millions of light years from Earth and sent via radio signals. The main difference is that they communicate one way, while I have duplex mode.

This is my video presentation on YouTube:
Risks of SETI, for Humanity+ 2010 summit

Apr 3, 2010

Natural selection of universes and risks for the parent civilization

Posted in category: existential risks

Lee Smolin is said to believe (according to a personal communication from Danila Medvedev, who was told about it by John Smart; I tried to reach Smolin for comment, but failed) that global catastrophe is impossible, based on the following reasoning: the multiverse is dominated by those universes that are able to replicate. This self-replication occurs in black holes, and especially in those black holes which are created by civilizations. Thus, the parameters of the universe are selected so that civilizations cannot self-destruct before they create black holes. As a result, all physical processes by which a civilization might self-destruct are closed off or highly unlikely. An early version of Smolin’s argument is here: http://en.wikipedia.org/wiki/Lee_Smolin, but this early version was refuted in 2004, so he (probably) added the existence of civilizations as another condition for cosmic natural selection. In any case, even if this is not Smolin’s actual line of thought, it is a quite possible line of thought.

I think this argument is not persuasive, since selection can operate both in the direction of universes with more viable civilizations and in the direction of universes with a larger number of civilizations, just as biological evolution works toward more robust offspring in some species (mammals) and toward a larger number of offspring with lower viability in others (plants — the dandelion, for example). Since some parameters of the development of civilizations are extremely difficult to adjust through the basic laws of nature (for example, the chances of nuclear war or of a hostile AI), while the number of emerging civilizations is easy to adjust, it seems to me that universes, if they replicate with the help of civilizations, will use the strategy of the dandelion, not the strategy of the mammal. So the multiverse will create many unstable civilizations, and we are most likely one of them (the self-indication assumption also pushes us toward this conclusion — see the recent post by Katja Grace: http://meteuphoric.wordpress.com/2010/03/23/sia-doomsday-the-filter-is-ahead/).
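The dandelion-versus-mammal point reduces to comparing expected numbers of daughter universes. Here is a toy sketch with made-up numbers — they are illustrative assumptions, not figures from the post — showing how a many-but-fragile strategy can out-replicate a few-but-robust one:

```python
# Toy comparison of the two replication strategies described above.
# All numbers are illustrative assumptions, not estimates from the post.

def expected_daughters(civilizations, p_survive, holes_per_survivor):
    """Expected daughter universes spawned via civilization-made black holes."""
    return civilizations * p_survive * holes_per_survivor

# "Mammal" universe: few civilizations, each tuned to survive.
mammal = expected_daughters(civilizations=10, p_survive=0.9, holes_per_survivor=1)

# "Dandelion" universe: many fragile civilizations.
dandelion = expected_daughters(civilizations=1000, p_survive=0.05, holes_per_survivor=1)

print(f"mammal strategy:    {mammal:.0f} expected daughter universes")
print(f"dandelion strategy: {dandelion:.0f} expected daughter universes")
# -> 9 vs 50: the dandelion universe out-replicates, even though any
#    single civilization within it is far more likely to self-destruct.
```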

But some selection pressure for the preservation of civilizations can still exist. Namely, if an atomic bomb were as easy to create as dynamite — much easier than on Earth, where the difficulty depends on the quantity of uranium and on its chemical and nuclear properties, i.e. is determined by the original basic laws of the universe — then the chances of the average civilization surviving would be lower. If Smolin’s hypothesis is correct, then we should encounter insurmountable difficulties in creating nano-robots, the microelectronics needed for strong AI, harmful accelerator experiments with strangelets (except those that lead to the creation of black holes and new universes), and several other potentially dangerous technology trends whose success depends on the basic properties of the universe, which may manifest themselves in the peculiarities of its chemistry.

In addition, the evolution of universes according to Smolin implies that a civilization should create black holes as early as possible in its history, leading to the replication of universes, because the later this happens, the greater the chance that the civilization will self-destruct before it can create black holes. Moreover, the civilization is not required to survive after the moment of “replication” (though survival may be useful for replication, if the civilization creates many black holes during a long existence). From these two points it follows that we may be underestimating the risks of black hole creation at the Hadron Collider.

Continue reading “Natural selection of universes and risks for the parent civilization” »

Mar 12, 2010

Reduction of human intelligence as a global risk

Posted in categories: existential risks, neuroscience

Another risk is the loss of human rationality while human life is preserved. In any society there are many people with limited cognitive abilities, and most achievements are made by a small number of talented people. Genetic and social degradation, a declining level of education, and the loss of skills in logic could lead to a temporary decrease in the intelligence of particular groups of people. But as long as humanity’s population is very large, this is not so bad, because there will always be enough intelligent people. A significant drop in population after a non-global disaster, however, could exacerbate this problem, and the low intelligence of the remaining people would reduce their chances of survival. One can, of course, imagine the absurd scenario in which people degrade so far that new species without full-fledged intelligence arise from us by the evolutionary path — and that these creatures later evolve rationality again, developing a new intelligence.
More dangerous is a decline of intelligence due to the spread of technological contaminants (or the use of certain weapons). For example, I should mention the constantly growing global arsenic contamination; arsenic is used in various technological processes. Sergio Dani wrote about this in his article “Gold, coal and oil”: http://sosarsenic.blogspot.com/2009/11/gold-coal-and-oil-reg…is-of.html, http://www.medical-hypotheses.com/article/S0306-9877(09)00666-5/abstract
Arsenic released during the mining of gold remains in the biosphere for millennia. Dani links arsenic to Alzheimer’s disease; another of his papers demonstrates that increasing concentrations of arsenic lead to an exponential increase in the incidence of Alzheimer’s disease. He believes that people are particularly vulnerable to arsenic poisoning because they have large brains and long lifespans. If, however, people adapt to high levels of arsenic in the course of evolution, then, according to Dani, this will lead to a reduction in brain size and life expectancy, as a result of which human intellect will be lost.
Besides arsenic, contamination comes from many other neurotoxic substances — CO, CO2, methane, benzene, dioxin, mercury, lead, etc. Although the level of pollution by each of them separately is below health standards, the sum of their impacts may be larger. One explanation proposed for the fall of the Roman Empire is the total poisoning of its citizens (though not of the barbarians) by lead from water pipes. Of course, the Romans could not have known about these remote and unforeseen consequences — but we, too, may not know about the many consequences of our own activities.
Alcohol and most drugs also contribute to dementia, and many medications even list dementia as a side effect on their package inserts (for example, some heartburn mixtures). Rigid ideological systems, or memes, act in the same direction.
A number of infections, particularly prion infections, also lead to dementia.
Despite all this, the average IQ of people is growing, as is life expectancy.

Dec 30, 2009

Ark-starship – too early or too late?

Posted in categories: existential risks, lifeboat, space

It is interesting to note that the technical possibility of sending an interstellar Ark appeared in the 1960s, based on Ulam’s concept of the “blast-ship”, which uses the energy of nuclear explosions to move forward. Detailed calculations were carried out under Project Orion: http://en.wikipedia.org/wiki/Project_Orion_(nuclear_propulsion). In 1968 Dyson published the article “Interstellar Transport”, which gives upper and lower bounds for such projects. In the conservative estimate (i.e., assuming no new technical achievements), launching a spaceship with a mass of 40 million tonnes (of which 5 million tonnes is payload) would cost 1 U.S. GDP (600 billion U.S. dollars at the time of writing), and its flight to Alpha Centauri would take 1,200 years. In the more advanced version the price is 0.1 U.S. GDP, the flight time is 120 years, and the starting weight is 150,000 tonnes (of which 50,000 tonnes is payload). In principle, using a two-stage scheme, more advanced thermonuclear bombs, and reflectors, the flight time to the nearest star could be reduced to 40 years.
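As a rough consistency check on the flight times quoted above — a minimal sketch assuming a distance of about 4.37 light years to Alpha Centauri and a constant average speed, ignoring acceleration and deceleration phases:

```python
# Rough consistency check of the flight times quoted above, assuming
# ~4.37 light years to Alpha Centauri and a constant average speed
# (acceleration and deceleration phases are ignored).

DISTANCE_LY = 4.37  # Alpha Centauri, light years

for label, years in [("conservative Orion", 1200),
                     ("advanced Orion", 120),
                     ("two-stage upper limit", 40)]:
    fraction_of_c = DISTANCE_LY / years  # light years per year = fraction of c
    print(f"{label}: {years} yr -> average speed {fraction_of_c:.2%} of c")
# -> roughly 0.36% of c, 3.6% of c, and 11% of c respectively.
```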
Of course, the crew of such a spaceship is doomed to extinction if it does not find a habitable planet fit for humans in the nearest star system. Another option is that it colonizes an uninhabited planet. In 1980, R. Freitas proposed lunar exploration using a self-replicating factory with an original weight of 100 tonnes, though controlling it requires artificial intelligence (“Advanced Automation for Space Missions”, http://www.islandone.org/MMSG/aasm/). Artificial intelligence does not yet exist, but the management of such a factory could be carried out by people. The main question is how much technology and equipment is enough to throw onto a moonlike uninhabited planet so that people could build a completely self-sustaining and growing civilization on it. It amounts to creating something like an inhabited von Neumann probe. A modern self-sustaining state includes at least a few million people (like Israel), with hundreds of tonnes of equipment per person, mainly in the form of houses and roads; the weight of machines is much smaller. This gives an upper bound of about 1 billion tonnes for a human colony able to replicate. The lower estimate is about 100 people, with approximately 100 tonnes per person (mainly food and shelter), i.e. 10,000 tonnes of mass. A realistic assessment should be somewhere in between, probably in the tens of millions of tonnes. All this is under the assumption that no miraculous nanotechnology is yet available.
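The bounds above follow from simple multiplication; here is a minimal sketch, with the per-person tonnages taken as the rough assumptions stated in the text:

```python
# Minimal sketch of the colony-mass bounds above; the per-person
# tonnages are the post's rough assumptions, not measured data.

def colony_mass_tonnes(people, tonnes_per_person):
    """Total mass to deliver for a self-sustaining, growing colony."""
    return people * tonnes_per_person

# Upper bound: an Israel-sized state, hundreds of tonnes per person.
upper = colony_mass_tonnes(people=3_000_000, tonnes_per_person=300)
# Lower bound: ~100 people with ~100 tonnes each (mainly food and shelter).
lower = colony_mass_tonnes(people=100, tonnes_per_person=100)

print(f"upper bound: {upper:,} tonnes")  # 900,000,000 -> order of 1 billion
print(f"lower bound: {lower:,} tonnes")  # 10,000
# The post's realistic guess, tens of millions of tonnes, lies between.
```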
The advantage of the Ark spaceship is that it is a non-specific response to a host of different threats with indeterminate probabilities. If you face some specific threat (an asteroid, an epidemic), it is better to spend the money on removing that threat.
Thus, if such a decision had been taken in the 1960s, such a ship could already be on its way.
But if we set aside the technical side of the issue, there are several trade-offs among the strategies for creating such a spaceship.
1. The sooner such a project is started, the less technically advanced it will be, the lower its chances of success, and the higher its cost. But the later it is initiated, the greater the chance that it will not be completed before a global catastrophe.
2. The later the project starts, the greater the chance that it will carry the “diseases” of its mother civilization with it (e.g. the ability to create dangerous viruses).
3. The project to create a spaceship could lead to the development of technologies that threaten civilization itself. The blast-ship uses hundreds of thousands of hydrogen bombs as fuel; it could therefore either be used as a weapon, or another party might fear it and respond. In addition, the spaceship could turn around and hit the Earth like a star-hammer — or such an attack might be feared. During construction, man-made accidents with enormous consequences could occur, up to the detonation of all the bombs on board. If the project is implemented by one country in a time of war, other countries could try to shoot the spaceship down at launch.
4. The spaceship is a means of protection against a Doomsday machine, a strategic response in Herman Kahn’s style. Therefore, the creators of such a Doomsday machine could perceive the Ark as a threat to their power.
5. Should we implement one more expensive project, or several cheaper ones?
6. Is it sufficient to limit colonization to the Moon, Mars, Jupiter’s moons, or objects in the Kuiper belt? At the least, these could serve as a fallback position at which the technology of autonomous colonies can be tested.
7. The sooner the spaceship launches, the less we know about exoplanets. How far and how fast should the Ark fly in order to be in relative safety?
8. Could the spaceship hide itself so that the Earth does not know where it is, and should it? Should the spaceship communicate with Earth at all, or is there a risk of attack by a hostile AI in that case?
9. Would not the creation of such projects exacerbate the arms race or lead to premature depletion of resources and other undesirable outcomes? The creation of pure hydrogen bombs would simplify the construction of such a spaceship, or at least reduce its cost, but at the same time it would increase global risks, because nuclear non-proliferation would suffer complete failure.
10. Will the Earth in the future compete with its independent colonies, and will this lead to star wars?
11. If the ship moves slowly enough, is it possible to destroy it from Earth with a self-propelled missile or a radiation beam?
12. Is this mission a real chance for the survival of mankind? Those who fly away are likely to be killed, since the chance of the mission’s success is no more than 10 percent. Those remaining on Earth may start to behave more riskily, on the logic: “Well, if we have protection against global risks, now we can start risky experiments.” As a result of the project, the total probability of survival decreases.
13. What are the chances that the Ark’s computer network will download a virus if it communicates with Earth? And if it does not communicate, that will reduce the chances of the mission’s success. Competition for the nearby stars is possible, and faster machines would win it. After all, there are not many stars within about 5 light years — Alpha Centauri and Barnard’s Star — and competition for them could begin. The existence of dark lone planets or large asteroids without host stars is also possible; their density in the surrounding space should be 10 times greater than the density of stars, but finding them is extremely difficult. And if the nearest stars have no planets or moons, that too would be a problem. Some stars, including Barnard’s, are prone to extreme stellar flares, which could kill the expedition.
14. The spaceship will not protect people from a hostile AI that finds a way to catch up with it. Also, in case of war, starships would be prestigious and easily vulnerable targets — an unmanned rocket will always be faster than a spaceship. If Arks are sent to several nearby stars, this does not ensure their secrecy, as the destinations will be known in advance. A phase transition of the vacuum, an explosion of the Sun or Jupiter, or another extreme event could also destroy the spaceship. See, e.g., A. Bolonkin, “Artificial Explosion of Sun. AB-Criterion for Solar Detonation”, http://www.scribd.com/doc/24541542/Artificial-Explosion-of-S…Detonation
15. On the other hand, the spaceship is too expensive a protection against the many risks that do not require such distant removal. People could hide from almost any pandemic on well-isolated islands in the ocean. People could hide on the Moon from gray goo, an asteroid collision, a supervolcano, or irreversible global warming. The Ark spaceship will carry with it the problems of genetic degradation, the propensity for violence and self-destruction, and the problems associated with limited human outlook and cognitive biases. It would only aggravate the problems of resource depletion, wars, and the arms race. Thus, the set of global risks against which the spaceship is the best protection is quite narrow.
16. And most importantly: does it make sense to begin this project now? In any case, there is no time to finish it before new risks become real, along with new ways of creating spaceships using nanotech.
Of course, it is easy to envision a nano- and AI-based Ark — it would be as small as a grain of sand, would carry only one human egg or even just DNA information, and could self-replicate. The main problem is that it could be created only AFTER the most dangerous period of human existence, which is the period just before the Singularity.

Jun 19, 2009

Asteroid hazard in the context of technological development

Posted in category: asteroid/comet impacts

It is easy to notice that the direct risks of collisions with asteroids decrease with technological development. First, they (or, more exactly, our estimates of the risks) decrease thanks to more accurate measurement — that is, through more accurate detection of dangerous asteroids and measurement of their orbits, we could finally find that the real chance of an impact in the next 100 years is 0. (If, however, the assumption that we live during an episode of comet bombardment is confirmed, the risk assessment would rise to 100 times the background level.) Second, they decrease due to the growth of our ability to deflect asteroids.
On the other hand, the consequences of asteroid impacts grow with time — not only because population density increases, but also because the connectedness of the world system grows, so that damage in one place can spread across the globe. In other words, although the probability of collisions is decreasing, the indirect risks associated with the asteroid danger are increasing.
The main indirect risks are:
A) The destruction of hazardous industries at the site of the fall — for example, a nuclear power plant. In such a case the entire mass of the station would evaporate, and the release of radiation would be greater than at Chernobyl. In addition, there could be additional nuclear reactions due to the sudden compression of the station when it is struck by the asteroid. The chances of a direct asteroid hit on a nuclear plant are small, but they grow with the growing number of plants.
B) There is a risk that even a small group of meteors, moving at a specific angle toward a certain place on the Earth’s surface, could trigger a missile-attack early-warning system and lead to an accidental nuclear war. A small air burst of an asteroid (a few meters in size) could have similar consequences. The first scenario is more likely for a developed superpower’s warning system (one which nevertheless has flaws or unprotected areas in its ABM system, as in the Russian Federation), while the second is more likely for regional nuclear powers (India and Pakistan, North Korea, etc.), which cannot track missiles with radar but could react to a single explosion.
C) The technology for steering asteroids will in the future create the hypothetical possibility of directing asteroids not only away from the Earth, but also toward it. And even if an asteroid impact is accidental, there will be talk that it was sent on purpose. Yet hardly anyone will actually send asteroids at the Earth, because such an action is easily detected, the accuracy is low, and it must be prepared decades before the event.
D) The deflection of hazardous asteroids will require the creation of space weapons, which could be nuclear, laser, or kinetic. Such weapons could be used against the Earth or against an opponent’s spacecraft. Although the risk of their use against the ground is small, they still create more potential damage than falling asteroids do.
E) The destruction of an asteroid with a nuclear explosion would increase its destructive power through its fragments — a greater number of blasts over a larger area — as well as through the radioactive contamination of the debris.
Modern technological means make it possible to move only relatively small asteroids, which are not a global threat. The real danger is dark comets several kilometers in size, moving on elongated elliptical orbits at high speeds. However, in the future, space can be explored quickly and cheaply by self-replicating robots based on nanotech. This will make it possible to build huge radio telescopes in space to detect dangerous bodies in the solar system. In addition, it is enough to plant one self-replicating microrobot on an asteroid and let it multiply; it could then break the asteroid into parts or build engines that will change its orbit. Nanotechnology will also help us create self-sustaining human settlements on the Moon and other celestial bodies. All this suggests that the problem of asteroid hazard will become outdated within a few decades.
Thus, the problem of preventing collisions of the Earth with asteroids in the coming decades can only be a diversion of resources from the real global risks:
First, because we are still not able to change the orbits of those objects which actually could lead to the complete extinction of humanity.
Second, because by the time a nuclear-missile system for the destruction of asteroids is created (or shortly thereafter), it will be obsolete: nanotech will be able to harness the solar system quickly and cheaply by the middle of the 21st century, and perhaps earlier.
Third, because such a system, at a time when the Earth is divided into warring states, will itself be a weapon in the event of war.
Fourth, because the probability of human extinction from an asteroid impact within the narrow period when the asteroid-deflection system is already deployed but powerful nanotechnology is not yet established is very small. This period may be about 20 years, say from 2030 to 2050, and the chance of a 10-km body falling during that time — even if we assume that we live in a period of comet bombardment, when the intensity is 100 times higher than normal — is about 1 in 15,000 (based on an average frequency of such impacts of once every 30 million years). Moreover, given the dynamics, we will be able to deflect the really dangerous objects only at the end of this period, and perhaps even later, since the larger the asteroid, the more extensive and long-term the project for its deflection must be. Although 1 in 15,000 is still an unacceptably high risk, it is commensurate with the risk of the use of space weapons against the Earth.
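The 1-in-15,000 figure follows directly from the stated assumptions; a minimal sketch of the arithmetic:

```python
# Reproducing the 1-in-15,000 estimate from the post's own inputs:
# one 10-km impact per 30 million years on average, a 100x comet-
# bombardment multiplier, and a ~20-year window of vulnerability.

mean_interval_years = 30_000_000  # average gap between 10-km impacts
bombardment_factor = 100          # assumed bombardment intensity
window_years = 20                 # deflection system built, nanotech not yet

p_impact = window_years / mean_interval_years * bombardment_factor
print(f"P(impact in window) = {p_impact:.2e}  (1 in {1 / p_impact:,.0f})")
# -> 6.67e-05, i.e. 1 in 15,000, matching the figure in the text.
```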
Fifth, anti-asteroid protection diverts limited human attention and financial resources from other global issues. This is because the asteroid danger is very easy to understand: it is easy to imagine, its probabilities are easy to calculate, and it is clear to the public. There is no doubt of its reality, and there are clear means of protection. (By various estimates, the probability of a volcanic disaster of the same energy level is 5 to 20 times higher than that of an asteroid impact — but we have no idea how it could be prevented.) In this it differs from risks that are difficult to imagine and impossible to quantify, but which may carry a probability of complete extinction of tens of percent: the risks of AI, biotech, nanotech, and nuclear weapons.
Sixth, when talking about relatively small bodies like Apophis, it may be cheaper to evacuate the area of the fall than to deflect the asteroid. And most likely the impact area will be the ocean.
But I am not calling for the abandonment of anti-asteroid protection, because we first need to find out whether we live in a comet bombardment period. If we do, the probability of a 1-km body falling in the next 100 years is about 6%. (This is based on data on hypothetical impacts in the last 10,000 years: the Clovis comet, http://en.wikipedia.org/wiki/Younger_Dryas_impact_event, whose traces may be the 500,000 similar craters called Carolina Bays, http://en.wikipedia.org/wiki/Carolina_bays; the crater near New Zealand dated to 1443, http://en.wikipedia.org/wiki/Mahuika_crater; and 2 other impacts in the last 5,000 years — see the works of the Holocene Impact Working Group, http://en.wikipedia.org/wiki/Holocene_Impact_Working_Group.) We must first direct our efforts toward the monitoring of dark comets and the analysis of fresh craters.
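The 6% figure is consistent with a simple Poisson model. In the sketch below, the impact rate — roughly six 1-km impacts per 10,000 years during a bombardment period — is an assumption chosen to match the post’s number, since the post does not state the exact event count behind it:

```python
import math

# Poisson sketch of the 6% figure. The impact rate is an assumption
# chosen to match the post's number; the post does not state the
# exact event count behind it.
impacts_per_10k_years = 6  # assumed bombardment-period rate, ~1-km bodies
rate_per_year = impacts_per_10k_years / 10_000
window_years = 100

p_at_least_one = 1 - math.exp(-rate_per_year * window_years)
print(f"P(>=1 impact in {window_years} yr) = {p_at_least_one:.1%}")  # -> 5.8%, ~6%
```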

May 2, 2009

From financial crisis to global catastrophe

Posted in categories: economics, existential risks

The financial crisis which manifested itself in 2008 (but started much earlier) has led to a discussion in alarmist circles: is this crisis the beginning of the final decline of mankind? In this article we will not consider the view that the crisis will suddenly disappear and everything will return to normal, regarding it as trivial and, in my opinion, false. The following scenarios for the transition of the crisis into a global catastrophe have been suggested:
1) The crisis is the beginning of a long slump (E. Yudkowsky’s term) which will gradually lead mankind to a new Middle Ages. This point of view is supported by proponents of the Peak Oil theory, who believe that the peak of production of liquid fuels has recently been passed and that from now on the quantity of oil produced will drop by a few percent each year, following a bell curve, and that fossil fuel is a necessary resource for the existence of modern civilization, which will not be able to switch to alternative energy sources. They see the current financial crisis as a direct consequence of high oil prices, which put a brake on immoderate consumption. This view is reinforced by the “peak everything” theory, which holds that not only oil but also the other half of the resources required by modern civilization will be exhausted in the next quarter of a century. (Note that the possibility of substituting some resources for others causes the peaks of the individual resources to converge toward one moment in time.) Finally, there is the theory of “peak demand” — namely, that in circumstances where more goods are produced than effective demand can absorb, production in general becomes unprofitable, which starts a deflationary spiral that could last indefinitely.
2) Another view is that the financial crisis will inevitably lead to a geopolitical crisis, and then to nuclear war. This view can be reinforced by the analogy between the Great Depression and the present day: the Great Depression ended with the start of the Second World War. But this view treats nuclear war as the inevitable end of human existence, which is not necessarily true.
3) In the article “Scaling law of the biological evolution and the hypothesis of the self-consistent Galaxy origin of life” (Advances in Space Research, V. 36 (2005), P. 220–225, http://dec1.sinp.msu.ru/~panov/ASR_Panov_Life.pdf), the Russian scientist A. D. Panov showed that crises in the history of humanity have become more and more frequent over the course of history. Each crisis is linked with the destruction of some old political system and with the creation of fundamentally new technological innovations at the exit from the crisis. The technological revolution of around 1830 led to the industrial world (though the peak of that crisis was of course near 1815: Waterloo, the eruption of Tambora, Byron on Lake Geneva creating a new genre with Shelley and her Frankenstein). One such crisis happened in 1945 (dated 1950 in Panov’s paper — as the date not of the beginning of the crisis but of the exit from it and the creation of a new reality), when fascism collapsed and computers, rockets, the atomic bomb, and the bipolar world arose. An important feature of these crises is that they follow a simple law: namely, each crisis is separated from the preceding one by an interval of time 2.67 ± 0.15 times shorter than the previous interval. The last such crisis occurred in the vicinity of 1991 (1994 if one uses Panov’s formula from the article), when the USSR broke up and the march of the Internet began. The sequence of crises thus lies on a hyperbola that reaches its singularity around 2020 (Panov gave the estimate 2004 ± 15, but information about the 1991 crisis allows us to sharpen it). If this trend continues to operate, the next crisis must come 17 years after 1991, in 2008; another about 6.5 years later, around 2014; then the next around 2016; and so on. Naturally, it is tempting to compare Panov’s forecast with the current financial crisis.
The current crisis does seem to be changing the world politically and technologically, so it fits Panov’s theory, which predicted it with high accuracy long in advance (at least as of 2005 — though as far as I know, Panov has not compared this crisis with his theory). But if we accept Panov’s theory, we should not expect a global catastrophe now, but only near 2020. So we still have a long way to go, through many crises that will be painful but not final. (more…)
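Panov’s compression law is easy to iterate. The sketch below starts from the 46-year interval between the 1945 and 1991 crises and the stated ratio of 2.67; the stopping rule and the computed limit year are my arithmetic, not Panov’s:

```python
# Iterating Panov's compression law: each inter-crisis interval is
# ~2.67x shorter than the previous one, starting from the 46-year
# gap between 1945 and 1991. Stopping rule and limit are my additions.

RATIO = 2.67            # Panov's compression factor (2.67 +/- 0.15)
year = 1991.0           # last observed crisis
interval = 46 / RATIO   # first predicted interval, ~17.2 years

while interval >= 1:    # stop once intervals shrink below a year
    year += interval
    print(f"predicted crisis near {year:.0f} (interval {interval:.1f} yr)")
    interval /= RATIO
# -> 2008, 2015, 2017; the geometric series converges to
#    1991 + 46 / (RATIO - 1) ~= 2018.5, close to the ~2020 singularity
#    quoted in the text.
```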
