Archive for the ‘existential risks’ category: Page 121

Feb 14, 2009

Russian Lifeboat Foundation NanoShield

Posted by in categories: cybercrime/malcode, existential risks, nanotechnology, policy

I have translated the "Lifeboat Foundation NanoShield" report into Russian (http://www.scribd.com/doc/12113758/Nano-Shield), and I have some thoughts about it:

1) An effective means of defense against ecophagy would be to convert, in advance, all the matter on Earth into nanorobots, just as every human body is composed of living cells (which does not preclude the emergence of cancer cells). The visible world would not change: all objects would consist of nano-cells with enough immune capacity to resist almost any foreseeable ecophagy, except purely informational attacks such as computer viruses. Even inside each living cell there would be a small nanobot controlling it. Perhaps the world already consists of nanobots.
2) The authors assume that an ecophagic attack would consist of two phases, reproduction and destruction. The creators of an ecophage, however, could plan three phases. The first would be quiet distribution across the Earth's surface, underground, in the water, and in the air. In this phase the nanorobots would multiply slowly and, most importantly, spread as far apart from one another as possible, until their concentration everywhere on Earth was on the order of one unit per cubic meter, which makes them practically undetectable. Only then would they start to proliferate intensively, simultaneously producing non-replicating soldier nanorobots that attack the defensive systems. That is, they would first suppress the protection system, the way HIV suppresses the immune system or a modern computer virus switches off the antivirus software; the creators of future ecophages will understand this. Once the second phase of rapid growth begins everywhere on the Earth's surface at once, tools of destruction such as nuclear strikes or directed beams become unusable, since applying them everywhere would mean the death of the planet in any case, and there simply would not be enough bombs in store. (A rough calculation of how quickly such quiet saturation could be reached appears after this list.)
3) The authors overestimate the reliability of protection systems. Any system has a control center, and that is a weak spot. The authors implicitly assume that any person may, with some small probability, suddenly become a terrorist willing to destroy the world (and although that probability is very small, the large number of people on Earth makes it significant). But since such a system would be managed by people, those people may also come to want to destroy the world: NanoShield could destroy the entire world after one erroneous command. (Even if an AI manages it, we cannot say a priori that the AI cannot go mad.) The authors believe that multiple overlapping layers of protection will make NanoShield 100% safe from hackers, but no known computer system is 100% safe; all major software has been broken by hackers, including Windows and the iPod.
4) NanoShield could develop something like an autoimmune reaction. The authors' idea that 100% reliability can be achieved by increasing the number of control systems is superficial: the more complex a system is, the harder it is to enumerate all the variants of its behavior, and the more likely it is to fail in the spirit of chaos theory. Redundancy also helps only against independent failures, not against common-cause ones (see the sketch after this list).
5) Each cubic meter of ocean water contains about 77 million living organisms (in the North Atlantic, according to the textbook "Zoology of Invertebrates"). Hostile ecophages can easily camouflage themselves as natural organisms, and vice versa: the ability of natural organisms to reproduce, move, and emit heat will significantly hamper the detection of ecophages and create a high rate of false alarms (see the base-rate sketch after this list). Moreover, ecophages may at some stage of their development be fully biological creatures, with all the blueprints of the nanorobot recorded in DNA, and thus be almost indistinguishable from a normal cell.
6) There are significant differences between ecophages and computer viruses. The latter exist in an artificial environment that is relatively easy to control: the power can be cut, memory can be inspected directly, the machine can be booted from other media, and an antivirus update can be delivered to any computer almost instantly. Nevertheless, a significant portion of computers have been infected, and many users are resigned to the presence of some malware on their machines as long as it does not slow their work down too much.
7) Compare: Stanislaw Lem wrote a story, "Darkness and Mold," whose main plot concerns ecophages.
8) The problem of NanoShield must be analyzed dynamically over time: at any given moment, the technical sophistication of NanoShield should exceed the technical sophistication of nanoreplicators. From this perspective the whole concept looks very vulnerable, because creating an effective global NanoShield requires many years of nanotechnological development, and of political development, while creating a primitive ecophage capable of completely destroying the biosphere requires much less effort. Compare: creating a global missile defense system (ABM, which still does not exist) is far more complex technologically and politically than creating intercontinental nuclear missiles.
9) We should be aware that in the future there will be no fundamental difference between computer viruses, biological viruses, and nanorobots: all of them are information, given the availability of "fabs" that can transfer that information from one carrier to another. Living cells could construct nanorobots and vice versa; a computer virus spreading over networks could capture bioprinters or nanofabs and force them to produce dangerous organisms or nanorobots (or the malware could be embedded in existing programs, nanorobots, or the DNA of artificial organisms). Those nanorobots could then connect to computer networks, including the network controlling NanoShield, and transmit their code in electronic form. Besides these three forms of virus (nanotechnological, biotechnological, and computational), other forms are possible, for example a cognitive one: a set of ideas in a human brain that pushes a person to write computer viruses and nanobots. The idea of "hacking" is already such a meme.
10) Note that in the future artificial intelligence will be far more accessible, so viruses will be much more intelligent than today's computer viruses. The same applies to nanorobots: they will have a certain understanding of reality and the ability to quickly rebuild themselves, even to invent innovative designs and adapt to new environments. An essential question about ecophagy is whether the individual nanorobots are independent of one another, like bacteria, or act as a unified army with common command and communication systems. In the latter case it may be possible to intercept the command of the hostile ecophage army.
11) Everything that is suitable for combating ecophagy is also suitable as a defensive (and possibly offensive) weapon in a nanotechnological war.
12) NanoShield is possible only as a global organization. If any part of the Earth is not covered by it, NanoShield will be useless, because nanorobots will multiply there in such quantities that it will be impossible to confront them. It is also an effective weapon against people and organizations. So it could appear only after the full and final political unification of the globe, which could result either from a world war fought to unify the planet or from humanity merging in the face of a terrible catastrophe, such as an outbreak of ecophagy. In any case, the appearance of NanoShield must be preceded by some accident, which implies a significant chance of the loss of humanity.
13) The discovery of "cold fusion" or other unconventional energy sources would make a much more rapid spread of ecophages possible, since they could live in the bowels of the Earth and would not require solar energy.
14) It is wrong to consider self-replicating and non-replicating nanoweapons separately. Some kinds of ecophage can produce nano-soldiers that attack and kill all life (such an ecophage could become a global tool of blackmail). It has been said that a few kilograms of nano-soldiers could be enough to destroy all people on Earth. Some ecophages could, in an early phase, disperse throughout the world, multiplying and moving very slowly and quietly, then produce a batch of nano-soldiers to attack humans and defensive systems, and only then begin to multiply intensively everywhere on the globe. But a person saturated with nano-medicine could resist an attack by nano-soldiers, since medical nanorobots would be able to neutralize poisons and repair torn arteries. In that case a small attacking nanorobot would have to act primarily informationally rather than through a large release of energy.
15) Does information transparency mean that everyone can access the code of a dangerous computer virus or the blueprint of an ecophagic nanorobot? A world in which viruses and knowledge of mass destruction can be instantly disseminated through the tools of information transparency can hardly be secure. We need to control not only nanorobots but, above all, the people and other entities that might release an ecophage. The smaller the number of such people (for example, professional nanotechnologists), the easier they are to control. Conversely, the diffusion of this knowledge among billions of people will make the emergence of nano-hackers inevitable.
16) The claim that the creators of defenses against ecophagy will outnumber the creators of ecophages by many orders of magnitude seems doubtful if we consider the example of computer viruses. There we see the opposite: virus writers outnumber anti-virus firms and projects by many orders of magnitude, and moreover most anti-virus systems cannot run together because they interfere with one another. Terrorists could also masquerade as people opposing ecophagy and deploy their own anti-ecophagy system containing a backdoor that allows it to be suddenly reprogrammed for a hostile goal.
17) The text implicitly assumes that NanoShield precedes the invention of self-improving AI of superhuman level. However, other forecasts suggest that this event is very likely and would most probably occur at around the same time as the flourishing of advanced nanotechnology. Thus it is not clear in what timeframe the NanoShield project would exist. A developed artificial intelligence would be able to create a better NanoShield and InfoShield, and also the means to overcome any human-made shield.
18) We should be aware of the equivalence of nanorobots and nanofactories: the first can create the second, and vice versa. This erases the border between replicating and non-replicating nanomachines, because a device not initially intended to replicate could nevertheless construct a nanorobot or reprogram itself into a nanorobot capable of replication.
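To give point 2 a rough sense of scale, here is a minimal Python sketch (mine, not from the original post). The doubling time, the 10 m surface layer, and the target density of one unit per cubic meter are illustrative assumptions only.

```python
# Back-of-the-envelope sketch for point 2: how quickly a single replicator that
# doubles every tau days could reach ~1 unit per cubic metre in a thin layer
# covering the Earth's surface. All parameter values are illustrative assumptions.
import math

EARTH_SURFACE_M2 = 5.1e14                      # ~5.1e14 m^2 of surface area
LAYER_DEPTH_M = 10                             # assume a 10 m thick air/soil/water layer
target_count = EARTH_SURFACE_M2 * LAYER_DEPTH_M  # ~5e15 replicators at 1 per m^3

for doubling_time_days in (1, 10, 100):
    doublings = math.log2(target_count)        # ~52 doublings starting from one unit
    days = doublings * doubling_time_days
    print(f"doubling time {doubling_time_days:>3} d -> "
          f"{doublings:.0f} doublings, ~{days / 365:.1f} years to quiet saturation")
```

Even with a leisurely 100-day doubling time, quiet saturation takes only on the order of a decade in this toy model.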
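For points 3 and 4, a minimal sketch of why stacking "independent" protection layers does not approach 100% reliability once a common-cause failure mode (an insider command, a design flaw shared by every layer) is admitted. The failure probabilities below are assumptions, not figures from the NanoShield document.

```python
# Sketch for points 3-4: nominal reliability of n "independent" protection layers
# versus the same layers with a shared (common-cause) failure mode.
# The probabilities are illustrative assumptions.
p_layer = 1e-3     # assumed chance that any single layer fails on demand
p_common = 1e-4    # assumed chance of a failure mode shared by all layers at once

for n in (1, 2, 4, 8):
    independent = p_layer ** n                                   # "multiply the layers" estimate
    with_common_cause = p_common + (1 - p_common) * p_layer ** n  # floor set by the shared mode
    print(f"{n} layers: independent model {independent:.1e}, "
          f"with common cause {with_common_cause:.1e}")
```

The independent model promises astronomically small failure rates, but the overall risk never drops below the common-cause floor, which is the point the preceding items make about control centers and insiders.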
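For point 5, a base-rate sketch of the false-alarm problem, using the 77 million organisms per cubic meter figure quoted above; the detector's false-positive rate and the scanned volume are assumptions.

```python
# Sketch for point 5: expected false alarms when screening ocean water that holds
# ~7.7e7 organisms per cubic metre (figure quoted in the post). The detector
# specificity and the scanned volume are illustrative assumptions.
organisms_per_m3 = 7.7e7
false_positive_rate = 1e-6     # assume the detector wrongly flags 1 in a million organisms
scanned_volume_m3 = 1e9        # assume a patrol screens one cubic kilometre of water

expected_false_alarms = organisms_per_m3 * false_positive_rate * scanned_volume_m3
print(f"Expected false alarms per km^3 scanned: {expected_false_alarms:.2e}")
# ~7.7e10 spurious flags per cubic kilometre: with few or no real ecophages present,
# essentially every alarm is a false one, so the true signal is swamped.
```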

Jan 27, 2009

Finding a Cure for Collective Neurosis in the Attention Economy

Posted by in categories: economics, existential risks, futurism, media & arts

(This essay has been published by the Innovation Journalism Blog — here — Deutsche Welle Global Media Forum — here — and the EJC Magazine of the European Journalism Centre — here)

Thousands of lives were consumed by the November terror attacks in Mumbai.

“Wait a second”, you might be thinking. “The attacks were truly horrific, but all news reports say around two hundred people were killed by the terrorists, so thousands of lives were definitely not consumed.”

You are right. And you are wrong.

Continue reading “Finding a Cure for Collective Neurosis in the Attention Economy” »

Jan 15, 2009

What should be at the center of the U.S. stimulus package

Posted by in categories: existential risks, geopolitics, habitats, lifeboat, space, sustainability

The projected size of Barack Obama’s “stimulus package” is heading north, from hundreds of billions of dollars into the trillions. And the Obama program comes, of course, on top of the various Bush administration bailouts and commitments, estimated to run as high as $8.5 trillion.

Will this money be put to good use? That’s an important question for the new President, and an even more important question for America. The metric for all government spending ultimately comes down to a single query: What did you get for it?

If such spending was worth it, that’s great. If the country gets victory in war, or victory over economic catastrophe, well, obviously, it was worthwhile. The national interest should never be sacrificed on the altar of a balanced budget.

So let’s hope we get the most value possible for all that money–and all that red ink. Let’s hope we get a more prosperous nation and a cleaner earth. Let’s also hope we get a more secure population and a clear, strategic margin of safety for the United States. Yet how do we do all that?

Continue reading “What should be at the center of the U.S. stimulus package” »

Dec 9, 2008

Why the anthropic principle stops defending us

Posted by in categories: existential risks, futurism, space

In the volume Global Catastrophic Risks you can find an excellent article by Milan Ćirković, "Observation Selection Effects and Global Catastrophic Risks," in which he shows that we cannot use the past record to estimate the future rate of global catastrophes.
This has one more consequence, which I investigate in my article "Why the anthropic principle stops defending us: Observation selection, the future rate of natural disasters, and the fragility of our environment": we could be at the end of a long period of stability, some catastrophes may be long overdue, and, most importantly, we may be underestimating the fragility of our environment, which could be on the verge of a bifurcation. Because the origination of intelligent life on Earth is a very rare event, some critical parameters may lie near the bounds of their stability, and small anthropogenic influences could trigger a catastrophic process in this century.
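A tiny simulation (mine, not from the article; the hazard rate and time span are arbitrary assumptions) of the observation-selection effect described above: observers who survive necessarily see a catastrophe-free past, however high the true annual risk actually is.

```python
# Monte Carlo illustration of the observation-selection argument: on planets where
# observers arise only after a long catastrophe-free period, the surviving observers'
# historical record contains no catastrophes regardless of the true hazard rate.
# The parameters below are arbitrary assumptions.
import random

TRUE_ANNUAL_RISK = 0.01     # assumed true chance of a sterilising catastrophe per year
YEARS_TO_OBSERVERS = 300    # assumed catastrophe-free span needed for observers to appear
N_PLANETS = 50_000

survivors = 0
for _ in range(N_PLANETS):
    if all(random.random() > TRUE_ANNUAL_RISK for _ in range(YEARS_TO_OBSERVERS)):
        survivors += 1

print(f"planets with observers: {survivors} of {N_PLANETS}")
print("their observed past catastrophe frequency: 0 per year (by construction)")
print(f"their true future risk is still {TRUE_ANNUAL_RISK:.3f} per year")
```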

http://www.scribd.com/doc/8729933/Why-antropic-principle-sto…vironment–

Why the anthropic principle stops defending us
Observation selection, the future rate of natural disasters, and the fragility of our environment.

Alexei Turchin,
Russian Transhumanist movement

Continue reading “Why the anthropic principle stops defending us” »

Nov 26, 2008

What are the Risks of Failure of Nuclear Deterrence?

Posted by in categories: existential risks, geopolitics, nuclear weapons

Nuclear warheads

Martin Hellman is a professor at Stanford, one of the co-inventors of public-key cryptography, and the creator of NuclearRisks.org. He has recently published an excellent essay about the risks of failure of nuclear deterrence: Soaring, Cryptography and Nuclear Weapons (also available as a PDF).

I highly recommend that you read it, along with the other resources on NuclearRisks.org, and also subscribe to their newsletter (on the left of the front page).

There are also chapters on Nuclear War and Nuclear Terrorism in Global Catastrophic Risks (intro freely available as PDF here).

Continue reading “What are the Risks of Failure of Nuclear Deterrence?” »

Nov 25, 2008

Giant planets ignition

Posted by in categories: biotech/medical, existential risks, futurism, geopolitics, nanotechnology, nuclear weapons, rants, space

I wrote an essay on the possibility of artificially initiating a fusion explosion of giant planets and other objects of the Solar System. It is not a scientific article, but an attempt to collect all the necessary information about this existential risk. I conclude that it cannot be ruled out as a technical possibility, and that it could later be carried out as an act of space war that could wipe the entire Solar System clean.

There are some events that are very improbable but whose consequences could be infinitely large (e.g., black holes at the LHC). The possibility of nuclear ignition of a self-sustaining fusion reaction in giant planets like Jupiter and Saturn, which could lead to the explosion of the planet, is one of them.

The interiors of the giant planets contain thermonuclear fuel under high pressure and at high density; for some substances (water perhaps excepted) this density is higher than the density of the same substances on Earth. Large quantities of material could not fly away from the reaction zone quickly, leaving enough time for a large energy release. This fuel has never taken part in fusion reactions, so it still contains the easily combustible components, namely deuterium, helium-3, and lithium, which have already burned away in stars. In addition, the interiors of the giant planets contain fuel for reactions that could sustain an explosive burn, namely the triple-helium (triple-alpha) reaction (3 He-4 → C-12) and the fusion of hydrogen with oxygen, although these require a much higher temperature to start. The matter in the bowels of the giant planets is a degenerate metallic fluid, like the matter of white dwarfs, in which explosive thermonuclear burning regularly occurs in the form of helium flashes and Type Ia supernovae.
The more opaque the environment, the better the chances for the reaction to propagate in it, since less energy is lost to scattering; in the bowels of the giant planets there are many impurities, so lower transparency can be expected. Gravitational differentiation and chemical reactions could also produce regions within the planet that are more suitable for starting the reaction in its initial stages.
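As a hedged order-of-magnitude check (not part of the original essay), the sketch below compares the energy stored in Jupiter's deuterium with the planet's gravitational binding energy, using rounded literature values for Jupiter's mass, radius, hydrogen fraction, D/H ratio, and D-D reaction energetics. Comparable magnitudes only say the fuel reservoir is energetically large; they say nothing about whether ignition or propagation is physically achievable.

```python
# Order-of-magnitude sketch: deuterium fusion energy available in Jupiter versus
# the planet's gravitational binding energy. Inputs are rounded literature values;
# the comparison does not address whether a self-sustaining burn could be ignited.
G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
M_JUP = 1.9e27                # Jupiter mass, kg
R_JUP = 7.0e7                 # Jupiter radius, m
H_MASS_FRACTION = 0.74        # approximate hydrogen mass fraction
D_TO_H = 2.6e-5               # approximate deuterium/hydrogen number ratio
M_PROTON = 1.67e-27           # kg
MEV = 1.602e-13               # joules per MeV
E_PER_DEUTERON = 1.8 * MEV    # ~3.6 MeV per D+D reaction, i.e. ~1.8 MeV per deuteron

n_hydrogen = H_MASS_FRACTION * M_JUP / M_PROTON
n_deuterium = D_TO_H * n_hydrogen
fusion_energy = n_deuterium * E_PER_DEUTERON      # J, D-D burning only
binding_energy = G * M_JUP**2 / R_JUP             # J, order-of-magnitude estimate

print(f"D-D fusion energy reservoir:  ~{fusion_energy:.1e} J")
print(f"Gravitational binding energy: ~{binding_energy:.1e} J")
print(f"Ratio:                        ~{fusion_energy / binding_energy:.1f}")
```

With these assumed inputs the two energies come out within a factor of a few of each other, which is why the question of ignition and propagation, rather than of total energy content, is the decisive one.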

The stronger the explosion of the fuse, the larger the initial region of burning, and the more likely the reaction is to become self-sustaining, since the energy losses are smaller while the amount of reacting material and the reaction time are larger. It can be assumed that with a sufficiently powerful fuse the reaction would become self-sustaining.

Continue reading “Giant planets ignition” »

Oct 26, 2008

Refuges and bunkers

Posted by in categories: asteroid/comet impacts, cybercrime/malcode, defense, existential risks, habitats, lifeboat, sustainability, treaties

Here I would like to offer readers a quotation from my book "The Structure of the Global Catastrophe" (http://www.scribd.com/doc/7529531/-), in which I discuss problems of preventing catastrophes.

Refuges and bunkers

Refuges and bunkers of various kinds can increase humanity's chances of surviving a global catastrophe, but the situation with them is not simple. Separate independent refuges can exist for decades, but the more independent and long-lived they are, the more effort is needed to prepare them in advance. Refuges should preserve humanity's ability to reproduce itself further; hence they should contain not only enough people capable of reproduction but also a stock of technologies that will allow them to survive and multiply in the territory they plan to settle after leaving the refuge. The more polluted that territory is, the higher the level of technology required for reliable survival.
A very big bunker would be capable of continuing to develop technologies within itself even after the catastrophe. However, in that case it would be vulnerable to the same risks as the whole terrestrial civilisation: internal terrorists, AI, nanorobots, leaks, and so on. If the bunker is not capable of continuing to develop technologies, it is most likely doomed to degradation.
Further, a bunker can be either "civilizational," that is, preserving the majority of the cultural and technological achievements of civilisation, or "specific," that is, preserving only human life. For "long" bunkers (those prepared for a long stay) the problems of raising and educating children, and the risks of degradation, will arise. A bunker can either live off resources stockpiled before the catastrophe or engage in its own production; in the latter case it will simply be an underground civilisation on an infected planet.
The more a bunker is built on modern technologies and is autonomous culturally and technically, the more people need to live in it (though in the future this will change: a bunker based on advanced nanotechnology could even be unmanned, holding only frozen human embryos). To ensure simple reproduction and the transmission of the basic human trades, thousands of people are required. These people should be selected and be inside the bunker before the final catastrophe, preferably on a permanent basis. However, it is improbable that thousands of intellectually and physically excellent people would want to sit in a bunker "just in case"; more likely they would staff it in two or three shifts and receive a salary for it. (Russia is now starting the "Mars 500" experiment, in which six people will live fully autonomously, in terms of water, food, and air, for 500 days; this is probably the best result we have so far. In the early 1990s the United States ran the "Biosphere 2" project, in which people were to live for two years fully self-sufficiently under a dome in the desert. The project ended in partial failure, as the oxygen level in the system began to fall because of the unforeseen proliferation of microorganisms and insects.) An additional risk for bunkers is the psychology of small groups confined in one space, well known from Antarctic expeditions: growing animosity, fraught with destructive actions, reduces the survival rate.
A bunker can be either unique or one of many. In the first case it is vulnerable to various catastrophes; in the second, a struggle between bunkers for the resources remaining outside is possible, or a continuation of the war if the catastrophe resulted from war.
A bunker will most likely be underground, at sea, or in space. But a space bunker could also be buried inside an asteroid or the Moon, and for a space bunker it will be harder to use the remaining resources on Earth. A bunker can be completely isolated, or it can allow "excursions" into the hostile external environment.
A nuclear submarine can serve as a model of the sea bunker: it has a large safety margin, autonomy, maneuverability, and resistance to negative influences. Moreover, it can easily be cooled by the ocean (cooling sealed underground bunkers is not a simple problem) and can extract water, oxygen, and even food from it; ready boats and technical solutions already exist. A submarine is capable of withstanding shock and radiation. However, the autonomous cruising endurance of modern submarines is at best about a year, and they have no room for storing supplies.
The modern space station, the ISS, could independently support the life of several people for roughly a year, although there are problems of autonomous landing and adaptation. It is not clear whether a dangerous agent capable of penetrating every crack on Earth could dissipate over so short a period.
There is a difference between gas and bio refuges, which can be on the surface but are divided into many sections to maintain quarantine, and refuges intended as shelter from an even slightly intelligent opponent (including other people who did not manage to get a place in the refuge). In the case of a bio-danger, an island under rigid quarantine can serve as a refuge if the illness is not transmitted by air.
A bunker can have different vulnerabilities. For example, in the case of a biological threat, an insignificant penetration is enough to destroy it. Only a high-tech bunker can be completely autonomous. The bunker needs energy and oxygen; a nuclear reactor could provide the energy, but modern machines can hardly remain reliable for more than 30–50 years. A bunker cannot be universal: it has to assume protection against specific kinds of threats known in advance, such as radiation or biological ones.
The more reinforced a bunker is, the smaller the number of such bunkers mankind can prepare in advance, and the harder it is to hide each one. If after a certain catastrophe only a limited number of bunkers remain and their locations are known, a secondary nuclear war could finish off mankind with a countable number of strikes at known places.
The larger the bunker, the fewer such bunkers can be built. However, any bunker is vulnerable to accidental destruction or contamination. Therefore a limited number of bunkers, each with a certain probability of contamination, essentially defines the maximum survival time of mankind (a minimal numerical illustration follows this paragraph). If bunkers are connected by trade and other material exchange, contamination spreading between them is more probable; if they are not connected, they will degrade faster. The more powerful and expensive a bunker is, the harder it is to build it unnoticed by a probable opponent, and the more attractive a target it becomes; the cheaper the bunker, the less durable it is.
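A minimal sketch of that claim, with assumed parameters (the per-decade loss probability and the bunker counts are arbitrary and treat losses as independent, which trade links would undermine):

```python
# Minimal model: a fixed number of bunkers, each independently lost with some
# probability per decade, implies a characteristic survival time for mankind.
# All parameters are illustrative assumptions.
def prob_any_bunker_left(n_bunkers: int, p_loss_per_decade: float, decades: int) -> float:
    p_one_survives = (1.0 - p_loss_per_decade) ** decades
    return 1.0 - (1.0 - p_one_survives) ** n_bunkers

for n in (1, 5, 20):
    # first decade at which the chance that any bunker remains drops below 50%
    decades = 1
    while prob_any_bunker_left(n, 0.2, decades) > 0.5:
        decades += 1
    print(f"{n:>2} bunkers, 20% loss per decade -> ~{decades * 10} years to even odds")
```

In this toy model going from 1 to 20 bunkers stretches the even-odds horizon only from decades to a century or two, which is the sense in which a finite bunker stock bounds survival time.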
Casual shelters are also possible: people who happen to escape underground, into mines or submarines. They will suffer from the absence of a central authority and from struggles over resources. If the resources in one bunker are exhausted, its people may mount armed attempts to break into a neighbouring bunker, and people who escaped by chance (or under the threat of the coming catastrophe) may attack those who locked themselves in a bunker.
Bunkers will suffer from the need to exchange heat, energy, water, and air with the external world. The more autonomous a bunker is, the less time it can exist in full isolation. Bunkers deep in the Earth will suffer from overheating: any nuclear reactor or other complex machine demands external cooling, cooling by external water will unmask the bunker, and it is impossible to run energy sources without losses in the form of heat while the rock at depth is already hot. The growth of temperature with depth limits how deep a bunker can be. (The geothermal gradient averages about 30 °C per kilometre. This means that bunkers deeper than about one kilometre are impossible, or demand huge cooling installations on the surface, as in the gold mines of South Africa; deeper bunkers may be possible in the ice of Antarctica.)
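The depth arithmetic above, spelled out (the 15 °C mean surface temperature is an assumed value; real gradients vary widely by site):

```python
# Quick check of the geothermal arithmetic: ambient rock temperature versus depth
# with the ~30 C/km average gradient quoted in the text and an assumed 15 C
# mean surface temperature.
SURFACE_C = 15.0
GRADIENT_C_PER_KM = 30.0

for depth_km in (0.5, 1.0, 2.0, 3.0):
    rock_temp = SURFACE_C + GRADIENT_C_PER_KM * depth_km
    print(f"{depth_km:.1f} km: ~{rock_temp:.0f} C ambient rock temperature")
```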
The more durable, universal, and effective a bunker should be, the earlier one has to start building it. But in that case it is difficult to foresee the future risks. For example, in the 1930s many gas-proof bomb shelters were built in Russia, and they turned out to be useless and vulnerable to bombardment with heavy demolition bombs.
The quality of the bunker a civilisation can create corresponds to the technological level of that civilisation; but that means the civilisation also possesses the corresponding means of destruction, so an especially powerful bunker is needed. The more autonomous and self-sufficient a bunker is (for example, one equipped with AI, nanorobots, and biotechnologies), the more easily it can eventually do without people altogether, giving rise to a purely computer civilisation.
People from different bunkers will compete over who gets to the surface first and who, accordingly, will own it; this will tempt them to go out onto still-infected areas of the Earth.
Automatic robotic bunkers are also possible: frozen human embryos are stored in them and, after hundreds or thousands of years, begin to be grown in a kind of artificial uterus. (The technology for cryopreserving embryos already exists; work on an artificial uterus is banned for bioethical reasons, but in principle such a device is possible.) Such installations with embryos could even be sent on voyages to other planets. However, if such bunkers are possible, the Earth will hardly remain empty; most likely it will be populated by robots. Besides, if a human child raised by wolves considers itself a wolf, whom will a human raised by robots consider itself to be?
So the idea of survival in bunkers contains many hidden reefs that reduce its usefulness and probability of success. Long-term bunkers must be built over many years, but they can become obsolete in that time as the situation changes and it is not known what to prepare for. It is possible that a number of powerful bunkers built during the Cold War already exist. The limit of present technical capability is a bunker with roughly a 30-year autonomy, but building it would take about a decade and demand billions of dollars of investment.
Separately, there are information bunkers, intended to convey our knowledge, technologies, and achievements to possible surviving descendants. For example, a stock of samples of seeds and grain has been created for this purpose on Spitsbergen, Norway (the Doomsday Vault). Variants involving the preservation of human genetic diversity by means of frozen sperm are possible. Digital media resistant to long storage, for example etched compact discs whose text can be read through a magnifier, are being discussed and implemented by the Long Now Foundation. This knowledge could be crucial for not repeating our errors.

Sep 10, 2008

Global risk research in Russia

Posted by in categories: defense, existential risks, geopolitics, military, nuclear weapons

1. Language and cultural isolation lead to a situation in which Russian research is not known in the West and vice versa. I have spent a lot of time translating into Russian and promoting works on global risks by Bostrom, Yudkowsky, Ćirković, D. Brin, Freitas, A. Kent and others. Here I would like to tell you about some Russian researchers. Though I cannot prove the validity of their ideas, I think they should be checked internationally in order either to rule them out or to take preventive measures.

A. V. Karnaukhov has created a theory of "greenhouse catastrophe." He shows that the climate is a non-linear system with many positive feedbacks, one of which is often missed: water vapour is itself a greenhouse gas, so rising temperatures inject more and more water vapour into the atmosphere. The current level of carbon dioxide should already lead to a much larger temperature increase, but the thermal inertia of the ocean keeps the present temperature lower. Ocean temperatures will rise, however, especially in the Arctic, where large amounts of methane are stored under the seabed in shallow waters; this could trigger a "clathrate gun" release of methane. The cumulative effect of water vapour, CO2, methane, and the overcoming of ocean inertia could produce very rapid, effectively exponential global warming, with devastating effects as early as the 2020s, raising global temperature by the end of the century not by 6 degrees but by several tens of degrees, which would mean human extinction and, after 200 years, the extinction of all life on Earth. Some of his ideas can be found in: Karnaukhov A.V., "Role of the Biosphere in the Formation of the Earth's Climate: The Greenhouse Catastrophe," Biophysics, Vol. 46, No. 6, 2001, pp. 1078–1088, http://www.poteplenie.ru/doc/role.pdf (The standard feedback-amplification relation behind this kind of runaway argument is sketched after this list.)

I should also mention the work of Drobyshevsky, "Danger of the explosion of Callisto and the priority of space missions," http://www.springerlink.com/content/584mw0407824nt72/ He thinks that the Jovian satellite Callisto could soon explode because of a reaction of hydrogen and oxygen in its ice. Such an explosion would lead to bombardment of the Earth by comets and a "nuclear winter" lasting 60 years. He suggests sending a space mission there, but I wrote to him that, if he is right, it is very dangerous to send such a mission, because drilling into the ice crust could itself trigger the explosion.

The last person I would like to mention is a reviewer of my book "The Structure of the Global Catastrophe," Aranovich, who told me in passing that his group has devised a much more effective way to penetrate the Earth's crust than Stevenson's probe. Stevenson's probe requires 10 million tons of melted iron; his probe would weigh only 10 tons and would use the energy of radioactive decay. It could reach a depth of 1,000 km in one month, and the main danger is the creation of a supervolcano. I asked him whether any safety analysis had been done; he said no. For now this is only theoretical work, and no practical realization is planned.
2. I have written a book, "The Structure of the Global Catastrophe," whose aim is to investigate how different global-risk scenarios could interact in time, since all of them could be realized in the 21st century. The book is sponsored by the Russian Transhumanist Movement, and Nick Bostrom wrote a short preface to it. The book is mostly ready, but some editorial and organizational problems persist; I hope it will be published by the end of the year.
3. I have started to translate this book into English. I translate it by machine and then edit the result; I am now on page 27 of 390. I need someone with native English who could help me edit this translation. The book is here: http://avturchin.narod.ru/sgkengl2.doc I hope to finish the English translation (readable, though not of high literary quality) of the book by winter.
4. A shorter version of this book has already been published under the title "War and 25 Other Scenarios of the End of the World." This title was suggested by the publishing house; the original title was "Gnoseology of Catastrophes." The main idea of the book is that our inability to predict the future is itself equivalent to the end of the world.
5. I have translated most of the Lifeboat site into Russian, and I expect it will appear on the net soon.
6. I have written several articles on the theme of global catastrophe: "Is SETI Dangerous?" (English translation: http://www.proza.ru/texts/2008/04/12/55.html), "Anthropic Principle and Natural Catastrophes" (http://www.proza.ru/texts/2007/04/12-13.html), and "About Possibilities of Man-Made Ignition of Giant Planets and Other Objects of the Solar System" (http://www.proza.ru/texts/2008/07/19/466.html); the latter two are in Russian.
7. I have created a "Global Catastrophic Risks and Human Extinction Library," where you can find much interesting literature in English and Russian: http://avturchin.narod.ru/Global.htm
8. I think it can be argued that if humanity unites, it will have a chance to resist global risks, but if it is divided into militarily competing parts, it is almost doomed. Recent events in the Caucasus have again put the question of a new Cold War on the agenda. Here we should ask: what is the worst outcome of a possible Cold War? The common answer is that nuclear war is the worst outcome, but nuclear war would not wipe out the whole human population in most realistic scenarios. A much worse outcome, I think, is a new arms race, which could lead to the rapid creation of weapons far more destructive than nuclear ones. And the worst outcome of an arms race is the creation of a Doomsday Machine.

A Doomsday Machine (DM) is the ultimate defensive weapon; an example of this strategy was depicted by Kubrick in his brilliant film "Dr. Strangelove." The DM strategy is more suitable for a weaker state that is in danger of aggression (or feels it is). The quality of Russian nuclear forces continues to deteriorate, with a minimum expected around 2010, when most of the old Soviet rockets will be out of service. At the same time, after 2010 the US will reach a peak of supremacy (thanks to thousands of non-nuclear cruise missiles, the unique GPS system, and a missile shield, it will have the ability to deliver a first strike without an answering one), but it could later lose that supremacy because of the economic crisis in the US and the growing arsenal of new Russian missiles. This situation looks dangerous, because from chess we know the principle: the side that is about to lose its advantage must attack.

The anti-ballistic missile (ABM) shield now being developed by NATO is very dangerous, because it points directly toward the creation of a Doomsday Machine. Before ABM, rockets were an adequate means of defense; now only a large buried bomb (of gigaton order, with a cobalt jacket) could serve as a strategic deterrent. Such ideas are not my invention; they are circulating in the air. Of course, nobody intends actually to use such a weapon, but it could be launched accidentally. It need not even be nuclear: it could also be a large stockpile of anthrax, a man-made supervolcano threat, or something more sophisticated. A DM could also be used as an offensive means. If Osama got one, he could say: everybody must convert to Islam, or I will detonate it. The really big problem arises if, in answer, some Catholic says: if anyone converts to Islam, I will detonate my own Doomsday Machine. In that case we are finally doomed. But worst-case scenarios are low-probability ones, so I hope we have a chance to unite.
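As promised under item 1, here is the standard linear-feedback relation that underlies runaway-warming arguments of Karnaukhov's type. It is a generic textbook formula, not his actual model; the no-feedback warming and the feedback-factor values below are illustrative assumptions.

```python
# Illustration for item 1: the standard linear-feedback relation
#   dT_total = dT_0 / (1 - f)
# where dT_0 is the no-feedback warming and f is the combined feedback factor
# (water vapour, methane release, etc.). Generic textbook relation; the numbers
# are illustrative assumptions, not Karnaukhov's results.
dT0 = 1.2  # assumed no-feedback warming in degrees C for doubled CO2

for f in (0.0, 0.3, 0.5, 0.7, 0.9, 0.99):
    amplified = dT0 / (1.0 - f)
    print(f"feedback factor f = {f:4.2f} -> total warming ~{amplified:5.1f} C")
# As f approaches 1 the amplification diverges: that is the mathematical form
# of the "runaway" scenario described above.
```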

Jul 31, 2008

SRA Proposal Accepted

Posted by in categories: existential risks, lifeboat

My proposal for the Society for Risk Analysis’s annual meeting in Boston has been accepted, in oral presentation format, for the afternoon of Wednesday, December 10th, 2008. Any Lifeboat members who will be in the area at the time are more than welcome to attend. Any suggestions for content would also be greatly appreciated; speaking time is limited to 15 minutes, with 5 minutes for questions. The abstract for the paper is as follows:

Global Risk: A Quantitative Analysis

The scope and possible impact of global, long-term risks presents a unique challenge to humankind. The analysis and mitigation of such risks is extremely important, as such risks have the potential to affect billions of people worldwide; however, little systematic analysis has been done to determine the best strategies for overall mitigation. Direct, case-by-case analysis can be combined with standard probability theory, particularly Laplace’s rule of succession, to calculate the probability of any given risk, the scope of the risk, and the effectiveness of potential mitigation efforts. This methodology can be applied both to well-known risks, such as global warming, nuclear war, and bio-terrorism, and to lesser-known or unknown risks. Although well-known risks are shown to be a significant threat, analysis strongly suggests that avoiding the risks of technologies which have not yet been developed may pose an even greater challenge. Eventually, some type of further quantitative analysis will be necessary for effective apportionment of government resources, as traditional indicators of risk level, such as press coverage and human intuition, can be shown to be inaccurate, often by many orders of magnitude.
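For readers unfamiliar with it, Laplace's rule of succession referenced in the abstract estimates the probability of an event on the next trial as (s + 1)/(n + 2) after s occurrences in n trials. The sketch below applies it to an illustrative example of my own choosing, not to anything from the paper itself.

```python
# Laplace's rule of succession: after observing s occurrences of an event in n
# independent trials, the estimated probability that it occurs on the next trial
# is (s + 1) / (n + 2). The example numbers are purely illustrative.
def rule_of_succession(successes: int, trials: int) -> float:
    return (successes + 1) / (trials + 2)

# e.g. zero civilisation-ending nuclear exchanges in ~63 years of the nuclear era
# (1945-2008) gives a crude per-year estimate of:
print(f"{rule_of_succession(0, 63):.3f} per year")  # ~0.015, i.e. roughly 1.5% per year
# This is a deliberately naive estimate; it ignores observation-selection effects
# of the kind discussed in the Dec 9 post above.
```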

More details are available online at the Society for Risk Analysis’s website. James Blodgett will be presenting on the precautionary principle two days earlier (Monday, Dec. 8th).

Jul 30, 2008

30 days to make antibodies to limit Pandemics

Posted by in categories: biological, biotech/medical, defense, existential risks, lifeboat

Researchers have devised a rapid and efficient method for generating protein sentinels of the immune system, called monoclonal antibodies, which mark and neutralize foreign invaders.

For both ethical and practical reasons, monoclonals are usually made in mice. And that’s a problem, because the human immune system recognizes the mouse proteins as foreign and sometimes attacks them instead. The result can be an allergic reaction, and sometimes even death.

To get around that problem, researchers now “humanize” the antibodies, replacing some or all of the mouse-derived pieces with human ones.

Wilson and Ahmed were interested in the immune response to vaccination. Conventional wisdom held that the B-cell response would be dominated by “memory” B cells. But as the study authors monitored individuals vaccinated against influenza, they found that a different population of B cells peaked about one week after vaccination, and then disappeared, before the memory cells kicked in. This population of cells, called antibody-secreting plasma cells (ASCs), is highly enriched for cells that target the vaccine, with vaccine-specific cells accounting for nearly 70 percent of all ASCs.

Continue reading “30 days to make antibodies to limit Pandemics” »