
The projected size of Barack Obama’s “stimulus package” is heading north, from hundreds of billions of dollars into the trillions. And the Obama program comes, of course, on top of the various Bush administration bailouts and commitments, estimated to run as high as $8.5 trillion.

Will this money be put to good use? That’s an important question for the new President, and an even more important question for America. The metric for all government spending ultimately comes down to a single query: What did you get for it?

If such spending was worth it, that’s great. If the country gets victory in war, or victory over economic catastrophe, well, obviously, it was worthwhile. The national interest should never be sacrificed on the altar of a balanced budget.

So let’s hope we get the most value possible for all that money–and all that red ink. Let’s hope we get a more prosperous nation and a cleaner earth. Let’s also hope we get a more secure population and a clear, strategic margin of safety for the United States. Yet how do we do all that?

There’s only one best way: Put space exploration at the center of the new stimulus package. That is, make space the spearhead rationale for the myriad technologies that will provide us with jobs, wealth, and vital knowhow in the future. By boldly going where no (hu)man has gone before, we will change life here on earth for the better.

To put it mildly, space was not high on the national agenda during 2008. But space and rocketry, broadly defined, are as important as ever. As Cold War arms-control theology fades, the practical value of missile defense–against superpowers, also against rogue states, such as Iran, and high-tech terrorist groups, such as Hezbollah and Hamas–becomes increasingly obvious. Clearly Obama agrees; it’s the new President, after all, who will be keeping pro-missile defense Robert Gates on the job at the Pentagon.

The bipartisan reality is that if missile offense is on the rise, then missile defense is surely a good idea. That’s why increasing funding for missile defense engages the attention of leading military powers around the world. And more signs appear, too, that the new administration is in that same strategic defense groove. A January 2 story from Bloomberg News, headlined “Obama Moves to Counter China With Pentagon-NASA Link,” points the way. As reported by Demian McLean, the incoming Obama administration is looking to better coordinate DOD and NASA; that only makes sense: After all, the Pentagon’s space expenditures, $22 billion in fiscal year 2008, are almost a third more than NASA’s. So it’s logical, as well as economical, to streamline the national space effort.

That’s good news, but Obama has the opportunity to do more. Much more.

Throughout history, exploration has been a powerful strategic tool. Both Spain and Portugal turned themselves into superpowers in the 15th and 16th centuries through overseas expansion. By contrast, China, which at the time had a technological edge over the Iberian states, chose not to explore and was put on the defensive. Ultimately, as we all know, China’s retrograde policies pushed the Middle Kingdom into a half-millennium-long tailspin.

Further, we might consider the enormous advantages that England reaped by colonizing a large portion of the world. Not only did Britain’s empire generate wealth for the homeland, albeit often cruelly, but it also inspired technological development at home. And in the world wars of the 20th century, Britain’s colonies, past and present, gave the mother country the “strategic depth” it needed for victory.

For their part, the Chinese seem to have absorbed these geostrategic lessons. They are determined now to be big players in space, as a matter of national grand strategy, independent of economic cycles. In 2003, the People’s Republic of China powered its first man into space, becoming only the third country to do so. And then, more ominously, in 2007, China shot down one of its own weather satellites, just to prove that it had robust satellite-killing capacity.

Thus the US and all the other space powers are on notice: In any possible war, the Chinese have the capacity to “blind” our satellites. And now they plan to put a man on the moon in the next decade. “The moon landing is an extremely challenging and sophisticated task,” declared Wang Zhaoyao, a spokesman for China’s space program, in September, “and it is also a strategically important technological field.”

India, the other emerging Asian superpower, is paying close attention to its rival across the Himalayas. Back in June, The Washington Times ran this thought-provoking headline: “China, India hasten arms race in space/U.S. dominance challenged.” According to the Times report, India, possessor of an extensive civilian satellite program, means to keep up with emerging space threats from China, by any means necessary. Army Chief of Staff Gen. Deepak Kapoor said that his country must “optimize space applications for military purposes,” adding, “the Chinese space program is expanding at an exponentially rapid pace in both offensive and defensive content.” In other words, India, like every other country, must compete–because the dangerous competition is there, like it or not.

India and China have fought wars in the past; they obviously see “milspace” as another potential theater of operations. And of course, Japan, Russia, Brazil, and the European Union all have their own space programs.

Space exploration, despite all the bonhomie about scientific and economic benefit for the common good, has always been driven by strategic competition. Beyond mere macho “bragging rights” about being first, countries have understood that controlling the high ground, or the high frontier, is a vital military imperative. So we, as a nation, might further consider the value of space surveillance and missile defense. It’s hard to imagine any permanent peace deal in the Middle East, for example, that does not include, as an additional safeguard, a significant commitment to missile and rocket defense, overseen by impervious space satellites. So if the U.S. and Israel, for example, aren’t there yet, well, they need to get there.

Americans, who have often hoped that space would be a demilitarized preserve for peaceful cooperation, need to understand that space, populated by humans and their machines, will be no different from earth, populated by humans and their machines. That is, every virtue, and every evil, that is evident down here will also be evident up there. If there have been, and will continue to be, arms races on earth, then there will be arms races in space. As we have seen, other countries are moving into space in a big way–and they will continue to do so, whether or not the U.S. participates.

Meanwhile, in the nearer term, if the Bush administration’s “forward strategy of freedom”–the neoconservative idea that we would make America safe by transforming the rest of the world–is no longer an operative policy, then we will, inevitably, fall back on “defense” as the key idea for making America safe.

But in the short run, of course, the dominant issue is the economy. Aside from the sometimes inconvenient reality that national defense must always come first, the historical record shows that high-tech space work is good for the economy; the list of spinoffs from NASA, spanning the last half-century, is long and lucrative.

Moreover, a great way to guarantee that the bailout/stimulus money is well spent is to link it to a specific goal–a goal which will in turn impose discipline on the spenders. During the New Deal, for example, there were many accusations of malfeasance against FDR’s “alphabet soup” of agencies, and yet the tangible reality, in the 30s, was that things were actually getting done. Jobs were created, and, just as important, enduring projects were being built; from post offices to Hoover Dam to the Tennessee Valley Authority, America was transformed.

Even into the 50s and 60s, the federal government was spending money on ambitious and successful projects. The space program was one, but so was the interstate highway program, as well as that new government startup, ARPANET.

Indeed, it could be argued that one reason the federal government has grown less competent and more flabby over the last 30 years is the relative lack of “hard” Hamiltonian programs–that is, nuts and bolts, cement and circuitry–to provide a sense of bottom-line rigor to the spending process.

And so, for example, if America were to succeed in building a space elevator–in its essence a 22,000-mile cable, operating like a pulley, dangling down from a stationary satellite, a concept first put forth in the late 19th century–that would be a major driver for economic growth. Japan has plans for just such a space elevator; aren’t we getting a little tired of losing high-tech economic competitions to the Japanese?

So a robust space program would not only help protect America; it would also strengthen our technological economy.

But there’s more. In the long run, space spending would be good for the environment. Here’s why:

History, as well as common sense, tells us that the overall environmental footprint of the human race rises alongside wealth. That’s why, for example, the average American produces five times as much carbon dioxide per year as the average person dwelling anywhere else on earth. Even homeless Americans, according to an MIT study–and even the most scrupulously green Americans–produce twice as much CO2, per person, as the rest of the world. Around the planet, per capita carbon dioxide emissions closely track per capita income.

A holistic understanding of Homo sapiens in his environment will acknowledge the stubbornly acquisitive and accretive reality of human nature. And so a truly enlightened environmental policy will acknowledge another blunt reality: that if the carrying capacity of the earth is finite, then it makes sense, ultimately, to move some of the population of the earth elsewhere–into the infinity of space.

The zero- and negative-population-growth (ZPG and NPG) advocates have their own ideas, of course, but those ideas don’t seem to be popular in America, let alone the world. By contrast, in the no-limits infinity of space there is plenty of room for diversity and political experimentation on the final frontier, just as there were multiple opportunities in centuries past in the New World. The main variable is developing the space-traveling capacity to get up there–to the moon, Mars, and beyond–to see what’s possible.

Instead, the ultimately workable environmental plan–the ultimate vision for preserving the flora, the fauna, and the ice caps–is to move people, and their pollution, off this earth.

Indeed, space travel is surely the ultimate plan for the survival of our species, too. Eventually, through runaway WMD, or runaway pollution, or a stray asteroid, or some Murphy-esque piece of bad luck, we will learn that our dominion over this planet is fleeting. That’s when we will discover the grim true meaning of Fermi’s Paradox.

In various ways, humankind has always anticipated apocalypse. And so from Noah’s Ark to “Silent Running” to “WALL-E,” we have envisioned ways for us and all other creatures, great and small, to survive. The space program, stutteringly nascent as it might be, can be seen as a slow-groping understanding that lifeboat-style compartmentalization, on earth and in the heavens, is the key to species survival. It’s a Darwinian fitness test that we ought not to flunk.

Barack Obama, who has blazed so many trails in his life, can blaze still more, including a track to space, over the far horizon of the future. In so doing, he would be keeping faith with a figure that he in many ways resembles, John F. Kennedy. It was the 35th President who declared that not only would America go to the moon, but that we would lead the world into space.

As JFK put it so ringingly back in 1962:

The vows of this Nation can only be fulfilled if we in this Nation are first, and, therefore, we intend to be first. In short, our leadership in science and in industry, our hopes for peace and security, our obligations to ourselves as well as others, all require us to make this effort, to solve these mysteries, to solve them for the good of all men, and to become the world’s leading space-faring nation.

Today the 44th President must spend a lot of money to restore our prosperity, but he must spend it wisely. He must also keep America secure against encroaching threats, even as he must improve the environment in the face of a burgeoning global economy.

Accomplishing all these tasks is possible, but not easy. Yes, of course he will need new ideas, but he will also need familiar and proven ideas. One of the best is fostering and deploying profound new technology in pursuit of expansion and exploration.

The stars, one might hope, are aligning for just such a rendezvous with destiny.

Tracking your health is a growing phenomenon. People have historically measured and recorded their health using simple tools: a pencil, paper, a watch and a scale. But with custom spreadsheets, streaming wifi gadgets, and a new generation of people open to sharing information, this tracking is moving online. Pew Internet reports that 70–80% of Internet users go online for health reasons, and Health 2.0 websites are popping up to meet the demand.

David Shatto, an online health enthusiast, wrote in to CureTogether, a health-tracking website, with a common question: “I’m ‘healthy’ but would be interested in tracking my health online. Not sure what this means, or what a ‘healthy’ person should track. What do you recommend?”

There are probably as many answers to this question as there are people who track themselves. The basic measures that apply to most people are:
- sleep
- weight
- calories
- exercise
People who have an illness or condition will also measure things like pain levels, pain frequency, temperature, blood pressure, day of cycle (for women), and results of blood and other biometric tests. Athletes track heart rate, distance, time, speed, location, reps, and other workout-related measures.
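For readers who like to keep their numbers in a structured form, the four basic measures above can be captured in a very simple daily record. Here is a minimal sketch in Python; the field names and sample values are my own illustration, not the schema of CureTogether or any other site:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class DailyEntry:
    date: str           # ISO date, e.g. "2009-01-05"
    sleep_hours: float  # total sleep
    weight_lb: float    # morning weight
    calories: int       # estimated intake
    exercise_min: int   # minutes of exercise

def average_weight(entries):
    """Average weight across a list of daily entries."""
    return mean(e.weight_lb for e in entries)

log = [
    DailyEntry("2009-01-05", 7.5, 172.0, 2200, 30),
    DailyEntry("2009-01-06", 6.0, 171.4, 2500, 0),
    DailyEntry("2009-01-07", 8.0, 171.0, 2100, 45),
]
print(round(average_weight(log), 2))  # 171.47
```

Someone with a chronic condition could extend the same record with pain level or blood pressure fields, and an athlete with heart rate or distance; the point is that even a tiny structure like this turns scattered jottings into data you can average, chart, and compare.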

Another answer to this question comes from Karina, who writes on Facebook: “It’s just something I do, and need to do, and it’s part of my life. So, in a nutshell, on most days I write down what I ate and drank, how many steps I walked, when I went to bed and when I woke up, my workouts and my pain/medication/treatments. I also write down various comments about meditative activities and, if it’s extreme, my mood.”

David’s question is being asked by the media too. Thomas Goetz, deputy editor of Wired Magazine, writes about it in his blog The Decision Tree. Jamin Brophy-Warren recently wrote about the phenomenon of personal data collection in the Wall Street Journal, calling it the “New Examined Life”. Writers and visionaries Kevin Kelly and Gary Wolf have started a growing movement called The Quantified Self, which holds monthly meetings about self-tracking activities and devices. And self-experimenters like David Ewing Duncan (aka “Experimental Man”) and Seth Roberts (of the “Shangri-La Diet”) are writing books about their experiences.

In the end, what to track really depends on what each person wants to get out of it:
- Greater self-awareness and a way to stick to New Year’s resolutions?
- Comparing data to other self-trackers to see where you fit on the health curve?
- Contributing health data to research into finding cures for chronic conditions?

Based on answers to these questions, you can come up with your own list of things to track, or take some of the ideas listed above. Whatever the reason, tracking is the new thing to do online and can be a great way to optimize and improve your health.

Alexandra Carmichael is co-founder of CureTogether, a Mountain View, CA startup that launched in 2008 to help people optimize their health by anonymously comparing symptoms, treatments, and health data. Its members track their health online and share their experience with 186 different health conditions. She is also the author of The Collective Well and Ecnalab blogs, and a guest blogger at the Quantified Self.

The year 2008 saw the hype fall away from virtual worlds; in contrast, social networks are going from strength to strength and are increasingly being used as protest vehicles around the world. While the utility of Facebook and Twitter (using the #griot hashtag to report on the riots in Greece) has been widely reported upon, some of the more interesting and interactive information can still be found in Second Life, which bodes well for the future of virtual worlds. A full report and links relating to this phenomenon are over at the MetaSecurity blog. Whether it be web forums, Facebook, or Second Life, virtual communities will continue to be an increasingly important part of the national security picture in 2009.

In the volume “Global Catastrophic Risks” you can find an excellent article by Milan Ćirković, “Observation selection effects and global catastrophic risks”, in which he shows that we cannot use information from past records to estimate the future rate of global catastrophes.
This has one more consequence, which I investigate in my article “Why antropic principle stops to defend us. Observation selection, future rate of natural disasters and fragility of our environment”: we could be at the end of a long period of stability, some catastrophes may be long overdue, and, most importantly, we could be underestimating the fragility of our environment, which could be on the verge of bifurcation. This is because the origination of intelligent life on Earth is a very rare event, which means that some critical parameters may lie near their bounds of stability, and small anthropogenic influences could start a catastrophic process in this century.

http://www.scribd.com/doc/8729933/Why-antropic-principle-sto…vironment–

Why antropic principle stops to defend us
Observation selection, future rate of natural disasters and fragility of our environment.

Alexei Turchin,
Russian Transhumanist movement

The previous version of this article was published in Russian in «Problems of Risk Management and Safety», Proceedings of the Institute for Systems Analysis of the Russian Academy of Sciences, v. 31, 2007, p. 306–332.

Abstract:

The main idea of this article is not only that observation selection leads to underestimation of the future rate of natural disasters, but also that our environment is much more fragile to anthropogenic influences (like an overinflated toy balloon), again because of observation selection, and so we should think much more carefully about global warming and deep-earth drilling.
The main idea of the anthropic principle (AP) is that our Universe has qualities that allow the existence of observers. In particular, this means that global natural disasters that could have prevented the development of intelligent life on Earth never happened here. This is true only for the past, not for the future. So we cannot use information about the frequency of global natural disasters in the past to extrapolate it into the future, except in some special cases where we have additional information, as Ćirković shows in his paper. Therefore, an observer could find that all the parameters important for his or her survival (the sun, temperature, asteroid risk, etc.) start altogether, inexplicably and quickly, deteriorating – and possibly we can already find the signs of this process. In a few words: the anthropic principle has stopped ‘defending’ humanity, and we should take responsibility for our own survival. Moreover, since the origination of intelligent life on Earth is a very rare event, some critical parameters may lie near their bounds of stability, and small anthropogenic influences could start a catastrophic process in this century.
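The observation-selection argument can be illustrated with a toy simulation: assume some true per-period probability of a sterilizing catastrophe, and note that observers, by construction, can only arise on worlds whose past record happens to contain zero such events. The parameters below are arbitrary illustrations, not estimates of any real rate:

```python
import random

random.seed(1)

TRUE_P = 0.002   # assumed true per-period catastrophe probability (illustrative)
HISTORY = 1000   # periods in the past record an observer can inspect
WORLDS = 2000    # number of simulated planetary histories

def survived(p, steps):
    """True if no catastrophe occurred in any of `steps` periods."""
    return all(random.random() >= p for _ in range(steps))

# Observers arise only on the surviving worlds.
survivors = sum(survived(TRUE_P, HISTORY) for _ in range(WORLDS))

# Every surviving observer counts zero catastrophes in their own record,
# so the naive frequency estimate of the rate is exactly 0, regardless of
# TRUE_P. A calm past is a selection effect, not evidence of safety.
naive_estimate = 0 / HISTORY

print(f"{survivors}/{WORLDS} worlds produce observers; "
      f"each infers a past rate of {naive_estimate}, true rate is {TRUE_P}")
```

With these numbers only a minority of worlds survive the full history, yet every observer who exists to look sees a record of perfect stability, which is exactly the bias the article describes.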

Nuclear warheads

Martin Hellman is a professor at Stanford, one of the co-inventors of public-key cryptography, and the creator of NuclearRisks.org. He has recently published an excellent essay about the risks of failure of nuclear deterrence: Soaring, Cryptography and Nuclear Weapons (also available as a PDF).

I highly recommend that you read it, along with the other resources on NuclearRisks.org, and also subscribe to their newsletter (on the left of the front page).

There are also chapters on Nuclear War and Nuclear Terrorism in Global Catastrophic Risks (intro freely available as PDF here).

Update: Here’s a Martin Hellman quote from a piece he wrote called Work on Technology, War & Peace:

You have a right to know the risk of locating a nuclear power plant near your home and to object if you feel that risk is too high. Similarly, you should have a right to know the risk of relying on nuclear weapons for our national security and to object if you feel that risk is too high. But almost no effort has gone into estimating that risk. To remedy that lack of information, this effort urgently calls for in-depth studies of the risk associated with nuclear deterrence.

While this new project may seem to have a much more modest goal than Beyond War, there is tremendous hidden potential: My preliminary analysis indicates that the risk from relying on nuclear weapons is thousands of times greater than is prudent. If the results of the proposed studies are anywhere near my preliminary estimate, those studies then become merely the first step in a long-term process of risk reduction. Because many later steps in that process seem impossible from our current vantage point, it is better to leave them to be discovered as the process unfolds, thereby removing objections that the effort is not rooted in reality.

I wrote an essay on the theme of the possibility of artificial initiation of a fusion explosion of the giant planets and other objects of the Solar system. It is not a scientific article, but an attempt to collect all the necessary information about this existential risk. I conclude that it cannot be ruled out as a technical possibility, and that it could later be carried out as an act of space war, one which could sterilize the entire Solar system.

There are some events which are very improbable, but whose consequences could be infinitely large (e.g., black holes at the LHC). The possibility of nuclear ignition of a self-sustaining fusion reaction in giant planets like Jupiter and Saturn, which could lead to the explosion of the planet, is one of them.

Inside the giant planets, thermonuclear fuel exists under high pressure and at high density. For certain substances this density is higher (except, perhaps, for water) than the density of the same substances on Earth. Large quantities of the substance would not fly away from the reaction zone quickly enough to prevent a large energy release. This fuel has never been involved in fusion reactions, so it retains its easily combustible components, namely deuterium, helium-3, and lithium, which in the stars have already burned away. In addition, the interiors of the giant planets contain fuel for reactions which could support an explosive burn, namely the triple-helium reaction (3 He-4 → C-12) and the reaction of hydrogen with oxygen, which, however, requires a much higher temperature to start. The substance in the bowels of the giant planets is a degenerate metallic sea, much like the substance of white dwarfs, in which explosive thermonuclear burning regularly takes place in the form of helium flashes and Type I supernovae.
The more opaque the environment, the greater the chances for the reaction to proceed, since scattering losses are smaller; and in the bowels of the giant planets there are many impurities, so lower transparency can be expected. Gravitational differentiation and chemical reactions can lead to the formation of regions within the planet that are more suitable for starting the reaction in its initial stages.

The stronger the explosion of the fuse, the greater the initial burning region, and the more likely the reaction is to be self-sustaining, since the energy losses will be smaller and the quantity of reacting substance and the reaction time greater. It can be assumed that, with a sufficiently powerful fuse, the reaction would become self-sustaining.

Recently the Galileo spacecraft was deorbited into Jupiter. Galileo carried pellets of plutonium-238 which, under some assumptions, could undergo a chain reaction and lead to a nuclear explosion. It is interesting to ask whether this could lead to the explosion of a giant planet. The spacecraft Cassini may eventually enter Saturn with unknown consequences. In the future, the deliberate ignition of a giant planet may become a means of space war. Such an event could sterilize the entire Solar system.

A scientific basis for this study can be found in the article “Necessary conditions for the initiation and propagation of nuclear-detonation waves in plane atmospheres” by Thomas Weaver and A. Wood, Physical Review A 20, 1 (July 1979):
http://www.lhcdefense.org/pdf/LHC%20-%20Sancho%20v.%20Doe%20…tion-1.pdf

The paper rejects the possibility of a thermonuclear detonation propagating in the Earth’s atmosphere or in the Earth’s oceans, because radiative losses cannot be balanced (this does not exclude the possibility of reactions which involve only a small amount of earthly matter – but even that would be enough for disastrous consequences and human extinction).

The paper states: “We, therefore, conclude that thermonuclear-detonation waves cannot propagate in the terrestrial ocean by any mechanism by an astronomically large margin.

It is worth noting, in conclusion, that the susceptibility to thermonuclear detonation of a large body of hydrogenous material is an exceedingly sensitive function of its isotopic composition, and, specifically, of the deuterium atom fraction, as is implicit in the discussion just preceding. If, for instance, the terrestrial oceans contained deuterium at any atom fraction greater than 1:300 (instead of the actual value of 1:6000), the ocean could propagate an equilibrium thermonuclear-detonation wave at a temperature ≲2 keV (although a fantastic 10**30 ergs – 2 × 10**7 MT, or the total amount of solar energy incident on the Earth for a two-week period – would be required to initiate such a detonation at a deuterium concentration of 1:300). Now a non-negligible fraction of the matter in our own galaxy exists at temperatures much less than 300 °K, i.e., the gas-giant planets of our stellar system, nebulas, etc. Furthermore, it is well known that thermodynamically-governed isotopic fractionation ever more strongly favors higher relative concentration of deuterium as the temperature decreases, e.g., the D:H concentration ratio in the ~10**2 K Great Nebula in Orion is about 1:200. Finally, orbital velocities of matter about the galactic center of mass are of the order of 3 × 10**7 cm/sec at our distance from the galactic core.

It is thus quite conceivable that hydrogenous matter (e.g., CH4, NH3, H2O, or just H2) relatively rich in deuterium (≳1 at. %) could accumulate at its normal, zero-pressure density in substantial thicknesses on planetary surfaces, and such layering might even be a fairly common feature of the colder, gas-giant planets. If thereby highly enriched in deuterium (≳10 at. %), thermonuclear detonation of such layers could be initiated artificially with attainable nuclear explosives. Even with deuterium atom fractions approaching 0.3 at. % (less than that observed over multiparsec scales in Orion), however, such layers might be initiated into propagating thermonuclear detonation by the impact of large (diam ≳10**2 m), ultra-high-velocity (≳3 × 10**7 cm/sec) meteors or comets originating from nearer the galactic center. Such events, though exceedingly rare, would be spectacularly visible on distance scales of many parsecs.”
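The energy figure in the quoted passage can be checked with simple unit conversion. This is my own back-of-envelope arithmetic, not part of the paper, using the standard TNT equivalent for a megaton:

```python
# Sanity-check the quoted figure: 10**30 ergs should be roughly
# 2 x 10**7 megatons of TNT equivalent.
ERGS_PER_JOULE = 1e7
JOULES_PER_MEGATON = 4.184e15  # standard TNT equivalent of one megaton

energy_ergs = 1e30
energy_joules = energy_ergs / ERGS_PER_JOULE        # 1e23 J
energy_megatons = energy_joules / JOULES_PER_MEGATON  # about 2.4e7 MT

print(f"{energy_megatons:.2e} MT")
```

The result, about 2.4 × 10**7 MT, matches the paper’s round figure of 2 × 10**7 MT for the energy needed to ignite an ocean enriched to a 1:300 deuterium fraction.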

Full text of my essay is here: http://www.scribd.com/doc/8299748/Giant-planets-ignition

There continues to be some discussion and rejection of the idea that terrorists would be able to exploit new technology platforms such as social networking and virtual worlds. In a recent post, the blogger Abu Aardvark (aka Marc Lynch of GW University) goes some way toward debunking ideas surrounding terrorist use of social networking, wikis, and virtual worlds. He further states that Al Qaeda is now behind the curve in the area of user-generated content and interactivity. While the aardvark’s media analysis relating to ‘al-Qaeda outreach’ appears to be sound, I think he misses a fundamental point about terrorists and technology.

The defining feature of terrorism and technology is its adaptive quality. It is highly unlikely that individual terrorists or terrorist groups would exactly replicate the mainstream functions of the technology Abu Aardvark highlights in his post. It is more likely that they would take certain elements from the various innovations and mesh them together or otherwise distort them. So an al-Qaeda Facebook isn’t going to happen anytime soon, but using the system to identify IDF soldiers for possible assassination already has. Similarly, an ‘AQThirdlife’ replicating the virtual world Second Life seems unlikely, but terrorist use of some of its key features still seems probable. The virtual money transfer aspect continues to be high on most people’s lists of concerns (this is discussed in a recent SSRN paper by Stephen Landman, Funding Bin Laden’s Avatar: A Proposal for the Regulation of Virtual Hawalas, which he has been kind enough to share with me). Aardvark’s point about an AQThirdlife also fails to account for phenomena such as the virtual caliphate running in the UK, where users log into areas to see and hear sermons by dead or expelled radical preachers; there continues to be a market for extremism, and virtual exposure to it is potentially more powerful than real exposure.

As ever, the central point is that, given rapid and increasing virtualization, flexible thinking and planning are required to conceptualize the next form of terrorist threat; blogs appear to be a great enabler of this practice.

Referring to the seizure of more than 400 fake routers so far, Melissa E. Hathaway, head of cyber security in the Office of the Director of National Intelligence, says: “Counterfeit products have been linked to the crash of mission-critical networks, and may also contain hidden ‘back doors’ enabling network security to be bypassed and sensitive data accessed [by hackers, thieves, and spies].” She declines to elaborate. In a 50-page presentation for industry audiences, the FBI concurs that the routers could allow Chinese operatives to “gain access to otherwise secure systems” (page 38).

Read the entire report in Business Week. See a TV news report about the problem on YouTube.

Here I would like to offer readers a quotation from my book “Structure of the Global Catastrophe” (http://www.scribd.com/doc/7529531/-), in which I discuss the problems of preventing catastrophes.

Refuges and bunkers

Different sort of a refuge and bunkers can increase chances of survival of the mankind in case of global catastrophe, however the situation with them is not simple. Separate independent refuges can exist for decades, but the more they are independent and long-time, the more efforts are necessary for their preparation in advance. Refuges should provide ability for the mankind to the further self-reproduction. Hence, they should contain not only enough of capable to reproduction people, but also a stock of technologies which will allow to survive and breed in territory which is planned to render habitable after an exit from the refuge. The more this territory will be polluted, the higher level of technologies is required for a reliable survival.
Very big bunker will appear capable to continue in itself development of technologies and after catastrophe. However in this case it will be vulnerable to the same risks, as all terrestrial civilisation — there can be internal terrorists, AI, nanorobots, leaks etc. If the bunker is not capable to continue itself development of technologies it, more likely, is doomed to degradation.
Further, a bunker may be either "civilizational," preserving the majority of the civilization's cultural and technological achievements, or "specific," preserving only human life. For "long" bunkers (prepared for long-term habitation), the problems of raising and educating children, and the risks of degradation, will arise. A bunker can either live off resources stockpiled before the catastrophe or engage in its own production. In the latter case it would simply be an underground civilization on a contaminated planet.
The more a bunker relies on modern technologies and is culturally and technically self-sufficient, the more people must live in it (though in the future this may no longer hold: a bunker based on advanced nanotechnology could even be uninhabited, holding only frozen human embryos). To ensure simple reproduction along with training in the basic human trades, thousands of people are required. These people must be selected and placed in the bunker before the final catastrophe, preferably on a permanent basis. It is improbable, however, that thousands of intellectually and physically excellent people would want to sit in a bunker "just in case." Instead, they might staff the bunker in two or three shifts and receive a salary for it. (Russia is now beginning the "Mars 500" experiment, in which six people will live in complete autonomy — in water, food, and air — for 500 days. This is probably the best result we now have. In the early 1990s the United States ran the "Biosphere 2" project, in which people were to live for two years fully self-sufficient under a dome in the desert. The project ended in partial failure: the oxygen level in the system began to fall because of unforeseen growth of microorganisms and insects.) An additional risk for bunkers is the psychology of small groups confined in one space, well known from Antarctic expeditions: a growth of animosity that can lead to destructive actions and reduce the survival rate.
A bunker may be unique or one of many. In the first case it is vulnerable to various catastrophes; in the second, struggle between bunkers for the resources remaining outside becomes possible — or a continuation of the war, if the catastrophe resulted from war.
A bunker will most likely be underground, at sea, or in space. But a space bunker could also be below the surface of an asteroid or the Moon. For a space bunker it will be harder to use the resources remaining on Earth. A bunker may be completely isolated, or it may allow "excursions" into the hostile external environment.
A nuclear submarine can serve as a model for a sea bunker: it has large reserves, autonomy, maneuverability, and resistance to hostile influences. Moreover, it can easily be cooled by the ocean (cooling a sealed underground bunker is not a simple problem) and can extract water, oxygen, and even food from it. In addition, ready boats and proven technical solutions already exist, and a submarine can withstand shock and radiation. However, the autonomous endurance of modern submarines is at best about one year, and they have little room for storing supplies.
The modern International Space Station could sustain the lives of several people autonomously for about a year, though there remain the problems of autonomous landing and readaptation afterwards. It is unclear whether a dangerous agent capable of penetrating every crack on Earth could dissipate in so short a time.
There is a difference between gas- and bio-refuges, which can be on the surface but divided into many sections to maintain quarantine, and refuges intended as shelter against even a minimally intelligent opponent (including other people who failed to get a place in a refuge). In the case of a biological danger, an island under strict quarantine can serve as a refuge, provided the disease is not transmitted by air.
A bunker can have various vulnerabilities. In the case of a biological threat, for example, an insignificant penetration is enough to destroy it. Only a high-tech bunker can be completely independent. A bunker needs energy and oxygen. A nuclear reactor can supply the energy, but modern machines can hardly remain durable for more than 30–50 years. And a bunker cannot be universal: it must be built against certain kinds of threats known in advance — radiation, biological, and so on.
The more reinforced a bunker is, the fewer such bunkers mankind can prepare in advance, and the harder each one is to hide. If after a certain catastrophe only a limited number of bunkers remain, and their locations are known, a secondary nuclear war could finish off mankind with a countable number of strikes on known places.
The larger a bunker, the fewer such bunkers can be built. And any bunker is vulnerable to accidental destruction or contamination. Therefore a limited number of bunkers, each with some probability of contamination, unambiguously defines the maximum survival time of mankind. If the bunkers are connected by trade and other material exchange, contamination can spread between them more easily; if they are not connected, they will degrade faster. The more powerful and expensive a bunker, the harder it is to build unnoticed by a probable opponent, and the more attractive a target it becomes. The cheaper the bunker, the less durable it is.
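The tradeoff between the number of bunkers and survival time can be made concrete with a toy Monte Carlo model (my own illustration, not from the original text; the per-year failure probability is an arbitrary assumption): if each of N independent bunkers is lost with some fixed probability each year, the expected time until the last one fails grows only logarithmically with N.

```python
import random

def years_until_all_fail(n_bunkers, p_fail_per_year, trials=10000, seed=0):
    """Monte Carlo estimate of the mean number of years until the last of
    n_bunkers independent bunkers is lost, each failing with probability
    p_fail_per_year in any given year."""
    rng = random.Random(seed)
    total_years = 0
    for _ in range(trials):
        alive, years = n_bunkers, 0
        while alive > 0:
            years += 1
            # each surviving bunker independently survives this year
            alive = sum(1 for _ in range(alive) if rng.random() >= p_fail_per_year)
        total_years += years
    return total_years / trials

# Doubling the number of bunkers adds only a roughly constant increment
# to the expected survival time (growth is ~log N, not linear in N).
for n in (1, 2, 4, 8):
    print(n, "bunkers:", round(years_until_all_fail(n, 0.05), 1), "years")
```

With a 5% annual loss rate, one bunker lasts about 20 years on average, and even eight last well under a century — which is the point of the paragraph above: a finite set of vulnerable bunkers buys a bounded, not indefinite, survival time.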
Accidental shelters are also possible — people who escape into the metro, mines, or submarines. They will suffer from the absence of central authority and from the struggle for resources. When resources in one bunker are exhausted, its people may make armed attempts to break into a neighboring bunker. And people who escaped by chance (or under the threat of the coming catastrophe) may attack those who locked themselves in a bunker.
Bunkers will suffer from the necessity of exchanging heat, energy, water, and air with the outside world. The more independent a bunker is, the less time it can exist in full isolation. Deep underground bunkers will suffer badly from overheating: any nuclear reactor or other complex machine requires external cooling, cooling by external water will unmask the bunker, and it is impossible to run energy sources without losses in the form of heat, while at depth the surrounding rock is always hot. The growth of temperature with depth limits how deep a bunker can be. (The geothermal gradient averages about 30 °C per kilometer. This means that bunkers deeper than about one kilometer are impossible — or would require huge cooling installations on the surface, like the gold mines of South Africa. Deeper bunkers might be possible in the ice of Antarctica.)
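The depth limit follows from simple arithmetic. A sketch (my own; the 15 °C mean surface temperature is an assumed value, not from the text):

```python
def rock_temperature_c(depth_km, surface_temp_c=15.0, gradient_c_per_km=30.0):
    """Approximate ambient rock temperature at a given depth, using the
    average geothermal gradient cited in the text (30 degrees C per km)
    and an assumed mean surface temperature of 15 degrees C."""
    return surface_temp_c + gradient_c_per_km * depth_km

# At 1 km the surrounding rock is already around 45 degrees C; at the
# ~3.5 km depth of the deepest South African gold mines it is far above
# human tolerance without massive active cooling.
for depth in (0.5, 1.0, 2.0, 3.5):
    print(f"{depth} km: {rock_temperature_c(depth):.0f} C")
```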
The more durable, universal, and effective a bunker must be, the earlier its construction must begin. But then it becomes harder to foresee future risks. For example, in the 1930s Russia built many gas-proof bomb shelters, which proved useless and vulnerable to heavy high-explosive bombs.
The effectiveness of the bunker a civilization can create corresponds to that civilization's technological level — but this means the civilization also possesses correspondingly powerful means of destruction, so an especially robust bunker is needed. And the more autonomous and self-sufficient a bunker is (for example, equipped with AI, nanorobots, and biotechnologies), the more easily it could eventually do without people at all, giving rise to a purely computer civilization.
People from different bunkers will compete over who emerges onto the surface first and who, accordingly, will own it — which creates a temptation to go out onto still-contaminated parts of the Earth.
Automatic robotic bunkers are also possible: frozen human embryos are stored in a kind of artificial uterus and, after hundreds or thousands of years, begin to be grown. (The technology of embryo cryonics already exists; work on an artificial uterus is forbidden for bioethical reasons, but in principle such a device is possible.) Such installations with embryos could even be sent on voyages to other planets. However, if such bunkers are possible, the Earth will hardly remain empty — most likely it will be populated by robots. Besides, if a human child raised by wolves considers itself a wolf, what will a human raised by robots consider itself?
So the idea of survival in bunkers contains many pitfalls that reduce its utility and probability of success. Long-term bunkers take many years to build, yet in that time they may become obsolete, since the situation will change and it is not known what to prepare for. It is possible that a number of powerful bunkers built during the Cold War still exist. The limit of modern technical possibilities is a bunker with roughly 30 years of autonomy; building it, however, would take a long time — a decade — and require billions of dollars of investment.
A separate category is information bunkers, intended to convey our knowledge, technologies, and achievements to possible surviving descendants. For example, a stockpile of seed and grain samples has been created for this purpose on Spitsbergen, Norway (the "Doomsday Vault"). Variants that preserve the genetic diversity of humanity by means of frozen sperm are also possible. Digital carriers resistant to long storage — for example, compact discs on which text readable through a magnifier is etched — are being discussed and implemented by the Long Now Foundation. Such knowledge could be crucial for not repeating our errors.

November 14, 2008
Computer History Museum, Mountain View, CA

http://ieet.org/index.php/IEET/eventinfo/ieet20081114/

Organized by: Institute for Ethics and Emerging Technologies, the Center for Responsible Nanotechnology and the Lifeboat Foundation

A day-long seminar on threats to the future of humanity, natural and man-made, and the pro-active steps we can take to reduce these risks and build a more resilient civilization. Seminar participants are strongly encouraged to pre-order and review the Global Catastrophic Risks volume edited by Nick Bostrom and Milan Cirkovic, and contributed to by some of the faculty for this seminar.

This seminar will precede the futurist mega-gathering Convergence 08, November 15–16 at the same venue, which is co-sponsored by the IEET, Humanity Plus (World Transhumanist Association), the Singularity Institute for Artificial Intelligence, the Immortality Institute, the Foresight Institute, the Long Now Foundation, the Methuselah Foundation, the Millennium Project, the Reason Foundation and the Acceleration Studies Foundation.

SEMINAR FACULTY

  • Nick Bostrom Ph.D., Director, Future of Humanity Institute, Oxford University
  • Jamais Cascio, research affiliate, Institute for the Future
  • James J. Hughes Ph.D., Exec. Director, Institute for Ethics and Emerging Technologies
  • Mike Treder, Executive Director, Center for Responsible Nanotechnology
  • Eliezer Yudkowsky, Research Associate, Singularity Institute for Artificial Intelligence
  • William Potter Ph.D., Director, James Martin Center for Nonproliferation Studies

REGISTRATION:
Before Nov 1: $100
After Nov 1 and at the door: $150