
I wrote an essay on the possibility of artificially initiating a fusion explosion of the giant planets and other objects in the Solar System. It is not a scientific article, but an attempt to collect all the necessary information about this existential risk. I conclude that it cannot be ruled out as a technical possibility, and that it could later be carried out as an act of space war which could sterilize the entire Solar System.

There are some events which are very improbable, but whose consequences could be infinitely large (e.g., black holes at the LHC). The possibility of nuclear ignition of a self-sustaining fusion reaction in giant planets like Jupiter and Saturn, which could lead to the explosion of the planet, is one of them.

The interiors of the giant planets contain thermonuclear fuel at high pressure and high density. For most substances (water perhaps excepted) this density is higher than the density of the same substances on Earth, and large quantities of matter would not fly away from the reaction zone quickly, leaving enough time for a large energy release. This fuel has never been involved in fusion reactions, so it still contains the easily combustible components, namely deuterium, helium-3 and lithium, which in stars are the first to burn. In addition, the interiors of the giant planets contain fuel for reactions that an explosive burn could ignite, namely the triple-helium reaction (3 He-4 → C-12) and reactions of hydrogen capture onto oxygen, which, however, require a much higher temperature to start. The matter in the depths of the giant planets is a degenerate metallic sea, much like the matter of white dwarfs, in which explosive thermonuclear burning regularly occurs in the form of helium flashes and Type I supernovae.
The more opaque the medium, the greater the chance that the reaction will propagate in it, since radiation losses and scattering are lower; the interiors of the giant planets contain many impurities, so lower transparency can be expected. Gravitational differentiation and chemical reactions can lead to the formation of regions within the planet that are more suitable for starting the reaction in its initial stages.

The stronger the explosion of the fuse, the larger the initial burning region, and the more likely the reaction is to be self-sustaining, since the energy losses will be smaller while the amount of reacting matter and the reaction time will be larger. It can be assumed that with a sufficiently powerful fuse the reaction would become self-sustaining.
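One way to see this argument numerically (a minimal sketch of my own, with placeholder coefficients standing in for real opacities and reaction rates, not values from the essay): the energy released in an ignited region grows with its volume (~R^3), while radiative losses escape through its surface (~R^2), so the loss-to-gain ratio falls as 1/R and a larger fuse helps.

```python
# Toy scaling sketch (illustrative only): surface losses ~ R^2, volume heating ~ R^3.
def loss_to_gain_ratio(radius_m, loss_coeff=1.0, gain_coeff=1.0):
    """Ratio of surface radiative losses to volumetric fusion heating."""
    surface_loss = loss_coeff * radius_m ** 2
    volume_gain = gain_coeff * radius_m ** 3
    return surface_loss / volume_gain

for r in (1e0, 1e2, 1e4):  # ignition-region radius in metres
    print(f"R = {r:.0e} m -> loss/gain ~ {loss_to_gain_ratio(r):.1e}")
```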

Recently the Galileo spacecraft was deorbited into Jupiter. Galileo carried pellets of plutonium-238 which, under certain assumptions, could undergo a chain reaction and lead to a nuclear explosion. It is interesting to ask whether this could lead to the explosion of a giant planet. The Cassini spacecraft may eventually enter Saturn with unknown consequences. In the future, deliberate ignition of a giant planet may become a means of space warfare. Such an event could sterilize the entire Solar System.

A scientific basis for this study can be found in the article “Necessary conditions for the initiation and propagation of nuclear-detonation waves in plane atmospheres”,
Thomas Weaver and Lowell Wood, Physical Review A 20, 1 July 1979,
http://www.lhcdefense.org/pdf/LHC%20-%20Sancho%20v.%20Doe%20…tion-1.pdf

That article rules out the possibility of a propagating thermonuclear detonation in the Earth's atmosphere and in the Earth's oceans, because radiation losses cannot be balanced (this does not exclude reactions that involve only a small fraction of the Earth's matter, which would still be enough for disastrous consequences and human extinction).

There it is said: “We, therefore, conclude that thermonuclear-detonation waves cannot propagate in the terrestrial ocean by any mechanism by an astronomically large margin.

It is worth noting, in conclusion, that the susceptibility to thermonuclear detonation of a large body of hydrogenous material is an exceedingly sensitive function of its isotopic composition, and, specifically, of the deuterium atom fraction, as is implicit in the discussion just preceding. If, for instance, the terrestrial oceans contained deuterium at any atom fraction greater than 1:300 (instead of the actual value of 1:6000), the ocean could propagate an equilibrium thermonuclear-detonation wave at a temperature ≲2 keV (although a fantastic 10^30 ergs, 2 × 10^7 MT, or the total amount of solar energy incident on the Earth for a two-week period, would be required to initiate such a detonation at a deuterium concentration of 1:300). Now a non-negligible fraction of the matter in our own galaxy exists at temperatures much less than 300 K, i.e., the gas-giant planets of our stellar system, nebulas, etc. Furthermore, it is well known that thermodynamically-governed isotopic fractionation ever more strongly favors higher relative concentration of deuterium as the temperature decreases, e.g., the D:H concentration ratio in the ~10^2 K Great Nebula in Orion is about 1:200. Finally, orbital velocities of matter about the galactic center of mass are of the order of 3 × 10^7 cm/sec at our distance from the galactic core.

It is thus quite conceivable that hydrogenous matter (e.g., CH4, NH3, H2O, or just H2) relatively rich in deuterium (~1 at. %) could accumulate at its normal, zero-pressure density in substantial thicknesses on planetary surfaces, and such layering might even be a fairly common feature of the colder, gas-giant planets. If thereby highly enriched in deuterium (≳10 at. %), thermonuclear detonation of such layers could be initiated artificially with attainable nuclear explosives. Even with deuterium atom fractions approaching 0.3 at. % (less than that observed over multiparsec scales in Orion), however, such layers might be initiated into propagating thermonuclear detonation by the impact of large (diam ~10^2 m), ultra-high velocity (~3 × 10^7 cm/sec) meteors or comets originating from nearer the galactic center. Such events, though exceedingly rare, would be spectacularly visible on distance scales of many parsecs.”
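As a sanity check on the initiation energy quoted above (a back-of-the-envelope calculation of my own, using the standard solar constant and Earth radius rather than anything from the paper): two weeks of sunlight intercepted by the Earth does indeed come to roughly 10^30 ergs, the same order of magnitude as 2 × 10^7 MT.

```python
import math

# Rough check of the Weaver-Wood figure: solar energy intercepted by Earth in two weeks.
SOLAR_CONSTANT = 1.36e3   # W/m^2 at 1 AU (standard value, my assumption)
EARTH_RADIUS = 6.371e6    # m
MT_TNT = 4.184e15         # J per megaton of TNT

intercepted_power = SOLAR_CONSTANT * math.pi * EARTH_RADIUS ** 2  # W on Earth's cross-section
energy_j = intercepted_power * 14 * 24 * 3600                     # J over two weeks

print(f"{energy_j:.1e} J = {energy_j * 1e7:.1e} erg")   # ~2e30 erg
print(f"= {energy_j / MT_TNT:.1e} MT of TNT")           # ~5e7 MT, same order as 2e7 MT
```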

Full text of my essay is here: http://www.scribd.com/doc/8299748/Giant-planets-ignition

November 14, 2008
Computer History Museum, Mountain View, CA

http://ieet.org/index.php/IEET/eventinfo/ieet20081114/

Organized by: Institute for Ethics and Emerging Technologies, the Center for Responsible Nanotechnology and the Lifeboat Foundation

A day-long seminar on threats to the future of humanity, natural and man-made, and the pro-active steps we can take to reduce these risks and build a more resilient civilization. Seminar participants are strongly encouraged to pre-order and review the Global Catastrophic Risks volume edited by Nick Bostrom and Milan Cirkovic, and contributed to by some of the faculty for this seminar.

This seminar will precede the futurist mega-gathering Convergence 08, November 15–16 at the same venue, which is co-sponsored by the IEET, Humanity Plus (World Transhumanist Association), the Singularity Institute for Artificial Intelligence, the Immortality Institute, the Foresight Institute, the Long Now Foundation, the Methuselah Foundation, the Millennium Project, Reason Foundation and the Acceleration Studies Foundation.

SEMINAR FACULTY

  • Nick Bostrom Ph.D., Director, Future of Humanity Institute, Oxford University
  • Jamais Cascio, research affiliate, Institute for the Future
  • James J. Hughes Ph.D., Exec. Director, Institute for Ethics and Emerging Technologies
  • Mike Treder, Executive Director, Center for Responsible Nanotechnology
  • Eliezer Yudkowsky, Research Associate, Singularity Institute for Artificial Intelligence
  • William Potter Ph.D., Director, James Martin Center for Nonproliferation Studies

REGISTRATION:
Before Nov 1: $100
After Nov 1 and at the door: $150

Open source has emerged as a powerful set of principles for solving complex problems in fields as diverse as education and physical security. With roughly 60 million Americans suffering from a chronic health condition, traditional research progressing slowly, and personalized medicine on the horizon, the time is right to apply open source to health research. Advances in technology enabling cheap, massive data collection combined with the emerging phenomena of self quantification and crowdsourcing make this plan feasible today. We can all work together to cure disease, and here’s how.

Read more…

Following is a discussion of two potential threats to humanity: one that has existed for eons, and a second that has recently resurfaced after we thought it had been laid to rest.

First, a recent story on PhysOrg describes the work researchers at Vanderbilt University have performed in isolating antibodies from elderly people who had survived the 1918 flu pandemic. This comes three years after researchers at Mount Sinai and the Armed Forces Institute of Pathology in Washington, D.C., isolated the same virus which caused this outbreak from the frozen bodies of people in Alaska who had died in the pandemic.

In addition to being an impressive achievement of biomedical science, which involved isolating antibody-secreting B cells from donors and generating “immortalized” cell lines to produce large amounts of antibodies, this research also demonstrates the amazing memory the immune system has (90 years!), as well as the ability scientists have to use tissue samples from people born nearly a century ago and fashion them into a potential weapon against future similar outbreaks. Indeed, these manufactured antibodies proved effective against the 1918 flu virus when tested in mice.

Furthermore, such research provides tools which could help generate antibodies to treat other viruses which still blight humanity (such as HIV) or are seen as potential threats, such as avian influenza.

http://www.physorg.com/news138198336.html

Second, nuclear annihilation. Russia’s recent foray into Georgia and the ensuing tensions with the west have brought the specter of the cold war back from the dead, and with it increasing levels of aggressive rhetoric from both sides and more or less veiled threats of action, some of it diplomatic, some military.

During the past twenty years, ever since the fall of the former Soviet Union, we have become used to living in a world no longer directly and overtly threatened by complete annihilation through world war III. Is this about to change? It would seem that despite current tensions, present conditions are far from fostering a renewed cold war.

Modern-day Russia (and China can be described along similar lines) is inexorably tied to the world economy and does not represent a conflicting ideology striving for world domination, as was the case during most of the latter half of the twentieth century. This deep international integration stems from the almost global acceptance of the market economy as the preferred driving force for economic growth, albeit under different forms of government. Both Russia and China are (currently) fueled more by the will to be recognized as premier global forces than by the will to rule the world, the former wishing to return to its previous position and reclaim the respect it feels it lost during the last couple of decades, and the latter rising anew after centuries in the shadows.

Of course, the coming elections in the US may change the tone prevalent in the international brinkmanship game, although the involvement of the EU, led by French President Sarkozy, means that such strong statements coming from Western Europe are not set to change fundamentally.

So, unless further surprises are in store for us (a possibility which cannot be ignored when dealing with political and military maneuvering, especially those involving the tense conditions prevalent in many of the former Soviet republics), a compromise will eventually be reached and respected. The seeds of a calming effort have already been felt in recent days with much less inflammatory declarations from both sides, and signs of a Russian willingness to tone down at least the public face of disagreements with the EU and US. This is likely set to continue…at least until the next outbreak of nationalistic violence or political sword-brandishing in a region in which tensions run high.

An interesting analysis of the current situation can be found at: http://www.cnn.com/2008/WORLD/europe/08/29/oakley.eu.russia/

Researchers have devised a rapid and efficient method for generating protein sentinels of the immune system, called monoclonal antibodies, which mark and neutralize foreign invaders.

For both ethical and practical reasons, monoclonals are usually made in mice. And that’s a problem, because the human immune system recognizes the mouse proteins as foreign and sometimes attacks them instead. The result can be an allergic reaction, and sometimes even death.

To get around that problem, researchers now “humanize” the antibodies, replacing some or all of the mouse-derived pieces with human ones.

Wilson and Ahmed were interested in the immune response to vaccination. Conventional wisdom held that the B-cell response would be dominated by “memory” B cells. But as the study authors monitored individuals vaccinated against influenza, they found that a different population of B cells peaked about one week after vaccination, and then disappeared, before the memory cells kicked in. This population of cells, called antibody-secreting plasma cells (ASCs), is highly enriched for cells that target the vaccine, with vaccine-specific cells accounting for nearly 70 percent of all ASCs.

“That’s the trick,” said Wilson. “So instead of one cell in 1,000 binding to the vaccines, now it is seven in 10 cells.”

All of a sudden, the researchers had access to a highly enriched pool of antibody-secreting cells, something that is relatively easy to produce in mice, but hard to come by for human B cells.

To ramp up the production and cloning of these antibodies, the researchers added a second twist. Mouse monoclonal antibodies are traditionally produced in the lab from hybridomas, which are cell lines made by fusing the antibody-producing cell with a cancer cell. But human cells don’t respond well to this treatment. So Wilson and his colleagues isolated the ASC antibody genes and transferred them into an “immortalized” cell line. The result was the generation of more than 100 different monoclonals in less than a year, with each taking just a few weeks to produce.

In the event of an emerging flu pandemic, for instance, this approach could lead to faster production of human monoclonals to both diagnose and protect against the disease.

Journal Nature article: Rapid cloning of high-affinity human monoclonal antibodies against influenza virus

Nature 453, 667–671 (29 May 2008) | doi:10.1038/nature06890; Received 16 October 2007; Accepted 4 March 2008; Published online 30 April 2008

Pre-existing neutralizing antibody provides the first line of defence against pathogens in general. For influenza virus, annual vaccinations are given to maintain protective levels of antibody against the currently circulating strains. Here we report that after booster vaccination there was a rapid and robust influenza-specific IgG+ antibody-secreting plasma cell (ASC) response that peaked at approximately day 7 and accounted for up to 6% of peripheral blood B cells. These ASCs could be distinguished from influenza-specific IgG+ memory B cells that peaked 14–21 days after vaccination and averaged 1% of all B cells. Importantly, as much as 80% of ASCs purified at the peak of the response were influenza specific. This ASC response was characterized by a highly restricted B-cell receptor (BCR) repertoire that in some donors was dominated by only a few B-cell clones. This pauci-clonal response, however, showed extensive intraclonal diversification from accumulated somatic mutations. We used the immunoglobulin variable regions isolated from sorted single ASCs to produce over 50 human monoclonal antibodies (mAbs) that bound to the three influenza vaccine strains with high affinity. This strategy demonstrates that we can generate multiple high-affinity mAbs from humans within a month after vaccination. The panel of influenza-virus-specific human mAbs allowed us to address the issue of original antigenic sin (OAS): the phenomenon where the induced antibody shows higher affinity to a previously encountered influenza virus strain compared with the virus strain present in the vaccine1. However, we found that most of the influenza-virus-specific mAbs showed the highest affinity for the current vaccine strain. Thus, OAS does not seem to be a common occurrence in normal, healthy adults receiving influenza vaccination.

What is metabolomics?

Genes are similar to the plans for a house; they show what it looks like, but not what people are getting up to inside. One way of getting a snapshot of their lives would be to rummage through their rubbish, and that is pretty much what metabolomics does. […]

Metabolomics studies metabolites, the by-products of the hundreds of thousands of chemical reactions that continuously go on in every cell of the human body. Because blood and urine are packed with these compounds, it should be possible to detect and analyse them. If, say, a tumour were growing somewhere, then, long before any existing method could detect it, the combination of metabolites from the dividing cancer cells would produce a new pattern, different from that seen in healthy tissue. Such metabolic changes could be picked up by computer programs, adapted from those that credit-card companies use to detect crime by spotting sudden and unusual spending patterns amid millions of ordinary transactions.
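As a toy illustration of that kind of pattern-spotting (my own sketch, not from the article; the baseline and the metabolite values below are invented), a profile can be flagged when any metabolite strays far from a healthy baseline:

```python
import numpy as np

# Hypothetical healthy baseline: rows = people, columns = metabolite concentrations.
rng = np.random.default_rng(0)
healthy = rng.normal(loc=[5.0, 1.2, 0.3, 8.0], scale=[0.5, 0.1, 0.05, 1.0], size=(200, 4))
mean, std = healthy.mean(axis=0), healthy.std(axis=0)

def flag_profile(sample, threshold=3.0):
    """Flag a sample if any metabolite is more than `threshold` std devs from the baseline."""
    z = np.abs((np.asarray(sample) - mean) / std)
    return bool((z > threshold).any())

print(flag_profile([5.1, 1.2, 0.31, 8.2]))  # typical profile -> False
print(flag_profile([5.0, 2.5, 0.30, 8.0]))  # one metabolite far off baseline -> True
```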

This could be used for traditional medicine, both to prevent pathologies and to detect those that are already present so they can be treated. But another use would be as part of an early-detection system to defend against pandemics and biological attacks. As mentioned previously, network-theory can help us better use vaccines. But once you have a cure or antidote, you also need to identify people that are already infected but haven’t died yet, and the earlier you can do that after the infection, the more chances they have to live.

Once the techniques of metabolomics are sufficiently advanced and inexpensive to use, they might provide better data than simply relying on reported symptoms (might be too late by then), and might scale better than traditional detection methods (not sure yet — something else might make more economic sense). It’s a bit too early to tell, but it’s a very promising field.

For more information, see Douglas Kell’s site or Wikipedia: Metabolomics.

Source: The Economist. See also Lifeboat’s BioShield program.

If a pandemic strikes and hundreds of millions are at risk, we won’t have enough vaccines for everybody, at least not within the time window where vaccines would help. But a new strategy could help use the vaccines we have more effectively:

Researchers are now proposing a new strategy for targeting shots that could, at least in theory, stop a pandemic from spreading along the network of social interactions. Vaccinating selected people is essentially equivalent to cutting out nodes of the social network. As far as the pandemic is concerned, it’s as if those people no longer exist. The team’s idea is to single out people so that immunizing them breaks up the network into smaller parts of roughly equal sizes. Computer simulations show that this strategy could block a pandemic using 5 to 50 percent fewer doses than existing strategies, the researchers write in an upcoming Physical Review Letters.


So you break up the general social network into sub-networks, and then you target the most important nodes of these sub-networks and so on until you run out of vaccines. The challenge will be to get good information about social networks, something not quite as easy as mapping computer networks, but there is progress on that front.
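A minimal sketch of that idea (my own illustration using networkx; the greedy betweenness heuristic below is a stand-in for, not a reproduction of, the researchers' partitioning method):

```python
import networkx as nx

def choose_vaccinees(graph, n_doses):
    """Greedily vaccinate the node that most holds the network together,
    then recompute on the reduced network."""
    g = graph.copy()
    vaccinated = []
    for _ in range(min(n_doses, g.number_of_nodes())):
        betweenness = nx.betweenness_centrality(g)
        target = max(betweenness, key=betweenness.get)
        vaccinated.append(target)
        g.remove_node(target)  # as far as spread is concerned, this person no longer exists
    return vaccinated, g

# Toy social network: cliques ("households/workplaces") joined into one connected graph.
g = nx.connected_caveman_graph(4, 8)
vaccinated, remaining = choose_vaccinees(g, n_doses=6)
sizes = sorted((len(c) for c in nx.connected_components(remaining)), reverse=True)
print("vaccinated nodes:", vaccinated)
print("remaining component sizes:", sizes)
```

With only a handful of doses the toy graph quickly falls apart into small, roughly equal pieces, which is the effect the researchers are after.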

In one of the most dramatic illustrations of their technique, the researchers simulated the spread of a pandemic using data from a Swedish study of social connections, in which more than 310,000 people are represented and connected based on whether they live in the same household or they work in the same place. With the new method, the epidemic spread to about 4 percent of the population, compared to nearly 40 percent for more standard strategies, the team reports.

Source: ScienceNews. See also Lifeboat’s BioShield program.

There is a strong overlap between those concerned about extinction risk prevention and healthy life extension. Accordingly, many supporters of the Lifeboat Foundation will be attending an open event on regenerative medicine taking place on the UCLA campus on the 27th of June. Here is the blurb:

On Friday, June 27th, leading scientists and thinkers in stem cell research and regenerative medicine will gather in Los Angeles at UCLA for Aging 2008 to explain how their work can combat human aging, and the sociological implications of developing rejuvenation therapies. Aging 2008 is free, with advance registration required.

UCLA Royce Hall
Friday June 27th | Doors open 4pm
405 Hilgard Ave, Los Angeles, CA 90024

This special public event is being organized by the Methuselah Foundation. Dr. Aubrey de Grey, chairman and chief science officer of the Methuselah Foundation, said, “Our organization has raised over $10 million to crack open the logjams in longevity science. With the two-armed strategy of direct investments into key research projects, and a competitive prize to spur on competing scientists’ race to break rejuvenation and longevity records in lab mice, the Foundation is actively accelerating the drive toward a future free of age-related degeneration.” The Methuselah Foundation has been covered by “60 Minutes,” Popular Science, the Wall Street Journal, and other top-flight media outlets.

If any Lifeboat Foundation supporters are interested in meeting up before or after the event, comment on this post.

The report, “Synthetic Biology: Social and Ethical Challenges”, highlights concerns about ownership, misuse, unintended consequences, and accidental release of synthetic organisms into the environment.

Andrew Balmer and Professor Paul Martin, the report’s authors, suggest a threat from “garage biology”, with people experimenting at home. They also emphasise that there is no policy on the impact of synthetic biology on international bioweapons conventions.

Read the entire report here (PDF).

Cross-posted from Next Big Future by Brian Wang, Lifeboat Foundation Director of Research

I am presenting disruption events for humans, and also for biospheres and planets, and, where I can, correlating them with historical frequency and scale.

There has been previous work on categorizing and classifying extinction events. There is Bostrom's paper, and there is also the work by Jamais Cascio and Michael Anissimov on classification and identifying risks (presented below).

A recent article discusses the inevitable “end of societies” (it refers to civilizations, but it seems to be describing things more like the end of the Roman Empire, which still ends up later with Italy, Austria-Hungary, etc. emerging).

The theories around complexity seem to me to say that core developments along connected S-curves of technology and societal processes (around key areas of energy, transportation, governing efficiency, agriculture, production) cap out, and then a society falls back (a soft or hard dark age), reconstitutes, and starts back up again.

Here is a wider range of disruptions, which can also be correlated with the frequency at which they have occurred historically.

High growth dropping to low growth (short business cycles, every few years)
Recession (soft or deep), every five to fifteen years
Depressions (every 50–100 years, can be more frequent)

List of recessions for the USA (includes depressions)

Differences between a recession and a depression

A good rule of thumb for determining the difference between a recession and a depression is to look at the changes in GNP. A depression is any economic downturn where real GDP declines by more than 10 percent; a recession is an economic downturn that is less severe. By this yardstick, the last depression in the United States was from May 1937 to June 1938, when real GDP declined by 18.2 percent. The Great Depression of the 1930s can be seen as two separate events: an incredibly severe depression lasting from August 1929 to March 1933, during which real GDP declined by almost 33 percent, then a period of recovery, and then the less severe depression of 1937–38. (Depressions occur every 50–100 years, and were more frequent in the past.)
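Stated as a check, the rule of thumb is simply a threshold on the decline in real GDP (a toy sketch; the 10-percent cutoff and the 18.2 and 33 percent figures are the ones quoted above, the last figure is hypothetical):

```python
def classify_downturn(real_gdp_decline_pct):
    """Rule of thumb quoted above: a decline of more than 10% in real GDP is a depression."""
    return "depression" if real_gdp_decline_pct > 10.0 else "recession"

print(classify_downturn(18.2))  # 1937-38 episode -> depression
print(classify_downturn(33.0))  # 1929-33 episode -> depression
print(classify_downturn(3.0))   # a milder downturn (hypothetical figure) -> recession
```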

Dark age (period of societal collapse, soft/light or regular)
I would say the difference between a long recession and a dark age has to do with a breakdown of societal order, some level of population decline or die-back, and a loss of knowledge and breakdown of education. (Once per thousand years.)

I would say that a soft dark age is also something like what China had from the 1400s to 1970.
Basically a series of really bad societal choices: maybe something between a depression and a dark age, or something that does not categorize as neatly, but an underperformance by twenty times versus competing groups. Perhaps there should be some classification of societal disorder, with levels and categories of major society-wide screw-ups, mistakes of historic scale. The Chinese experience, I think, was triggered by the renunciation of the ocean-going fleet, of outside ideas and technology, and a lot of other follow-on screw-ups.

Plagues played a part in weakening the Roman and Han empires.

A talk on societal collapse which includes Toynbee's analysis.

Toynbee argues that the breakdown of civilizations is not caused by loss of control over the environment, over the human environment, or attacks from outside. Rather, it comes from the deterioration of the “Creative Minority,” which eventually ceases to be creative and degenerates into merely a “Dominant Minority” (who forces the majority to obey without meriting obedience). He argues that creative minorities deteriorate due to a worship of their “former self,” by which they become prideful, and fail to adequately address the next challenge they face.

My take is that the Enlightenment would be strengthened by a larger creative majority, where everyone has a stake in and the capability to creatively advance society. I have an article about who the elite are now.

Many now argue that the dark ages were not as completely bad as commonly believed.
The Dark Ages are also called the Middle Ages.

Population during the middle ages

Between dark age/social collapse and extinction there are levels of decimation/devastation (using orders of magnitude: 90+%, 99%, 99.9%, 99.99% population loss).

Level 1 decimation = 90% population loss
Level 2 decimation = 99% population loss
Level 3 decimation = 99.9% population loss

Level 9 population loss would pretty much be extinction for current human civilization: only 6–7 people or fewer would be left, which would not be a viable population.
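The scale generalizes directly: level n decimation corresponds to a fraction 10^-n of the population surviving (a small sketch; the 6.7 billion starting figure is the approximate world population when this was written):

```python
def survivors(level, population=6.7e9):
    """Decimation level n = (1 - 10**-n) of the population lost; returns the survivors."""
    return population * 10.0 ** (-level)

for level in (1, 2, 3, 9):
    print(f"Level {level} decimation: ~{survivors(level):,.0f} survivors")
```

Level 9 leaves only about 7 people, which matches the point above about it being effectively extinction.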

Can be regional or global, some number of species (for decimation)

Categorizations of Extinctions, end of world categories

Can be regional or global, some number of species (for extinctions)

Mass extinction events have occurred in the past, to other species (for each species there can only be one extinction event): the dinosaurs, and many others.

Unfortunately Michael’s Accelerating Future blog is having some issues, so here is a cached link.

Michael was identifying man-made risks. The easier-to-explain existential risks (remember, an existential risk is something that can set humanity way back, not necessarily killing everyone):

1. neoviruses
2. neobacteria
3. cybernetic biota
4. Drexlerian nanoweapons

The hardest to explain is probably #4. My proposal here is that, if someone has never heard of the concept of existential risk, it is easier to focus on these first four before even daring to mention the latter ones. But here they are anyway:

5. runaway self-replicating machines (“grey goo” is not recommended because it is too narrow a term)
6. destructive takeoff initiated by intelligence-amplified human
7. destructive takeoff initiated by mind upload
8. destructive takeoff initiated by artificial intelligence

Another classification scheme is the eschatological taxonomy by Jamais Cascio on Open the Future. His classification scheme has seven categories, one with two sub-categories. These are:

0: Regional Catastrophe (examples: moderate-case global warming, minor asteroid impact, local thermonuclear war)
1: Human Die-Back (examples: extreme-case global warming, moderate asteroid impact, global thermonuclear war)
2: Civilization Extinction (examples: worst-case global warming, significant asteroid impact, early-era molecular nanotech warfare)
3a: Human Extinction-Engineered (examples: targeted nano-plague, engineered sterility absent radical life extension)
3b: Human Extinction-Natural (examples: major asteroid impact, methane clathrates melt)
4: Biosphere Extinction (examples: massive asteroid impact, “iceball Earth” reemergence, late-era molecular nanotech warfare)
5: Planetary Extinction (examples: dwarf-planet-scale asteroid impact, nearby gamma-ray burst)
X: Planetary Elimination (example: post-Singularity beings disassemble planet to make computronium)

A couple of interesting posts about historical threats to civilization and life by Howard Bloom.

Natural climate shifts, and threats from space (not asteroids but interstellar gases).

Humans are not the most successful form of life; bacteria are. Bacteria have survived for 3.85 billion years; humans for 100,000 years; all other kinds of life have lasted no more than 160 million years. [Other species have only managed to hang in there for anywhere from 1.6 million years to 160 million. We humans are one of the shortest-lived natural experiments around. We’ve been here in one form or another for a paltry two and a half million years.] If your numbers are not big enough and you are not diverse enough, then something in nature eventually wipes you out.

Following the bacteria survival model could mean using transhumanism as a survival strategy: creating more diversity to allow for better survival. Humans adapted to living under the sea, deep in the earth, in various niches in space, with more radiation resistance, in non-biological forms, etc. It would also mean spreading into space (panspermia). Individually, using technology, we could become very successful at life extension, but it will take more than that for a good plan for long-term human (civilization, society, species) survival.

Other periodic challenges:
142 mass extinctions, 80 glaciations in the last two million years, a planet that may have once been a frozen iceball, and a klatch of global warmings in which the temperature has soared by 18 degrees in ten years or less.

In the last 120,000 years there were 20 interludes in which the temperature of the planet shot up 10 to 18 degrees within a decade. Until just 10,000 years ago, the Gulf Stream shifted its route every 1,500 years or so. This would melt mega-islands of ice, put our coastal cities beneath the surface of the sea, and strip our farmlands of the conditions they need to produce the food that feeds us.

The solar system has a 240-million-year-long orbit around the center of our galaxy, an orbit that takes us through interstellar gas clusters called local fluff, interstellar clusters that strip our planet of its protective heliosphere, interstellar clusters that bombard the earth with cosmic radiation, and interstellar clusters that trigger giant climate change.