Archive for the ‘existential risks’ category: Page 105

Sep 22, 2013

Peer-to-Peer Science: The Century-Long Challenge to Respond to Fukushima

Posted by in categories: engineering, existential risks, nuclear energy, open access

Peer-to-Peer Science

The Century-Long Challenge to Respond to Fukushima

Emanuel Pastreich (Director)

Layne Hartsell (Research Fellow)

Continue reading “Peer-to-Peer Science: The Century-Long Challenge to Respond to Fukushima” »

Aug 12, 2013

Micro Black Holes in the Taillights — Another Glance Back

Posted by in categories: existential risks, particle physics, physics

Recent discussions of the properties of micro black holes have raised enough open questions to reignite some interest in the subject (apologies to those exhausted from reading about it here at the Lifeboat Foundation). A claim by physicists at the University of Innsbruck in Austria that a new attractive force arises from black-body radiation [1] invites speculation about whether a similar effect could result from the Hawking radiation theorized to be emitted by micro black holes. This is an unlikely scenario, given the very different natures supposed for Hawking radiation and black-body radiation, but a curious thought nonetheless. If a light component of Hawking radiation could replicate this net attractive force, accepted accretion and radiation rates would have to be revised to account for the newly hypothesized force.

Not so fast: even if such a new force did take effect in these scenarios, one would expect it to have negligible impact on safety assurances. Official estimated accretion rates are many orders of magnitude lower than estimated radiation rates, and those estimates concur with observational evidence on the longevity of white-dwarf stars.

That is not to say such new forces are needed to continue the debate. Certain older, disputed parameter ranges suggest accretion rates, relative to radiative rates, that could bridge the vast gap between those estimates, so theorized catastrophic outcomes [3] are not necessarily refuted by safety assurances, at least not those resting on white-dwarf longevity.

Indeed, a more pertinent point: if equilibrium could arise between radiation and accretion rates, micro black holes trapped in Earth's gravity could become persistent heat engines with flux considerable enough [2] to raise environmental concern about planetary heating.
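The equilibrium idea can be made concrete with a back-of-the-envelope mass balance. This is a sketch only: the accretion coefficient depends on assumed interior density and sound speed, and the standard Hawking mass-loss term below is exactly what the disputed scenarios modify.

```latex
% Toy mass-balance for a micro black hole of mass M embedded in matter of
% density \rho and sound speed c_s (Bondi-type capture, standard Hawking loss):
\frac{dM}{dt} \;=\;
  \underbrace{\frac{4\pi \lambda\, G^{2} M^{2} \rho}{c_s^{3}}}_{\text{accretion}}
  \;-\;
  \underbrace{\frac{\hbar c^{4}}{15360\,\pi\, G^{2} M^{2}}}_{\text{Hawking evaporation}}
% Setting dM/dt = 0 gives an equilibrium mass
M_{\mathrm{eq}}^{4} \;=\; \frac{\hbar c^{4} c_s^{3}}{61440\,\pi^{2} \lambda\, G^{4} \rho}
% at which the heat deposited in the surroundings equals the accretion power,
L \;\approx\; \dot{M}_{\mathrm{acc}}\, c^{2}.
```

Note that with these standard forms the equilibrium is unstable (accretion grows with M² while evaporation falls with M²), so a persistent heat engine of the kind described would require modified rates in at least one of the two terms.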

Continue reading “Micro Black Holes in the Taillights — Another Glance Back” »

Jun 8, 2013

Update on the LHC-Danger – after Half a Year

Posted by in categories: existential risks, particle physics

1) CERN has officially attempted to produce ultraslow miniature black holes on Earth, and has announced it will continue doing so after the current, more-than-year-long break for upgrading.

2) According to published scientific results that have gone unchallenged in the literature for five years, miniature black holes possess radically new properties: no Hawking evaporation, unchargedness, invisibility to CERN’s detectors, and an enhanced chance of being produced.

3) Of the millions of miniature black holes hoped to have been produced, at least one is bound to be slow enough to remain inside the Earth and circulate there.

4) Such a miniature black hole circulates undisturbed until it captures its first charged quark. From then on it grows exponentially, doubling in size in months at first, later in weeks.
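The growth law claimed in point 4 is simple to tabulate. A minimal sketch, in which the starting mass and the 60-day doubling time are hypothetical placeholders rather than values from the post or from any physical model:

```python
# Toy doubling arithmetic only; the initial mass and doubling time are
# hypothetical placeholders, not outputs of a physical accretion model.
def mass_after(m0_kg: float, doubling_time_days: float, t_days: float) -> float:
    """Mass after t_days of exponential growth at a fixed doubling time."""
    return m0_kg * 2.0 ** (t_days / doubling_time_days)

m0 = 1e-9  # kg, hypothetical starting mass
for years in (1, 5, 10):
    print(years, mass_after(m0, 60.0, 365.0 * years))
```

The post's claim of a shrinking doubling time would only accelerate this; the sketch just shows how quickly even fixed-rate doubling compounds.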

Continue reading “Update on the LHC-Danger – after Half a Year” »

May 31, 2013

How Could WBE+AGI be Easier than AGI Alone?

Posted by in categories: complex systems, engineering, ethics, existential risks, futurism, military, neuroscience, singularity, supercomputing

This essay was also published by the Institute for Ethics & Emerging Technologies and by Transhumanity under the title “Is Price Performance the Wrong Measure for a Coming Intelligence Explosion?”.


Most thinkers speculating on the coming of an intelligence explosion (whether via Artificial General Intelligence or Whole Brain Emulation/uploading), such as Ray Kurzweil [1] and Hans Moravec [2], typically use computational price performance as the best measure of an impending intelligence explosion (e.g., Kurzweil’s measure is the point at which processing power sufficient, by his estimates, to simulate the human brain costs $1,000). A lurking assumption lies here, however: that it won’t be much of an explosion unless it is available to the average person. I present a scenario below suggesting that the imminence of an intelligence explosion is determined more by raw processing speed, in instructions per second (IPS), regardless of cost or resource requirements per unit of computation, than by computational price performance. This scenario also yields some additional, counter-intuitive conclusions, such as that it may be easier (for a given amount of “effort” or funding) to implement WBE+AGI than AGI alone; or rather, that using WBE to accelerate progress in AGI may yield an AGI faster, or more efficiently per unit of effort or funding, than implementing AGI directly.
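The difference between the two measures can be sketched numerically. In the toy model below every parameter is a hypothetical placeholder (an assumed brain-scale requirement of 10^16 IPS, assumed growth rates for top-machine speed and for price performance); the point is only that a raw-speed threshold can be crossed long before the $1,000 price-performance threshold:

```python
# Toy comparison of two "imminence" measures; all parameters are
# hypothetical placeholders, not estimates from the essay.
BRAIN_IPS = 1e16           # assumed IPS needed for a brain-scale simulation
TOP_MACHINE_IPS = 3e16     # assumed fastest-machine speed in year 0
DOLLARS_PER_IPS = 1e-9     # assumed price performance in year 0
SPEED_GROWTH = 1.5         # assumed yearly multiplier for top-machine speed
PRICE_IMPROVEMENT = 1.4    # assumed yearly improvement factor in $/IPS

def years_until_raw_speed_threshold() -> int:
    """First year the fastest machine reaches BRAIN_IPS, cost ignored."""
    ips, year = TOP_MACHINE_IPS, 0
    while ips < BRAIN_IPS:
        ips *= SPEED_GROWTH
        year += 1
    return year

def years_until_price_threshold(budget: float = 1000.0) -> int:
    """First year a fixed budget buys BRAIN_IPS (a Kurzweil-style measure)."""
    price, year = DOLLARS_PER_IPS, 0
    while budget / price < BRAIN_IPS:
        price /= PRICE_IMPROVEMENT
        year += 1
    return year

print(years_until_raw_speed_threshold())  # 0: speed already sufficient somewhere
print(years_until_price_threshold())      # decades later at a $1,000 budget
```

Under these placeholder numbers, a well-funded institution could run a brain-scale computation immediately, while the average person waits decades; which threshold matters is exactly the question the essay raises.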

Loaded Uploads:

Continue reading “How Could WBE+AGI be Easier than AGI Alone?” »

May 23, 2013

Comic: Rationality Matters

Posted by in categories: education, existential risks, fun, humor

May 19, 2013

Who Wants To Live Forever?

Posted by in categories: business, ethics, existential risks, futurism, homo sapiens, human trajectories, life extension, philosophy, sustainability

Medical science has changed humanity. It changed what it means to be human, what it means to live a human life. So many of us reading this (and at least one person writing it) owe their lives to medical advances, without which we would have died.

Life expectancy is now well over double what it was for the medieval Briton, and knocking hard on triple’s door.

What for the future? Extreme life extension is no more inherently ridiculous than human flight or the ability to speak to a person on the other side of the world. Science isn’t magic – and ageing has proven to be a very knotty problem – but science has overcome knotty problems before.

A genuine way to eliminate or severely curtail the influence of ageing on the human body is not in any sense inherently ridiculous. It is, in practice, extremely difficult, but difficult has a tendency to fall before the march of progress. So let us consider what implications a true and seismic advance in this area would have on the nature of human life.

Continue reading “Who Wants To Live Forever?” »

Apr 11, 2013

Faith in the Fat of Fate may be Fatal for Humanity

Posted by in categories: existential risks, futurism, human trajectories, philosophy

This essay was originally published at Transhumanity.

They don’t call it fatal for nothing. Infatuation with the fat of fate, duty to destiny, and belief in any sort of preordainment whatsoever (omnipotent deities notwithstanding) constitute an increase in existential risk, albeit indirectly. If we think events have been predetermined, it follows that we would think our actions make no difference in the long run and that we have no control over the shape of those futures still fetal. This scales to the perceived ineffectiveness of combating, or seeking to mitigate, existential risk among those who believe so fatalistically. Thus to combat belief in fate, and the resultant disillusionment with our ability to wreak roiling revisement upon the whorl of the world, is to combat existential risk as well.

Belief in fate also undermines the perceived effectiveness of humanity’s ability to mitigate existential risk along another avenue. It usually correlates with the notion that events are ordered with a reason or purpose in mind, as opposed to being haphazard and lacking any projected end. Thus believers in fate are not only more likely to doubt the credibility of claims that existential catastrophe could even occur (reasoning that if events have purpose and utility, and conform to a mindfully created order, then they would be good things more often than bad things), but also more likely to feel that if such catastrophes were to occur, it would be for a greater underlying reason or purpose.

Thus, belief in fate indirectly increases existential risk in two ways: a. by undermining the perceived effectiveness of attempts to mitigate existential risk, deriving from the perceived ineffectiveness of humanity’s ability to shape the course and nature of events and effect change in the world in general, and b. by undermining the perceived likelihood of any existential risk culminating in humanity’s extinction, stemming from the connotations of order and purpose associated with fate.

Continue reading “Faith in the Fat of Fate may be Fatal for Humanity” »

Mar 20, 2013

An Upside to Fukushima: Japan’s Robot Renaissance

Posted by in categories: engineering, existential risks, nuclear energy, robotics/AI

Fukushima’s Second Anniversary…

Two years ago the international robot dorkosphere was stunned when, in the aftermath of the Tohoku Earthquake and Tsunami Disaster, there were no domestically produced robots in Japan ready to jump into the death-to-all-mammals radiation contamination situation at the down-melting Fukushima Daiichi nuclear power plant.

…and Japan is Hard at Work.
Suffice it to say, when Japan finds out its robots aren’t good enough: JAPAN RESPONDS! For more on how Japan has addressed and is addressing the situation, have a jump on over to


Mar 19, 2013

Ten Commandments of Space

Posted by in categories: asteroid/comet impacts, biological, biotech/medical, cosmology, defense, education, engineering, ethics, events, evolution, existential risks, futurism, geopolitics, habitats, homo sapiens, human trajectories, life extension, lifeboat, military, neuroscience, nuclear energy, nuclear weapons, particle physics, philosophy, physics, policy, robotics/AI, singularity, space, supercomputing, sustainability, transparency

1. Thou shalt first guard the Earth and preserve humanity.

Impact deflection and survival colonies hold the moral high ground above all other calls on public funds.

2. Thou shalt go into space with heavy lift rockets with hydrogen upper stages and not go extinct.

Continue reading “Ten Commandments of Space” »

Mar 4, 2013

Human Brain Mapping & Simulation Projects: America Wants Some, Too?

Posted by in categories: biological, biotech/medical, complex systems, ethics, existential risks, homo sapiens, neuroscience, philosophy, robotics/AI, singularity, supercomputing

The Brain Games Begin
Europe’s billion-Euro science-neuro Human Brain Project, mentioned here last week in the context of machine morality, is basically already funded and well underway. Now the colonies over in the New World are getting hip, and they too have in the works a project to map/simulate/make their very own copy of the universe’s greatest known computational artifact: the gelatinous wad of convoluted electrical pudding in your skull.

The (speculated but not yet public) Brain Activity Map of America
About 300 different news sources are reporting that a Brain Activity Map project is outlined in the current administration’s to-be-presented budget and will be detailed sometime in March. Hordes of journalists are calling it “Obama’s Brain Project,” which is stoopid, and probably only because some guy at the New Yorker did and they all decided that’s what they had to do, too. Or somesuch lameness. Or laziness? Deference? SEO?

For reasons both economic and nationalistic, America could definitely use an inspirational, large-scale scientific project right about now. Because seriously, aside from going full-Pavlov over the next iPhone, what do we really have to look forward to these days? Now, if some technotards or bible pounders monkeywrench the deal, the U.S. is going to continue that slide toward scientific… lesserness. So, hippies, religious nuts, and all you little sociopathic babies in politics: zip it. Perhaps, however, we should gently poke and prod the hard of thinking toward a marginally heightened Europhobia — that way they’ll support the project. And it’s worth it. Just, you know, for science.

Going Big. Not Huge, But Big. But Could be Massive.
Neither the Euro nor the American flavor is a Manhattan Project-scale undertaking in terms of urgency and motivation; they’re more like the Human Genome Project. Still, with clear directives and similar funding levels (€1 billion and $1–3 billion, respectively), they’re quite ambitious and potentially far more world-changing than a big bomb. Like, seriously, man. Because brains build bombs. But hopefully an artificial brain would not. Spaceships would be nice, though.

Continue reading “Human Brain Mapping & Simulation Projects: America Wants Some, Too?” »