Blog

Archive for the ‘existential risks’ category

Dec 17, 2014

Elon Musk named Lifeboat Foundation 2014 Guardian Award winner

Posted in categories: existential risks, lifeboat, robotics/AI, solar power, space travel, sustainability


The Lifeboat Foundation Guardian Award is annually bestowed upon a respected scientist or public figure who has warned of a future fraught with dangers and encouraged measures to prevent them.

The 2014 Lifeboat Foundation Guardian Award has been given to Elon Musk in recognition of his warnings about artificial intelligence, his promotion of space exploration including the creation of self-sustaining space colonies, and his efforts to improve our environment with electric cars and to expand solar energy generation.

Elon is often likened to a real-life Tony Stark from Marvel’s Iron Man comics for his role in cutting-edge companies including SpaceX, a private space exploration company that holds the first private contracts from NASA for resupply of the International Space Station, and the electric car company Tesla Motors. Watch Elon in Iron Man 2!

Continue reading “Elon Musk named Lifeboat Foundation 2014 Guardian Award winner” »


Dec 14, 2014

Elon Musk Is Right: Colonizing the Solar System Is Humankind’s Insurance Policy Against Extinction

Posted in categories: existential risks, human trajectories, space, space travel

Written by Singularity Hub


Why blow billions of dollars on space exploration when billions of people are living in poverty here on Earth?

You’ve likely heard the justifications. The space program brings us useful innovations and inventions. Space exploration delivers perspective, inspiration, and understanding. Because it’s the final frontier. Because it’s there.

Continue reading “Elon Musk Is Right: Colonizing the Solar System Is Humankind’s Insurance Policy Against Extinction” »


Dec 2, 2014

UNO Reportedly Stops Food Aid Everywhere

Posted in category: existential risks

This comes at the very moment when the only way to stop Ebola is a Berlin-style airlift of food and water, flown twice weekly to three countries so that people can stay home and break the chain of contagion. So far this is the only efficient strategy.

This is the most cynical news the planet has seen in years. A petty $65 million is allegedly missing on our planet. UNO, is that you?

Is there no one in America with a heart for African citizens?

Nov 24, 2014

I need Advice from the Young

Posted in category: existential risks

If I were young, I would jump on the bandwagon of cryodynamics, with its implied new cosmology and free energy.
And I would motivate friends to help me write down the new global-c transform of general relativity.

And above all, I would tell my elders that f&w – food and water – is the only way to stop the exponential growth of Ebola on a continent.

You can make a difference, my young friends. Please start. Politics is boring and sterile. This is a young planet on which you can make a difference. Don’t let the establishment kill Africa. You can stop CERN and you can save Liberia. Monrovia is about to die; see J.A. Lewnard, M.L. Ndefoh Mbah, Yale University, http://biostat.gru.edu/Journal%20Club/Rao_2014.pdf

Be human. Don’t kill by inaction as your elders do. f&w is the only lifeline of a megacity and, at the moment, of its West Point area. Please start.

What advice can you offer me in return?

Nov 5, 2014

The Exponential Nature of Ebola

Posted in categories: biological, existential risks


Otto E. Rössler

Institute for Physical and Theoretical Chemistry, University of Tübingen, Auf der Morgenstelle 8, 72076 Tübingen, Germany

Inscribed on the UN Building:
Human beings are members of a whole,
In creation of one essence and soul;
If one member is afflicted with pain,
Other members uneasy will remain;
If you have no sympathy for human pain,
The name of human you cannot retain.
(Saadi, 1210–1292)

Continue reading “The Exponential Nature of Ebola” »


Oct 18, 2014

Marilyn Monroe in London and Continuous Performance Improvement

Posted in categories: business, economics, education, energy, existential risks, futurism


This is an actual story.

I was the EVP of the insurance brokerage house for the world’s number-two global oil corporation, and the Client asked me for a delicate official favor.

To give you an idea of the scale of this piece of business: the Client was paying US$100 million in cash for insurance and reinsurance premiums on its fixed and liquid assets, the latter placed through a major and reputable London reinsurance brokerage house.

Continue reading “Marilyn Monroe in London and Continuous Performance Improvement” »


Oct 9, 2014

Dying Twice

Posted in categories: biotech/medical, existential risks


Of course, Ebola can be stopped: by rigorously restricting locomotion for the population. But this presupposes heavy support networks. At present, this strategy is still feasible. However, since the disease is spreading exponentially in both numbers and area, the resources of aid-giving nations will very soon be overtaxed. Then the Big Dying from Ebola will be accompanied by the Big Dying from hunger and thirst, caused by restricted locomotion without the necessary support services.

There exist institutions that could help. But none of them knows what “dying twice” means:
FIRST: from having become untouchable and unapproachable;
SECOND: from thirst, hunger, and the pangs of a disease that is intolerably ugly and foul-smelling.
Mothers do not mind. But where are all the mothers for the dying? Even Jesus’ mother was there at the cross.

Since the disease is doubling every 3 weeks (only in some places is the rate slowed by restricted locomotion), someone must make a “war plan.” I am sure some wonderful organizations have already done so, but they must join forces, pool resources, and above all share information.
One joint plan must be negotiated, maybe under the auspices of the Vatican, or the CDC, or Castroland? It need not be the money-stripped United Nations. Only: SOON!
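
The arithmetic behind this urgency is easy to check. Here is a minimal sketch, assuming the 3-week doubling time stated above; the starting count of 10,000 cases is an illustrative assumption, not a figure from the post:

```python
# Illustrative projection of a fixed doubling time, as a sketch only.
# The 3-week doubling time comes from the post; the starting count of
# 10,000 cases is an assumed placeholder, not a reported figure.

DOUBLING_TIME_WEEKS = 3

def projected_cases(initial_cases: float, weeks: float) -> float:
    """Cases after `weeks` weeks, doubling every DOUBLING_TIME_WEEKS."""
    return initial_cases * 2 ** (weeks / DOUBLING_TIME_WEEKS)

if __name__ == "__main__":
    for week in range(0, 25, 3):
        print(f"week {week:2d}: ~{projected_cases(10_000, week):>12,.0f} cases")
```

At that rate, cases multiply 256-fold in 24 weeks; any relief effort that scales only linearly is overtaken within months, which is the force of the “war plan” demand.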

Continue reading “Dying Twice” »


Oct 4, 2014

Method of Sustainable Fuel-less Terra-forming of Venus & Mars

Posted in categories: existential risks, futurism, human trajectories, solar power, space, sustainability

Terra-forming Venus & Mars by Leveraging Asteroids
Inspired by: Lifeboat Foundation

Both Mars and Venus can be terra-formed to provide Earth-like gravity and atmospheres: the Venusian atmosphere with an effort of about 100 years, the Martian atmosphere with an effort of about 2,000 years. Both are potentially realizable through systems of solar sails. Asteroids provide many of the resources needed to seed the related development.
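
The post gives no sail sizing, but the physical driver of fuel-less photon propulsion is easy to estimate. A minimal sketch, in which the 1 km² sail area and the 0.72 AU solar distance (roughly Venus’s orbit) are illustrative assumptions, not figures from the proposal:

```python
# Rough thrust estimate for an ideal, fully reflective solar sail.
# F = 2 * S(r) * A / c, where S(r) is solar irradiance at distance r.
# The 1 km^2 sail area and 0.72 AU distance are illustrative assumptions.

C = 299_792_458.0            # speed of light, m/s
SOLAR_CONSTANT_1AU = 1361.0  # solar irradiance at 1 AU, W/m^2

def sail_thrust_newtons(area_m2: float, distance_au: float) -> float:
    """Photon thrust on a perfectly reflective sail facing the Sun."""
    irradiance = SOLAR_CONSTANT_1AU / distance_au ** 2
    return 2.0 * irradiance * area_m2 / C

if __name__ == "__main__":
    print(f"{sail_thrust_newtons(1e6, 0.72):.1f} N")  # ~17.5 N per km^2
```

Tens of newtons per square kilometre of sail is a tiny force, which is consistent with the century-to-millennia timescales quoted above: sail-based schemes trade propellant for time.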

Business model for interplanetary transport without fuel

Conceptual Space Elevator

Continue reading “Method of Sustainable Fuel-less Terra-forming of Venus & Mars” »


Sep 25, 2014

Question: A Counterpoint to the Technological Singularity?

Posted in categories: defense, disruptive technology, economics, education, environmental, ethics, existential risks, finance, futurism, lifeboat, policy, posthumanism, science, scientific freedom


Douglas Hofstadter, a professor of cognitive science at Indiana University, said of the book The Singularity Is Near (ISBN: 978-0143037880):

“ … A very bizarre mixture of ideas that are solid and good with ideas that are crazy. It’s as if you took a lot of very good food and some dog excrement and blended it all up so that you can’t possibly figure out what’s good or bad …”

Continue reading “Question: A Counterpoint to the Technological Singularity?” »


Sep 18, 2014

Why Superintelligence May Not Help Us Think about Existential Risks — or Transhumanism

Posted in categories: alien life, biological, cyborg, defense, disruptive technology, ethics, existential risks, futurism, homo sapiens, human trajectories, internet, military, philosophy, policy, posthumanism, science, singularity, transhumanism

Among transhumanists, Nick Bostrom is well-known for promoting the idea of ‘existential risks’, potential harms which, were they to come to pass, would annihilate the human condition altogether. Their probability may be relatively small, but the expected magnitude of their effects is so great, so Bostrom claims, that it is rational to devote some significant resources to safeguarding against them. (Indeed, there are now institutes for the study of existential risks on both sides of the Atlantic.) Moreover, because existential risks are intimately tied to the advancement of science and technology, their probability is likely to grow in the coming years.
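
Bostrom’s case rests on an expected-value argument: multiply each outcome’s probability by its magnitude, and even a small chance of total annihilation can dominate likelier but bounded harms. A minimal sketch with invented numbers; neither the probabilities nor the magnitudes below are Bostrom’s:

```python
# Toy expected-loss comparison in the spirit of Bostrom's argument.
# All probabilities and magnitudes are invented for illustration.

risks = {
    # name: (assumed probability this century, lives at stake)
    "regional catastrophe": (0.10, 1e7),
    "human extinction":     (0.01, 8e9),  # ignoring future generations
}

for name, (prob, lives) in risks.items():
    print(f"{name:22s} expected loss ~ {prob * lives:>13,.0f} lives")
```

Even at a tenth of the probability, extinction dominates the expected loss by a factor of 80 here; counting the foreclosed future generations, as Bostrom does, widens the gap arbitrarily, which is why he judges “some significant resources” rational.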

Contrary to expectations, Bostrom is much less concerned with ecological suicide from humanity’s excessive carbon emissions than with the emergence of a superior brand of artificial intelligence – a ‘superintelligence’. This creature would be a human artefact, or at least descended from one. However, its self-programming capacity would have run amok in positive feedback, resulting in a maniacal, even self-destructive mission to rearrange the world in the image of its objectives. Such a superintelligence may appear to be quite ruthless in its dealings with humans, but that would only reflect the obstacles that we place, perhaps unwittingly, in the way of the realization of its objectives. Thus, this being would not conform to the science fiction stereotype of robots deliberately revolting against creators who are now seen as their inferiors.

I must confess that I find this conceptualisation of ‘existential risk’ rather un-transhumanist in spirit. Bostrom treats risk as a threat rather than as an opportunity. His risk horizon is precautionary rather than proactionary: He focuses on preventing the worst consequences rather than considering the prospects that are opened up by whatever radical changes might be inflicted by the superintelligence. This may be because in Bostrom’s key thought experiment, the superintelligence turns out to be the ultimate paper-clip collecting machine that ends up subsuming the entire planet to its task, destroying humanity along the way, almost as an afterthought.

But is this really a good starting point for thinking about existential risk? Much more likely than total human annihilation is that a substantial portion of humanity, but not everyone, is eliminated. (Certainly this captures the worst-case scenarios surrounding climate change.) The Cold War remains the gold standard for this line of thought. In the US, the RAND Corporation’s chief analyst, Herman Kahn (the model for Stanley Kubrick’s Dr Strangelove) routinely, if not casually, tossed off scenarios of how, say, a US-USSR nuclear confrontation would serve to increase the tolerance for human biological diversity, due to the resulting proliferation of genetic mutations. Put in more general terms, a severe social disruption provides a unique opportunity for pursuing ideals that might otherwise be thwarted by a ‘business as usual’ policy orientation.

Continue reading “Why Superintelligence May Not Help Us Think about Existential Risks — or Transhumanism” »

