Archive for the ‘existential risks’ category: Page 109

Mar 24, 2011

The Existential Importance of Life Extension

Posted by in categories: biological, biotech/medical, ethics, existential risks, life extension
The field of life extension is broad, ranging from regenerative medicine to disease prevention through nutritional supplements and phytomedicine. Although the relevance of longevity and disease prevention to existential risks is less apparent than the prevention of large-scale catastrophic scenarios, it has high relevance to the future of our society. How well healthy longevity develops, how efficiently modern medicine treats age-related diseases, and how well we handle upcoming public-health issues will have a major impact on our short-term future over the next few decades. The prospect of healthy life extension therefore plays an important role at both the personal and the societal level.
From a personal perspective, a longevity-compatible lifestyle, nutrition and supplement regimen may not only help us stay active and live longer; optimizing our health and fitness also increases our energy, mental performance and capacity for social interaction. This aids our ability to work on the increasingly complex tasks of a 21st-century world that can make a positive impact on society, such as work on existential risk awareness and problem-solving. Recently, I wrote a basic personal orientation on the dietary-supplement aspect of basic life extension with an audience of transhumanists, technology advocates with a high future-shock level and open-minded scientists in mind, which is available here.
On a societal level, however, the aging population and public health issues are serious. Some diseases of civilization, whose prevalence climbs rapidly with advanced age, are on the march. For example, type 2 diabetes is rapidly on its way to becoming an insurmountable problem for China, and the WHO projects that COPD, the chronic lung disease caused by smoking and pollution, will be the third leading cause of death by 2030.

Continue reading “The Existential Importance of Life Extension” »

Mar 14, 2011

“CERN Ignores Scientific Proof That Its Current Experiment Puts Earth in Jeopardy”

Posted by in categories: existential risks, particle physics

I deeply sympathize with the Japanese victims of a lack of human caution regarding nuclear reactors. Is it compatible with this atonement if I desperately ask the victims to speak up with me against the next consciously incurred catastrophe, made in Switzerland? If the proof of danger remains undisproved, CERN is currently about to melt the earth’s mantle along with its core down to a 2-cm black hole in perhaps 5 years’ time, at a probability of 8 percent. A million nuclear power plants pale before the “European Centre for Nuclear Research.” CERN must not be allowed to go on shunning the scientific safety conference sternly advised by a Cologne court only six weeks ago.

I thank Lifeboat for distributing this message worldwide.

Mar 12, 2011

Five Results on Mini-Black Holes Left Undiscussed by CERN for 3 Years

Posted by in categories: existential risks, particle physics

1) Mini black holes are both non-evaporating and uncharged.

2) The new unchargedness makes them much more likely to arise in the LHC (since electrons are no longer point-shaped, in line with string theory).

3) When stuck inside matter, mini black holes grow exponentially as “miniquasars,” shrinking earth to 2 cm in perhaps 5 years’ time.

4) They go undetected by CERN’s detectors.

Continue reading “Five Results on Mini-Black Holes Left Undiscussed by CERN for 3 Years” »

Mar 10, 2011

“Too Late for the Singularity?”

Posted by in categories: existential risks, lifeboat, particle physics

Ray Kurzweil is unique for having seen the unstoppable exponential growth of the computer revolution and for extrapolating it correctly toward a point he calls the “singularity,” which he projects about 50 years into the future. At that point, the brain power of all human beings combined will be surpassed by the digital revolution.

The theory of the singularity has two flaws: a repairable one and a hopefully not irreparable one. The repairable one has to do with the different use humans make of their brains compared with all other animals on earth and, presumably, in the universe. This special use can, however, be clearly defined and, because of its preciousness, be exported. This idea of “galactic export” makes Kurzweil’s program even more attractive.

The second drawback is nothing Ray Kurzweil has anything to do with; it is entirely the fault of the rest of humankind: the half century the singularity still needs in order to be reached may no longer be available.

The reason for that is CERN. Even though it was presented in time with published proofs that its proton-colliding experiment will, with a probability of 8 percent, produce a resident, exponentially growing mini black hole eating earth inside out in perhaps 5 years’ time, CERN prefers not to quote those results or try to dismantle them before acting. Even the call by an administrative court (Cologne) to convene the overdue scientific safety conference before continuing was ignored when CERN re-ignited the machine a week ago.

Continue reading “"Too Late for the Singularity?"” »

Feb 25, 2011

Security and Complexity Issues Implicated in Strong Artificial Intelligence, an Introduction

Posted by in categories: complex systems, existential risks, information science, robotics/AI

Strong AI, or Artificial General Intelligence (AGI), stands for self-improving intelligent systems possessing the capacity to interact with theoretical and real-world problems with the flexibility of an intelligent living being but the performance and accuracy of a machine. Promising foundations for AGI exist in the current fields of stochastics and cognitive science as well as in traditional artificial intelligence. My aim in this post is to give a general readership a very basic insight into, and feeling for, the issues involved in dealing with the complexity and universality of an AGI.

Classical AI, such as machine learning algorithms and expert systems, is already heavily utilized in today’s real-world problems: mature machine learning algorithms profitably exploit patterns in customer behaviour, find correlations in scientific data or even predict negotiation strategies, for example [1] [2], and genetic algorithms are in wide use. With the upcoming technology for organizing knowledge on the net, the semantic web, which deals with machine-interpretable understanding of words in the context of natural language, we may start inventing early parts of the technology that will play a role in the future development of AGI. Semantic approaches come from computer science, sociology and current AI research, and they promise to describe and ‘understand’ real-world concepts and to enable our computers to build interfaces to real-world concepts and coherences more autonomously. Actually getting from expert systems to AGI will require approaches to bootstrapping self-improving systems and more research on cognition, but it must also involve crucial security aspects. Institutions associated with this early research include the Singularity Institute [3] and the Lifeboat Foundation [4].

In the recent past we have seen new kinds of security challenges: DoS attacks, e-mail and PDF worms, and a plethora of other malware, which sometimes even made it into military and other sensitive networks and stole credit cards and private data en masse. These were, and are, among the first serious incidents related to the Internet. But still, all of them followed a narrow and predictable pattern, constrained by our current generation of PCs, (in-)security architecture, network protocols, software applications, and of course human flaws (e.g. the emotional response exploited by the “ILOVEYOU” virus). Understanding the security implications of strong AI first means realizing that there probably won’t be any human-predictable hardware, software or interfaces around for long once AGI takes off hard enough.

To grasp the new security implications, it’s important to understand how insecurity can arise from the complexity of technological systems. The vast potential of complex systems often makes their effects hard to predict for the human mind, which is riddled with biases rooted in its biological evolution. Even the simplest mathematical rules can produce complex results that are hard to understand and predict by common sense. Cellular automata, for example, are simple rules for generating a new row of cells based on which cells, generated by the same rule, were observed in the previous step. Each such elementary rule can be encoded in as little as a single byte, yet many of them generate astounding complexity.
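As a concrete illustration (a minimal sketch of my own, not from the original post), an elementary cellular automaton in Python shows how an 8-bit rule acting on a row of cells produces intricate structure. Rule 110, used here, is even known to be Turing-complete:

```python
# Elementary cellular automaton: each new cell depends only on its
# three-cell neighborhood, and the whole rule fits in one byte (0-255).
def step(cells, rule=110):
    """Apply one step of an elementary CA with wrap-around edges."""
    n = len(cells)
    out = []
    for i in range(n):
        # Pack the left, center, and right cells into a 3-bit index...
        neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        # ...and look up the corresponding bit of the rule number.
        out.append((rule >> neighborhood) & 1)
    return out

# Start from a single live cell and watch structure emerge.
row = [0] * 31
row[15] = 1
for _ in range(10):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Running this prints a triangular, self-similar pattern from a one-byte rule and a one-cell seed, which is exactly the "complexity from trivial rules" point made above.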

Continue reading “Security and Complexity Issues Implicated in Strong Artificial Intelligence, an Introduction” »

Feb 10, 2011

New Implication of Einstein’s Happiest Thought Is Last Hope for Planet

Posted by in categories: existential risks, particle physics

Einstein saw that clocks located farther “downstairs” in an accelerating rocket predictably tick more slowly. This was his “happiest thought,” as he often said.

However, as everything looks normal on the lower floor, the normal-appearing photons generated there actually have less mass-energy. So, by general covariance, do all local masses there, and hence also all associated charges down there.

The last two implications were overlooked for a century. “This cannot be,” more than 30 renowned scientists declared, to let a prestigious experiment with which they have ties appear innocuous.

This would make an ideal script for movie makers and a bonanza for metrologists. But why the political undertones above? Because, like the bomb, this new crumb from Einstein’s table has a potentially unbounded impact. Only if it is appreciated within a few days’ time can all human beings, including the Egyptians, breathe freely again.

Continue reading “New Implication of Einstein’s Happiest Thought Is Last Hope for Planet” »

Jan 30, 2011

Summary of My Scientific Results on the LHC-Induced Danger to the Planet

Posted by in categories: existential risks, particle physics

- submitted to the District Attorney of Tübingen, to the Administrative Court of Cologne, to the Federal Constitutional Court (BVerfG) of Germany, to the International Court for Crimes Against Humanity, and to the Security Council of the United Nations -

by Otto E. Rössler, Institute for Physical and Theoretical Chemistry, University of Tübingen, Auf der Morgenstelle A, 72076 Tübingen, Germany

The results of my group represent fundamental research in the fields of general relativity, quantum mechanics and chaos theory. Several independent findings obtained in these disciplines jointly point to a danger, almost as if Nature had set a trap for humankind were we not watching out.

MAIN RESULT. It concerns BLACK HOLES and consists of 10 sub-results.

Continue reading “Summary of My Scientific Results on the LHC-Induced Danger to the Planet” »

Jan 17, 2011

Stories We Tell

Posted by in categories: complex systems, existential risks, futurism, lifeboat, policy


What do Singularitarianism and popular Western religion have in common? More than you might imagine. A thumbnail evaluation of both ends of the American technocentric intelligence spectrum reveals both remarkable similarities in their respective narrative constructions and, naturally, amusing disparities. It would appear that all humans, regardless of our respective beliefs, seem to express goal-oriented hardwiring that demands a neatly constructed story to frame our experiences.

Be you a technophile, you are eagerly awaiting, with perhaps equal parts hope and fear, the moment when artificial general intelligence surpasses human intelligence. You don’t know exactly how this new, more cunning intelligence will react to humans, but you’re fairly certain that humanity might well be in a bit of trouble, or at very least, have some unique competition.

Be you a technophobe, you shun the trappings of in-depth science and technology involvement, save for superficial interaction with the rudimentary elements of technology, which likely do not extend much further than your home computer, cell phone, automobile and/or microwave oven. As a technophobe, you might even consider yourself religious, and if you’re a Christian, you might well be waiting for the Second Coming, the Rapture.

Both scenarios lead humanity to ironically similar destinations, in which humankind becomes either marginalized or largely vestigial.

Continue reading “Stories We Tell” »

Nov 26, 2010

“Rogue states” as a source of global risk

Posted by in categories: existential risks, geopolitics

Some countries are a threat as possible sources of global risk. First of all, we are talking about countries with developed but poorly controlled military programs, as well as the specific motivation that drives them to create a Doomsday weapon. Usually it is a country that is under threat of attack and total conquest, and whose system of control rests on a kind of irrational ideology.

The most striking example of such a global risk is North Korea’s effort to weaponize avian influenza (North Korea trying to weaponize bird flu http://www.worldnetdaily.com/news/article.asp?ARTICLE_ID=50093), which may lead to the creation of a virus capable of destroying most of Earth’s population.

It is not really important which factor is primary: an irrational ideology, increased secrecy, an excess of military research or a real threat of external aggression. Usually, all these causes go hand in hand.

The result is the appearance of conditions for creating the most exotic defenses. In addition, an excess of military scientists and equipment allows individual scientists to become, for example, bioterrorists. The high level of secrecy means that the state as a whole does not know what is being done in some of its labs.

Continue reading “"Rogue states" as a source of global risk” »

Nov 21, 2010

TSA and the Coming Great Filter

Posted by in categories: existential risks, policy

Many people think that the issues Lifeboat Foundation is discussing will not be relevant for many decades to come. But recently a major US Governmental Agency, the TSA, decided to make life hell for 310 million Americans (and anyone who dares visit the USA) as it reacts to the coming Great Filter.

What is the Great Filter? Basically, it is whatever has caused our universe to appear dead, with no advanced civilizations in it. (An advanced civilization is defined as one advanced enough to be self-sustaining outside its home planet.)

The most likely explanation for this Great Filter is that civilizations eventually develop technologies so powerful that they give individuals the means to destroy all life on the planet. Technology has now become powerful enough that the TSA even sees a 3-year-old girl as a threat who might take down a plane, so they take away her teddy bear and grope her.

Continue reading “TSA and the Coming Great Filter” »