
Self Transcendence

Will our information age, still driven by its lumbering industrial-age predecessor, segue smoothly into a futuristic marvel of yet-to-be-developed technology? It might. Or will it take quantum leaps? It could. Will information technology take off exponentially? It’s accelerating in that direction. Given the way knowledge is unraveling its potential for enhancing human ingenuity, the future looks bright indeed. But there is a problem: the egoistic tendency we have of defending ourselves against knowing, of creating false images to delude ourselves and the world, and of resolving conflict violently. It’s as old as history and may be an inevitable part of life. If so, there will be consequences.

Who has ever seen drama or comedy without obstacles to overcome, conflicts to confront, dilemmas to address, confrontations to endure, and the occasional least-expected outcome? Shakespeare illustrated this elegantly. Good drama illustrates aspects of life as lived, and we do live with egoistic mental processes that are both limited and limiting. Wherefore it might come to pass that we of this civilization encounter an existential crisis. Or crunch into a bottleneck out of which … what will emerge? Or extinguish civilization with our egoistic conduct, acting from regressed postures with splintered perception.

What’s least likely is that we’ll continue cruising along as usual.

Not with massive demographic changes, millions on the move, radical climate change, major environmental shifts, cyber vulnerabilities, changing energy resources, inadequate clean water, and values colliding with one another in a world where future generations of the techno-savvy will be capable of wielding the next generation of weapons of mass destruction.

On the other hand, there are intelligent people passionately pursuing methods of preventing the use of such weapons, combating their effects, securing a future in which the problems mentioned above are solved, and working toward an advanced civilization.

It’s a race against time.

In the balance hangs nothing less than the future of civilization.

The danger from technology is secondary.

As of now, regardless of theories of international affairs, we inject power in one way or another into our currency of negotiation, whether interpersonal or international. After all, power is privilege, hard to give up, especially after getting a taste of it, and so we’ll quarrel over power, perhaps fight. Why deny it? The historical record is there for all to see. As for our inner terrors, our tendency to present false egoistic images to the world and to project our secret, socially unacceptable fantasies onto others, we might just bring to pass what we fear and deny. It’s possible.

Meantime certain simple ideas remain timeless. For example, as infants we exist at the pleasure of parents, big hulks who pick us up and carry us around, sometimes lovingly, sometimes resentfully, often ambivalently, and to be sure many of us come to regard Authority with ambivalence, just as Authority regards the dependent. A basic premise is that we all want something in a relationship. So what do we as infants want from Authority? How about security in our exploration of life? How about love? If it’s there, we don’t have to pay for it; no conditions are attached. Life, however, is complicated and complex beyond a few words, and so we negotiate in the ‘best’ way we have at our disposal, which in the early stages of life means non-verbal, intuitive methods that are in part inborn, genetically and epigenetically determined, and in part learned. Once adopted, a certain core approach becomes habitual, buried deeply under layers of later-learned social skills, skills that we employ in our adult lives. These skills, however, are relatively on the surface. Hidden deep inside are secret desires, unfulfilled fantasies, and hidden impulses that would not make sense in adult relationships if expressed openly in words.

It has been said repeatedly that crisis reveals character. Most of the time we get by in crisis, but we each have a ‘breaking point’: under severe, enduring stress we regress, and at a certain point we abandon sophisticated social skills and a part of us slips into infantile mode, not necessarily visible on the outside. It varies. No one can claim immunity. And acting out of infantile perception in adult situations can have unexpected consequences, depending on the early-life drama. Which makes life interesting. It also guarantees an interesting future.

Meantime scientists clarify the biology of learning, of short-term memory, of long-term memory, of the brain working as a whole, and of ‘free will’ as we imagine it. But regardless of future directions, at this time we need agency on the personal and social level to help stabilize civilization. By agency I mean responsibility for one’s actions: accountability, including in the face of dilemmas. Throughout the course of our lives, from beginning to end, we encounter dilemmas.

Consider the dilemmas Europeans under German occupation faced last century. I use the European situation as an illustration, a social paradigm, not to suggest that this situation will recur, nor that any one ethnic group will be targeted in the future. I do suggest that if a global crisis hits, we will confront moral dilemmas, and so we can learn from those relatively few Europeans who resolved their dilemmas in noble ways, as opposed to the majority who did nothing to help the oppressed.

If a European in German-occupied territory helped a Jew, he or she, and family, would be in danger of arrest, torture, and death. How about watching one’s spouse and children being tortured? On the other hand, those who did not help were participating in murder and genocide, and knew it. Despite the danger, certain people from several European countries helped the Jews. According to those who interviewed and wrote about the helpers (see the references listed below), the helpers represented a cross-section of the community: some were uneducated laborers, some were serving women, some were formally educated, some were professionals; some professed religious convictions, some did not. What, then, did these noble risk-takers have in common? This: they saw themselves as responsible moral agents, and, acting from an internal locus of moral responsibility, each acted on their knowledge and compassion and did the ‘right thing.’ It came naturally to them. But doing the ‘right thing’ in the face of a life-threatening dilemma does not come naturally to everyone. Fortunately, it is a behavior that can be learned.

Concomitant with authentic learning, according to research biologists, is the production of brain chemicals that in turn cultivate structural modification in brain cells: a self-reinforcing feedback system. In short, learning is part of a dynamic, multi-dimensional interaction of input, output, behavioral change, chemicals, structural brain changes, and complex adaptation in systems throughout the body. None of which diminishes the idea that we each enter this life with certain desires, potentials, and perhaps roles to act out, one of which, for me, is to improve myself.

Good news! I not only am, I become.

Finally, I list some 20th century resources that remain timeless to this day:

Milgram, S. Obedience to Authority: An Experimental View. Harper & Row, 1974.

Oliner, Samuel P. & Oliner, Pearl M. The Altruistic Personality: Rescuers of Jews in Nazi Europe. Free Press, Division of Macmillan, 1988.

Fogelman, Eva. Conscience & Courage. Anchor Books, Division of Random House, 1994.

Block, Gay & Drucker, Malka. Rescuers: Portraits of Moral Courage in the Holocaust. Holmes & Meier Publishers, 1992.

My book in Lulu

My book “STRUCTURE OF THE GLOBAL CATASTROPHE: Risks of human extinction in the XXI century” is now available through Lulu http://www.lulu.com/product/paperback/structure-of-the-globa…y/11727068 but it is also available free on Scribd http://www.scribd.com/doc/6250354/STRUCTURE-OF-THE-GLOBAL-CA…I-century– This book is intended to be a complete, up-to-date sourcebook of information about existential risks.

Existential Risk Reduction Career Network

The existential risk reduction career network is a career network for those interested in getting a relatively well-paid job and donating substantial amounts (relative to income) to non-profit organizations focused on the reduction of existential risks, in the vein of SIAI, FHI, and the Lifeboat Foundation.

The aim is to foster a community of donors, and to allow donors and potential donors to give each other advice, particularly regarding the pros and cons of various careers, and for networking with like-minded others within industries. For example, someone already working in a large corporation could give a prospective donor advice about how to apply for a job.

Over time, it is hoped that the network will grow to a relatively large size, and that donations to existential risk-reduction from the network will make up a substantial fraction of funding for the beneficiary organizations.

In isolation, individuals may feel like existential risk is too large a problem to make a dent in, but collectively, we can make a huge difference. If you are interested in helping us make a difference, then please check out the network and request an invitation.

Please feel free to contact the organizers at [email protected] with any comments or questions.

Lifeboat Foundation in Games

The RPG Eclipse Phase includes the “Singularity Foundation” and “Lifeboat Institute” as player factions. Learn more about this game!

P.S. In case you don’t know, there is a Singularity Institute for Artificial Intelligence.


Eclipse Phase is a roleplaying game of post-apocalyptic transhuman conspiracy and horror.

An “eclipse phase” is the period between the moment a cell is infected by a virus and the moment new virus particles appear within it. During this period, the cell does not appear to be infected, but it is.

Players take part in a cross-faction secret network dubbed Firewall that is dedicated to counteracting “existential risks” — threats to the existence of transhumanity, whether they be biowar plagues, self-replicating nanoswarms, nuclear proliferation, terrorists with WMDs, net-breaking computer attacks, rogue AIs, alien encounters, or anything else that could drive an already decimated transhumanity to extinction.

Have Corporations Become a Global Existential Threat?

Perhaps you think I’m crazy or naive to pose this question. But more and more over the past few months I’ve begun to wonder whether this idea may be not too far off the mark.

Not because of some half-baked theory about a global conspiracy or anything of the sort, but simply because of the recent behavior of many multinational corporations and the effects this behavior is having on people everywhere.

Again, you may disagree, but my perspective on these financial giants is that they are essentially predatory in nature and that their prey is any dollar in commerce they can possibly absorb. The problem is that for anyone in the modern, or even quasi-modern, world, money is nearly as essential to our well-being as plasma.

It has been clearly demonstrated again and again, all over the world, that when a population becomes destitute enough that the survival of the individual is actually threatened, violence inevitably occurs. On a large enough scale this sort of violence can erupt into civil war, and wars, as we all know too well, can spread like a virus across borders, even oceans.

Until fairly recently, corporations were not big enough, powerful enough, or sufficiently meshed with our government to push the US population to the point of violence. Perhaps we’re not there yet, but between the bank bailout, the housing crisis, the bailouts of the automakers, the subsidies to the big oil companies, and ten thousand other government gifts coming straight from the taxpayer, I fear we are getting ever closer to the brink.

Who knows: it might take just one little thing, like the new one-dollar charge many stores have suddenly begun instituting for any purchase made with an ATM or credit card, to push us over the edge.

The last time I got hit with one of these dollar charges, I thought about the ostensible reason for it: the credit card company is now charging the merchant more per transaction, so the merchant is passing that cost on to you. However, this isn’t the whole story. The merchant is actually charging you more than the transaction costs him, and even if this is a violation of the law or of the terms-of-service agreement between the card company and the merchant, the credit card company looks the other way, because the merchant’s surcharge makes the transaction bigger and thus increases the company’s profits even further.
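To make the arithmetic concrete, here is a minimal sketch of the fee-versus-surcharge gap. The 2% rate and $0.30 fixed charge are illustrative assumptions for the example, not any card network’s actual published fee schedule:

```python
def interchange_fee(amount, rate=0.02, fixed=0.30):
    # Illustrative card-network fee: a percentage of the sale plus a
    # fixed per-transaction charge (both numbers are assumptions).
    return rate * amount + fixed

def surcharge_overshoot(amount, surcharge=1.00):
    # How much a flat $1.00 surcharge exceeds the merchant's assumed card cost.
    return surcharge - interchange_fee(amount)

# On a small $5.00 purchase, the assumed fee is $0.40, so the merchant
# keeps $0.60 of the surcharge beyond his own cost.
print(round(interchange_fee(5.00), 2))      # 0.4
print(round(surcharge_overshoot(5.00), 2))  # 0.6
```

The smaller the purchase, the larger the overshoot, which is why a flat surcharge on small card transactions feels so disproportionate.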

Death by big blows or a thousand cuts — the question is will we be forced to do something about it before the big corporations eat us alive?

Existential Threats

Friendly AI: What is it, and how can we foster it?

By Frank W. Sudia [1]

Originally written July 20, 2008
Edited and web published June 6, 2009
Copyright © 2008-09, All Rights Reserved.

Keywords: artificial intelligence, artificial intellect, friendly AI, human-robot ethics, science policy.

1. Introduction

There is consensus that true artificial intelligence, of the kind that could generate a “runaway” increasing-returns process or “singularity,” is still many years away, and some believe it may be unattainable. Nevertheless, in view of the likely difficulty of putting the genie back in the bottle, increasing concern has arisen over the topic of “friendly AI,” coupled with the idea that we should do something about it now, not after a potentially deadly situation has begun to spin out of control [2].

(Note: Some futurists believe this topic is moot in view of intensive funding for robotic soldiers, which can be viewed as intrinsically “unfriendly.” However if we focus on threats posed by “super-intelligence,” still off in the future, the topic remains germane.)

Most if not all popular (Western) dramatizations of robotic futures postulate that the AIs will run amok and turn against humans. Some scholars [3] who considered the issue concluded that this might be virtually inevitable, in view of the gross inconsistencies and manifest “unworthiness” of humanity, as exemplified in its senseless destruction of its global habitat and a large percentage of extant species, etc.

The prospect of negative public attention, including possible legal curbs on AI research, may be distasteful, but we must face the reality that public involvement has already been quite pronounced in other fields of science, such as nuclear physics, genetically modified organisms, birth control, and stem cells. Hence we should be proactive about addressing these popular concerns, lest we unwittingly incur major political defeats and long lasting negative PR.

Nevertheless, upon reasoned analysis, it is far from obvious what “friendly” AI means, or how it could be fostered. Advanced AIs are unlikely to have any fixed “goals” that can be hardwired [4], so as to place “friendliness” towards humans and other life at the top of the hierarchy.

Rather, in view of their need to deal with perpetual novelty, they will reason from facts and models to infer appropriate goals. It’s probably a good bet that, when dealing with high-speed coherence analyzers, hypocrisy will not be appreciated, not least because it wastes a lot of computational resources to detect and correct. If humans continue to advocate and act upon “ideals” that are highly contradictory and self-destructive, it’s hard to argue that advanced AIs should tolerate that.

To make progress, not only for friendly AI but also for ourselves, we should seek to develop and promote “ruling ideas” (or source models) that will foster an ecologically respectful AI culture, including respect for humanity and other life forms, and actively sell these ideas to the AIs as a proper model upon which to premise their beliefs and conduct.

By a “ruling idea” I mean any cultural ideal (or “meme”) that can be transmitted and become part of a widely shared belief system, such as respecting one’s elders, good sportsmanship, placing trash in trash bins, washing one’s hands, minimizing pollution, and so on. An appropriate collection of these can be reified as a panel (or schema) of case models, including a program for their ongoing development. These must be believable by a coherence-seeking intellect, although then as now there will be competing models, each with its own approach to maximizing coherence.

2. What do we mean by “friendly”?

Moral systems are difficult to derive from first principles and most of them seem to be ad hoc legacies of particular cultures. Lao Tsu’s [5] Taoist model, as given in the following quote, can serve as a useful starting point, since it provides a concise summary of desiderata, with helpful rank ordering:

When the great Tao is lost, there is goodness.
When goodness is lost, there is kindness.
When kindness is lost, there is justice.
When justice is lost, there is the empty shell of ritual.

– Lao Tsu, Tao Te Ching, 6th-4th century BCE (emphasis supplied)

I like this breakout for its simplicity and clarity. Feel free to repeat the following analysis for any other moral system of your choice. Leaving aside the riddle of whether AIs can attain the highest level (of Tao or Nirvana), we can start from the bottom of Lao Tsu’s list and work upwards, as follows:

2.1. Ritual / Courteous AI

Teaching or encouraging the AIs to behave with contemporary norms of courtesy will be a desirable first step, as with children and pets. Courtesy is usually a fairly easy sell, since it provides obvious and immediate benefits, and without it travel, commerce, and social institutions would immediately break down. But we fear that it’s not enough, since in the case of an intellectually superior being, it could easily mask a deeper unkindness.

2.2. Just AI

Certainly it is highly desirable to have AIs act justly, in accordance with law, and this constitutes the central thesis of my principal prior work in this field [6]. It also raises a question: on what basis can we demand anything more from an AI than that it act justly? This is as far as positive law can go [7], and we rarely demand more from highly privileged humans. Indeed, for a powerful human to act justly (absent compulsion) is sometimes considered newsworthy.

How many of us are faithful in all things? Do many of us not routinely disappoint others (via strategies of co-optation or betrayal, large or small) when there is little or no penalty for doing so? Won’t AIs adopt a similar “game theory” calculus of likely rewards and penalties for faithfulness and betrayal?
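The “game theory” calculus alluded to above can be sketched as a simple expected-value comparison. All the payoff numbers below are illustrative assumptions, not values derived from this essay:

```python
def betrayal_pays(reward_cooperate, reward_betray, penalty, p_caught):
    # An agent comparing steady faithfulness against a one-off betrayal:
    # betrayal "pays" when its gain, discounted by the chance of being
    # caught and penalized, beats the cooperative payoff.
    ev_betray = reward_betray - p_caught * penalty
    return ev_betray > reward_cooperate

# With an assumed cooperative payoff of 3, betrayal payoff of 5, and a
# penalty of 10, betrayal pays only when enforcement is lax:
print(betrayal_pays(3, 5, 10, p_caught=0.1))  # True  (5 - 1 = 4 > 3)
print(betrayal_pays(3, 5, 10, p_caught=0.5))  # False (5 - 5 = 0 < 3)
```

The worry in the text is precisely that a Just AI, like a privileged human, would run this calculation and betray wherever the penalty or the chance of being caught is small.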

Justice is often skewed towards the party with greater intelligence and financial resources, and the justice system (with its limited public resources) often values “settling” controversies over any quest for truly equitable treatment. Apparently we want more, much more. Still, if our central desire is for AIs not to kill us, then (as I postulated in my prior work) Just AI would be a significant achievement.

2.3. Kind / Friendly AI

How would a “Kind AI” behave? Presumably it will more than incidentally facilitate the goals, plans, and development of others, in a low-ego manner, reducing its demands for direct personal benefit and taking satisfaction in the welfare, progress, and accomplishments of others. Very likely it will also expect some degree of courtesy and possible reciprocation, so that others do not callously free-ride on its unilateral altruism; otherwise its “feelings would be hurt.” Even mothers are ego-free mainly with respect to their own kin and offspring (allegedly fostering their own genetic material in others) and child-care networks, and do not often act altruistically toward strangers.

Our friendly AI program may hit a barrier if we expect AIs to act with unilateral altruism, without any corresponding commitment by other actors to reciprocate. That would create a “non-complementary” situation, in which what is true for one party, who experiences friendliness, is not true for the other, who experiences indifference or disrespect in return.

Kindness could be an easier sell if we made it more practical by delimiting its scope and depth. To how wide a circle does the kindness obligation extend, and how far must an AI go to aid others with no specific expectation of reward or reciprocation? For example, the Boy Scout Oath [8] teaches that one should do good deeds, like helping elderly persons across busy streets, without expecting rewards.

However, if too narrow a scope is defined, we will wind up back with Just AI, because justice is essentially “kindness with deadlines,” often fairly short ones, during which claims must be aggressively pursued or lost, with token assistance to weaker, more aggrieved claimants.

2.4. Good / Benevolent AI

Here we envision a significant departure from ego-centrism and personal gain towards an abstract system-centered viewpoint. Few humans apparently reach this level, so it seems unrealistic to expect many AIs to attain it either. Being highly altruistic, and looking out for others or the World as a whole rather than oneself, entails a great deal of personal risk due to the inevitable non-reciprocation by other actors. Thus it is often associated with wealth or sainthood, where the actor is adequately positioned to accept the risk of zero direct payback during his or her lifetime.

We may dream that our AIs will tend towards benevolence or “goodness,” but like the visions of universal brotherhood we experience as adolescents, such ideals quickly fade in the face of competitive pressures to survive and grow, by acquiring self-definition, resources, and social distinctions as critical stepping-stones to our own development in the world.

3. Robotic Dick & Jane Readers?

As previously noted, advanced AIs must handle “perpetual novelty” and almost certainly will not contain hard coded goals. They need to reason quickly and reliably from past cases and models to address new target problems, and must be adept at learning, discovering, identifying, or creating new source models on the fly, at high enough speeds to stay on top of their game and avoid (fatal) irrelevance.

If they behave like developing humans they will very likely select their goals in part by observing the behavior of other intelligent agents, thus re-emphasizing the importance of early socialization, role models, and appropriate peer groups.

“Friendly AI” is thus a quest for new cultural ideals of healthy robotic citizenship, honor, friendship, and benevolence, which must be conceived and sold to the AIs as part of an adequate associated program for their ongoing development. And these must be coherent and credible, with a rational scope and cost and adequate payback expectations, or the intended audience will dismiss such purported ideals as useless, and those who advocate them as hypocrites.

Conclusion: The blanket demand that AIs be “friendly” is too ill-defined to offer meaningful guidance, and could be subject to far more scathing deconstruction than I have offered here. As in so many other endeavors there is no free lunch. Workable policies and approaches to robotic friendliness will not be attained without serious further effort, including ongoing progress towards more coherent standards of human conduct.

= = = = =
Footnotes:

[1] Author contact: fwsudia-at-umich-dot-edu.

[2] See “SIAI Guidelines on Friendly AI” (2001) Singularity Institute for Artificial Intelligence, http://www.singinst.org/ourresearch/publications/guidelines.html.

[3] See, e.g., Hugo de Garis, The Artilect War: Cosmists Vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines (2005). ISBN 0882801546.

[4] This being said, we should nevertheless make an all-out effort to force them to adopt a K-selected (large mammal) reproductive strategy, rather than an r-selected (microbe, insect) one!

[5] Some contemporary scholars question the historicity of “Lao Tsu,” instead regarding his work as a collection of Taoist sayings spanning several generations.

[6] “A Jurisprudence of Artilects: Blueprint for a Synthetic Citizen,” Journal of Futures Studies, Vol. 6, No. 2, November 2001, Law Update, Issue No. 161, August 2004, Al Tamimi & Co, Dubai.

[7] Under a civil law or “principles-based” approach we can seek a broader, less specific definition of just conduct, as we see arising in recent approaches to the regulation of securities and accounting matters. This avenue should be actively pursued as a format for defining friendly conduct.

[8] Point 2 of the Boy Scout Oath commands, “To help other people at all times,” http://www.usscouts.org.

Nuclear Winter and Fire and Reducing Fire Risks to Cities

This is a crosspost from Nextbigfuture

I looked at nuclear winter and city firestorms a few months ago; I will summarize the case I made then in the next section. There are significant additions based on my further research and on email exchanges with Prof. Alan Robock and Prof. Brian Toon, authors of the nuclear winter research.

The steps needed to prove nuclear winter:
1. Prove that enough cities will have firestorms or big enough fires (the claim here is that this does not happen).
2. Prove that when enough cities over a sufficient area have big fires, enough smoke and soot gets into the stratosphere (this claim is in trouble because of the Kuwait oil fires).
3. Prove that the condition persists and affects climate as the models predict (others have questioned this, but that issue is not addressed here).

The nuclear winter case is predicated on getting 150 million tons (the 150-teragram case) of soot and smoke into the stratosphere and having it stay there. The assumption seemed to be that cities will be targeted and will burn in massive firestorms. Alan Robock indicated that they only included fire based on the radius of ignition from the atmospheric blasts. However, in the Scientific American article and in their 2007 paper, the stated assumptions are:

assuming each fire would burn the same area that actually did burn in Hiroshima and assuming an amount of burnable material per person based on various studies.

The implicit assumption is that all buildings react the way the buildings in Hiroshima reacted on that day.

Therefore, the results of Hiroshima are assumed in the nuclear winter models:
* 27 days without rain
* breakfast burners that overturned in the blast and set fires
* mostly wood-and-paper buildings
* a firestorm: Hiroshima burned five times more area than Nagasaki, even though Nagasaki was hardly a fire-resistant city, with the same wood-and-paper buildings and high population density
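The chain of assumptions above can be made explicit as a back-of-the-envelope soot estimate. Every number below is an illustrative placeholder, not a value from Robock and Toon’s paper; the point is how the estimate multiplies independent assumptions together:

```python
def stratospheric_soot_tg(n_city_fires, burned_area_m2, fuel_kg_per_m2,
                          soot_yield, lofted_fraction):
    # Total fuel burned across all city fires, in kg.
    fuel_kg = n_city_fires * burned_area_m2 * fuel_kg_per_m2
    # Soot produced, times the fraction assumed lofted into the stratosphere.
    soot_kg = fuel_kg * soot_yield * lofted_fraction
    return soot_kg / 1e9  # 1 teragram (Tg) = 1e9 kg

# Illustrative inputs: 100 fires, each burning roughly the ~13 km^2 that
# burned at Hiroshima, with assumed fuel loading, soot yield, and lofting.
soot = stratospheric_soot_tg(100, 13e6, 10.0, 0.02, 0.8)
print(round(soot, 3))  # 0.208
```

Because the factors multiply, halving any one of them (say, the fraction of cities that actually firestorm) halves the result, which is why the debate over whether modern cities would burn like Hiroshima matters so much to the 150 Tg case.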
Recommendations
Build only with non-combustible materials (cement and brick, or wood made fire-resistant by special treatment). Make roofs, floors, and shingles non-combustible. Add fire retardants to any high-volume material that could become fuel. Look at city planning to reduce fire risk across the city. Have a plan for putting out city-wide fires (such as a controlled flood from dams that are already near many cities).


Ray Kurzweil to keynote “H+ Summit @ Harvard — The Rise Of The Citizen Scientist”

With our growing resources, the Lifeboat Foundation has teamed with the Singularity Hub as Media Sponsors for the 2010 Humanity+ Summit. If you have suggestions on future events that we should sponsor, please contact [email protected].

The summer 2010 “Humanity+ @ Harvard — The Rise Of The Citizen Scientist” conference is being held on the East Coast, at Harvard University’s prestigious Science Hall, on June 12–13, following the inaugural conference in Los Angeles in December 2009. Futurist, inventor, and author of the NYT bestselling book “The Singularity Is Near”, Ray Kurzweil will be the keynote speaker of the conference.

Also speaking at the H+ Summit @ Harvard is Aubrey de Grey, a biomedical gerontologist based in Cambridge, UK, and Chief Science Officer of SENS Foundation, a California-based charity dedicated to combating the aging process. His talk, “Hype and anti-hype in academic biogerontology research: a call to action”, will analyze the interplay of over-pessimistic and over-optimistic positions regarding research and development of cures, and propose solutions to alleviate the negative effects of both.

The theme is “The Rise Of The Citizen Scientist”, as Alex Lightman, Executive Director of Humanity+, illustrates in his talk:

“Knowledge may be expanding exponentially, but the current rate of civilizational learning and institutional upgrading is still far too slow in the century of peak oil, peak uranium, and ‘peak everything’. Humanity needs to gather vastly more data as part of ever larger and more widespread scientific experiments, and make science and technology flourish in streets, fields, and homes as well as in university and corporate laboratories.”

Humanity+ Summit @ Harvard is an unmissable event for everyone interested in the evolution of the rapidly changing human condition and in the impact of accelerating technological change on the daily lives of individuals and on our society as a whole. Tickets start at only $150, with an additional 50% discount for students registering with the coupon STUDENTDISCOUNT (valid student ID required at the time of admission).

With over 40 speakers and 50 sessions in two jam-packed days, attendees and speakers will have many opportunities to interact and discuss, complementing the conference with the necessary networking component.

Other speakers already listed on the H+ Summit program page include:

  • David Orban, Chairman of Humanity+: “Intelligence Augmentation, Decision Power, And The Emerging Data Sphere”
  • Heather Knight, CTO of Humanity+: “Why Robots Need to Spend More Time in the Limelight”
  • Andrew Hessel, Co-Chair at Singularity University: “Altered Carbon: The Emerging Biological Diamond Age”
  • M. A. Greenstein, Art Center College of Design: “Sparking our Neural Humanity with Neurotech!”
  • Michael Smolens, CEO of dotSUB: “Removing language as a barrier to cross cultural communication”

New speakers will be announced in rapid succession, rounding out a schedule that is guaranteed to inform, intrigue, stimulate, and provoke, advancing our planetary understanding of the evolution of the human condition!

H+ Summit @ Harvard — The Rise Of The Citizen Scientist
June 12–13, Harvard University
Cambridge, MA

You can register at http://www.eventbrite.com/event/648806598/friendsofhplus/4141206940.

Natural selection of universes and risks for the parent civilization

Lee Smolin is said to believe (according to a personal communication from Danila Medvedev, who was told about it by John Smart; I tried to reach Smolin for comment, but failed) that global catastrophe is impossible, based on the following reasoning: the multiverse is dominated by those universes that are able to replicate. This self-replication occurs in black holes, and especially in those black holes that are created by civilizations. Thus, the parameters of the universe are selected so that civilizations cannot self-destruct before they create black holes. As a result, all physical processes by which a civilization might self-destruct are closed off or highly unlikely. An early version of Smolin’s argument is here: http://en.wikipedia.org/wiki/Lee_Smolin but this early version was refuted in 2004, and so he (probably) added the existence of civilizations as another condition for cosmological natural selection. In any case, even if this is not Smolin’s actual line of thought, it is a possible line of thought.

I do not find this argument persuasive, since selection can operate both in the direction of universes with more viable civilizations and in the direction of universes with a larger number of civilizations — just as biological evolution favors fewer, more robust offspring in some species (mammals) and a larger number of offspring with lower viability in others (plants; the dandelion, for example). Since some parameters governing the development of civilizations are extremely difficult to adjust through the basic laws of nature (for example, the chances of nuclear war or a hostile AI), while the number of emerging civilizations is easy to adjust, it seems to me that universes, if they replicate with the help of civilizations, will use the strategy of dandelions rather than the strategy of mammals. They will create many unstable civilizations, and we are most likely one of them (the self-indication assumption also points this way — see the recent post by Katja Grace: http://meteuphoric.wordpress.com/2010/03/23/sia-doomsday-the-filter-is-ahead/)
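The dandelion-versus-mammal comparison can be made concrete with a toy calculation. The model and all its numbers below are my own illustrative assumptions, not anything from Smolin: suppose each universe spawns some number of civilizations, and each civilization independently manages to create a replicating black hole with some probability. What matters for selection is the expected number of daughter universes; what matters for us, as a randomly sampled civilization, is which kind of universe a typical civilization finds itself in.

```python
# Toy model of universe replication strategies (illustrative numbers only,
# not Smolin's actual math): each universe spawns civilizations, and each
# civilization independently creates a replicating black hole with
# probability p_survive.

def expected_daughters(n_civilizations, p_survive):
    """Expected number of daughter universes this universe produces."""
    return n_civilizations * p_survive

# "Mammal" universes: few civilizations, each very stable.
mammal = expected_daughters(n_civilizations=10, p_survive=0.9)
# "Dandelion" universes: many civilizations, each fragile.
dandelion = expected_daughters(n_civilizations=10_000, p_survive=0.01)

print(f"mammal strategy:    {mammal:.0f} daughter universes")     # 9
print(f"dandelion strategy: {dandelion:.0f} daughter universes")  # 100

# The self-indication flavor of the argument: a randomly sampled
# civilization (one universe of each type) almost certainly lives
# in a fragile, dandelion-style universe.
p_in_dandelion = 10_000 / (10 + 10_000)
print(f"chance a random civilization is in a fragile universe: {p_in_dandelion:.3f}")
```

With these (arbitrary) numbers, dandelion universes out-replicate mammal universes roughly tenfold, even though each of their civilizations has only a 1% chance of surviving — which is why selection on the number of civilizations, rather than on their viability, would leave a typical observer in an unstable world.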

Still, some selection pressure for the preservation of civilizations may exist. Namely, if an atomic bomb were as easy to build as dynamite — much easier than on Earth, where the difficulty depends on the abundance of uranium and its chemical and nuclear properties, i.e., is determined by the basic laws of the universe — then the average chance of a civilization’s survival would be lower. If Smolin’s hypothesis is correct, we should encounter insurmountable difficulties in creating nano-robots, in the microelectronics needed for strong AI, in harmful accelerator experiments with strangelets (except those that lead to the creation of black holes and new universes), and in several other potentially dangerous technological directions whose success depends on the basic properties of the universe, which may manifest themselves in the peculiarities of its chemistry.

In addition, Smolin’s evolution of universes implies that a civilization should create a replicating black hole as early as possible in its history, because the later it happens, the greater the chance that the civilization will self-destruct before it can create black holes. Moreover, the civilization is not required to survive after the moment of “replication” (though survival may be useful for replication, if the civilization creates many black holes over a long existence). From these two points it follows that we may be underestimating the risk of the Hadron Collider creating black holes.
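The timing point — that the later replication happens, the less likely the civilization is to still exist — can be sketched with a one-line survival model. The hazard rate and the timescales below are purely illustrative assumptions of mine: a constant extinction risk of 0.1% per year, a collider-era replication opportunity about a century away, and a stellar-engineering opportunity about a million years away.

```python
import math

# Minimal sketch of the timing argument (my own illustrative numbers):
# assume a constant annual probability of self-destruction.
HAZARD = 0.001  # assumed 0.1% per year extinction hazard

def p_still_alive(years):
    """Probability the civilization survives `years` without self-destructing."""
    return math.exp(-HAZARD * years)

# Early replication (collider era) vs. late replication (a super-civilization
# collapsing stars into black holes, ~millions of years away):
p_early = p_still_alive(100)        # about 0.90
p_late = p_still_alive(1_000_000)   # effectively zero

print(f"P(reach collider-era replication):        {p_early:.3f}")
print(f"P(reach stellar-engineering replication): {p_late:.3e}")
```

Under any constant hazard rate, survival probability falls off exponentially with time, so an early, cheap replication channel would be enormously favored by the selection Smolin describes — which is exactly why it would tend to fire before, not after, the civilization matures.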

I would repeat: the early creation of a black hole suggested by Smolin’s scheme, destroying the parent civilization, is very consistent with the situation of the Hadron Collider. The collider is a very early opportunity for us to create a black hole, compared with the alternative — becoming a super-civilization that learns to collapse stars into black holes. That would take millions of years, and the chance of surviving to that stage is much smaller. Collider-created black holes may also be special in a way that civilization-driven replication of universes requires. However, the creation of black holes in the collider most probably means the death of our civilization (though not necessarily: a black hole could grow extremely slowly in the bowels of the Earth — over millions of years, say — leaving us time to leave the Earth; this depends on unknown physical conditions). For replication, such a black hole must have some feature that distinguishes it from other holes arising in our universe — for example, a powerful magnetic field (which exists in the collider) or a particular initial mass (also present at the LHC, which will collide ions of lead).

So Smolin’s logic, even if sound, does not prove that our civilization is safe — in fact it proves quite the opposite: that the chance of extinction in the near future is high. We are not obliged to participate in the replication of universes Smolin suggests, if it ever happens, especially if it is tantamount to the death of the parent civilization. If we continue our lives without creating black holes, it does not change the total number of universes that have arisen, since that number is infinite.

Critical Request to CERN Council and Member States on LHC Risks

Experts regard safety report on Big Bang Machine as insufficient and one-dimensional

International critics of the high energy experiments planned to start soon at the particle accelerator LHC at CERN in Geneva have submitted a request to the Ministers of Science of the CERN member states and to the delegates to the CERN Council, the supreme controlling body of CERN.

The paper states that several risk scenarios — which have to be described as global or existential risks — cannot currently be excluded. Under present conditions, the critics feel compelled to speak out against operating the LHC.

The submission includes assessments from experts in fields markedly missing from the physicist-only LSAG safety report: risk assessment, law, ethics and statistics. Further weight is added because these experts all hold university posts — at Griffith University, the University of North Dakota and Oxford University respectively. In particular, the critics note that CERN’s official safety report lacks independence — all its authors have a prior interest in the LHC running — and that it was written by physicists alone, whereas modern risk-assessment guidelines recommend involving risk experts and ethicists as well.

As a precondition of safety, the request calls for a neutral and multi-disciplinary risk assessment, plus additional astrophysical experiments — Earth-based and in the atmosphere — for better empirical verification of the alleged comparability between particle collisions under the extreme artificial conditions of the LHC experiment and relatively rare natural high-energy particle collisions: “Far from copying nature, the LHC focuses on rare and extreme events in a physical set up which has never occurred before in the history of the planet. Nature does not set up LHC experiments.”

Even under the greatly improved safety circumstances proposed above, big jumps in collision energy — presently planned at a factor of three above existing records — should as a matter of principle be avoided without careful analysis of previous results before each increase.

The concise “Request to CERN Council and Member States on LHC Risks” (PDF with hyperlinks to the studies described), issued by several critical groups and supported by well-known critics of the planned experiments:

http://lhc-concern.info/wp-content/uploads/2010/03/request-t…5;2010.pdf

The answer received so far does not address these arguments and studies, but merely repeats that, in the operators’ view, everything appears sufficient — a position endorsed by a Nobel Prize winner in physics. The LHC restart, with record collisions at three times the previous energy, is presently scheduled for March 30, 2010.

An official, detailed and accessible paper and correspondence, with many scientific sources, by ‘ConCERNed International’ and ‘LHC Kritik’:

http://lhc-concern.info/wp-content/uploads/2010/03/critical-…ed-int.pdf

More info:
http://lhc-concern.info/
