Archive for the ‘lifeboat’ category: Page 14

Jan 9, 2012

LHC Safety Conference Requests / Cologne Administrative Court

Posted by in categories: environmental, events, existential risks, lifeboat, particle physics

If I can intervene in the polarized opinions posted by some individuals on Lifeboat regarding the CERN and particle physics safety debate, in which I was name-dropped recently: the person in question, Mr Church, may find my email address on page one of the dissertation linked in my bio. Regarding the safety conference requested by the Cologne Administrative Court and cited by Prof Rossler, I would suggest that, with its ample funds, the Lifeboat Foundation host a public conference on the subject and invite CERN delegates, critics and journalists alike to attend. In the spirit of the Lifeboat Foundation, however, I would suggest that the focus of such a conference be on how particle physics can be used to solve problems in the future; the matter of fringe concerns about MBH accretion rates and so on could be dealt with as a subtext. I think it would be a good opportunity to ‘clear the air’ and could be good for the profile not just of the Lifeboat Foundation, but of particle physics research in general. I would like to hear others’ thoughts on this, and on how Lifeboat manages its funds for such events and conferences…

Nov 13, 2011

D’Nile ain’t just a river in Egypt…

Posted by in categories: business, complex systems, cosmology, economics, education, ethics, existential risks, finance, futurism, geopolitics, human trajectories, humor, life extension, lifeboat, media & arts, neuroscience, open access, open source, philosophy, policy, rants, robotics/AI, space, sustainability

Greetings fellow travelers, please allow me to introduce myself; I’m Mike ‘Cyber Shaman’ Kawitzky, independent film maker and writer from Cape Town, South Africa, one of your media/art contributors/co-conspirators.

It’s a bit daunting posting to such an illustrious board, so let me try to imagine, with you, how to regard the present with nostalgia while looking forward to the past, knowing that a millisecond away in the future exist thoughts to think; it’s the mode of neural text, reverse causality, non-locality and quantum entanglement, where the traveller is the journey into a world in transition; after 9/11, after the economic meltdown, after the oil spill, after the tsunami, after Fukushima, after 21st-century melancholia upholstered by anti-psychotic drugs that help us forget ‘the good old days’; because it’s business as usual for the 1%, while the rest continue downhill with no brakes. Can’t wait to see how it all works out.

Please excuse me, my time machine is waiting…
Post cyberpunk and into Transhumanism

Apr 25, 2011

On the Problem of Modern Portfolio Theory: In Search of a Timeless & Universal Investment Perspective

Posted by in categories: complex systems, economics, existential risks, finance, human trajectories, lifeboat, philosophy, policy, sustainability

Dear Lifeboat Foundation Family & Friends,

A few months back, my Aunt Charlotte wrote, wondering why I, a relentless searcher focused upon human evolution and long-term human survival strategy, had chosen to pursue a PhD in economics (Banking & Finance). I recently replied that, as it turns out, sound economic theory and global financial stability both play central roles in the quest for long-term human survival. In the fifth and final chapter of my recent Masters thesis, On the Problem of Sustainable Economic Development: A Game-Theoretical Solution, I argued (with considerable passion) that much of the blame for the economic crisis of 2008 (which is, essentially, still upon us) may be attributed to the adoption of Keynesian economics and the dismissal of the powerful counter-arguments tabled by his great rival, F.A. von Hayek. Despite the fact that they remained friends until the very end, their theories are diametrically opposed at nearly every point. There was, however, at least one central point they agreed upon; indeed, Hayek was fond of quoting one of Keynes’ most famous maxims: “The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else” [1].

And, with this nontrivial problem and the great Hayek vs. Keynes debate in mind, I’ll offer a preview-by-way-of-prelude with this invitation to turn a few pages of On the Problem of Modern Portfolio Theory: In Search of a Timeless & Universal Investment Perspective:

It is perhaps significant that Keynes hated to be addressed as “professor” (he never had that title). He was not primarily a scholar. He was a great amateur in many fields of knowledge and the arts; he had all the gifts of a great politician and a political pamphleteer; and he knew that “the ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is generally understood. Indeed the world is ruled by little else” [1]. And as he had a mind capable of recasting, in the intervals of his other occupations, the body of current economic theory, he more than any of his compeers had come to affect current thought. Whether it was he who was right or wrong, only the future will show. There are some who fear that if Lenin’s statement is correct that the best way to destroy the capitalist system is to debauch the currency, of which Keynes himself has reminded us [1], it will be largely due to Keynes’s influence if this prescription is followed.…

Continue reading “On the Problem of Modern Portfolio Theory: In Search of a Timeless & Universal Investment Perspective” »

Apr 19, 2011

On the Problem of Sustainable Economic Development: A Game-Theoretical Solution

Posted by in categories: asteroid/comet impacts, biological, complex systems, cosmology, defense, economics, education, existential risks, finance, human trajectories, lifeboat, military, philosophy, sustainability

Perhaps the most important lesson, which I have learned from Mises, was a lesson located outside economics itself. What Mises taught us in his writings, in his lectures, in his seminars, and in perhaps everything he said, was that economics—yes, and I mean sound economics, Austrian economics—is primordially, crucially important. Economics is not an intellectual game. Economics is deadly serious. The very future of mankind —of civilization—depends, in Mises’ view, upon widespread understanding of, and respect for, the principles of economics.

This is a lesson, which is located almost entirely outside economics proper. But all Mises’ work depended ultimately upon this tenet. Almost invariably, a scientist is motivated by values not strictly part of the science itself. The lust for fame, for material rewards—even the pure love of truth—these goals may possibly be fulfilled by scientific success, but are themselves not identified by science as worthwhile goals. What drove Mises, what accounted for his passionate dedication, his ability to calmly ignore the sneers of, and the isolation imposed by academic contemporaries, was his conviction that the survival of mankind depends on the development and dissemination of Austrian economics…

Austrian economics is not simply a matter of intellectual problem solving, like a challenging crossword puzzle, but literally a matter of the life or death of the human race.

–Israel M. Kirzner, Society for the Development of Austrian Economics Lifetime Achievement Award Acceptance Speech, 2006

Continue reading “On the Problem of Sustainable Economic Development: A Game-Theoretical Solution” »

Apr 2, 2011

A (Relatively) Brief Introduction to The Principles of Economics & Evolution: A Survival Guide for the Inhabitants of Small Islands, Including the Inhabitants of the Small Island of Earth

Posted by in categories: asteroid/comet impacts, biological, complex systems, cosmology, defense, economics, existential risks, geopolitics, habitats, human trajectories, lifeboat, military, philosophy, sustainability

(NOTE: Selecting the “Switch to White” button on the upper right-hand corner of the screen may ease reading this text).

“Who are you?” A simple question sometimes requires a complex answer. When a Homeric hero is asked who he is.., his answer consists of more than just his name; he provides a list of his ancestors. The history of his family is an essential constituent of his identity. When the city of Aphrodisias… decided to honor a prominent citizen with a public funeral…, the decree in his honor identified him in the following manner:

Hermogenes, son of Hephaistion, the so-called Theodotos, one of the first and most illustrious citizens, a man who has as his ancestors men among the greatest and among those who built together the community and have lived in virtue, love of glory, many promises of benefactions, and the most beautiful deeds for the fatherland; a man who has been himself good and virtuous, a lover of the fatherland, a constructor, a benefactor of the polis, and a savior.
– Angelos Chaniotis, In Search of an Identity: European Discourses and Ancient Paradigms, 2010

I realize many may not have the time to read all of this post — let alone the treatise it introduces — so for those with just a few minutes to spare, consider abandoning the remainder of this introduction and spending a few moments with a brief narrative which distills the very essence of the problem at hand: On the Origin of Mass Extinctions: Darwin’s Nontrivial Error.

Continue reading “A (Relatively) Brief Introduction to The Principles of Economics & Evolution: A Survival Guide for the Inhabitants of Small Islands, Including the Inhabitants of the Small Island of Earth” »

Mar 10, 2011

“Too Late for the Singularity?”

Posted by in categories: existential risks, lifeboat, particle physics

Ray Kurzweil is unique in having seen the unstoppable exponential growth of the computer revolution and in extrapolating it correctly towards the attainment of a point which he calls the “singularity” and projects about 50 years into the future. At that point, the brain power of all human beings combined will be surpassed by the digital revolution.

The theory of the singularity has two flaws: a repairable one and a hopefully not irreparable one. The repairable one has to do with the different use humans make of their brains compared with that of all other animals on earth and, presumably, in the universe. This special use can, however, be clearly defined and, because of its preciousness, be exported. This idea of “galactic export” makes Kurzweil’s program even more attractive.

The second drawback is nothing Ray Kurzweil has anything to do with, being entirely the fault of the rest of humankind: the half century that the singularity still needs in order to be reached may no longer be available.

The reason for that is CERN. Even though presented in time with published proofs that its proton-colliding experiment will, with a probability of 8 percent, produce a resident, exponentially growing mini black hole eating earth inside out in perhaps 5 years’ time, CERN prefers not to quote those results or try to dismantle them before acting. Even the call by an administrative court (Cologne) to convene the overdue scientific safety conference before continuing was ignored when CERN re-ignited the machine a week ago.

Continue reading “"Too Late for the Singularity?"” »

Jan 17, 2011

Stories We Tell

Posted by in categories: complex systems, existential risks, futurism, lifeboat, policy


What do Singularitarianism and popular Western religion have in common? More than you might imagine. A thumbnail evaluation of both ends of the American technocentric intelligence spectrum reveals both remarkable similarities in their respective narrative constructions and, naturally, amusing disparities. It would appear that all humans, regardless of our respective beliefs, seem to express goal-oriented hardwiring that demands a neatly constructed story to frame our experiences.

Be you a technophile, you are eagerly awaiting, with perhaps equal parts hope and fear, the moment when artificial general intelligence surpasses human intelligence. You don’t know exactly how this new, more cunning intelligence will react to humans, but you’re fairly certain that humanity might well be in a bit of trouble, or at the very least have some unique competition.

Be you a technophobe, you shun the trappings of in-depth science and technology involvement, save for a superficial interaction with the rudimentary elements of technology which likely do not extend much further than your home computer, cell phone, automobile, and/or microwave oven. As a technophobe, you might even consider yourself religious, and if you’re a Christian, you might well be waiting for the second coming, the rapture.

Both scenarios lead humanity to ironically similar destinations, in which humankind becomes either marginalized or largely vestigial.

Continue reading “Stories We Tell” »

Jun 26, 2010

Existential Risk Reduction Career Network

Posted by in categories: existential risks, finance, lifeboat

The existential risk reduction career network is a career network for those interested in getting a relatively well-paid job and donating substantial amounts (relative to income) to non-profit organizations focused on the reduction of existential risks, in the vein of SIAI, FHI, and the Lifeboat Foundation.

The aim is to foster a community of donors, and to allow donors and potential donors to give each other advice, particularly regarding the pros and cons of various careers, and for networking with like-minded others within industries. For example, someone already working in a large corporation could give a prospective donor advice about how to apply for a job.

Over time, it is hoped that the network will grow to a relatively large size, and that donations to existential risk-reduction from the network will make up a substantial fraction of funding for the beneficiary organizations.

In isolation, individuals may feel like existential risk is too large a problem to make a dent in, but collectively, we can make a huge difference. If you are interested in helping us make a difference, then please check out the network and request an invitation.

Please feel free to contact the organizers at [email protected] with any comments or questions.

May 2, 2010

Nuclear Winter and Fire and Reducing Fire Risks to Cities

Posted by in categories: defense, existential risks, lifeboat, military, nuclear weapons

This is a crosspost from Nextbigfuture

I looked at nuclear winter and city firestorms a few months ago; I will summarize the case I made then in the next section. There are significant additions based on my further research and email exchanges with Prof. Alan Robock and Brian Toon, who wrote the nuclear winter research.

The steps needed to prove nuclear winter:
1. Prove that enough cities will have firestorms or big enough fires (the claim here is that this does not happen).
2. Prove that when enough cities in a sufficient area have big fires, enough smoke and soot gets into the stratosphere (this claim is in trouble because of the Kuwait fires).
3. Prove that the condition persists and affects climate as per the models (others have questioned that, but this issue is not addressed here).

The nuclear winter case is predicated on getting 150 million tons (the 150-teragram case) of soot and smoke into the stratosphere and having it stay there. The assumption seemed to be that cities would be targeted and would burn in massive firestorms. Alan Robock indicated that they only included fire based on the radius of ignition from the atmospheric blasts. However, in the Scientific American article and in their 2007 paper the stated assumptions are:

Continue reading “Nuclear Winter and Fire and Reducing Fire Risks to Cities” »

Dec 30, 2009

Ark-starship – too early or too late?

Posted by in categories: existential risks, lifeboat, space

It is interesting to note that the technical possibility of sending an interstellar Ark appeared in the 1960s, and is based on Ulam’s concept of the “blast-ship”. This blast-ship uses the energy of nuclear explosions to move forward. Detailed calculations were carried out under Project Orion. http://en.wikipedia.org/wiki/Project_Orion_(nuclear_propulsion) In 1968 Dyson published an article, “Interstellar Transport”, which gives the upper and lower bounds of such projects. In the conservative estimate (i.e., assuming no new technical achievements), it would cost 1 U.S. GDP (600 billion U.S. dollars at the time of writing) to launch a spaceship with a mass of 40 million tonnes (of which 5 million tons is payload), and its flight to Alpha Centauri would take 1200 years. In a more advanced version the price is 0.1 U.S. GDP, the flight time is 120 years, and the starting weight is 150,000 tons (of which 50,000 tons is payload). In principle, using a two-tier scheme, more advanced thermonuclear bombs and reflectors, the flight time to the nearest star could be reduced to 40 years.
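As a back-of-the-envelope check on the figures above, each flight time implies an average cruise speed. A minimal sketch in Python (the 4.37-light-year distance to Alpha Centauri is my assumption; it is not stated in the post):

```python
DIST_LY = 4.37  # distance to Alpha Centauri in light-years (assumed, not from the post)

def avg_speed_fraction_c(distance_ly: float, flight_years: float) -> float:
    """Average cruise speed as a fraction of the speed of light."""
    return distance_ly / flight_years

# The three scenarios as summarized above: flight time in years.
for label, years in [("conservative", 1200), ("advanced", 120), ("two-tier", 40)]:
    frac = avg_speed_fraction_c(DIST_LY, years)
    print(f"{label}: {years} yr -> {frac * 100:.2f}% of light speed")
```

Even the 40-year variant only requires roughly a tenth of light speed on average, which is why the post treats it as attainable in principle with thermonuclear pulse propulsion.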
Of course, the crew of the spaceship is doomed to extinction if they do not find a habitable planet fit for humans in the nearest star system. Another option is that it will colonize an uninhabited planet. In 1980, R. Freitas proposed lunar exploration using a self-replicating factory with an initial weight of 100 tons, though controlling it would require artificial intelligence (“Advanced Automation for Space Missions” http://www.islandone.org/MMSG/aasm/). Artificial intelligence does not yet exist, but the management of such a factory could be carried out by people. The main question is how much technology and equipment must be delivered to an uninhabited moonlike planet so that people could build on it a completely self-sustaining and growing civilization. It is about creating something like an inhabited von Neumann probe. A modern self-sustaining state includes at least a few million people (like Israel), with hundreds of tons of equipment per person, mainly in the form of houses and roads; the weight of machines is much smaller. This gives us an upper bound of about 1 billion tons for a human colony able to replicate itself. The lower estimate is about 100 people, each of whom accounts for approximately 100 tons (mainly food and shelter), i.e. 10,000 tons of mass. A realistic assessment should be somewhere in between, probably in the tens of millions of tons. All this is under the assumption that no miraculous nanotechnology has yet been developed.
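The mass bounds in the paragraph above reduce to simple arithmetic. A rough sketch (the five-million-person and 200-ton figures are my illustrative picks within the post’s “few million people” and “hundreds of tons” ranges, and the geometric-mean interpolation is my choice, not the post’s):

```python
import math

def colony_mass_tons(people: int, tons_per_person: float) -> float:
    """Total mass implied by a population size and per-capita equipment tonnage."""
    return people * tons_per_person

# Upper bound: a few million people with hundreds of tons of equipment each.
upper = colony_mass_tons(5_000_000, 200)   # 1 billion tons
# Lower bound: ~100 people at ~100 tons each (mainly food and shelter).
lower = colony_mass_tons(100, 100)         # 10,000 tons
# One crude way to interpolate "somewhere in between": the geometric mean.
realistic = math.sqrt(upper * lower)       # a few million tons
```

The geometric mean lands at a few million tons, somewhat below the post’s guess of tens of millions; the real point is that the bounds span five orders of magnitude, so any “realistic” figure is highly uncertain.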
The advantage of a spaceship as an Ark is that it is a non-specific reaction to a host of different threats with indeterminate probabilities. If you face some specific threat (an asteroid, an epidemic), it is better to spend the money on its removal.
Thus, if such a decision had been taken in the 1960s, such a ship could already be on its way.
But if we set aside the technical side of the issue, there are several trade-offs in the strategies for creating such a spaceship.
1. The sooner such a project is started, the less technically advanced it will be, the lower its chances of success, and the higher its cost. But the later it is initiated, the greater the chance that it will not be completed before a global catastrophe.
2. The later the project starts, the greater the chance that it will carry the “diseases” of its mother civilization with it (e.g. the ability to create dangerous viruses).
3. The project to create a spaceship could lead to the development of technologies that threaten civilization itself. The blast-ship would use hundreds of thousands of hydrogen bombs as fuel. Therefore, it could either be used as a weapon, or other parties may fear it and respond. In addition, the spaceship could turn around and hit the Earth like a star-hammer, or this might be feared. During construction of the spaceship, man-made accidents with enormous consequences could happen, equal at most to the detonation of all the bombs on board. If the project is implemented by one country in time of war, other countries could try to shoot down the spaceship when it launches.
4. The spaceship is a means of protection against a Doomsday machine, as a strategic response in the style of Kahn. Therefore, the creators of such a Doomsday machine may perceive the Ark as a threat to their power.
5. Should we implement one more expensive project, or several cheaper ones?
6. Is it sufficient to limit colonization to the Moon, Mars, Jupiter’s moons, or objects in the Kuiper belt? At the least, these can serve as fallback positions at which the technology of autonomous colonies can be tested.
7. The sooner the spaceship starts, the less we know about exoplanets. How far and how fast should the Ark fly in order to be in relative safety?
8. Could the spaceship hide itself so that the Earth does not know where it is, and should it do so? Should the spaceship communicate with Earth at all, or is there a risk of attack by a hostile AI in that case?
9. Would not the creation of such projects exacerbate the arms race or lead to premature depletion of resources and other undesirable outcomes? The creation of pure hydrogen bombs would simplify the creation of such a spaceship, or at least reduce its cost, but at the same time it would increase global risks, because nuclear non-proliferation would suffer complete failure.
10. Will the Earth in the future compete with its independent colonies, or will this lead to star wars?
11. If the ship departs slowly enough, is it possible to destroy it from Earth with a self-propelled missile or a radiation beam?
12. Is this mission a real chance for the survival of mankind? Those who fly away are likely to be killed, because the chance of the mission’s success is no more than 10 percent. Those remaining on Earth may start to behave more riskily, on the logic: “Well, if we have protection against global risks, now we can start risky experiments.” As a result of the project, the total probability of survival decreases.
13. What are the chances that the Ark’s computer network will download a virus if it communicates with Earth? And if it does not communicate, that will reduce the chances of success. Competition for nearby stars is also possible, and faster ships would win it. Ultimately, there are not many stars within a distance of about 5 light years (Alpha Centauri and Barnard’s Star), and competition for them could begin. The existence of dark lone planets or large asteroids without host stars is also possible; their density in the surrounding space should be 10 times greater than the density of stars, but finding them is extremely difficult. If the nearest stars have no planets or moons, that would also be a problem. Some stars, including Barnard’s, are prone to extreme stellar flares, which could kill the expedition.
14. The spaceship will not protect people from a hostile AI that finds a way to catch up. Also, in case of war, starships may be prestigious and easily vulnerable targets; an unmanned rocket will always be faster than a spaceship. If Arks are sent to several nearby stars, this does not ensure their secrecy, as the destinations will be known in advance. A phase transition of the vacuum, an explosion of the Sun or of Jupiter, or another extreme event could also destroy the spaceship. See e.g. A. Bolonkin, “Artificial Explosion of Sun. AB-Criterion for Solar Detonation” http://www.scribd.com/doc/24541542/Artificial-Explosion-of-S…Detonation
15. However, the spaceship is too expensive a protection from many other risks that do not require such distant removal. People could hide from almost any pandemic on well-isolated islands in the ocean. People could hide on the Moon from grey goo, asteroid collision, a supervolcano, or irreversible global warming. The Ark-spaceship will carry with it the problems of genetic degradation, propensity for violence and self-destruction, as well as problems associated with limited human outlook and cognitive biases. The spaceship would only aggravate the problems of resource depletion, wars, and the arms race. Thus, the set of global risks against which the spaceship is the best protection is quite narrow.
16. And most importantly: does it make sense to begin this project now? In any case, there is no time to finish it before new risks, and new ways to create spaceships using nanotech, become real.
Of course, it is easy to envision a nano- and AI-based Ark: it would be as small as a grain of sand, carry only one human egg or even just DNA information, and could self-replicate. The main problem is that it could be created only AFTER the most dangerous period of human existence, which is the period just before the Singularity.
