New York, NY (PRWEB) July 17, 2009 — The fourth annual Singularity Summit, a conference devoted to the better understanding of increasing intelligence and accelerating change, will be held in New York on October 3–4 in Kaufmann Hall at the historic 92nd St Y. The Summit brings together a visionary community to further dialogue and action on complex, long-term issues that are transforming the world.
Participants will hear talks from cutting-edge researchers and network with strategic business leaders. The world’s most eminent experts on forecasting, venture capital, emerging technologies, consciousness and life extension will present their unique perspectives on the future and how to get there. “The Singularity Summit is the premier conference on the Singularity,” says Ray Kurzweil, inventor of the CCD flatbed scanner and author of The Singularity is Near. “As we get closer to the Singularity, each year’s conference is better than the last.”
The Singularity Summit has previously been held in the San Francisco Bay Area, where it has been featured in numerous publications including the front page of the San Francisco Chronicle. It is hosted by the Singularity Institute, a 501(c)(3) nonprofit devoted to studying the benefits and risks of advanced technologies.
Select Speakers
* Ray Kurzweil is the author of The Singularity is Near (2005) and co-founder of Singularity University, which is backed by Google and NASA. At the Singularity Summit, he will present his theories on accelerating technological change and the future of humanity.
* Dr. David Chalmers, director of the Centre for Consciousness at Australian National University and one of the world’s foremost philosophers, will discuss mind uploading — the possibility of transferring human consciousness onto a computer network.
* Dr. Ed Boyden is a joint professor of Biological Engineering and of Brain and Cognitive Sciences at MIT. Discover Magazine named him one of the 20 best brains under 40.
* Peter Thiel is the president of Clarium, seed investor in Facebook, managing partner of Founders Fund, and co-founder of PayPal.
* Dr. Aubrey de Grey is a biogerontologist and Director of Research at the SENS Foundation, which seeks to extend the human lifespan. He will present on the ethics of this proposition.
* Dr. Philip Tetlock is Professor of Organizational Behavior at the Haas School of Business, University of California, Berkeley, and author of Expert Political Judgment: How Good Is It?
* Dr. Jürgen Schmidhuber is co-director of the Dalle Molle Institute for Artificial Intelligence in Lugano, Switzerland. He will discuss the mathematical essence of beauty and creativity.
* Dr. Gary Marcus is director of the NYU Infant Language Learning Center, professor of psychology at New York University, and author of the book Kluge.
See the Singularity Summit website at http://www.singularitysummit.com/.
The Singularity Summit is hosted by the Singularity Institute for Artificial Intelligence.
Global Risk: A Quantitative Analysis
The scope and possible impact of global, long-term risks present a unique challenge to humankind. The analysis and mitigation of such risks is extremely important, as they have the potential to affect billions of people worldwide; however, little systematic analysis has been done to determine the best strategies for overall mitigation. Direct, case-by-case analysis can be combined with standard probability theory, particularly Laplace’s rule of succession, to estimate the probability of any given risk, its scope, and the effectiveness of potential mitigation efforts. This methodology can be applied both to well-known risks, such as global warming, nuclear war, and bioterrorism, and to lesser-known or unknown risks. Although well-known risks are shown to be a significant threat, analysis strongly suggests that avoiding the risks of technologies which have not yet been developed may pose an even greater challenge. Eventually, some type of further quantitative analysis will be necessary for effective apportionment of government resources, as traditional indicators of risk level, such as press coverage and human intuition, can be shown to be inaccurate, often by many orders of magnitude.
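As a rough illustration of the kind of calculation described above, the sketch below combines a rule-of-succession probability estimate with a risk’s scope and an assumed mitigation effectiveness. The function names and every number in it are hypothetical placeholders, not figures from the actual analysis.

```python
# Minimal sketch of the kind of calculation described above (illustrative only).
# Laplace's rule of succession: after n observation periods with s occurrences,
# the estimated probability of an occurrence in the next period is (s + 1) / (n + 2).

def laplace_probability(occurrences: int, trials: int) -> float:
    """Rule-of-succession estimate of the per-period event probability."""
    return (occurrences + 1) / (trials + 2)

def expected_harm(per_period_prob: float, scope: float, mitigation_effectiveness: float) -> float:
    """Expected harm per period after mitigation, in whatever units 'scope' uses.

    mitigation_effectiveness is the assumed fraction of harm removed (0 to 1).
    """
    return per_period_prob * scope * (1.0 - mitigation_effectiveness)

# Hypothetical inputs: an event never observed in 10,000 recorded years,
# threatening 1e9 people, with a mitigation assumed to remove half the harm.
p = laplace_probability(occurrences=0, trials=10_000)  # ~1.0e-4 per year
print(expected_harm(p, scope=1e9, mitigation_effectiveness=0.5))
```

The value of even a crude estimate like this is that it makes comparisons across risks explicit, rather than leaving them to press coverage or intuition.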
More details are available online at the Society for Risk Analysis’s website. James Blodgett will be presenting on the precautionary principle two days earlier (Monday, Dec. 8th).
The important question is, now that we’ve accumulated all of this money, what are we going to spend it on? It’s possible, theoretically, to put it all into Treasury bonds and forget about it for thirty years, but that would be an enormous waste of expected utility. In technology development, the earlier the money is spent (in general), the larger the effect will be. Spending $1M on a technology in the formative stages has a huge impact, probably doubling the overall budget or more. Spending $1M on a technology in the mature stages won’t even be noticed. We have plenty of case studies: Radios. TVs. Computers. Internet. Telephones. Cars. Startups.
The opposite danger is overfunding the project, commonly called “throwing money at the problem”. Hiring a lot of new people without thinking about how they will help is one common symptom. Having bloated layers of middle management is another. To an outside observer, it probably seems like we’re reaching this stage already. Hiring a Vice President In Charge Of Being In Charge doesn’t just waste money; it causes the entire organization to lose focus and distracts everyone from the ultimate goal.
I would suggest a top-down approach: start with the goal, figure out what you need, and get it. The opposite approach is to look for things that might be useful, get them, then see how you can complete a project with the stuff you’ve acquired. NASA is an interesting case study, as they followed the first strategy for a number of years, then switched to the second one.
The second strategy is useful at times, particularly when the goal is constantly changing. Paul Graham suggests using it as a strategy for personal success, because the ‘goal’ is changing too rapidly for any fixed plan to remain viable. “Personal success” in 2000 is very different from “success” in 1980, which was different from “success” in 1960. If Kurzweil’s graphs are accurate, “success” in 2040 will be so alien that we won’t even be able to recognize it.
But when the goal is clear (save the Universe, create an eternal utopia, develop new technology X), you simply need to smash through whatever problems show up. Apparently, money has been the main blocker for some time, and it looks like we’ve overcome that, at least in the short term, through large-scale fundraising. There’s a large body of literature out there on how to deal with organizational problems; thousands of people have done this stuff before. I don’t know what the main blocker is now, but odds are it’s in there somewhere.
- Any risk which has become widely known, or an issue in contemporary politics, will probably be very hard to deal with. Thus, even if it is a legitimate risk, it may be worth putting on the back burner; there’s no point in spending millions of dollars for little gain.
- Any risk which is totally natural (could happen without human intervention) must be highly improbable, as we know we have been on this planet for a hundred thousand years without getting killed off. To estimate the probability of these risks, use Laplace’s Law of Succession (see the worked sketch after this list).
- Risks whose probability we cannot affect can be safely ignored. It does us little good to know that there is a 1% chance of doom next Thursday if we can’t do anything about it.
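As a quick worked version of the rule-of-succession estimate mentioned in the second bullet, the back-of-the-envelope sketch below uses the hundred-thousand-year figure quoted above and, as a simplifying assumption, treats the resulting per-year estimate as a fixed probability when compounding over a century.

```python
# Back-of-the-envelope rule-of-succession estimate for a purely natural,
# extinction-level event (a sketch using the ~100,000-year figure above).

def laplace_rule(occurrences: int, trials: int) -> float:
    return (occurrences + 1) / (trials + 2)

# No such catastrophe in roughly 100,000 years of human history:
p_next_year = laplace_rule(0, 100_000)           # about 1 in 100,002
p_next_century = 1 - (1 - p_next_year) ** 100    # roughly 0.1%
print(f"per year: {p_next_year:.2e}, per century: {p_next_century:.2%}")
```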
Some specific risks which can be safely ignored:
- Particle accelerator accidents. We don’t yet know enough high-energy physics to say conclusively that a particle accelerator could never create a true vacuum, stable strangelet, or another universe-destroying particle. Luckily, we don’t have to; cosmic rays have been bombarding us for the past four billion years, with energies a million times higher than anything we can create in an accelerator. If it were possible to annihilate the planet with a high-energy particle collision, it would have happened already.
- The simulation gets shut down. The idea that “the universe is a simulation” is equally good at explaining every outcome: no matter what happens in the universe, you can concoct some reason why the simulators would engineer it. Which specific actions would make the universe safer from being shut down? We have no clue, and barring a revelation from On High, we have no way to find out. If we do try to take action to stop the universe from being shut down, it could just as easily make the risk worse.
- A long list of natural scenarios. To quote Nick Bostrom: “solar flares, supernovae, black hole explosions or mergers, gamma-ray bursts, galactic center outbursts, supervolcanos, loss of biodiversity, buildup of air pollution, gradual loss of human fertility, and various religious doomsday scenarios.” We can’t prevent most of these anyway, even if they were serious risks.
Some specific risks which should be given lower priority:
- Asteroid impact. This is a serious risk, but it still has a fairly low probability, on the order of one in 10^5 to 10^7 for something that would threaten the human species within the next century or so. Mitigation is also likely to be quite expensive compared to other risks.
- Global climate change. While this is fairly probable, its impact isn’t likely to be severe enough to qualify as an existential risk. The IPCC Fourth Assessment Report concluded that it is “very likely” that there will be more heat waves and heavy rainfall events, and “likely” that there will be more droughts, hurricanes, and extreme high tides; these do not qualify as existential risks, or even anything particularly serious. We know from past temperature data that the Earth can warm by 6–9 °C on a fairly short timescale without causing a permanent collapse or even a mass extinction. Additionally, climate change has become a political problem, making it next to impossible to implement serious measures without a massive effort.
- Nuclear war is a special case, because although we can’t do much to prevent it, we can take action to prepare for it in case it does happen. We don’t even have to think about the best ways to prepare; there are already published, reviewed books detailing what can be done to seek safety in the event of a nuclear catastrophe. I firmly believe that every transhumanist organization should have a contingency plan in the event of nuclear war, economic depression, a conventional WWIII or another political disaster. This planet is too important to let it get blown up because the people saving it were “collateral damage”.
- Terrorism. It may be the bogeyman-of-the-decade, but terrorists are not going to deliberately destroy the Earth; terrorism is a political tool with political goals that require someone to be alive. While terrorists might do something stupid which results in an existential risk, “terrorism” isn’t a special case that we need to separately plan for; a virus, nanoreplicator or UFAI is just as deadly regardless of where it comes from.