Tom McCabe – Lifeboat News: The Blog
https://lifeboat.com/blog
Safeguarding Humanity


Singularity Summit 2010 in San Francisco to Explore Intelligence Augmentation
https://lifeboat.com/blog/2010/06/singularity-summit-2010-in-san-francisco-to-explore-intelligence-augmentation
Fri, 25 Jun 2010

This year, the Singularity Summit 2010 (SS10) will be held at the Hyatt Regency Hotel in San Francisco, California, in an 1,100-seat ballroom on August 14–15.

Our speakers will include Ray Kurzweil, author of The Singularity is Near; James Randi, magician-skeptic and founder of the James Randi Educational Foundation; Terry Sejnowski, computational neuroscientist; Irene Pepperberg, pioneering researcher in animal intelligence; David Hanson, creator of the world’s most realistic human-like robots; and many more. In all, the conference will feature over twenty speakers, many of them scientists presenting their latest research on topics such as intelligence enhancement and regenerative medicine.

A variety of discounts are available for those wanting to attend the conference for less. If you register by midnight PST on Thursday, July 1st, the price is $485, which is $200 less than the cost of a ticket at the door ($685). Registration before August 1st is $585, and from August 1st until the conference the price is $685. The sooner you register, the more you save.

Additional discounts are available for students, $1,000+ SIAI donors, and attendees who refer others who pay full price (no student referrals). Students receive $100 off the current price, and attendees get a $100 discount per non-student referral. These discounts stack, so a student who refers four non-students who pay full price before the end of June can attend for free. You can ask us more about discounts at [email protected]. Your Singularity Summit ticket is a tax-deductible donation to SIAI, almost all of which goes to support our ongoing research and academic work.
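For anyone checking the arithmetic on discount stacking, here is a tiny sketch assuming each discount simply subtracts from the current registration price, as described above. The function and constants are illustrative, not an official price calculator.

```python
# Illustrative only: prices and discount rules are taken from the paragraphs
# above, not from any official SIAI registration system.
EARLY_PRICE = 485          # register by midnight PST, July 1st
STUDENT_DISCOUNT = 100     # flat $100 off the current price
REFERRAL_DISCOUNT = 100    # $100 off per full-price, non-student referral

def ticket_cost(base_price, is_student=False, referrals=0):
    """Out-of-pocket cost after stacking the listed discounts (never below $0)."""
    cost = base_price
    if is_student:
        cost -= STUDENT_DISCOUNT
    cost -= REFERRAL_DISCOUNT * referrals
    return max(cost, 0)

# A student who refers four full-price non-students before the end of June:
print(ticket_cost(EARLY_PRICE, is_student=True, referrals=4))   # -> 0 (free)
```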

If you’ve been to a Singularity Summit before, you’ll know that the attendees are among the smartest and most ambitious people you’ll ever meet. Scientists, engineers, writers, reporters, philosophers, tech policy specialists, and entrepreneurs all join to discuss the most important questions of our time.

The full list of speakers is here: http://www.singularitysummit.com/program
The logistics page is here: http://www.singularitysummit.com/logistics

We hope to see you in San Francisco this August for an exciting conference!

Ray Kurzweil and David Chalmers to Headline Singularity Summit 2009 in New York
https://lifeboat.com/blog/2009/07/ray-kurzweil-and-david-chalmers-to-headline-singularity-summit-2009-in-new-york
Fri, 17 Jul 2009

The Singularity Institute will be holding the fourth annual Singularity Summit in New York in October, featuring talks by Ray Kurzweil, David Chalmers, and Peter Thiel.

New York, NY (PRWEB) July 17, 2009 — The fourth annual Singularity Summit, a conference devoted to the better understanding of increasing intelligence and accelerating change, will be held in New York on October 3–4 in Kaufmann Hall at the historic 92nd St Y. The Summit brings together a visionary community to further dialogue and action on complex, long-term issues that are transforming the world.

Participants will hear talks from cutting-edge researchers and network with strategic business leaders. The world’s most eminent experts on forecasting, venture capital, emerging technologies, consciousness and life extension will present their unique perspectives on the future and how to get there. “The Singularity Summit is the premier conference on the Singularity,” says Ray Kurzweil, inventor of the CCD flatbed scanner and author of The Singularity is Near. “As we get closer to the Singularity, each year’s conference is better than the last.”

The Singularity Summit has previously been held in the San Francisco Bay Area, where it has been featured in numerous publications including the front page of the San Francisco Chronicle. It is hosted by the Singularity Institute, a 501(c)(3) nonprofit devoted to studying the benefits and risks of advanced technologies.

Select Speakers

* Ray Kurzweil is the author of The Singularity is Near (2005) and co-founder of Singularity University, which is backed by Google and NASA. At the Singularity Summit, he will present his theories on accelerating technological change and the future of humanity.

* Dr. David Chalmers, director of the Centre for Consciousness at Australian National University and one of the world’s foremost philosophers, will discuss mind uploading — the possibility of transferring human consciousness onto a computer network.

* Dr. Ed Boyden is a joint professor of Biological Engineering and of Brain and Cognitive Sciences at MIT. Discover Magazine named him one of the 20 best brains under 40.

* Peter Thiel is the president of Clarium Capital, a seed investor in Facebook, managing partner of Founders Fund, and co-founder of PayPal.

* Dr. Aubrey de Grey is a biogerontologist and Director of Research at the SENS Foundation, which seeks to extend the human lifespan. He will present on the ethics of this proposition.

* Dr. Philip Tetlock is Professor of Organizational Behavior at the Haas School of Business, University of California, Berkeley, and author of Expert Political Judgment: How Good Is It? How Can We Know?

* Dr. Jürgen Schmidhuber is co-director of the Dalle Molle Institute for Artificial Intelligence in Lugano, Switzerland. He will discuss the mathematical essence of beauty and creativity.

* Dr. Gary Marcus is director of the NYU Infant Language Learning Center, professor of psychology at New York University, and author of the book Kluge.

See the Singularity Summit website at http://www.singularitysummit.com/.

The Singularity Summit is hosted by the Singularity Institute for Artificial Intelligence.

SRA Proposal Accepted
https://lifeboat.com/blog/2008/07/sra-proposal-accepted
Thu, 31 Jul 2008

My proposal for the Society for Risk Analysis’s annual meeting in Boston has been accepted, in oral presentation format, for the afternoon of Wednesday, December 10th, 2008. Any Lifeboat members who will be in the area at the time are more than welcome to attend. Any suggestions for content would also be greatly appreciated; speaking time is limited to 15 minutes, with 5 minutes for questions. The abstract for the paper is as follows:

Global Risk: A Quantitative Analysis

The scope and possible impact of global, long-term risks present a unique challenge to humankind. The analysis and mitigation of such risks is extremely important, as they have the potential to affect billions of people worldwide; however, little systematic analysis has been done to determine the best strategies for overall mitigation. Direct, case-by-case analysis can be combined with standard probability theory, particularly Laplace’s rule of succession, to calculate the probability of any given risk, the scope of the risk, and the effectiveness of potential mitigation efforts. This methodology can be applied both to well-known risks, such as global warming, nuclear war, and bio-terrorism, and to lesser-known or unknown risks. Although well-known risks are shown to be a significant threat, analysis strongly suggests that avoiding the risks of technologies which have not yet been developed may pose an even greater challenge. Eventually, some type of further quantitative analysis will be necessary for effective apportionment of government resources, as traditional indicators of risk level, such as press coverage and human intuition, can be shown to be inaccurate, often by many orders of magnitude.

More details are available online at the Society for Risk Analysis’s website. James Blodgett will be presenting on the precautionary principle two days earlier (Monday, Dec. 8th).
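To make the methodology in the abstract a bit more concrete, here is a minimal sketch of the kind of calculation involved. The observation window, scope, and mitigation numbers below are hypothetical placeholders, not figures from the paper; the only real content is Laplace’s rule of succession and ordinary expected-value arithmetic.

```python
# Illustrative sketch of the calculation style described in the abstract:
# estimate a risk's probability with Laplace's rule of succession, then weigh
# probability, scope, and mitigation effectiveness against each other.
# All inputs below are hypothetical placeholders, not results from the paper.

def laplace_probability(occurrences, trials):
    """Laplace's rule of succession: P(event on next trial) = (s + 1) / (n + 2)."""
    return (occurrences + 1) / (trials + 2)

def expected_lives_saved(p_per_year, years, people_affected, risk_reduction):
    """Expected number of people protected by a mitigation effort over a horizon."""
    p_over_horizon = 1 - (1 - p_per_year) ** years
    return p_over_horizon * people_affected * risk_reduction

# Hypothetical risk: never observed in 60 years of watching for it, would
# affect ~1 billion people, and a mitigation effort cuts the risk by 10%.
p = laplace_probability(occurrences=0, trials=60)
print(p)                                                   # ~0.016 per year
print(expected_lives_saved(p, years=100,
                           people_affected=1e9,
                           risk_reduction=0.1))            # ~8e7
```

Ranking candidate mitigation efforts then amounts to comparing expected values like these against their costs.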

Spending Effectively
https://lifeboat.com/blog/2008/02/spending-effectively
Mon, 04 Feb 2008

Last year, the Singularity Institute raised over $500,000. The World Transhumanist Association raised $50,000. The Lifeboat Foundation set a new record for the single largest donation. The Center for Responsible Nanotechnology’s finances are combined with those of World Care, a related organization, so the public can’t get precise figures. But overall, it’s safe to say, we’ve been doing fairly well. Most not-for-profit organizations aren’t funded adequately; it’s rare for charities, even internationally famous ones, to have a large full-time staff, a physical headquarters, etc.

The important question is, now that we’ve accumulated all of this money, what are we going to spend it on? It’s possible, theoretically, to put it all into Treasury bonds and forget about it for thirty years, but that would be an enormous waste of expected utility. In technology development, the earlier the money is spent (in general), the larger the effect will be. Spending $1M on a technology in the formative stages has a huge impact, probably doubling the overall budget or more. Spending $1M on a technology in the mature stages won’t even be noticed. We have plenty of case studies: Radios. TVs. Computers. Internet. Telephones. Cars. Startups.

The opposite danger is overfunding the project, commonly called “throwing money at the problem”. Hiring a lot of new people without thinking about how they will help is one common symptom. Having bloated layers of middle management is another. To an outside observer, it probably seems like we’re reaching this stage already. Hiring a Vice President In Charge Of Being In Charge doesn’t just waste money; it causes the entire organization to lose focus and distracts everyone from the ultimate goal.

I would suggest a top-down approach: start with the goal, figure out what you need, and get it. The opposite approach is to look for things that might be useful, get them, then see how you can complete a project with the stuff you’ve acquired. NASA is an interesting case study, as they followed the first strategy for a number of years, then switched to the second one.

The second strategy is useful at times, particularly when the goal is constantly changing. Paul Graham suggests using it as a strategy for personal success, because the ‘goal’ is changing too rapidly for any fixed plan to remain viable. “Personal success” in 2000 is very different from “success” in 1980, which was different from “success” in 1960. If Kurzweil’s graphs are accurate, “success” in 2040 will be so alien that we won’t even be able to recognize it.

But when the goal is clear (save the Universe, create an eternal utopia, develop new technology X), you simply need to smash through whatever problems show up. Apparently, money has been the main blocker for some time, and it looks like we’ve overcome that, in the short term, through large-scale fundraising. There’s a large body of literature out there on how to deal with organizational problems; thousands of people have done this stuff before. I don’t know what the main blocker is now, but odds are it’s in there somewhere.

Risks Not Worth Worrying About
https://lifeboat.com/blog/2007/08/risks-not-worth-worrying-about
Wed, 22 Aug 2007

There are dozens of published existential risks; there are undoubtedly many more that Nick Bostrom did not think of in his paper on the subject. Ideally, the Lifeboat Foundation and other organizations would identify each of these risks and take action to combat them all, but this simply isn’t realistic. We have a finite budget and a finite number of man-hours to spend on the problem, and our resources aren’t even particularly large compared with other non-profit organizations. If Lifeboat or other organizations are going to take serious action against existential risk, we need to identify the areas where we can do the most good, even at the expense of ignoring other risks. Humans like to totally eliminate risks, but this is a cognitive bias; it does not correspond to the most effective strategy. In general, when assessing existential risks, there are a number of useful heuristics:

- Any risk which has become widely known, or an issue in contemporary politics, will probably be very hard to deal with. Thus, even if it is a legitimate risk, it may be worth putting on the back burner; there’s no point in spending millions of dollars for little gain.

- Any risk which is totally natural (could happen without human intervention) must be highly improbable, as we know we have been on this planet for a hundred thousand years without getting killed off. To estimate the probability of these risks, use Laplace’s Law of Succession; a short worked example follows this list.

- Risks whose probability we cannot affect can be safely ignored. It does us little good to know that there is a 1% chance of doom next Thursday if we can’t do anything about it.
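Here is the worked example promised in the natural-risk heuristic above: a minimal sketch of the Laplace estimate, assuming only the hundred-thousand-year survival figure quoted there and zero observed extinction-level events. Everything else is arithmetic.

```python
# A minimal sketch of the Laplace's-rule estimate for purely natural risks,
# assuming ~100,000 years of human history with zero extinction-level events
# (the survival figure is the one quoted above; nothing else is assumed).

years_survived = 100_000
extinctions_observed = 0

# Rule of succession: P(event on the next trial) = (s + 1) / (n + 2)
p_per_year = (extinctions_observed + 1) / (years_survived + 2)

# Chance of at least one such event over the next century
p_next_century = 1 - (1 - p_per_year) ** 100

print(f"Annual probability:       {p_per_year:.2e}")       # ~1.0e-05
print(f"Next-century probability: {p_next_century:.2e}")   # ~1.0e-03
```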

Some specific risks which can be safely ignored:

- Particle accelerator accidents. We don’t yet know enough high-energy physics to say conclusively that a particle accelerator could never create a true vacuum, stable strangelet, or another universe-destroying particle. Luckily, we don’t have to; cosmic rays have been bombarding us for the past four billion years, with energies a million times higher than anything we can create in an accelerator. If it were possible to annihilate the planet with a high-energy particle collision, it would have happened already.

- The simulation gets shut down. The idea that “the universe is a simulation” is equally good at explaining every outcome: no matter what happens in the universe, you can concoct some reason why the simulators would engineer it. Which specific actions would make the universe safer from being shut down? We have no clue, and barring a revelation from On High, we have no way to find out. If we do try to take action to stop the universe from being shut down, it could just as easily make the risk worse.

- A long list of natural scenarios. To quote Nick Bostrom: “solar flares, supernovae, black hole explosions or mergers, gamma-ray bursts, galactic center outbursts, supervolcanos, loss of biodiversity, buildup of air pollution, gradual loss of human fertility, and various religious doomsday scenarios.” We can’t prevent most of these anyway, even if they were serious risks.

Some specific risks which should be given lower priority:

- Asteroid impact. This is a serious risk, but it still has a fairly low probability, on the order of one in 10^5 to 10^7 for something that would threaten the human species within the next century or so. Mitigation is also likely to be quite expensive compared to other risks.

- Global climate change. While this is fairly probable, the impact isn’t likely to be severe enough to qualify as an existential risk. The IPCC Fourth Assessment Report concluded that it is “very likely” that there will be more heat waves and heavy rainfall events, and “likely” that there will be more droughts, hurricanes, and extreme high tides; these do not qualify as existential risks, or even as anything particularly serious. We know from past temperature data that the Earth can warm by 6–9 °C on a fairly short timescale without causing a permanent collapse or even a mass extinction. Additionally, climate change has become a political problem, making it next to impossible to implement serious measures without a massive effort.

- Nuclear war is a special case, because although we can’t do much to prevent it, we can take action to prepare for it in case it does happen. We don’t even have to think about the best ways to prepare; there are already published, reviewed books detailing what can be done to seek safety in the event of a nuclear catastrophe. I firmly believe that every transhumanist organization should have a contingency plan in the event of nuclear war, economic depression, a conventional WWIII or another political disaster. This planet is too important to let it get blown up because the people saving it were “collateral damage”.

- Terrorism. It may be the bogeyman of the decade, but terrorists are not going to deliberately destroy the Earth; terrorism is a political tool with political goals that require someone to be alive. While terrorists might do something stupid which results in an existential risk, “terrorism” isn’t a special case that we need to separately plan for; a virus, nanoreplicator, or UFAI (unfriendly AI) is just as deadly regardless of where it comes from.
