
Lifeboat Foundation Advisory Board reaches 1,500 members

Abstract
 
The Lifeboat Foundation Advisory Board reaches 1,500 members.
 
Story
 
August 27, 2011 — John O. McGinnis, George C. Dix Professor in Constitutional Law at Northwestern University Law School, joins our Advisory Board, becoming our 1,500th member. He joins Ray Kurzweil, Nobel Laureates Eric S. Maskin and Wole Soyinka, and many other luminaries in pursuing the Lifeboat Foundation's goal of “Safeguarding Humanity”.
 
The Lifeboat Foundation Advisory Board is organized into 38 subboards, ranging from Human Trajectories to Particle Physics to Space Settlement, and has developed 26 programs, ranging from a BioShield program to a NanoShield program to a SecurityPreserver program. More information about these programs is available at https://lifeboat.com/ex/programs.
 
The Lifeboat Foundation is developing a world-class think tank with a rich cognitive diversity of philosophers, economists, biologists, nanotechnologists, AI researchers, educators, policy experts, engineers, lawyers, ethicists, futurists, neuroscientists, physicists, space experts, and other top thinkers to help humanity survive existential risks and possible misuse of increasingly powerful technologies, including genetic engineering, nanotechnology, and robotics/AI, as we move towards the Singularity.
 
Unlike most organizations, whose advisory boards are too small to do more than provide some advice, our think tank provides action as well as words. Our board members have developed programs, created reports, donated money, fueled our blog, created educational videos, joined our staff, launched numerous forums, organized events, and provided input on a range of issues from web design to grant proposals to ideas for new areas that Lifeboat Foundation should be involved in.
 


John O. McGinnis, M.A., J.D., is the George C. Dix Professor in Constitutional Law at Northwestern University Law School.
 
John authored the important paper “Accelerating AI”. In it, he describes why strong AI has a substantial possibility of becoming a reality and sketches the two threats that some ascribe to AI. He argues that neither relinquishing AI nor regulating it effectively is a workable response to such threats in a world of competing sovereign states, given that sovereign states can gain a military advantage from AI and that, even within states, it would be very difficult to prevent individuals from conducting AI research. Moreover, he suggests that AI-driven robots on the battlefield may actually lead to less destruction, becoming a civilizing force in war as well as an aid to civilization in its fight against terrorism. Finally, he offers reasons that friendly artificial intelligence can be developed to help rather than harm humanity, thus eliminating the existential threat.
 
He concludes by showing that, in contrast to a regime of prohibition or heavy regulation, a policy of government support for AI that follows principles of friendliness is the best approach to artificial intelligence. If friendly AI emerges, it may aid in preventing the emergence of less friendly versions of strong AI, as well as distinguish the real threats from the many potential benefits inherent in other forms of accelerating technology.
 
For more information about the Lifeboat Foundation Advisory Board, visit https://lifeboat.com/ex/boards.
 
###

About Lifeboat Foundation
 
The Lifeboat Foundation is a nonprofit nongovernmental organization dedicated to encouraging scientific advancements while helping humanity survive existential risks and possible misuse of increasingly powerful technologies, including genetic engineering, nanotechnology, and robotics/AI, as we move towards the Singularity.
 
Contacts:
 
Lifeboat Foundation News office
1468 James Rd
Gardnerville, NV 89460, USA
+1 (512) 548-6425
[email protected]