Advisory Board

Eliezer S. Yudkowsky

Chapter two of The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil begins with the following quote from Eliezer S. Yudkowsky:

Our sole responsibility is to produce something smarter than we are; any problems beyond that are not ours to solve… There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards [in level of intelligence], and some problems will suddenly move from “impossible” to “obvious”. Move a substantial degree upwards, and all of them will become obvious.

Eliezer S. Yudkowsky is one of the foremost experts on the Singularity and on the development of Friendly Artificial Intelligence; his writings on the subject include the article What is Friendly AI?, published on KurzweilAI.net. The Lifeboat Foundation endorses his proposal for Friendly AI as the solution to advances in Robotics.
 
Eliezer cofounded the Singularity Institute for Artificial Intelligence and is best known for his activist stance on the Singularity: that it will enormously benefit humanity, and that we should therefore try to accelerate it. He started the first Singularity mailing list to organize existing Singularity advocates into a community and, slightly over a year later, helped found the first nonprofit devoted solely to the in-depth study and direct implementation of the Singularity.
 
Eliezer has spoken to a variety of audiences (venture capitalists, futurists, technologists) about his theory of Friendly AI and the critical importance of the Singularity Institute’s mission. He is the author of the SIAI publication Levels of Organization in General Intelligence, which will appear as a chapter in the forthcoming book Real AI: New Approaches to Artificial General Intelligence, edited by Goertzel and Pennachin. He has also written General Intelligence and Seed AI, Staring into the Singularity, The Singularitarian Principles 1.0, and The Plan to Singularity.
 
His professional work focuses on Artificial Intelligence designs that enable self-understanding, self-modification, and recursive self-improvement (“seed AI”), and on Artificial Intelligence architectures that enable the creation of sustainable and improvable benevolence (“Friendly AI”). He has spoken on these topics at venues ranging from private corporations to Foresight gatherings.
 
Read this Question & Answer session with Eliezer, conducted by Tyler Emerson!
 
Listen to Eliezer at The Singularity Summit at Stanford.