
The implausibility of intelligence explosion

In 1965, I. J. Good described for the first time the notion of “intelligence explosion”, as it relates to artificial intelligence (AI):

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

Decades later, the concept of an “intelligence explosion” — leading to the sudden rise of “superintelligence” and the accidental end of the human race — has taken hold in the AI community. Famous business leaders are casting it as a major risk, greater than nuclear war or climate change. Average graduate students in machine learning are endorsing it. In a 2015 email survey targeting AI researchers, 29% of respondents answered that an intelligence explosion was “likely” or “highly likely”. A further 21% considered it a serious possibility.

Online polygraph separates truth from lies using just text-based cues

Imagine a future where electronic text messaging is tracked by an intelligent algorithm that can distinguish truth from lies. A new study from two US researchers suggests this kind of online polygraph is entirely possible, with early experiments showing a machine-learning algorithm can separate truth from lies based on text cues alone more than 85 percent of the time.
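
The blurb does not describe the researchers' actual features or model, but the task itself is standard text classification. As a loose, minimal sketch of how text cues alone can drive such a classifier, here is a hypothetical Python example using a TF-IDF bag-of-words pipeline; the messages, labels, and choice of model are illustrative assumptions, not the study's method:

```python
# Minimal sketch of a text-based deception classifier, in the spirit of the
# study described above. The study's actual features and model are not given
# in this summary; this assumes a simple TF-IDF pipeline with toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: messages labeled 1 (truthful) or 0 (deceptive).
messages = [
    "I left the office at five and went straight home.",
    "Honestly, I would never, ever do something like that, I swear.",
    "The package arrived on Tuesday morning.",
    "To be perfectly honest, I don't really remember anything about it.",
]
labels = [1, 0, 1, 0]

# TF-IDF over word unigrams and bigrams captures word-choice cues
# (e.g., overqualified denials) that deception-detection work often uses.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

print(model.predict(["I swear I was honestly at home the whole time."]))
```

In practice, deception-detection studies tend to engineer richer cues (hedging words, pronoun use, message timing), but the train-then-predict structure is the same.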

Stanford University launches the Institute for Human-Centered Artificial Intelligence

Three fundamental beliefs guide Stanford’s new Institute for Human-Centered Artificial Intelligence, co-directed by John Etchemendy and Fei-Fei Li: AI technology should be inspired by human intelligence; the development of AI must be guided by its human impact; and applications of AI should enhance and augment humans, not replace them.


The new institute will focus on guiding artificial intelligence to benefit humanity.

Artificial Intelligence Creates a New Generation of Machine Learning

R2ai founder and CEO Yiwen Huang talks to Interesting Engineering in an exclusive interview about how he started a company where AI creates machine learning models, and how AI is not going to replace human jobs but enhance them in the future.


In the interview, Huang describes how he went from a research lab to building an AI that creates AI, and why he believes AI will augment jobs rather than replace them.
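
The article does not explain R2ai's technology, but "AI that creates machine learning models" generally describes AutoML: automating the search over models and hyperparameters. As a hedged illustration of the simplest form of that idea, here is a sketch that picks the best of a few candidate models by cross-validation; the dataset and candidate list are illustrative only, not R2ai's system:

```python
# Minimal sketch of "AI that creates ML models": automated model selection,
# the simplest form of AutoML. R2ai's actual system is not described in the
# article; this just scores candidate models and keeps the best one.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # illustrative stand-in dataset

candidates = [
    LogisticRegression(max_iter=1000),
    DecisionTreeClassifier(max_depth=3),
    RandomForestClassifier(n_estimators=50),
]

# Score each candidate by 5-fold cross-validation and keep the winner.
best = max(candidates, key=lambda m: cross_val_score(m, X, y, cv=5).mean())
print(type(best).__name__)
```

Real AutoML systems extend this loop with automated feature engineering, smarter search strategies such as Bayesian optimization, and ensembling.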

Japan to back int’l efforts to regulate AI-equipped ‘killer robots’

Japan is hoping to play a lead role in crafting international rules on what have been called lethal autonomous weapons systems, or LAWS.


Japan is planning to give its backing to international efforts to regulate the development of lethal weapons controlled by artificial intelligence at a UN conference in Geneva late this month, government sources said Saturday.

It would mark a departure from Japan’s current stance. The government has opposed the development of so-called killer robots that could kill without human involvement, but until now it had called for careful discussion of any rules to ensure that commercial development of AI would not be hampered.

With the policy shift, Japan is hoping to play a leading role in crafting international rules on what have been called lethal autonomous weapons systems, or LAWS, the sources said.