
Until now, the ability to make gray goo has been purely theoretical. However, scientists at Columbia University's School of Engineering and Applied Science have made a significant step in that direction: the individual components they built are computationally simple, yet together they can exhibit complex collective behavior.


Current robots are usually self-contained entities made of interdependent subcomponents, each with a specific function. If one part fails, the robot stops working. In robotic swarms, each robot is an independently functioning machine.

In a new study published today in Nature, researchers at Columbia Engineering and the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL) demonstrate for the first time a way to make a robot composed of many loosely coupled components, or “particles.” Unlike swarm or modular robots, each component is simple and has no individual address or identity. In their system, which the researchers call a “particle robot,” each particle can perform only uniform volumetric oscillations (slightly expanding and contracting) and cannot move independently.

The team, led by Hod Lipson, professor of mechanical engineering at Columbia Engineering, and CSAIL Director Daniela Rus, discovered that when they grouped thousands of these particles together in a “sticky” cluster and made them oscillate in reaction to a light source, the entire particle robot slowly began to move forward, towards the light.
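The mechanism described above — many identical units whose only action is to expand and contract, with oscillation timing keyed to a light source — can be sketched in a toy 1-D simulation. Everything below is an illustrative assumption, not the paper's actual model: particles are treated as a chain, the phase lag of each particle's oscillation grows with its distance from the light, and whichever particle is currently most expanded is assumed to anchor the chain through ground friction, so the traveling expansion wave ratchets the cluster along.

```python
import math

# Toy 1-D model of phase-offset volumetric oscillation producing net motion.
# All modeling choices here are assumptions for illustration, not taken from
# the Nature paper: a chain of particles, a sinusoidal radius, a phase lag
# proportional to distance from the light (at index 0), and an "anchor"
# rule where the most-expanded particle is pinned by friction.

N = 20             # particles in the chain
R0, A = 1.0, 0.3   # rest radius and oscillation amplitude
PHASE_LAG = 0.6    # phase offset per particle away from the light

def radii(t):
    """Radius of every particle at time t (always positive since A < R0)."""
    return [R0 + A * math.sin(t - PHASE_LAG * i) for i in range(N)]

def centroid(t, anchor_pos, anchor_idx):
    """Chain positions and centroid when particle `anchor_idx` is pinned."""
    r = radii(t)
    pos = [0.0] * N
    pos[anchor_idx] = anchor_pos
    # center-to-center spacing of touching particles is the sum of their radii
    for i in range(anchor_idx + 1, N):
        pos[i] = pos[i - 1] + r[i - 1] + r[i]
    for i in range(anchor_idx - 1, -1, -1):
        pos[i] = pos[i + 1] - (r[i] + r[i + 1])
    return pos, sum(pos) / N

# Step the clock; at each step the most-expanded particle becomes the anchor.
anchor_idx, anchor_pos = 0, 0.0
t, dt = 0.0, 0.1
start = None
for step in range(2000):
    r = radii(t)
    new_anchor = max(range(N), key=lambda i: r[i])
    pos, c = centroid(t, anchor_pos, anchor_idx)
    if start is None:
        start = c
    # hand the anchor role off without moving any particle at this instant
    anchor_idx, anchor_pos = new_anchor, pos[new_anchor]
    t += dt

print("net centroid drift:", c - start)
```

Running it prints the accumulated centroid drift: with in-phase oscillation the chain merely breathes in place, while the phase-offset wave lets the friction ratchet accumulate displacement, which is the qualitative point of the article.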

In 1965, I. J. Good first described the notion of an “intelligence explosion” as it relates to artificial intelligence (AI):

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

Decades later, the concept of an “intelligence explosion” — leading to the sudden rise of “superintelligence” and the accidental end of the human race — has taken hold in the AI community. Famous business leaders are casting it as a major risk, greater than nuclear war or climate change. Average graduate students in machine learning are endorsing it. In a 2015 email survey targeting AI researchers, 29% of respondents answered that intelligence explosion was “likely” or “highly likely”. A further 21% considered it a serious possibility.

Read more

Imagine a future where electronic text messaging is tracked by an intelligent algorithm that can distinguish truth from lies. A new study from two US researchers suggests this kind of online polygraph is entirely possible, with early experiments showing a machine learning algorithm can separate truth from lies based on text cues alone over 85 percent of the time.
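The study's features and model are not given in this excerpt, so the following is only a generic sketch of text-cue classification: a bag-of-words Naive Bayes classifier in plain Python. The training sentences, the "hedging" word cues, and the labels are all invented toy data for illustration and have nothing to do with the researchers' dataset or accuracy figure.

```python
import math
from collections import Counter

# Toy text classifier sketch. The training messages and labels below are
# made up purely for illustration; they are NOT from the study.
train = [
    ("i was at the office all day and left at six", "truth"),
    ("the report is attached and the numbers are final", "truth"),
    ("honestly i swear i never saw that email at all", "lie"),
    ("to be honest i definitely never touched the money", "lie"),
]

def tokens(text):
    return text.lower().split()

# Per-class word counts for Naive Bayes with Laplace (add-one) smoothing.
counts = {"truth": Counter(), "lie": Counter()}
docs = Counter()
for text, label in train:
    counts[label].update(tokens(text))
    docs[label] += 1
vocab = set(w for c in counts.values() for w in c)

def log_prob(text, label):
    """Log P(label) + sum of smoothed log P(word | label)."""
    total = sum(counts[label].values())
    lp = math.log(docs[label] / len(train))
    for w in tokens(text):
        lp += math.log((counts[label][w] + 1) / (total + len(vocab)))
    return lp

def classify(text):
    return max(("truth", "lie"), key=lambda lb: log_prob(text, lb))

print(classify("i swear i never saw the email"))  # hedging cues -> "lie"
```

A real system along the lines the article describes would use far richer cues (word choice, message timing, response length) and much more data, but the underlying idea — scoring text against per-class word statistics — is the same.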

Read more

Three fundamental beliefs guide Stanford’s new Institute for Human-Centered Artificial Intelligence, co-directed by John Etchemendy and Fei-Fei Li: AI technology should be inspired by human intelligence; the development of AI must be guided by its human impact; and applications of AI should enhance and augment humans, not replace them.


The new institute will focus on guiding artificial intelligence to benefit humanity.

Read more

R2ai founder and CEO Yiwen Huang talks to Interesting Engineering in an exclusive interview about how he went from a research lab to building a company where AI creates machine learning models, and why he believes AI will augment rather than replace human jobs in the future.

Read more

Japan is hoping to play a lead role in crafting international rules on what have been called lethal autonomous weapons systems, or LAWS.


Japan is planning to give its backing to international efforts to regulate the development of lethal weapons controlled by artificial intelligence at a UN conference in Geneva late this month, government sources said Saturday.

It would mark a departure from Japan’s current stance. The government already opposed the development of so-called killer robots capable of killing without human involvement, but it had called for careful discussion of any rules to ensure that commercial development of AI would not be hampered.

With the policy shift, Japan is hoping to play a leading role in crafting international rules on what have been called lethal autonomous weapons systems, or LAWS, the sources said.