
Yann LeCun, chief AI scientist at Facebook, helped develop the deep learning algorithms that power many of today's artificial intelligence systems. In conversation with Chris Anderson, head of TED, LeCun discusses his current research into self-supervised machine learning, how he's trying to build machines that learn with common sense (like humans), and his hopes for the next conceptual breakthrough in AI.

This talk was presented at an official TED conference.

The U.S. intelligence community (IC) on Thursday rolled out an “ethics guide” and framework for how intelligence agencies can responsibly develop and use artificial intelligence (AI) technologies.

Among the key ethical requirements were shoring up security, respecting human dignity by complying with existing civil rights and privacy laws, rooting out bias to ensure AI use is "objective and equitable," and ensuring human judgment is incorporated into AI development and use.

The IC wrote in the framework, which digs into the details of the ethics guide, that it was intended to ensure that use of AI technologies matches “the Intelligence Community’s unique mission purposes, authorities, and responsibilities for collecting and using data and AI outputs.”

Many computational properties are maximized when the dynamics of a network are at a "critical point," a state where systems can quickly change their overall characteristics in fundamental ways, transitioning, for example, between order and chaos or between stability and instability. Therefore, the critical state is widely assumed to be optimal for computation in recurrent neural networks, which are used in many AI applications.
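As an illustration of what "tuning toward a critical point" can mean in practice, here is a minimal numpy sketch (not from the article) that rescales a recurrent weight matrix so its spectral radius is exactly 1, the boundary between decaying and exploding dynamics used in echo-state-network practice:

```python
import numpy as np

# Illustrative sketch: a random recurrent network tuned toward the
# "critical point" by normalizing the spectral radius of its recurrent
# weight matrix to 1.0 (the edge between stable and unstable dynamics).
rng = np.random.default_rng(0)
n = 100
W = rng.standard_normal((n, n)) / np.sqrt(n)

rho = float(max(abs(np.linalg.eigvals(W))))  # current spectral radius
W_critical = W / rho                         # spectral radius == 1.0

# Iterate the dynamics x_{t+1} = tanh(W x_t); near criticality, activity
# neither dies out immediately nor blows up.
x = rng.standard_normal(n) * 0.1
for _ in range(50):
    x = np.tanh(W_critical @ x)
print(float(np.linalg.norm(x)))
```

Below a spectral radius of 1 the same iteration collapses to zero, and well above it the dynamics saturate; the critical regime sits between the two.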

A new general language machine learning model is pushing the boundaries of what AI can do.

Why it matters: OpenAI’s GPT-3 system can reasonably make sense of and write human language. It’s still a long way from genuine artificial intelligence, but it may be looked back on as the iPhone of AI, opening the door to countless commercial applications — both benign and potentially dangerous.

Driving the news: After announcing GPT-3 in a paper in May, OpenAI recently began offering a select group of people access to the system’s API to help the nonprofit explore the AI’s full capabilities.

A Russian Soyuz rocket will launch a robotic cargo ship packed with tons of supplies to the International Space Station Thursday (July 29), and you can watch the launch live.

Roscosmos, Russia’s space agency, will launch the uncrewed Progress 76 supply ship to the station at 10:26 a.m. EDT (1426 GMT) from Baikonur Cosmodrome in Kazakhstan, where the local time will be 7:26 p.m. You can watch the launch live here and on the Space.com homepage, courtesy of NASA TV.

Machine learning performed by neural networks is a popular approach to developing artificial intelligence, as researchers aim to replicate brain functionalities for a variety of applications.

A paper in the journal Applied Physics Reviews, by AIP Publishing, proposes a new approach to perform the computations required by a neural network, using light instead of electricity. In this approach, a photonic tensor core performs multiplications of matrices in parallel, improving the speed and efficiency of current deep learning paradigms.

In machine learning, neural networks are trained to perform decision-making and classification on unseen data. Once a neural network is trained on data, it can produce an inference to recognize and classify objects and patterns and find a signature within the data.
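The inference step described above reduces almost entirely to matrix multiplication, which is exactly the operation a photonic tensor core would accelerate. A hypothetical sketch with random stand-in weights (not trained values):

```python
import numpy as np

# Sketch: inference in a small dense network is a chain of matrix
# multiplications -- the workload a photonic tensor core targets.
# The weights here are random stand-ins, not trained parameters.
rng = np.random.default_rng(1)
W1 = rng.standard_normal((16, 8))   # layer-1 weights (stand-in)
W2 = rng.standard_normal((8, 3))    # layer-2 weights (stand-in)

def infer(x):
    """Classify one 16-dim input: two matmuls plus a nonlinearity."""
    h = np.tanh(x @ W1)              # hidden layer
    logits = h @ W2                  # output layer
    return int(np.argmax(logits))    # predicted class index (0, 1, or 2)

label = infer(rng.standard_normal(16))
print(label)
```

Because both layers are plain matrix products, swapping the electronic matmul for an optical one leaves the rest of the pipeline unchanged.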

However, to dismiss the subject as fantastical or unnecessary would be akin to telling scientists 100 years ago that landing on the moon was irrelevant.

This is because, for pioneers and champions of artificial intelligence, quantum computing is the holy grail. It’s not a make-believe fantasy; rather, it’s a tangible area of science that will take our probability-driven world into a whole new dimension.

The core of GPT-3, which is a creation of OpenAI, an artificial intelligence company based in San Francisco, is a general language model designed to perform autofill. It is trained on uncategorized internet writings, and basically guesses what text ought to come next from any starting point. That may sound unglamorous, but a language model built for guessing with 175 billion parameters — 10 times more than previous competitors — is surprisingly powerful.
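The "guessing what text ought to come next" objective can be shown at toy scale. The bigram counter below (my illustration, vastly smaller than GPT-3's 175 billion parameters) predicts the next word from the previous one, the same autofill task scaled down enormously:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which, then autofill
# by picking the most frequent successor. Same objective as GPT-3's
# next-token prediction, minus roughly 175 billion parameters.
corpus = "the cat sat on the mat and the cat slept".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def autofill(word):
    """Return the most frequent word seen after `word` in the corpus."""
    return counts[word].most_common(1)[0][0]

print(autofill("the"))  # -> "cat" (follows "the" twice, vs. "mat" once)
```

GPT-3 replaces the frequency table with a 175-billion-parameter neural network and conditions on long spans of context rather than one word, but the training signal is the same: predict what comes next.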


With attention focused on a pandemic and an election, AI has taken a major leap forward.