
Elon Musk signaled plans to scale Tesla to the “extreme” while teasing the release of Tesla’s “Master Plan Part 3” on Twitter one day before opening the automaker’s first European factory.

On Monday, Musk revealed on Twitter the themes that will dominate the next installment in Tesla’s long-term playbook: artificial intelligence and scaling the automaker’s operations.

“Main Tesla subjects will be scaling to extreme size, which is needed to shift humanity away from fossil fuels, and AI,” Musk tweeted. “But I will also include sections about SpaceX, Tesla and The Boring Company.”

What is the next step toward bridging the gap between natural and artificial intelligence? Scientists and researchers are divided on the answer. Yann LeCun, Chief AI Scientist at Meta and the recipient of the 2018 Turing Award, is betting on self-supervised learning, in which machine learning models are trained without the need for human-labeled examples.
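To make "training without human-labeled examples" concrete, here is a minimal sketch of the idea. It assumes synthetic 1-D signals and a plain least-squares model rather than anything LeCun or Meta actually uses: each training target is a masked sample of the signal itself, so the supervision comes entirely from the data.

```python
# Toy self-supervised setup: mask the centre sample of each window and learn
# to predict it from its unmasked neighbours. No human annotation is involved.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "unlabeled" data: noisy sine-like signals of length 9.
t = np.linspace(0, 2 * np.pi, 9)
signals = np.sin(t[None, :] * rng.uniform(0.5, 2.0, size=(5000, 1))
                 + rng.uniform(0, 2 * np.pi, size=(5000, 1)))
signals += 0.05 * rng.standard_normal(signals.shape)

# Self-supervised target: the masked centre sample of each window.
centre = signals.shape[1] // 2
targets = signals[:, centre]                 # label derived from the data itself
inputs = np.delete(signals, centre, axis=1)  # the visible context

# Least-squares "model" that reconstructs the masked sample from its context.
weights, *_ = np.linalg.lstsq(inputs, targets, rcond=None)

predictions = inputs @ weights
mse = np.mean((predictions - targets) ** 2)
print(f"reconstruction MSE on masked samples: {mse:.4f}")
```

Scaled up to masked words or image patches and far larger models, the same derive-the-label-from-the-data trick underlies much of today's self-supervised work.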

LeCun has been thinking and talking about self-supervised and unsupervised learning for years. But as his research and the fields of AI and neuroscience have progressed, his vision has converged around several promising concepts and trends.

In a recent event held by Meta AI, LeCun discussed possible paths toward human-level AI, challenges that remain and the impact of advances in AI.

A.I. is only beginning to show what it can do for modern medicine.

In today’s society, artificial intelligence (A.I.) is mostly used for good. But what if it were not?

Naive thinking

“The thought had never previously struck us. We were vaguely aware of security concerns around work with pathogens or toxic chemicals, but that did not relate to us; we primarily operate in a virtual setting. Our work is rooted in building machine learning models for therapeutic and toxic targets to better assist in the design of new molecules for drug discovery,” wrote the researchers in their paper. “We have spent decades using computers and A.I. to improve human health—not to degrade it. We were naive in thinking about the potential misuse of our trade, as our aim had always been to avoid molecular features that could interfere with the many different classes of proteins essential to human life.”

A team of researchers affiliated with multiple institutions in China and the U.S. has found that it is possible to track the sliding of grain boundaries in some metals at the atomic scale using an electron microscope and an automatic atom tracker. In their paper published in the journal Science, the group describes their study of platinum using their new technique and the discovery they made in doing so.

Scientists have been studying the properties of metals for many years. Learning more about how crystal grains in certain metals interact with one another has led to the development of new kinds of metals and applications for their use. In their recent effort, the researchers took a novel approach to studying the sliding that occurs between grains and in so doing have learned something new.

When crystalline metals are deformed, the grains that they are made of move against one another, and the way they move determines many of their properties, such as malleability. To learn more about what happens between grains in such metals during deformation, the researchers combined two technologies: electron microscopy and automated atom tracking.

For self-driving cars and other applications developed using AI, you need what’s known as “deep learning”, the core concepts of which emerged in the ’50s. This involves training models whose structure is loosely patterned on the human brain. That, in turn, requires a large amount of compute power, as afforded by TPUs (tensor processing units) or GPUs (graphics processing units) running for lengthy periods. However, the cost of this compute power is out of reach for most AI developers, who largely rent it from cloud computing platforms such as AWS or Azure. What is to be done?

Well, one approach is that taken by U.K. startup Gensyn. It has taken the distributed-computing idea behind older projects such as SETI@home and the COVID-19-focused Folding@home and applied it to AI developers’ demand for deep learning compute. The result is a way to get high-performance compute power from a distributed network of computers.

Gensyn has now raised a $6.5 million seed round led by Eden Block, a web3 VC. Also participating in the round are Galaxy Digital, Maven 11, Coinfund, Hypersphere, Zee Prime and founders from some blockchain protocols. This adds to a previously unannounced pre-seed investment of $1.1 million in 2021, led by 7percent Ventures and Counterview Capital, with participation from Entrepreneur First and id4 Ventures.

Four-legged robots are nothing novel: Boston Dynamics’ Spot has been making the rounds for some time, as have countless open source alternatives. But researchers at MIT claim that theirs has broken the record for the fastest robot run ever recorded. Working out of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), the team says it developed a system that allows the MIT-designed Mini Cheetah to learn to run by trial and error in simulation.

While the speedy Mini Cheetah has limited direct applications in the enterprise, the researchers believe that their technique could be used to improve the capabilities of other robotics systems — including those used in factories to assemble products before they’re shipped to customers. It’s timely work as the pandemic accelerates the adoption of autonomous robots in industry. According to an Automation World survey, 44.9% of the assembly and manufacturing facilities that currently use robots consider the robots to be an integral part of their operations.

Today’s cutting-edge robots are “taught” to perform tasks through reinforcement learning, a type of machine learning technique that enables robots to learn by trial and error using feedback from their own actions and experiences. When a robot performs a “right” action — i.e., an action that’ll lead it toward a desired goal, like stowing an object on a shelf — it receives a “reward.” When it makes a mistake, the robot either doesn’t receive a reward or is “punished” by losing a previous reward. Over time, the robot discovers ways to maximize its reward and perform actions that achieve the sought-after goal.
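That reward-driven loop is simple enough to sketch in a few lines. The toy example below is not the MIT/CSAIL training setup (which trains the Mini Cheetah in large-scale physics simulation); it is plain tabular Q-learning on a six-cell corridor where the only reward is for reaching the last cell, just to make the trial-and-error mechanics concrete.

```python
# Minimal reinforcement learning loop: the agent starts in cell 0 and is
# rewarded only when it reaches the final cell of a six-cell corridor.
import random

N_STATES = 6          # cells 0..5; cell 5 is the goal and ends the episode
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

# Q-table: estimated future reward for each (state, action) pair.
q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally (or when estimates are tied), otherwise exploit.
        if random.random() < EPSILON or q[state][0] == q[state][1]:
            a = random.randrange(2)
        else:
            a = 0 if q[state][0] > q[state][1] else 1
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0   # the "reward"
        # Nudge the estimate toward reward plus discounted future value.
        q[state][a] += ALPHA * (reward + GAMMA * max(q[next_state]) - q[state][a])
        state = next_state

print("learned action per cell (0 = left, 1 = right):",
      [0 if cell[0] > cell[1] else 1 for cell in q[:-1]])
```

After a few hundred episodes the table settles on “step right” in every cell; robot-scale systems replace the table with a neural network and the corridor with a physics simulator, but the reward-and-update loop is the same.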

The researchers simulated the molecule H4, molecular nitrogen, and solid diamond. These systems involved as many as 120 orbitals, the patterns of electron density formed in atoms or molecules by one or more electrons. They are the largest chemistry simulations performed to date with the help of quantum computers.

A classical computer actually handles most of this fermionic quantum Monte Carlo simulation. The quantum computer steps in during the last, most computationally complex step—calculating the differences between the estimates of the ground state made by the quantum computer and the classical computer.
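The division of labor can be pictured with a deliberately simplified sketch. The code below is not the researchers’ algorithm: it is a toy variational Monte Carlo calculation for a 1-D harmonic oscillator in which the classical side does all the sampling, and a stub function stands in for the quantity a quantum processor would supply, so the printed “difference” only shows where the hybrid hand-off sits.

```python
# Illustrative classical/quantum split (all assumptions: toy 1-D harmonic
# oscillator, natural units, and a stubbed "quantum" step with no hardware).
import math
import random

def classical_vmc_estimate(a: float = 0.4, n_samples: int = 100_000) -> float:
    """Variational Monte Carlo energy with trial wavefunction exp(-a*x^2).
    The exact ground-state energy of this oscillator is 0.5."""
    sigma = 1.0 / (2.0 * math.sqrt(a))   # |psi_T|^2 is Gaussian with this sigma
    total = 0.0
    for _ in range(n_samples):
        x = random.gauss(0.0, sigma)
        total += a + (0.5 - 2.0 * a * a) * x * x   # local energy E_L(x)
    return total / n_samples

def quantum_side_estimate() -> float:
    """Placeholder for the value a quantum processor would provide; here it is
    simply the known exact result."""
    return 0.5

classical = classical_vmc_estimate()
quantum = quantum_side_estimate()
print(f"classical Monte Carlo estimate: {classical:.4f}")
print(f"quantum-side reference (stub):  {quantum:.4f}")
print(f"difference fed back to correct the classical result: {classical - quantum:+.4f}")
```

In the real hybrid scheme, the quantum processor’s contribution is far more involved than a single reference number, but the overall shape is the same: a classical Monte Carlo workhorse with a quantum correction at the most expensive step.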

The prior record for chemical simulations with quantum computing employed 12 qubits and a kind of hybrid algorithm known as a variational quantum eigensolver (VQE). However, VQEs possess a number of limitations compared with this new hybrid approach. For example, when one wants a very precise answer from a VQE, even a small amount of noise in the quantum circuitry “can cause enough of an error in our estimate of the energy or other properties that’s too large,” says study coauthor William Huggins, a quantum physicist at Google Quantum AI in Mountain View, Calif.