
On the pursuit of anyons (Majoranas), in light of the latest progress across multiple platforms.


Already, the graphene efforts have offered “a breath of fresh air” to the community, Alicea says. “It’s one of the most promising avenues that I’ve seen in a while.” Since leaving Microsoft, Zaletel has shifted his focus to graphene. “It’s clear that this is just where you should do it now,” he says.

But not everyone believes they will have enough control over the free-moving quasiparticles in the graphene system to scale up to an array of qubits—or that they can create big enough gaps to keep out intruders. Manipulating the quarter-charge quasiparticles in graphene is much more complicated than moving the Majoranas at the ends of nanowires, Kouwenhoven says. “It’s super interesting for physics, but for a quantum computer I don’t see it.”

Just across the parking lot from Station Q’s new office, a third kind of Majorana hunt is underway. In an unassuming black building branded Google AI Quantum, past the company rock-climbing wall and surfboard rack, a dozen or so proto–quantum computers dangle from workstations, hidden inside their chandelier-like cooling systems. Their chips contain arrays of dozens of qubits based on a more conventional technology: tiny loops of superconducting wires through which current oscillates between two electrical states. These qubits, like other standard approaches, are beset with errors, but Google researchers are hoping they can marry the Majorana’s innate error protection to their quantum chip.

Taking inspiration from the human brain, researchers have developed a new synaptic transistor capable of higher-level thinking.

Designed by researchers at Northwestern University, Boston College and the Massachusetts Institute of Technology (MIT), the device simultaneously processes and stores information, just like the human brain. In new experiments, the researchers demonstrated that the transistor goes beyond simple machine-learning tasks to categorize data and is capable of performing associative learning.

Although previous studies have leveraged similar strategies to develop brain-like computing devices, those transistors only function at cryogenic temperatures. The new device, by contrast, is stable at room temperature. It also operates at high speeds, consumes very little energy and retains stored information even when power is removed, making it ideal for real-world applications.
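
For readers unfamiliar with the term, associative learning simply means linking a stimulus to an outcome through repeated pairing. The short Python sketch below illustrates the idea with a Hebbian-style weight update; it is a software analogy only, not a model of the transistor's physics or of the researchers' actual experiments.

```python
import numpy as np

# Illustrative Hebbian-style associative learning: weights grow for cues that
# repeatedly co-occur with a response. A software analogy only, not a model of
# the synaptic transistor itself.

weights = np.zeros(2)      # one weight per cue (e.g., "light" and "tone")
learning_rate = 0.1

def present(cues: np.ndarray, response: float) -> None:
    """Strengthen the weights of cues that co-occur with the response."""
    global weights
    weights += learning_rate * cues * response

# Pair cue 0 with a positive response 20 times; cue 1 is never paired.
for _ in range(20):
    present(np.array([1.0, 0.0]), response=1.0)

print(weights @ np.array([1.0, 0.0]))  # ~2.0: cue 0 now evokes a strong output
print(weights @ np.array([0.0, 1.0]))  # 0.0: cue 1 evokes nothing
```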

Researchers have discovered that AI memory consolidation processes resemble those in the human brain, specifically in the hippocampus, offering potential for advancements in AI and a deeper understanding of human memory mechanisms.

An interdisciplinary team of researchers from the Center for Cognition and Sociality and the Data Science Group within the Institute for Basic Science (IBS) revealed a striking similarity between the memory processing of artificial intelligence (AI) models and the hippocampus of the human brain. This new finding provides a novel perspective on memory consolidation in AI systems, the process by which short-term memories are transformed into long-term ones.
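
As a rough illustration of what memory consolidation means in this context, the toy sketch below promotes items from a short-term buffer into a long-term store once they recur often enough. The threshold and data are invented for illustration; this is not the IBS team's model of AI or hippocampal memory.

```python
from collections import Counter

# Toy illustration of memory consolidation: frequently rehearsed items in a
# short-term buffer are promoted to a long-term store. Purely conceptual.

REHEARSAL_THRESHOLD = 3   # assumed cutoff for promotion (illustrative)

short_term_buffer = ["cafe", "umbrella", "cafe", "ticket", "cafe", "ticket", "ticket"]
long_term_store = set()

for item, count in Counter(short_term_buffer).items():
    if count >= REHEARSAL_THRESHOLD:
        long_term_store.add(item)   # consolidated into long-term memory

print(long_term_store)  # {'cafe', 'ticket'}
```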

Advancing AI through understanding human intelligence.

Researchers at MIT, the Broad Institute of MIT and Harvard, Integrated Biosciences, the Wyss Institute for Biologically Inspired Engineering, and the Leibniz Institute of Polymer Research have identified a new structural class of antibiotics.

Scientists have discovered one of the first new classes of antibiotics identified in the past 60 years, and the first to be discovered using an AI-powered platform built around explainable deep learning.

Published in Nature today, December 20, the peer-reviewed paper, entitled “Discovery of a structural class of antibiotics with explainable deep learning,” was co-authored by a team of 21 researchers, led by Felix Wong, Ph.D., co-founder of Integrated Biosciences, and James J. Collins, Ph.D., Termeer Professor of Medical Engineering and Science at MIT and founding chair of the Integrated Biosciences Scientific Advisory Board.

A non-organic intelligent system has for the first time designed, planned and executed a chemistry experiment, Carnegie Mellon University researchers report in the Dec. 21 issue of the journal Nature.

“We anticipate that intelligent agent systems for autonomous scientific experimentation will bring tremendous discoveries, unforeseen therapies and new materials. While we cannot predict what those discoveries will be, we hope to see a new way of conducting research given by the synergetic partnership between humans and machines,” the Carnegie Mellon research team wrote in their paper.

The system, called Coscientist, was designed by Assistant Professor of Chemistry and Chemical Engineering Gabe Gomes and chemical engineering doctoral students Daniil Boiko and Robert MacKnight. It uses large language models (LLMs), including OpenAI’s GPT-4 and Anthropic’s Claude, to execute the full range of the experimental process from a simple, plain-language prompt.
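
As a very rough sketch of how such an LLM-driven loop might look, the code below turns a plain-language goal into a plan, a machine-readable protocol, and an execution call. The function names, prompts and stubs are hypothetical stand-ins; this is not Coscientist's published code or any vendor's API.

```python
# Hypothetical sketch of an LLM-driven experiment loop in the spirit of
# Coscientist. call_llm and run_on_hardware are stand-ins, not real APIs,
# and the prompts are invented for illustration.

def call_llm(prompt: str) -> str:
    """Stand-in for a call to a large language model (e.g., GPT-4 or Claude)."""
    raise NotImplementedError("Connect this to your LLM provider of choice.")

def run_on_hardware(protocol: str) -> str:
    """Stand-in for dispatching instructions to lab-automation equipment."""
    raise NotImplementedError("Connect this to your lab-automation API.")

def run_experiment(goal: str) -> str:
    # 1. Plan: turn the plain-language goal into concrete experimental steps.
    plan = call_llm(
        f"Plan a chemistry experiment to achieve: {goal}. "
        "List reagents, quantities, and ordered steps."
    )
    # 2. Translate: convert the plan into machine-executable instructions.
    protocol = call_llm(
        "Convert this plan into instructions for an automated "
        f"liquid-handling platform:\n{plan}"
    )
    # 3. Execute: send the protocol to the hardware and return its report.
    return run_on_hardware(protocol)

# Example of a simple, plain-language prompt:
# run_experiment("Synthesize the target compound from the provided starting materials.")
```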