Electronic circuits that compute and store information contain millions of tiny switches that control the flow of electric current. A deeper understanding of how these tiny switches work could help researchers push the frontiers of modern computing.

Now scientists have made the first snapshots of atoms moving inside one of those switches as it turns on and off. Among other things, they discovered a short-lived state within the switch that might someday be exploited for faster and more energy-efficient computing devices.

The research team from the Department of Energy’s SLAC National Accelerator Laboratory, Stanford University, Hewlett Packard Labs, Penn State University and Purdue University described their work in a paper published in Science today.

The Google Quantum AI team has found that adding physical qubits to the logical qubits in the company’s quantum computer reduced the logical qubit error rate exponentially. In their paper published in the journal Nature, the group describes their work with logical qubits as an error correction technique and outlines what they have learned so far.

One of the hurdles standing in the way of usable quantum computers is figuring out how to prevent errors from occurring, or how to fix them before they corrupt a computation. On traditional computers, the problem is mostly solved by adding a parity bit, but that approach will not work with quantum computers because of the different nature of qubits: attempting to measure them destroys the data they hold. Prior research has suggested that one possible solution is to group physical qubits into clusters called logical qubits. In this new effort, the Google Quantum AI team tested the idea on the company’s Sycamore quantum computer.
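To make the classical baseline concrete, here is a minimal sketch of the parity-bit scheme the article mentions. It detects (but does not locate) a single flipped bit, and it works only because classical bits can be read without disturbing them, which is exactly what fails for qubits. The function names are illustrative, not from any of the papers discussed.

```python
def add_parity(bits):
    """Append an even-parity bit so the total count of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(word):
    """Return True when no single-bit error is detected."""
    return sum(word) % 2 == 0

word = add_parity([1, 0, 1, 1])
assert parity_ok(word)        # stored word passes the check

word[2] ^= 1                  # simulate one bit flipping in storage
assert not parity_ok(word)    # the parity check now fails
```

Reading `sum(word)` here is the step with no quantum analogue: measuring the qubits to compute such a sum would collapse their state.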

Sycamore works with 54 physical qubits. In their work, the researchers created logical qubits of different sizes, ranging from 5 to 21 physical qubits, to see how each would perform. In so doing, they found that adding qubits reduced logical error rates exponentially. They were able to measure the extra qubits in a way that did not collapse their state, but that still provided enough information for them to be used for computations.
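The exponential suppression the team measured has a simple classical analogue. In a repetition code with majority voting, the logical error rate falls roughly exponentially as more copies are added, provided each copy's error rate is below a threshold. The sketch below computes this for independent bit flips; it is an illustration of the scaling behavior, not Google's actual surface-code protocol.

```python
from math import comb

def logical_error_rate(p, n):
    """Probability that a majority vote over n copies (n odd) is wrong,
    when each copy independently flips with probability p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p = 0.05  # per-copy error rate, below the 50% threshold
for n in (1, 3, 5, 7):
    print(f"{n} copies: logical error rate {logical_error_rate(p, n):.6f}")
```

Each step from n to n + 2 copies multiplies the logical error rate by a roughly constant factor, which is the exponential suppression pattern; above the threshold (p > 0.5), adding copies makes things worse instead.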

Today, in a peer-reviewed paper published in the prestigious scientific journal Nature, DeepMind offered further details of how exactly its A.I. software was able to perform so well. It has also open-sourced the code it used to create AlphaFold 2 for other researchers to use.


But it’s still not clear when researchers and drug companies will have easy access to AlphaFold’s structure predictions.

NASA’s Juno probe has flown closer to Jupiter and its largest moon, Ganymede, than any other spacecraft in more than two decades — and the images it beamed back of the gas giant and its icy orb are breathtaking.

Juno approached Ganymede on June 7, before making its 34th flyby of Jupiter the following day, traveling from pole to pole in under three hours.

On Thursday, NASA released an animated series of images captured by the spacecraft’s JunoCam imager, providing a “starship captain” point of view of each flyby. They mark the first close-up views of the largest moon in the solar system since the Galileo orbiter last flew past in 2000.

Researchers at UC San Francisco have successfully developed a “speech neuroprosthesis” that has enabled a man with severe paralysis to communicate in sentences, translating signals from his brain to the vocal tract directly into words that appear as text on a screen.

The achievement, which was developed in collaboration with the first participant of a clinical research trial, builds on more than a decade of effort by UCSF neurosurgeon Edward Chang, MD, to develop a technology that allows people with paralysis to communicate even if they are unable to speak on their own. The study appears July 15 in the New England Journal of Medicine.