
Quantum computing promises huge leaps forward for fields from drug discovery and materials development to financial forecasting. The technology is expected to tackle myriad problems once deemed impractical or even impossible to solve.

But just as exciting as quantum computing’s future are the breakthroughs already being made today in quantum hardware, error correction and algorithms.

NVIDIA is celebrating and exploring this remarkable progress in quantum computing by announcing its first Quantum Day at GTC 2025 on Thursday, March 20. This new focus area brings together leading experts for a comprehensive and balanced perspective on what businesses should expect from quantum computing in the coming decades — mapping the path toward useful quantum applications.

Quantum computing promises to solve certain complex problems exponentially faster than classical computers by using the principles of quantum mechanics to encode and manipulate information in quantum bits (qubits).
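As a toy illustration of what encoding information in qubits looks like, here is a minimal sketch in Python with NumPy (our own example, not tied to any system mentioned here): a qubit is a two-component complex vector, and a Hadamard gate places it in an equal superposition of 0 and 1.

```python
import numpy as np

# A qubit is a unit vector in a two-dimensional complex space.
ket0 = np.array([1.0, 0.0], dtype=complex)   # |0>

# The Hadamard gate sends a basis state to an equal superposition of |0> and |1>.
H = np.array([[1,  1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0                     # (|0> + |1>) / sqrt(2)
probabilities = np.abs(state) ** 2   # Born rule: probabilities of measuring 0 or 1

print(state)          # [0.707+0.j, 0.707+0.j]
print(probabilities)  # [0.5, 0.5], an equal chance of reading out 0 or 1
```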

Qubits are the building blocks of a quantum computer. One challenge to scaling, however, is that qubits are highly sensitive to background noise and control imperfections, which introduce errors into quantum operations and ultimately limit the complexity and duration of a quantum algorithm. To address this, researchers at MIT and around the world have focused on continually improving qubit performance.

In new work, using a superconducting qubit called fluxonium, MIT researchers in the Department of Physics, the Research Laboratory of Electronics (RLE), and the Department of Electrical Engineering and Computer Science (EECS) developed two new control techniques to achieve a world-record single-qubit fidelity of 99.998%. This result complements then-MIT researcher Leon Ding’s demonstration last year of a 99.92% two-qubit gate fidelity.
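To get a feel for why fidelities this close to one matter, the back-of-the-envelope sketch below (our own Python, under the simplifying assumption that gate errors are independent and simply multiply) estimates how the reported single- and two-qubit fidelities compound over circuits of growing depth.

```python
# Rough estimate of overall circuit fidelity, assuming each gate fails independently
# and errors simply accumulate (a deliberate oversimplification).
def circuit_fidelity(f_1q, n_1q, f_2q, n_2q):
    return (f_1q ** n_1q) * (f_2q ** n_2q)

f_single, f_two = 0.99998, 0.9992   # fidelities reported in the MIT fluxonium work

for n in (100, 1_000, 10_000):
    estimate = circuit_fidelity(f_single, n, f_two, n)
    print(f"{n:>6} single- and two-qubit gates each -> ~{estimate:.4f} circuit fidelity")
```

The two-qubit gates dominate the error budget, which is why improvements at both the single- and two-qubit level matter for running deeper circuits.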

Quantum networking commonly encodes information in polarization states because they can be prepared and measured with ease and precision. The variable environmental polarization transformations induced by deployed fiber must be corrected for deployed quantum networking. Here, we present a method for automatic polarization compensation (APC) and demonstrate its performance on a metropolitan quantum network. Designing an APC involves many design decisions, as indicated by the diversity of previous solutions in the literature. Our design leverages heterodyne detection of wavelength-multiplexed dim classical references for continuous high-bandwidth polarization measurements, which feed newly developed multi-axis (non-)linear control algorithms for complete polarization channel stabilization with no downtime. This enables continuous, relatively high-bandwidth correction without significant added noise from the classical reference signals. We demonstrate the performance of our APC using a variety of classical and quantum characterizations. Finally, we use C-band and L-band versions of the APC to demonstrate continuous high-fidelity entanglement distribution on a metropolitan quantum network with an average relative fidelity of 0.94 ± 0.03 for over 30 hours.
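As a rough sketch of the correction step only (our own simplified Python, not the authors' control algorithm), one can model the fiber's polarization drift as a rotation of a Stokes vector on the Poincaré sphere, estimate that rotation from a known classical reference, and apply the inverse rotation:

```python
import numpy as np

def rotation(axis, angle):
    """3x3 rotation about a unit axis (Rodrigues' formula)."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

# Known reference polarization launched into the fiber (a Stokes vector).
reference_in = np.array([1.0, 0.0, 0.0])

# Unknown, slowly drifting transformation applied by the deployed fiber.
drift = rotation([0.3, 0.8, 0.5], 0.7)
reference_out = drift @ reference_in              # what the receiver measures

# Single-shot correction: rotate the measured reference back onto the target about
# the axis perpendicular to both. (A real APC tracks several references and axes
# continuously to pin down the full transformation.)
axis = np.cross(reference_out, reference_in)
angle = np.arccos(np.clip(reference_out @ reference_in, -1.0, 1.0))
correction = rotation(axis, angle) if np.linalg.norm(axis) > 1e-12 else np.eye(3)

print(correction @ reference_out)                 # ~[1, 0, 0]: reference restored
```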

Artificial neural networks (ANNs) have brought about many stunning tools in the past decade, including the Nobel-Prize-winning AlphaFold model for protein-structure prediction [1]. However, this success comes with an ever-increasing economic and environmental cost: Processing the vast amounts of data for training such models on machine-learning tasks requires staggering amounts of energy [2]. As their name suggests, ANNs are computational algorithms that take inspiration from their biological counterparts. Despite some similarity between real and artificial neural networks, biological ones operate with an energy budget many orders of magnitude lower than ANNs. Their secret? Information is relayed among neurons via short electrical pulses, so-called spikes. The fact that information processing occurs through sparse patterns of electrical pulses leads to remarkable energy efficiency.
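A minimal leaky integrate-and-fire simulation (a textbook toy model written in Python for this article, not any specific system discussed above) shows the key ingredient: the neuron stays silent most of the time and communicates only through occasional all-or-nothing spikes.

```python
import numpy as np

# Leaky integrate-and-fire neuron: the membrane voltage leaks toward rest,
# integrates its input, and emits a spike whenever it crosses threshold.
dt, tau = 1.0, 20.0                  # time step and membrane time constant (ms)
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
rng = np.random.default_rng(0)

v, spikes = v_rest, []
for t in range(1000):                          # one second of simulated time
    i_in = rng.normal(0.035, 0.1)              # weak, noisy input current
    v += dt * (-(v - v_rest) / tau + i_in)     # leaky integration
    if v >= v_thresh:                          # threshold crossing -> spike
        spikes.append(t)
        v = v_reset

print(f"{len(spikes)} spikes in 1000 time steps")   # firing is sparse in time
```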

Quantum computers may soon dramatically enhance our ability to solve problems modeled by nonreversible Markov chains, according to a study published on the pre-print server arXiv.

The researchers from Qubit Pharmaceuticals and Sorbonne University demonstrated that quantum algorithms could achieve exponential speedups in sampling from such chains, with the potential to surpass the capabilities of classical methods. These advances — if fully realized — have a range of implications for fields like drug discovery, machine learning and financial modeling.

Markov chains are mathematical frameworks used to model systems that transition between various states, such as stock prices or molecules in motion. Each transition is governed by a set of probabilities, which defines how likely the system is to move from one state to another. Reversible Markov chains, which satisfy detailed balance (at equilibrium, the probability flow from a state A into a state B equals the flow from B back into A), have traditionally been the focus of computational techniques. However, many real-world systems are nonreversible, meaning their transitions are biased in one direction, as seen in certain biological and chemical processes.
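A short Python check (our own illustration, unrelated to the algorithms in the paper) makes the distinction concrete: a chain is reversible when it satisfies detailed balance with respect to its stationary distribution, while a chain that circulates in a preferred direction is not.

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution pi of transition matrix P (solves pi P = pi)."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

def is_reversible(P, tol=1e-9):
    """Detailed balance: pi_i * P[i, j] == pi_j * P[j, i] for every pair of states."""
    pi = stationary_distribution(P)
    flow = pi[:, None] * P
    return np.allclose(flow, flow.T, atol=tol)

# A symmetric random walk on three states is reversible...
P_reversible = np.array([[0.50, 0.25, 0.25],
                         [0.25, 0.50, 0.25],
                         [0.25, 0.25, 0.50]])

# ...whereas a chain that mostly circulates A -> B -> C -> A is not.
P_nonreversible = np.array([[0.1, 0.8, 0.1],
                            [0.1, 0.1, 0.8],
                            [0.8, 0.1, 0.1]])

print(is_reversible(P_reversible))      # True
print(is_reversible(P_nonreversible))   # False
```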

Western researchers have developed a novel technique using math to understand exactly how neural networks make decisions—a widely recognized but poorly understood process in the field of machine learning.

Many of today’s technologies, from digital assistants like Siri and ChatGPT to self-driving cars, are powered by machine learning. However, the artificial neural networks—computer models inspired by the human brain—behind these machine learning systems have been difficult to understand, sometimes earning them the nickname “black boxes” among researchers.

“We create neural networks that can perform these computations, while also allowing us to solve the equations that govern the networks’ activity,” said Lyle Muller, mathematics professor and director of Western’s Fields Lab for Network Science, part of the newly created Fields-Western Collaboration Centre. “This mathematical solution lets us ‘open the black box’ to understand precisely how the network does what it does.”
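As a generic picture of what an exactly solvable network means (this is not the Fields Lab construction, only a minimal stand-in), a linear recurrent network x_{t+1} = W x_t can be opened up completely: diagonalizing W gives a closed-form expression for the state at any time step, which matches step-by-step simulation.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(scale=0.3, size=(4, 4))   # recurrent weight matrix
x0 = rng.normal(size=4)                  # initial network state

# Closed-form solution: with W = V diag(lam) V^{-1}, the state after t steps is
# x_t = V diag(lam)^t V^{-1} x0, with no step-by-step simulation required.
lam, V = np.linalg.eig(W)
def state_at(t):
    return np.real(V @ np.diag(lam ** t) @ np.linalg.solve(V, x0))

# Check against simply running the network for 10 steps.
x = x0.copy()
for _ in range(10):
    x = W @ x

print(np.allclose(x, state_at(10)))      # True: the dynamics are solved exactly
```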

In February 2016, scientists working for the Laser Interferometer Gravitational-Wave Observatory (LIGO) made history by announcing the first-ever detection of gravitational waves (GW). These waves, predicted by Einstein’s Theory of General Relativity, are created when massive objects such as neutron stars or black holes collide, causing ripples in spacetime that can be detected millions or billions of light-years away. Since their discovery, astrophysicists have been finding applications for GW astronomy, including probing the interiors of neutron stars.

For instance, scientists believe that probing the continuous gravitational wave (CW) emissions from neutron stars will reveal data on their internal structure and equation of state and can provide tests of General Relativity. In a recent study, members of the LIGO-Virgo-KAGRA (LVK) Collaboration conducted a search for CWs from 45 known pulsars. While their results showed no signs of CWs emanating from their sample of pulsars, their work does establish upper limits on the signal amplitude, potentially aiding future searches.

The LVK Collaboration is an international consortium of scientists from hundreds of universities and institutes worldwide. This collaboration combines data from the Laser Interferometer Gravitational-Wave Observatory’s (LIGO) twin observatories, the Virgo Observatory, and the Kamioka Gravitational Wave Detector (KAGRA). The preprint of the paper, “Search for continuous gravitational waves from known pulsars in the first part of the fourth LIGO-Virgo-KAGRA observing run,” recently appeared online.

Our neural network model of C. elegans contained 136 neurons that participated in sensory and locomotion functions, as indicated by published studies24,27,28,29,30,31. To construct this model, we first collected the necessary data including neural morphology, ion channel models, electrophysiology of single neurons, connectome, connection models and network activities (Fig. 2a). Next, we constructed the individual neuron models and their connections (Fig. 2b). At this stage, the biophysically detailed model was only structurally accurate (Fig. 2c), without network-level realistic dynamics. Finally, we optimized the weights and polarities of the connections to obtain a model that reflected network-level realistic dynamics (Fig. 2d). An overview of the model construction is shown in Fig. 2.

To achieve a high level of biophysical and morphological realism in our model, we used multicompartment models to represent individual neurons. The morphologies of the neuron models were constructed on the basis of published morphological data9,32. Soma and neurite sections were further divided into several segments, each less than 2 μm in length. We integrated 14 established classes of ion channels (Supplementary Tables 1 and 2)33 into the neuron models and tuned the passive parameters and ion channel conductance densities of each neuron model using an optimization algorithm34. This tuning was done to accurately reproduce the electrophysiological recordings obtained from patch-clamp experiments35,36,37,38 at the single-neuron level. Based on the limited electrophysiological data available, we digitally reconstructed models of five representative neurons: AWC, AIY, AVA, RIM and VD5.
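The tuning step can be sketched generically (a simplified Python illustration with a made-up single-compartment neuron and synthetic “recording,” not the study’s actual simulator, channel models or optimizer): propose passive parameters and conductance densities, simulate the voltage response to an injected current, and minimize the mismatch with the recorded trace.

```python
import numpy as np
from scipy.optimize import minimize

# Toy single-compartment neuron with a leak and one extra conductance
# (a hypothetical stand-in for the 14 ion-channel classes tuned in the real model).
def simulate(params, i_inj, dt=0.1):
    g_leak, g_slow, c_m = params
    v, trace = -70.0, []
    for i in i_inj:
        dv = (-g_leak * (v + 70.0) - g_slow * (v + 30.0) + i) / c_m
        v += dt * dv
        trace.append(v)
    return np.array(trace)

# Synthetic "recording"; in the real workflow this would be a patch-clamp trace.
i_inj = np.concatenate([np.zeros(200), np.full(600, 5.0), np.zeros(200)])
target = simulate([0.3, 0.1, 1.0], i_inj)

# Fit the parameters by minimizing the squared error against the recording.
def loss(params):
    return np.mean((simulate(params, i_inj) - target) ** 2)

result = minimize(loss, x0=[0.1, 0.05, 0.5], method="Nelder-Mead",
                  bounds=[(0.01, 2.0)] * 3)
print(result.x)   # should land close to the ground-truth [0.3, 0.1, 1.0]
```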

Modern AI systems have fulfilled Turing’s vision of machines that learn and converse like humans, but challenges remain. A new paper highlights concerns about energy consumption and societal inequality while calling for more robust AI testing to ensure ethical and sustainable progress.

A perspective published on November 13 in Intelligent Computing, a Science Partner Journal, argues that modern artificial intelligence has realized Turing’s vision of machines that learn and converse like humans, but warns that its energy consumption, its potential to deepen societal inequality, and the lack of robust testing still need to be addressed.

Artificial Intelligence (AI) is a branch of computer science focused on creating systems that can perform tasks typically requiring human intelligence. These tasks include understanding natural language, recognizing patterns, solving problems, and learning from experience. AI technologies use algorithms and massive amounts of data to train models that can make decisions, automate processes, and improve over time through machine learning. The applications of AI are diverse, impacting fields such as healthcare, finance, automotive, and entertainment, fundamentally changing the way we interact with technology.