
RRAM-based analog computing system rapidly solves matrix equations with high precision

Analog computers are systems that perform computations by manipulating physical quantities, such as electrical currents, that map onto mathematical variables, rather than representing information as discrete binary values (i.e., 0 or 1) the way digital computers do.

While analog computing systems can perform some tasks very efficiently, they are known to be susceptible to noise (i.e., background or external interference) and are typically less precise than digital computers.

Researchers at Peking University and the Beijing Advanced Innovation Center for Integrated Circuits have developed a scalable analog computing device that can solve so-called matrix equations with remarkable precision. This new system, introduced in a paper published in Nature Electronics, was built using tiny non-volatile memory devices known as resistive random-access memory (RRAM) chips.
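The standard way to get high precision out of an imprecise matrix solver is iterative refinement: a fast low-precision solve is corrected repeatedly using full-precision residuals. A minimal sketch of that general scheme, with the analog solver mimicked by rounding to float16 (an illustration of the technique, not the paper's actual RRAM circuit):

```python
import numpy as np

def lowprec_solve(A, b):
    # Stand-in for an imprecise analog matrix solver: round the inputs
    # to float16 first, so each solve is only a few digits accurate.
    A16 = A.astype(np.float16).astype(np.float64)
    b16 = b.astype(np.float16).astype(np.float64)
    return np.linalg.solve(A16, b16)

def refine(A, b, iters=10):
    # Iterative refinement: correct the low-precision solution using
    # full-precision residuals, scaled to avoid float16 underflow.
    x = lowprec_solve(A, b)
    for _ in range(iters):
        r = b - A @ x                     # residual in full precision
        s = np.max(np.abs(r))
        if s == 0.0:
            break
        x = x + s * lowprec_solve(A, r / s)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8)) + 8.0 * np.eye(8)  # well-conditioned
b = rng.standard_normal(8)
x = refine(A, b)
residual = np.max(np.abs(A @ x - b))  # shrinks toward float64 precision
```

Each cheap solve only needs to reduce the error by a constant factor; the full-precision residual loop does the rest, which is why an analog solver with a few digits of accuracy can still deliver high-precision answers.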

Mathematical proof debunks the idea that the universe is a computer simulation

From the article:

“We have demonstrated that it is impossible to describe all aspects of physical reality using a computational theory of quantum gravity,” says Dr. Faizal. “Therefore, no physically complete and consistent theory of everything can be derived from computation alone. Rather, it requires a non-algorithmic understanding, which is more fundamental than the computational laws of quantum gravity and therefore more fundamental than spacetime itself.”


It’s a plot device beloved by science fiction: our entire universe might be a simulation running on some advanced civilization’s supercomputer. But new research from UBC Okanagan has mathematically proven this isn’t just unlikely—it’s impossible.

Dr. Mir Faizal, Adjunct Professor with UBC Okanagan’s Irving K. Barber Faculty of Science, and his international colleagues, Drs. Lawrence M. Krauss, Arshid Shabir and Francesco Marino have shown that the fundamental nature of reality operates in a way that no computer could ever simulate.

Their findings, published in the Journal of Holography Applications in Physics, go beyond simply suggesting that we’re not living in a simulated world like The Matrix. They prove something far more profound: the universe is built on a type of understanding that exists beyond the reach of any algorithm.

Gemini gets a huge upgrade for academics and researchers with powerful new LaTeX features

For anyone who has ever wrestled with creating documents containing complex mathematical equations, intricate tables, or precise multi-column layouts, the LaTeX document preparation system is likely a familiar (and sometimes frustrating) friend. It’s the standard for high-quality academic, scientific, and technical documents, but it traditionally requires specialized editors and significant technical know-how.
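For readers who have not used it, a minimal LaTeX source file of the kind such tooling produces might look like this (an illustrative sketch, not output from Gemini):

```latex
\documentclass{article}
\usepackage{amsmath,booktabs}
\begin{document}

Two of Maxwell's equations in differential form:
\begin{align}
  \nabla \cdot \mathbf{E}  &= \rho / \varepsilon_0, &
  \nabla \times \mathbf{B} &= \mu_0 \mathbf{J}
      + \mu_0 \varepsilon_0 \, \partial_t \mathbf{E}.
\end{align}

\begin{tabular}{lcc}
  \toprule
  Quantity       & Symbol       & Unit \\
  \midrule
  Electric field & $\mathbf{E}$ & V/m  \\
  Magnetic field & $\mathbf{B}$ & T    \\
  \bottomrule
\end{tabular}

\end{document}
```

Even this short example shows why assistance helps: environments, alignment markers, and packages like `amsmath` and `booktabs` all have to be wired together correctly before anything compiles.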

Midtraining Bridges Pretraining and Posttraining Distributions

Recently, many language models have been pretrained with a "midtraining" phase, in which higher-quality, often instruction-formatted, data is mixed in at the end of pretraining. Despite the popularity of this practice, there is little scientific understanding of this phase of model training or why it is effective. In this work, we conduct the first systematic investigation of midtraining through controlled experiments with language models pretrained from scratch and fine-tuned on supervised finetuning datasets in different domains. We find that when compared after supervised fine-tuning, the effectiveness of midtraining is highest in the math and code domains, where midtraining can best reduce the syntactic gap between pretraining and posttraining data. In these cases, midtraining consistently outperforms continued pretraining in both in-domain validation loss and pretraining-data forgetting after posttraining. We conduct ablations on the starting time of the midtraining phase and the mixture weights of the midtraining data, using code midtraining as a case study, and find that timing has a greater impact than mixture weights, with earlier introduction of specialized data yielding greater benefits in-domain as well as better preserving general language modeling. These findings establish midtraining as a domain adaptation technique that, compared to continued pretraining, yields better performance through reduced forgetting.
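A midtraining setup of this kind can be sketched as a data-mixing schedule over pretraining steps. The function below is an illustrative example only; the start fraction and final mixture weight are assumptions, not the paper's values:

```python
def midtraining_weight(step, total_steps, start_frac=0.8, final_weight=0.3):
    # Fraction of each batch drawn from the high-quality midtraining mix:
    # zero through most of pretraining, then a linear ramp to final_weight.
    # start_frac and final_weight are illustrative, not the paper's values.
    start = start_frac * total_steps
    if step < start:
        return 0.0
    return final_weight * (step - start) / (total_steps - start)

# Example: a 100k-step run where midtraining begins at step 80k.
schedule = [midtraining_weight(t, 100_000)
            for t in (50_000, 80_000, 90_000, 100_000)]
```

The paper's timing ablation corresponds to varying `start_frac`, and its mixture-weight ablation to varying `final_weight`; the finding is that the former matters more.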

Artificial neurons replicate biological function for improved computer chips

Researchers at the USC Viterbi School of Engineering and School of Advanced Computing have developed artificial neurons that replicate the complex electrochemical behavior of biological brain cells.

The innovation, documented in Nature Electronics, is a leap forward in neuromorphic computing technology: it could shrink chip size and energy consumption by orders of magnitude, and it could advance artificial general intelligence.

Unlike conventional digital processors or existing silicon-based neuromorphic chips that merely simulate neural activity, these devices physically embody or emulate the analog dynamics of their biological counterparts. Just as neurochemicals initiate brain activity, chemicals can be used to initiate computation in these neuromorphic (brain-inspired) chips. As physical replications of the biological process, they differ from prior iterations of artificial neurons, which were solely mathematical equations.

Unit-free theorem pinpoints key variables for AI and physics models

Machine learning models are designed to take in data, to find patterns or relationships within those data, and to use what they have learned to make predictions or to create new content. The quality of those outputs depends not only on the details of a model’s inner workings but also, crucially, on the information that is fed into the model.

Some models follow a brute force approach, essentially adding every bit of data related to a particular problem into the model and seeing what comes out. But a sleeker, less energy-hungry way to approach a problem is to determine which variables are vital to the outcome and only provide the model with information about those key variables.

Now, Adrián Lozano-Durán, an associate professor of aerospace at Caltech and a visiting professor at MIT, and MIT graduate student Yuan Yuan have developed a theorem that takes any number of possible variables and whittles them down, leaving only those that are most important. In the process, the method removes all units, such as meters and feet, from the underlying equations, making them dimensionless, something scientists require of equations that describe the physical world. The work can be applied not only to machine learning but to any model of a physical system.
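The classical starting point for this kind of unit removal is the Buckingham Pi theorem: dimensionless groups correspond to null-space vectors of the variables' dimension matrix. A small sketch on a pipe-flow example (an illustration of the classical idea, not the authors' new theorem):

```python
import numpy as np

# Dimension matrix D: rows are exponents of the base units
# (mass, length, time); columns are candidate variables for pipe flow:
# density rho [kg/m^3], velocity U [m/s], length L [m],
# viscosity mu [kg/(m*s)]. An illustrative example.
D = np.array([
    [ 1.0, 0.0, 0.0,  1.0],   # mass
    [-3.0, 1.0, 1.0, -1.0],   # length
    [ 0.0,-1.0, 0.0, -1.0],   # time
])

# Dimensionless groups are exponent vectors in the null space of D.
_, sv, vt = np.linalg.svd(D)
rank = int(np.sum(sv > 1e-10))
null_vecs = vt[rank:]            # 4 variables - rank 3 = 1 group

pi = null_vecs[0] / null_vecs[0][0]   # normalize so rho's exponent is 1
# pi ~ [1, 1, 1, -1]: rho * U * L / mu, i.e. the Reynolds number
```

Four dimensional variables collapse to a single dimensionless one, which is exactly the kind of reduction that lets a model be fed fewer, more meaningful inputs.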

Researcher improves century-old equation to predict movement of dangerous air pollutants

A new method developed at the University of Warwick offers the first simple and predictive way to calculate how irregularly shaped nanoparticles—a dangerous class of airborne pollutant—move through the air.

Every day, we breathe in millions of airborne particles, including soot, dust, pollen, microplastics, viruses, and synthetic nanoparticles. Some are small enough to slip deep into the lungs and even enter the bloodstream, contributing to conditions such as heart disease, stroke, and cancer.

Most of these are irregularly shaped. Yet the mathematical models used to predict how these particles behave typically assume they are perfect spheres, simply because the equations are easier to solve. This makes it difficult to monitor or predict the movement of real-world, non-spherical—and often more hazardous—particles.
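For a perfect sphere, the century-old baseline is Stokes' drag law, and aerosol science traditionally corrects it for irregular particles with a dynamic shape factor. A minimal sketch of that baseline (the shape-factor value below is illustrative, not the Warwick result):

```python
import math

def stokes_drag(mu, d, v, shape_factor=1.0):
    # Classic Stokes law F = 3*pi*mu*d*v for a small sphere at low
    # Reynolds number; a dynamic shape factor chi >= 1 scales the drag
    # for irregular particles (chi = 1 recovers the sphere).
    return 3.0 * math.pi * mu * d * v * shape_factor

mu_air = 1.81e-5   # Pa*s, dynamic viscosity of air at ~20 C
F_sphere = stokes_drag(mu_air, 100e-9, 0.01)           # 100 nm sphere
F_irregular = stokes_drag(mu_air, 100e-9, 0.01, 1.4)   # soot-like shape
ratio = F_irregular / F_sphere   # irregular shape raises drag by 40%
```

The difficulty the new method addresses is predicting that correction from the particle's actual geometry rather than fitting it after the fact.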

Gravitational wave events hint at ‘second-generation’ black holes

In a paper published in The Astrophysical Journal Letters, the international LIGO-Virgo-KAGRA Collaboration reports on the detection of two gravitational wave events in October and November of 2024 with unusual black hole spins. This observation adds an important new piece to our understanding of the most elusive phenomena in the universe.

Gravitational waves are "ripples" in spacetime that result from cataclysmic events in deep space, with the strongest waves produced by the collision of black holes.

Using sophisticated algorithmic techniques and mathematical models, researchers can reconstruct many physical features of the detected black holes from the gravitational signals: their masses, the distance of the event from Earth, and even the speed and direction of their rotation around their axes, known as spin.
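One example of such a reconstructed quantity is the chirp mass, the combination of the two component masses that the gravitational-wave signal constrains most directly (the component masses below are illustrative, roughly GW150914-like):

```python
def chirp_mass(m1, m2):
    # Chirp mass M_c = (m1*m2)**(3/5) / (m1+m2)**(1/5), in the same
    # units as the inputs; it sets the frequency evolution ("chirp")
    # of the inspiral signal.
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

mc = chirp_mass(36.0, 29.0)   # ~28.1 solar masses
```

Spins, by contrast, leave much subtler imprints on the waveform, which is why unusual spin measurements like those reported here are notable.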

Mathematical proof unites two puzzling phenomena in spin glass physics

A fundamental link between two counterintuitive phenomena in spin glasses—reentrance and temperature chaos—has been mathematically proven for the first time. By extending the Edwards–Anderson model to include correlated disorder, researchers at Science Tokyo and Tohoku University provided the first rigorous proof that reentrance implies temperature chaos.

Spin glasses are magnetic materials in which atomic "spins," or tiny magnetic moments, point in random directions rather than aligning neatly as in a regular magnet. These disordered spins can remain stable for extremely long periods of time, possibly even indefinitely. This frozen randomness gives rise to unusual physical properties not seen in any other physical system.

To describe spin glass behavior, physicists use models such as the Edwards–Anderson (EA) model, which simulates how spins interact in two or three dimensions—conditions that more closely reflect real-world systems than the well-studied mean-field model. Numerical studies of the EA model have uncovered two strange and counterintuitive phenomena: reentrant transitions and temperature chaos.
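The EA model itself is compact enough to sketch: Ising spins on a grid coupled by fixed random interactions, relaxed here with a Metropolis update (a minimal numerical illustration, not the study's proof technique, which works with correlated rather than independent disorder):

```python
import numpy as np

# 2D Edwards-Anderson spin glass: spins s = +/-1 on an L x L torus with
# quenched Gaussian couplings, energy H = -sum_<ij> J_ij * s_i * s_j.
rng = np.random.default_rng(1)
L = 16
Jx = rng.standard_normal((L, L))   # bond to the right neighbor
Jy = rng.standard_normal((L, L))   # bond to the neighbor below
s = rng.choice(np.array([-1, 1]), size=(L, L))

def energy(s):
    return (-np.sum(Jx * s * np.roll(s, -1, axis=1))
            - np.sum(Jy * s * np.roll(s, -1, axis=0)))

def local_dE(s, i, j):
    # Energy change from flipping spin (i, j): only its four bonds matter.
    h = (Jx[i, j] * s[i, (j + 1) % L]
         + Jx[i, (j - 1) % L] * s[i, (j - 1) % L]
         + Jy[i, j] * s[(i + 1) % L, j]
         + Jy[(i - 1) % L, j] * s[(i - 1) % L, j])
    return 2.0 * s[i, j] * h

def sweep(s, beta):
    # One Metropolis pass; at low temperature (large beta) this quenches
    # the system into a frozen, disordered low-energy state.
    for i in range(L):
        for j in range(L):
            dE = local_dE(s, i, j)
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i, j] = -s[i, j]

e0 = energy(s)
for _ in range(20):
    sweep(s, beta=2.0)   # energy drops well below the random start
```

Because the random couplings frustrate one another, no configuration satisfies every bond; the quenched state is low in energy yet disordered, which is the "frozen randomness" described above.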
