
Computer simulations help materials scientists and biochemists study the motion of macromolecules, advancing the development of new drugs and sustainable materials. However, these simulations pose a challenge for even the most powerful supercomputers.

A University of Oregon graduate student has developed a new mathematical equation that significantly improves the accuracy of the simplified computer models used to study the motion and behavior of large molecules such as proteins, and synthetic materials such as plastics.

The breakthrough, published last month in Physical Review Letters, enhances researchers’ ability to investigate the motion of large molecules in complex biological processes, such as DNA replication. It could aid in understanding diseases linked to errors in such replication, potentially leading to new diagnostic and therapeutic strategies.

For years, quantum computing has been the tech world’s version of “almost there”. But now, engineers at MIT have pulled off something that might change the game. They’ve made a critical leap in quantum error correction, bringing us one step closer to reliable, real-world quantum computers.

In a traditional computer, everything runs on bits: zeroes and ones that flip on and off like tiny digital switches. Quantum computers, on the other hand, use qubits. These are bizarre little things that can be both 0 and 1 at the same time, thanks to a quantum property called superposition. They’re also capable of entanglement, meaning the states of two qubits can become linked so that measuring one instantly fixes the outcome for the other, even at a distance.
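
To make superposition and entanglement concrete, here is a minimal NumPy sketch (our illustration, unrelated to MIT's hardware work) that models a qubit as a two-amplitude state vector, samples measurements, and builds a Bell state whose two qubits always agree:

```python
import numpy as np

# A qubit is a length-2 complex vector [amp0, amp1] with
# |amp0|^2 + |amp1|^2 = 1. Equal superposition of 0 and 1:
state = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Born rule: measurement probabilities are squared amplitudes.
probs = np.abs(state) ** 2
samples = np.random.choice([0, 1], size=1000, p=probs)
print("P(0), P(1):", probs)              # [0.5, 0.5]
print("observed mean:", samples.mean())  # ~0.5

# Entanglement: the Bell state (|00> + |11>) / sqrt(2), written over
# the basis (00, 01, 10, 11). Outcomes are always 00 or 11, never
# 01 or 10, so measuring one qubit fixes the other.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
print("P(00), P(01), P(10), P(11):", np.abs(bell) ** 2)
```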

All this weirdness gives quantum computers enormous potential power. They could solve problems in seconds that might take today’s fastest supercomputers years. Think of it like having thousands of parallel universes doing your math homework at once. But there’s a catch.

While early language models could only process text, contemporary large language models now perform highly diverse tasks on different types of data. For instance, LLMs can understand many languages, generate computer code, solve math problems, or answer questions about images and audio.

MIT researchers probed the inner workings of LLMs to better understand how they process such assorted data, and found evidence that they share some similarities with the human brain.

Neuroscientists believe the human brain has a “semantic hub” in the anterior temporal lobe that integrates semantic information from various modalities, like visual data and tactile inputs. This semantic hub is connected to modality-specific “spokes” that route information to it.

The MIT researchers found that LLMs use a similar mechanism, abstractly processing data from diverse modalities in a central, generalized way. For instance, a model whose dominant language is English would rely on English as a central medium to process inputs in Japanese or to reason about arithmetic and computer code. Furthermore, the researchers demonstrated that they could intervene in a model’s semantic hub, using text in the model’s dominant language to change its outputs even when the model was processing data in other languages.
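
One common way to look for this kind of hub is a “logit lens” style probe: decode each intermediate layer's hidden state through the model's output embedding and see which vocabulary tokens it is closest to. The sketch below is a generic illustration using GPT-2 via Hugging Face transformers, not the MIT team's actual models or methodology; with an English-dominant model, middle layers often decode to English-like tokens even for non-English input:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model choice (gpt2) is arbitrary and just for illustration.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

inputs = tok("Bonjour, le monde", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

unembed = model.get_output_embeddings().weight  # [vocab, hidden]
ln_f = model.transformer.ln_f                   # GPT-2's final layer norm

# For each layer, decode the last position's hidden state and list the
# nearest vocabulary tokens.
for layer, h in enumerate(out.hidden_states):
    logits = ln_f(h[0, -1]) @ unembed.T
    top = logits.topk(5).indices.tolist()
    print(layer, tok.convert_ids_to_tokens(top))
```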

A team of researchers at Nagoya University has discovered something surprising. If you have two tiny vibrating elements, each one barely moving on its own, and you combine them in the right way, their combined vibration can be amplified dramatically—up to 100 million times.

The paper is published in Chaos: An Interdisciplinary Journal of Nonlinear Science.

Their findings suggest that by relying on structural amplification rather than raw power, even small, simple devices can transmit clear signals over long distances, potentially transforming long-distance communications and remote medical devices.

Hardships in childhood could have lasting effects on the brain, new research shows, with adverse events such as family conflict and poverty potentially affecting cognitive function in kids for several years afterwards.

This study, led by a team from Brigham and Women’s Hospital in Massachusetts, looked specifically at white matter: the deeper tissue in the brain, made up of communication fibers ferrying information between neurons.

“We found that a range of adversities is associated with lower levels of fractional anisotropy (FA), a measure of white matter microstructure, throughout the whole brain, and that this is associated with lower performance on mathematics and language tasks later on,” write the researchers in their published paper.

Solving one of the oldest algebra problems isn’t a bad claim to fame, and it’s a claim Norman Wildberger can now make: The mathematician has solved what are known as higher-degree polynomial equations, which have been puzzling experts for nearly 200 years.

Wildberger, from the University of New South Wales (UNSW) in Australia, worked with computer scientist Dean Rubine on a paper that details how these incredibly complex calculations could be worked out.

“This is a dramatic revision of a basic chapter in algebra,” says Wildberger. “Our solution reopens a previously closed book in mathematics history.”

A mathematician has solved a 200-year-old maths problem after figuring out a way to crack higher-degree polynomial equations without using radicals or irrational numbers.

The method developed by Norman Wildberger, PhD, an honorary professor at the School of Mathematics and Statistics at UNSW Sydney, solves one of algebra’s oldest challenges by finding a general solution to equations where the variable is raised to the fifth power or higher.
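
Reports on the work describe the solution as a formal power series built from novel combinatorial numbers that generalize the Catalan numbers, avoiding radicals entirely. As a loose illustration of that flavor (a classical special case, not the new result), the Catalan generating function is itself a radical-free series solution of a quadratic:

```python
from math import comb, sqrt

def catalan(n):
    # n-th Catalan number: C(2n, n) / (n + 1) -- integers, no radicals.
    return comb(2 * n, n) // (n + 1)

def series_root(t, terms=40):
    # x(t) = sum_n catalan(n) * t^n satisfies x = 1 + t*x^2, i.e. it is
    # a root of t*x^2 - x + 1 = 0, built purely from integer
    # coefficients (the series converges for |t| < 1/4).
    return sum(catalan(n) * t**n for n in range(terms))

t = 0.1
x = series_root(t)
print("series root:  ", x)                        # ~1.127016...
print("residual:     ", t * x * x - x + 1)        # ~0
print("radical form: ", (1 - sqrt(1 - 4 * t)) / (2 * t))
```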

Fermat’s Last Theorem inspired further work: mathematicians like Sophie Germain contributed techniques (notably the “Sophie Germain trick” for special primes), and Dirichlet’s work continued the trend of applying novel number-theoretic tools.

Johann Peter Gustav Lejeune Dirichlet (/ˌdɪərɪˈkleɪ/; German: [ləˈʒœn diʁiˈkleː]; 13 February 1805 – 5 May 1859) was a German mathematician. In number theory, he proved special cases of Fermat’s last theorem and created analytic number theory. In analysis, he advanced the theory of Fourier series and was one of the first to give the modern formal definition of a function. In mathematical physics, he studied potential theory, boundary-value problems, heat diffusion, and hydrodynamics.

Although his surname is Lejeune Dirichlet, he is commonly referred to by his mononym Dirichlet, in particular for results named after him.

Most people’s experiences with polynomial equations don’t extend much further than high school algebra and the quadratic formula. Still, these numeric puzzles remain a foundational component of everything from calculating planetary orbits to computer programming. Although solving lower-order polynomials, where the x in an equation is raised up to the fourth power, is often a straightforward task, things get complicated once you start seeing powers of five or greater. For centuries, mathematicians accepted this as simply an inherent challenge to their work, but not Norman Wildberger. According to his new approach detailed in The American Mathematical Monthly, there’s a much more elegant way to handle higher-order polynomials: all you need to do is get rid of pesky notions like irrational numbers.

Babylonians first conceived of second-degree polynomials around 1800 BCE, but it took until the 16th century for mathematicians to extend the concept to third- and fourth-degree equations using roots, also known as radicals. Polynomials remained stuck there for nearly three more centuries, with higher-degree examples stumping experts until 1832. That year, French mathematician Évariste Galois finally illustrated why this was such a problem: the underlying mathematical symmetry in the established methods for lower-order polynomials simply became too complicated for degree five or higher. For Galois, this meant there just wasn’t a general formula available for them.
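
Galois’s point is easy to reproduce in a computer algebra system: ask for exact roots and a quartic comes back as radical expressions, while a general quintic can only be named implicitly. A small SymPy sketch (our example, not from the article):

```python
from sympy import symbols, solve

x = symbols('x')

# Degree 4: Ferrari's method applies, so SymPy returns exact roots
# expressed with radicals (lengthy, but closed-form).
print(solve(x**4 + x - 1, x)[0])

# Degree 5: no general radical formula exists (Abel-Ruffini, Galois),
# so SymPy can only name the roots implicitly as CRootOf objects.
print(solve(x**5 - x - 1, x)[0])   # CRootOf(x**5 - x - 1, 0)
```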

Mathematicians have since developed approximate solutions, but they require integrating concepts like irrational numbers into the classical formula.
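
Concretely, such approximations are usually numerical: an iterative scheme like Newton's method converges on a root to any desired precision, with the irrational root represented only as a floating-point estimate. A brief sketch (our example, not from the article):

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Newton's method: repeatedly follow the tangent line to a root."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# Approximate the real root of the radical-unsolvable quintic
# x^5 - x - 1 = 0.
root = newton(lambda x: x**5 - x - 1, lambda x: 5 * x**4 - 1, x0=1.0)
print(root)                # ~1.1673039782614187
print(root**5 - root - 1)  # ~0.0
```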