A team of researchers used a massive dance video dataset and advanced AI models to map how the human brain interprets dance, revealing striking differences between experts and nonexperts.
Borui Cai and Yao Zhao from Deakin University (Australia) presented a concept that they believe will bridge the gap between modern chatbots and general-purpose AI. Their proposed “Intelligence Foundation Model” (IFM) shifts the focus of AI training from merely learning surface-level data patterns to mastering the universal mechanisms of intelligence itself. By utilizing a biologically inspired “State Neural Network” architecture and a “Neuron Output Prediction” learning objective, the framework is designed to mimic the collective dynamics of biological brains and internalize how information is processed over time. This approach aims to overcome the reasoning limitations of current Large Language Models, offering a scalable path toward true Artificial General Intelligence (AGI) and theoretically laying the groundwork for the future convergence of biological and digital minds.
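The paper's implementation details are not given here, but the stated objective, predicting each neuron's next output from the state of the whole population, can be sketched in miniature. Everything below (the toy tanh dynamics, the linear readout fit by least squares) is an illustrative assumption, not the authors' actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "state network": N neurons whose next state depends on the whole
# population. A stand-in for the Neuron Output Prediction idea: learn a
# readout W_hat so that (current state) @ W_hat predicts each neuron's
# next output.
N, T = 8, 200
W_true = rng.normal(size=(N, N)) / np.sqrt(N)   # hidden recurrent weights
states = [rng.normal(size=N)]
for _ in range(T):
    states.append(np.tanh(W_true @ states[-1]))

X = np.stack(states[:-1])   # current population states, shape (T, N)
Y = np.stack(states[1:])    # next outputs to predict, shape (T, N)

# Fit a linear predictor by least squares (a stand-in for gradient training).
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
err = float(np.mean((X @ W_hat - Y) ** 2))
print(f"mean prediction error: {err:.4f}")
```

The point of the sketch is only the shape of the objective: the training signal is each neuron's own future output, not a label or a next token.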
The Intelligence Foundation Model represents a bold new proposal in the quest to build machines that can truly think. We currently live in an era dominated by Large Language Models like ChatGPT and Gemini. These systems are incredibly impressive feats of engineering that can write poetry, debug code, and summarize history. However, despite their fluency, they often lack the fundamental spark of what we consider true intelligence.
They are brilliant mimics that predict statistical patterns in text but do not actually understand the world or learn from it in real-time. A new research paper suggests that to get to the next level, we need to stop modeling language and start modeling the brain itself.
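The "brilliant mimic" point is easy to demonstrate at toy scale: a bigram model predicts the next word purely from co-occurrence counts, with no model of the world behind the words. This sketch is illustrative of the statistical principle, not of how modern LLMs are actually built:

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ran".split()
bigrams = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a][b] += 1

# The model "understands" nothing: it only ranks continuations by frequency.
print(bigrams["the"].most_common(1))  # → [('cat', 2)]
```

Scaled up over trillions of tokens with a far richer notion of context, this is still prediction of surface patterns, which is exactly the limitation the IFM proposal targets.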
Borui Cai and Yao Zhao have introduced a concept they believe will bridge the gap between today’s chatbots and Artificial General Intelligence. Published in a preprint on arXiv, their research argues that existing foundation models suffer from severe limitations because they specialize in specific domains like vision or text. While a chatbot can tell you what a bicycle is, it does not understand the physics of riding one in the way a human does.
Most strikingly, the paper claims four genuinely new mathematical results, carefully verified by the human mathematicians involved. In a discipline where truth is eternal and progress is measured in decades, an AI contributed novel insights that helped settle previously unsolved problems. The authors stress these contributions are “modest in scope but profound in implication”—not because they’re minor, but because they represent a proof of concept. If GPT-5 can do this now, what comes next?
The paper carries an undercurrent of urgency: many scientists still don’t realize what’s possible. The authors are essentially saying, “Look, this is already working for us—don’t get left behind.” Yet they avoid boosterism, emphasizing the technology’s current limitations as clearly as its strengths.
What we’re learning from collaborations with scientists.
Session by SPRIND with Klaus Wagenbauer, CEO of Plectonic Biotech; Nicola Kegel, CEO of Nanogami; Christian Sigl, CEO of Capsitec; and Hendrik Dietz, Professor & Founder, Technical University of Munich.
While atmospheric turbulence is a familiar culprit of rough flights, the chaotic movement of turbulent flows remains an unsolved problem in physics. To gain insight into the system, a team of researchers used explainable AI to pinpoint the most important regions in a turbulent flow, according to a Nature Communications study led by the University of Michigan and the Universitat Politècnica de València.
A clearer understanding of turbulence could improve forecasting, helping pilots navigate around turbulent areas to avoid passenger injuries or structural damage. It can also help engineers manipulate turbulence, dialing it up to help industrial mixing like water treatment or dialing it down to improve fuel efficiency in vehicles.
“For more than a century, turbulence research has struggled with equations too complex to solve, experiments too difficult to perform, and computers too weak to simulate reality. Artificial Intelligence has now given us a new tool to confront this challenge, leading to a breakthrough with profound practical implications,” said Sergio Hoyas, a professor of aerospace engineering at the Universitat Politècnica de València and co-author of the study.
Not metaphorically—literally. The light intensity field becomes an artificial “gravity,” and the robot’s trajectory becomes a null geodesic, the same path light takes in warped spacetime.
By calculating the robot’s “energy” and “angular momentum” (just like planetary orbits), they mathematically prove: robots starting within 90 degrees of a target will converge exponentially, every time. No simulations or wishful thinking—it’s a theorem.
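One standard way to make the orbit analogy concrete (a general fact from ray optics, not necessarily the paper's exact derivation) is Bouguer's invariant: if the refractive index $n(r)$, here played by the light intensity field, is radially symmetric, Fermat's principle conserves an "angular momentum" along every ray:

```latex
% Bouguer's invariant: the optical analogue of angular momentum,
% where \psi is the angle between the ray (the robot's heading)
% and the radial direction.
n(r)\, r \sin\psi = L = \text{const}
```

Bounding conserved quantities like this $L$ and the analogous "energy" is what turns a convergence claim into a theorem rather than a simulation result.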
They use the Schwarz-Christoffel transformation (a tool from black hole physics) to “unfold” a maze onto a flat rectangle, program a simple path, then “fold” it back. The result: a single, static light pattern that both guides robots and acts as invisible walls they can’t cross.
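As a simplified illustration of the kind of conformal "unfolding" involved (assuming SciPy; this maps a half-plane boundary onto a rectangle edge, not the paper's actual maze geometry), the Schwarz-Christoffel map to a rectangle reduces to an incomplete elliptic integral:

```python
import numpy as np
from scipy.special import ellipk, ellipkinc

def sc_edge_map(x, m=0.5):
    """Schwarz-Christoffel map w -> F(arcsin(w), m) from the upper half-plane
    to a rectangle, evaluated here on the real segment (-1, 1), which lands on
    the rectangle's bottom edge between -K(m) and +K(m)."""
    return ellipkinc(np.arcsin(x), m)

# The boundary "unfolds" monotonically onto the straight edge of the rectangle.
edge = [float(sc_edge_map(x)) for x in np.linspace(-0.99, 0.99, 7)]
print(edge)                 # strictly increasing along the edge
print(float(ellipk(0.5)))   # K(m): half the edge length
```

Programming a straight path in the flat rectangle and pulling it back through the inverse map is what yields a single static light pattern in the original, folded domain.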
npj Robotics — Artificial spacetimes for reactive control of resource-limited robots. npj Robotics 3, 39 (2025). https://doi.org/10.1038/s44182-025-00058-9
Not exactly a brain chip per se, but a bit of nanotech.
While companies like Elon Musk’s Neuralink are hard at work on brain-computer interfaces that require surgery to cut open the skull and insert a complex array of wires into a person’s head, a team of researchers at MIT has been developing a wireless electronic brain implant that they say could provide a non-invasive alternative, making the technology far easier to access.
They describe the system, called Circulatronics, as more of a treatment platform than a one-off brain chip. Working with researchers from Wellesley College and Harvard University, the MIT team recently released a paper on the new technology, which they describe as an autonomous bioelectronic implant.
As New Atlas points out, the Circulatronics platform starts with an injectable swarm of sub-cellular sized wireless electronic devices, or “SWEDs,” which can travel into inflamed regions of the patient’s brain after being injected into the bloodstream. They do so by fusing with living immune cells, called monocytes, forming a sort of cellular cyborg.
Ever wondered what ancient languages sounded like?