
Are Five Senses Holding Us Back? Scientists Say We Could Use Seven

A mathematical model shows that memory capacity is maximized when each stored object is represented by seven features. The study links this to the potential for seven senses, with applications in AI and neuroscience. Skoltech researchers developed the model to study how memory works.

Gemini achieves gold-level performance at the International Collegiate Programming Contest World Finals

Gemini 2.5 Deep Think achieves breakthrough performance at the world’s most prestigious computer programming competition, demonstrating a profound leap in abstract problem solving.

An advanced version of Gemini 2.5 Deep Think has achieved gold-medal level performance at the 2025 International Collegiate Programming Contest (ICPC) World Finals.

This milestone builds directly on Gemini 2.5 Deep Think’s gold-medal win at the International Mathematical Olympiad (IMO) just two months ago. Innovations from these efforts will continue to be integrated into future versions of Gemini Deep Think, expanding the frontier of advanced AI capabilities accessible to students and researchers.

Neuromorphic Intelligence Leverages Dynamical Systems Theory To Model Inference And Learning In Sustainable, Adaptable Systems

The pursuit of artificial intelligence increasingly focuses on replicating the efficiency and adaptability of the human brain, and a new approach, termed neuromorphic intelligence, offers a promising path forward. Marcel van Gerven from Radboud University and colleagues demonstrate how brain-inspired systems can achieve significantly greater energy efficiency than conventional digital computers. This research establishes a unifying theoretical framework, rooted in dynamical systems theory, to integrate insights from diverse fields including neuroscience, physics, and artificial intelligence. By harnessing noise as a learning resource and employing differential genetic programming, the team advances the development of truly adaptive and sustainable artificial intelligence, paving the way for emergent intelligence arising directly from physical substrates.


Researchers demonstrate that applying dynamical systems theory, a mathematical framework describing change over time, to artificial intelligence enables the creation of more sustainable and adaptable systems by harnessing noise as a learning tool and allowing intelligence to emerge from the physical properties of the system itself.
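The summary above does not spell out the authors' equations, but the general idea of treating noise as a resource in a learning dynamical system can be illustrated with a minimal Langevin-style sketch. The toy loss surface, step size, and annealing schedule below are arbitrary choices for illustration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy loss surface: a bumpy 1-D landscape with its global minimum near x = 2.
def loss(x):
    return (x - 2.0) ** 2 + 0.5 * np.sin(5.0 * x)

def grad(x, eps=1e-5):
    # Finite-difference gradient of the toy loss.
    return (loss(x + eps) - loss(x - eps)) / (2 * eps)

# Langevin-style dynamics: the state drifts down the gradient while injected
# noise lets it escape shallow local minima; the noise is slowly annealed.
x = -3.0
dt, temperature = 0.01, 0.3
for step in range(5000):
    x += -grad(x) * dt + np.sqrt(2 * temperature * dt) * rng.normal()
    temperature *= 0.999

print(f"final state x = {x:.3f}, loss = {loss(x):.3f}")
```

The point of the sketch is only that noise here is not a nuisance to be filtered out: it is part of the dynamics that lets the system find a good state.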

Doing The Math On CPU-Native AI Inference

A number of chip companies — importantly Intel and IBM, but also the Arm collective and AMD — have recently come out with new CPU designs that feature native support for artificial intelligence (AI) and the machine learning (ML) that underpins it. The need for math engines specifically designed to support machine learning algorithms, particularly for inference workloads but also for certain kinds of training, has been covered extensively here at The Next Platform.

Just to rattle off a few of them, consider the impending “Cirrus” Power10 processor from IBM, which is due in a matter of days from Big Blue in its high-end NUMA machines and which has a new matrix math engine aimed at accelerating machine learning. Or IBM’s “Telum” z16 mainframe processor coming next year, which was unveiled at the recent Hot Chips conference and which has a dedicated mixed precision matrix math core for the CPU cores to share. Intel is adding its Advanced Matrix Extensions (AMX) to its future “Sapphire Rapids” Xeon SP processors, which should have been here by now but which have been pushed out to early next year. Arm Holdings has created future Arm core designs, the “Zeus” V1 core and the “Perseus” N2 core, that will have substantially wider vector engines that support the mixed precision math commonly used for machine learning inference, too. Ditto for the vector engines in the “Milan” Epyc 7003 processors from AMD.

All of these chips are designed to keep inference on the CPUs, where in a lot of cases it belongs because of data security, data compliance, and application latency reasons.
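As a rough illustration of the kind of arithmetic these matrix engines accelerate, here is a minimal sketch of int8 matrix math with int32 accumulation and fp32 dequantization. The layer sizes and the symmetric quantization scheme are arbitrary choices for illustration, not tied to any particular chip's instruction set:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fp32 weights and activations from a small layer.
W = rng.normal(size=(64, 128)).astype(np.float32)
x = rng.normal(size=(128,)).astype(np.float32)

def quantize(t):
    # Symmetric int8 quantization: the largest magnitude maps to 127.
    scale = np.abs(t).max() / 127.0
    return np.round(t / scale).astype(np.int8), scale

Wq, w_scale = quantize(W)
xq, x_scale = quantize(x)

# Int8 multiply with int32 accumulation: the low-precision pattern that
# on-CPU matrix engines are built to execute in hardware.
acc = Wq.astype(np.int32) @ xq.astype(np.int32)

# Dequantize back to fp32 and compare against the full-precision result.
y_int8 = acc.astype(np.float32) * (w_scale * x_scale)
y_fp32 = W @ x
print("max abs error:", np.abs(y_int8 - y_fp32).max())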

Machine learning unravels quantum atomic vibrations in materials

Caltech scientists have developed an artificial intelligence (AI)–based method that dramatically speeds up calculations of the quantum interactions that take place in materials. In new work, the group focuses on interactions among atomic vibrations, or phonons—interactions that govern a wide range of material properties, including heat transport, thermal expansion, and phase transitions. The new machine learning approach could be extended to compute all quantum interactions, potentially enabling encyclopedic knowledge about how particles and excitations behave in materials.

Scientists like Marco Bernardi, professor of applied physics, physics, and materials science at Caltech, and his graduate student Yao Luo (MS ’24) have been trying to find ways to speed up the gargantuan calculations required to understand such particle interactions from first principles in real materials—that is, beginning with only a material’s atomic structure and the laws of quantum mechanics.

Last year, Bernardi and Luo developed a data-driven method based on a technique called singular value decomposition (SVD) to simplify the enormous mathematical matrices scientists use to represent the interactions between electrons and phonons in a material.
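The paper's actual matrices and rank choices are not reproduced here, but the basic SVD compression idea can be sketched on a synthetic matrix with a rapidly decaying spectrum; the matrix sizes, decay rate, and truncation rank below are arbitrary stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a large interaction matrix with rapidly decaying structure
# (real electron-phonon matrices are physically derived and far larger).
m, n, r = 2000, 1500, 40
components = rng.normal(size=(m, r)) * np.exp(-0.3 * np.arange(r))
A = components @ rng.normal(size=(r, n)) + 1e-4 * rng.normal(size=(m, n))

# Thin SVD, then keep only the k largest singular values and vectors.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 10
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]

rel_err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
stored = (U[:, :k].size + k + Vt[:k, :].size) / A.size
print(f"rank-{k} approximation: relative error {rel_err:.2e}, "
      f"stores only {stored:.1%} of the original entries")
```

When the spectrum decays quickly, a small number of singular components captures almost all of the matrix while storing only a small fraction of its entries, which is the essence of the data-driven compression described above.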

Systematic fraud uncovered in mathematics publications

An international team of authors led by Ilka Agricola, professor of mathematics at the University of Marburg, Germany, has investigated fraudulent practices in the publication of research results in mathematics on behalf of the German Mathematical Society (DMV) and the International Mathematical Union (IMU), documenting systematic fraud over many years.

The results of the study were recently posted on the arXiv preprint server and published in the Notices of the American Mathematical Society, and have since caused a stir among mathematicians.

To help address the problem, the study also provides recommendations for the publication of research results in mathematics.

Tesla AI5 & AI6 Chips “Compressing Reality”?! What Did Elon See?!

Elon Musk has revealed Tesla’s new AI chips, AI5 and AI6, which will drive the company’s shift towards AI-powered services, enabling significant advancements in Full Self-Driving capabilities and potentially revolutionizing the self-driving car industry and beyond.

## Questions to inspire discussion

Tesla’s AI Chip Advancements

🚀 Q: What are the key features of Tesla’s AI5 and AI6 chips? A: Tesla’s AI5 and AI6 chips are inference-first designs built for high-throughput, efficient processing of AI models on devices such as Tesla vehicles, Optimus robots, and Grok voice agents, and they are 40x faster than previous models.

💻 Q: How do Tesla’s AI5 and AI6 chips compare to previous models? A: Tesla’s AI5 chip is a 40x improvement over AI4, with 500 TOPS expanding to 5,000 TOPS, enabling excellent performance in full self-driving and Optimus humanoid robots.

🧠 Q: What is the significance of softmax in Tesla’s AI5 chip? A: AI5 is designed to run softmax natively in a few steps, unlike AI4 which relies on CPU and runs softmax in 40 steps in emulation mode.
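For reference, softmax is the normalization step that turns a vector of scores into probabilities; a minimal, numerically stable sketch is shown below. The input values are arbitrary, and nothing here models how AI4 or AI5 actually execute the operation in hardware, or verifies the step counts quoted above:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax: subtract the max before exponentiating."""
    shifted = logits - np.max(logits)   # avoids overflow in exp()
    exps = np.exp(shifted)
    return exps / exps.sum()

scores = np.array([2.0, 1.0, 0.1])
print(softmax(scores))   # roughly [0.659 0.242 0.099]
```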

Mathematical model of memory suggests seven senses are optimal

Skoltech scientists have devised a mathematical model of memory. By analyzing its new model, the team came to surprising conclusions that could prove useful for robot design, artificial intelligence, and for better understanding of human memory. Published in Scientific Reports, the study suggests there may be an optimal number of senses—if so, those of us with five senses could use a couple more.

“Our conclusion is, of course, highly speculative in application to human senses, although you never know: It could be that humans of the future would evolve a sense of radiation or magnetic field. But in any case, our findings may be of practical importance for robotics and the theory of artificial intelligence,” said study co-author Professor Nikolay Brilliantov of Skoltech AI.

“It appears that when each object retained in memory is characterized in terms of seven features—as opposed to, say, five or eight—the number of distinct objects held in memory is maximized.”

Mathematical ‘sum of zeros’ trick exposes topological magnetization in quantum materials

A new study addresses a foundational problem in the theory of driven quantum matter by extending the Středa formula to non-equilibrium regimes. It demonstrates that a superficially trivial “sum of zeros” encodes a universal, quantized magnetic response—one that is intrinsically topological and uniquely emergent under non-equilibrium driving conditions.
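For orientation, the equilibrium Středa formula that the study generalizes relates the Hall conductivity to how the particle density changes with magnetic field at fixed chemical potential. In one common convention it reads (the paper's non-equilibrium, Floquet-regime extension is not reproduced here):

```latex
% Equilibrium Streda formula (one common convention); the paper's
% non-equilibrium (Floquet) generalization is not shown here.
\sigma_{xy} \;=\; e \left.\frac{\partial n}{\partial B}\right|_{\mu}
```

Here $\sigma_{xy}$ is the Hall conductivity, $n$ the particle density, $B$ the magnetic field, and $\mu$ the chemical potential held fixed.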

Imagine a strange material being rhythmically pushed—tapped again and again by invisible hands. These are periodically driven quantum systems, or Floquet systems, where energy is no longer conserved in the usual sense. Instead, physicists speak of quasienergy—a looping spectrum with no clear start or end.

When scientists measure how such a system responds to a magnetic field, every single contribution seems to vanish—like adding an infinite list of zeros. And yet, the total stubbornly comes out finite, quantized, and very real.
