A mathematical model shows that memory capacity is maximized when each concept is represented by seven features. The study links this to the potential for seven senses, with applications in AI and neuroscience. Skoltech researchers developed the model to study how memory works, and their analysis led to unexpected conclusions about the optimal number of features per stored concept.
Gemini 2.5 Deep Think achieves breakthrough performance at the world’s most prestigious computer programming competition, demonstrating a profound leap in abstract problem solving.
This milestone builds directly on Gemini 2.5 Deep Think’s gold-medal win at the International Mathematical Olympiad (IMO) just two months ago. Innovations from these efforts will continue to be integrated into future versions of Gemini Deep Think, expanding the frontier of advanced AI capabilities accessible to students and researchers.
The pursuit of artificial intelligence increasingly focuses on replicating the efficiency and adaptability of the human brain, and a new approach, termed neuromorphic intelligence, offers a promising path forward. Marcel van Gerven from Radboud University and colleagues demonstrate how brain-inspired systems can achieve significantly greater energy efficiency than conventional digital computers. This research establishes a unifying theoretical framework, rooted in dynamical systems theory, to integrate insights from diverse fields including neuroscience, physics, and artificial intelligence. By harnessing noise as a learning resource and employing differential genetic programming, the team advances the development of truly adaptive and sustainable artificial intelligence, paving the way for emergent intelligence arising directly from physical substrates.
Researchers demonstrate that applying dynamical systems theory, a mathematical framework describing change over time, to artificial intelligence enables the creation of more sustainable and adaptable systems. The approach harnesses noise as a learning tool and allows intelligence to emerge from the physical properties of the system itself.
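One concrete way noise can act as a learning resource is perturbation-based optimization, where random fluctuations propose parameter changes and only improvements are kept. The sketch below is a generic illustration of that idea, not the authors' differential genetic programming method; the toy loss function and step size are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy loss: squared distance of the weights w from an unknown target.
target = np.array([1.0, -2.0, 0.5])

def loss(w):
    return float(np.sum((w - target) ** 2))

# Noise-driven learning: Gaussian noise is the only source of search
# directions; no gradients are computed. Perturbations that lower the
# loss are kept, all others are discarded.
w = np.zeros(3)
for _ in range(2000):
    trial = w + 0.1 * rng.normal(size=3)
    if loss(trial) < loss(w):
        w = trial
```

Because the update rule needs only loss evaluations, not derivatives, schemes like this can in principle run on physical substrates where the "noise" is supplied by the device itself.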
Several chip companies, most notably Intel and IBM but also AMD and the Arm ecosystem, have recently introduced new CPU designs with native support for Artificial Intelligence (AI) and machine learning (ML) workloads. The need for math engines specifically designed to support machine learning algorithms, particularly for inference workloads but also for certain kinds of training, has been covered extensively here at The Next Platform.
All of these chips are designed to keep inference on the CPU, where it often belongs for reasons of data security, data compliance, and application latency.
Climate science is the most significant scientific collaboration in history. This series from Quanta Magazine guides you through basic climate science — from quantum effects to ancient hothouses, from the math of tipping points to the audacity of climate models.
Caltech scientists have developed an artificial intelligence (AI)–based method that dramatically speeds up calculations of the quantum interactions that take place in materials. In new work, the group focuses on interactions among atomic vibrations, or phonons—interactions that govern a wide range of material properties, including heat transport, thermal expansion, and phase transitions. The new machine learning approach could be extended to compute all quantum interactions, potentially enabling encyclopedic knowledge about how particles and excitations behave in materials.
Scientists like Marco Bernardi, professor of applied physics, physics, and materials science at Caltech, and his graduate student Yao Luo (MS ‘24) have been trying to find ways to speed up the gargantuan calculations required to understand such particle interactions from first principles in real materials—that is, beginning with only a material’s atomic structure and the laws of quantum mechanics.
An international team of authors led by Ilka Agricola, professor of mathematics at the University of Marburg, Germany, has investigated fraudulent practices in the publication of research results in mathematics on behalf of the German Mathematical Society (DMV) and the International Mathematical Union (IMU), documenting systematic fraud over many years.
The results of the study were recently posted on the arXiv preprint server and in the Notices of the American Mathematical Society and have since caused a stir among mathematicians.
To address the problem, the study also provides recommendations for how research results in mathematics should be published.
Elon Musk has revealed Tesla’s new AI chips, AI5 and AI6, which will drive the company’s shift towards AI-powered services, enabling significant advancements in Full Self-Driving capabilities and potentially revolutionizing the self-driving car industry and beyond.
## Questions to inspire discussion

### Tesla's AI Chip Advancements
🚀 Q: What are the key features of Tesla's AI5 and AI6 chips?
A: They are inference-first designs built for high-throughput, efficient execution of AI models on devices such as cars, Optimus robots, and Grok voice agents, reportedly 40x faster than previous models.

💻 Q: How do Tesla's AI5 and AI6 chips compare to previous models?
A: The AI5 chip is described as a 40x improvement over AI4, with 500 TOPS expanding to 5,000 TOPS, enabling strong performance in Full Self-Driving and the Optimus humanoid robot.

🧠 Q: What is the significance of softmax in Tesla's AI5 chip?
A: AI5 is designed to run softmax natively in a few steps, unlike AI4, which falls back to the CPU and runs softmax in roughly 40 steps in emulation mode.
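For readers unfamiliar with the operation: softmax converts a vector of raw scores (logits) into a probability distribution, and it is exponential-heavy, which is why native hardware support matters. A minimal, numerically stable sketch in Python (illustrative only, unrelated to Tesla's actual implementation):

```python
import numpy as np

def softmax(x):
    # Subtract the max before exponentiating so exp() never overflows;
    # the shift cancels out in the normalization.
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)  # non-negative values that sum to 1
```

Every transformer attention layer applies this operation across its attention scores, so accelerating it directly speeds up model inference.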
Skoltech scientists have devised a mathematical model of memory. By analyzing the new model, the team came to surprising conclusions that could prove useful for robot design, artificial intelligence, and a better understanding of human memory. Published in Scientific Reports, the study suggests there may be an optimal number of senses; if so, those of us with five senses could use a couple more.
“Our conclusion is, of course, highly speculative in application to human senses, although you never know: It could be that humans of the future would evolve a sense of radiation or magnetic field. But in any case, our findings may be of practical importance for robotics and the theory of artificial intelligence,” said study co-author Professor Nikolay Brilliantov of Skoltech AI.
“It appears that when each concept retained in memory is characterized in terms of seven features—as opposed to, say, five or eight—the number of distinct objects held in memory is maximized.”
A new study addresses a foundational problem in the theory of driven quantum matter by extending the Středa formula to non-equilibrium regimes. It demonstrates that a superficially trivial “sum of zeros” encodes a universal, quantized magnetic response—one that is intrinsically topological and uniquely emergent under non-equilibrium driving conditions.
Imagine a strange material being rhythmically pushed—tapped again and again by invisible hands. These are periodically driven quantum systems, or Floquet systems, where energy is no longer conserved in the usual sense. Instead, physicists speak of quasienergy—a looping spectrum with no clear start or end.
When scientists measure how such a system responds to a magnetic field, every single contribution seems to vanish—like adding an infinite list of zeros. And yet, the total stubbornly comes out finite, quantized, and very real.
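For context, the equilibrium Středa formula relates the Hall conductivity to how the particle density \(n\) changes with magnetic field \(B\) at fixed chemical potential \(\mu\); the new work extends this relation to driven, non-equilibrium Floquet systems:

```latex
\sigma_{xy} = e \left( \frac{\partial n}{\partial B} \right)_{\mu, T}
```

In a quantum Hall phase this derivative is quantized, giving \(\sigma_{xy} = (e^2/h)\,C\) with integer Chern number \(C\), which is the sense in which a seemingly vanishing response can still sum to a finite, quantized value.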