AI can now turn brain scans into text. It’s not mind reading, but it’s pretty close.
Researchers have carried out the world’s first Milky Way simulation to accurately represent more than 100 billion individual stars over the course of 10,000 years. The feat was accomplished by combining artificial intelligence (AI) with numerical simulations. Not only does the simulation represent 100 times more individual stars than previous state-of-the-art models, it was also produced more than 100 times faster.
Published in Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, the study represents a breakthrough at the intersection of astrophysics, high-performance computing, and AI. Beyond astrophysics, this new methodology can be used to model other phenomena such as climate change and weather patterns.
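The study’s exact architecture isn’t described here, but a common pattern for combining AI with numerical simulation is to let a trained surrogate stand in for the most expensive fine-scale physics inside the time-stepping loop. The sketch below illustrates that general pattern only; the function names and the toy physics are placeholders, not the paper’s method.

```python
import numpy as np

def step_numerical(state: np.ndarray, dt: float) -> np.ndarray:
    """Conventional solver step for the large-scale dynamics (placeholder physics)."""
    return state + dt * np.gradient(state)   # stand-in for a real integrator

def surrogate(state: np.ndarray) -> np.ndarray:
    """Trained model standing in for the expensive fine-scale physics (here: identity)."""
    return state                              # a real surrogate would be a neural network

def simulate(state: np.ndarray, n_steps: int, dt: float) -> np.ndarray:
    # Alternate cheap numerical steps with AI-predicted fine-scale outcomes.
    for _ in range(n_steps):
        state = step_numerical(state, dt)     # e.g. gravity, orbital dynamics
        state = surrogate(state)              # e.g. predicted small-scale evolution
    return state

print(simulate(np.linspace(0.0, 1.0, 100), n_steps=10, dt=0.01).shape)
```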
Today’s large language models can plan multi-step actions, generate creative solutions, and assist in complex decision-making. Yet these strengths fade when tasks stretch over long, dependent sequences. Even small per-step error rates compound quickly, turning impressive short-term performance into complete long-term failure.
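A quick back-of-the-envelope calculation makes the compounding concrete (illustrative numbers only, not taken from any study cited here): if each step succeeds with probability 1 − ε, a chain of N dependent steps succeeds with probability (1 − ε)^N.

```python
# Back-of-the-envelope: probability that a long chain of dependent steps all
# succeed, assuming each step fails independently with probability eps.
def chain_success(eps: float, n_steps: int) -> float:
    """Probability that all n_steps succeed when each succeeds with prob (1 - eps)."""
    return (1.0 - eps) ** n_steps

for eps in (0.001, 0.01):                      # 99.9% and 99% per-step accuracy
    for n in (100, 1_000, 10_000, 1_000_000):
        print(f"eps={eps:.3f}, steps={n:>9,}: P(all correct) = {chain_success(eps, n):.4f}")
```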
That fragility poses a fundamental obstacle for real-world systems. Most large-scale human and organizational processes – from manufacturing and logistics to finance, healthcare, and governance – depend on millions of actions executed precisely and in order. A single mistake can cascade through an entire pipeline. For AI to become a reliable participant in such processes, it must do more than reason well. It must maintain flawless execution over time, sustaining accuracy across millions of interdependent steps.
Apple’s recent study, The Illusion of Thinking, captured this challenge vividly. Researchers tested advanced reasoning models such as Claude 3.7 Thinking and DeepSeek-R1 on structured puzzles like Towers of Hanoi, where each additional disk doubles the number of required moves. The results revealed a sharp reliability cliff: models performed perfectly on simple problems but failed completely once the task crossed about eight disks, even when token budgets were sufficient. In short, more “thinking” led to less consistent reasoning.
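To see why the difficulty ramps so steeply: the minimal solution for n disks takes 2^n − 1 moves, so each extra disk roughly doubles the required sequence. The snippet below generates that minimal move sequence; it is a generic illustration, not the evaluation harness used in the study.

```python
def hanoi_moves(n: int, src: str = "A", dst: str = "C", aux: str = "B") -> list[tuple[str, str]]:
    """Return the minimal move sequence for n disks (2**n - 1 moves)."""
    if n == 0:
        return []
    return (hanoi_moves(n - 1, src, aux, dst)   # move n-1 disks out of the way
            + [(src, dst)]                      # move the largest disk
            + hanoi_moves(n - 1, aux, dst, src))  # move n-1 disks back on top

for n in (3, 8, 12):
    print(f"{n} disks -> {len(hanoi_moves(n))} moves")   # 7, 255, 4095
```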
Researchers at Aalto University have demonstrated single-shot tensor computing at the speed of light, a remarkable step towards next-generation artificial general intelligence hardware powered by optical computation rather than electronics.
Tensor operations are the kind of arithmetic that forms the backbone of nearly all modern technologies, especially artificial intelligence, yet they extend beyond the simple math we’re familiar with. Imagine the mathematics behind rotating, slicing, or rearranging a Rubik’s cube along multiple dimensions. While humans and classical computers must perform these operations step by step, light can do them all at once.
Today, every task in AI, from image recognition to natural language processing, relies on tensor operations. However, the explosion of data has pushed conventional digital computing platforms, such as GPUs, to their limits in terms of speed, scalability and energy consumption.
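As a concrete (and generic) example of the tensor operations meant here, the contraction below is the workhorse behind a dense neural-network layer; a GPU evaluates it as many sequential multiply-accumulates, whereas the optical approach described above aims to perform such contractions in a single shot.

```python
import numpy as np

# A batch of input vectors (batch, features) and a weight tensor (features, outputs).
x = np.random.rand(32, 128)     # 32 inputs with 128 features each
W = np.random.rand(128, 64)     # a dense layer's weights

# The core tensor contraction behind a neural-network layer:
# sum over the shared "features" axis.
y = np.einsum("bf,fo->bo", x, W)
print(y.shape)                  # (32, 64)
```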
A lot of people are talking about an AI bubble, since it is normal for a new technology to explode in growth for a while, collapse a bit, and then eventually move forward again.
WE ARE NOT IN AN AI BUBBLE. THE SINGULARITY HAS BEGUN.
There will not be a single year between now and the coming AI takeover in which worldwide AI data center spending declines.
We show that an agentic large language model (LLM) (OpenAI o3 with deep research) can autonomously reason, write code, and iteratively refine hypotheses to derive a physically interpretable equation for competitive adsorption on metal-organic layers (MOLs)—an open problem our lab had struggled with for months. In a single 29-min session, o3 formulated the governing equations, generated fitting scripts, diagnosed shortcomings, and produced a compact three-parameter model that quantitatively matches experiments across a dozen carboxylic acids.
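The paper’s actual equation isn’t reproduced in this summary, so the sketch below uses a competitive Langmuir isotherm purely as an assumed stand-in for a compact three-parameter model, fitted to hypothetical data with SciPy; the functional form and all numbers are illustrative assumptions, not results from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def competitive_langmuir(X, q_max, K_a, K_b):
    """Assumed stand-in form: uptake of acid A in the presence of competitor B."""
    c_a, c_b = X
    return q_max * K_a * c_a / (1.0 + K_a * c_a + K_b * c_b)

# Hypothetical concentrations (mM) and uptakes (arbitrary units), for illustration only.
c_a = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])
c_b = np.full(6, 1.0)
uptake = np.array([0.08, 0.31, 0.48, 0.66, 0.82, 0.90])

# Fit the three parameters (q_max, K_a, K_b) to the data.
params, _ = curve_fit(competitive_langmuir, (c_a, c_b), uptake, p0=[1.0, 1.0, 1.0])
print(dict(zip(["q_max", "K_a", "K_b"], params)))
```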
New software enables brain simulations that both imitate the processes in the brain in detail and solve challenging cognitive tasks. The program was developed by a research team at the Cluster of Excellence “Machine Learning: New Perspectives for Science” at the University of Tübingen. The software thus forms the basis for a new generation of brain simulations that allow deeper insights into the functioning and performance of the brain. The Tübingen researchers’ paper has been published in the journal Nature Methods.
For decades, researchers have been trying to create computer models of the brain in order to increase understanding of the organ and the processes that take place there. Using mathematical methods, they have simulated the behavior and interaction of nerve cells and their connections.
However, previous models had significant weaknesses: they were either based on oversimplified neuron models and therefore strayed far from biological reality, or they depicted the biophysical processes within cells in detail but were incapable of carrying out tasks similar to those the brain performs.
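To make the trade-off concrete, the “oversimplified” end of the spectrum is typified by point-neuron models such as the leaky integrate-and-fire neuron sketched below; this is a generic textbook model shown for illustration, not the Tübingen software.

```python
import numpy as np

# A leaky integrate-and-fire neuron: the classic "oversimplified" point-neuron model.
# Membrane voltage decays toward rest and emits a spike when it crosses a threshold;
# all spatial structure and ion-channel biophysics is ignored.
def simulate_lif(current, dt=0.1, tau=10.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
    v = v_rest
    spike_times = []
    for t, i_ext in enumerate(current):
        dv = (-(v - v_rest) + i_ext) / tau    # leak toward rest plus input drive
        v += dt * dv
        if v >= v_thresh:                      # threshold crossing counts as a spike
            spike_times.append(t * dt)
            v = v_reset
    return spike_times

# Constant input current for 100 ms (units are illustrative).
spikes = simulate_lif(np.full(1000, 20.0))
print(f"{len(spikes)} spikes")
```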