Over The Horizon
Category: robotics/AI
The 2030 Deadline: $14/Hour Robots & The End of Human Work


A Systems View of LLMs on TPUs
Training LLMs often feels like alchemy, but understanding and optimizing the performance of your models doesn’t have to. This book aims to demystify the science of scaling language models: how TPUs (and GPUs) work and how they communicate with each other, how LLMs run on real hardware, and how to parallelize your models during training and inference so they run efficiently at massive scale. If you’ve ever wondered “how expensive should this LLM be to train” or “how much memory do I need to serve this model myself” or “what’s an AllGather”, we hope this will be useful to you.
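The "how much memory" and "how expensive to train" questions quoted above have simple first-order answers. Below is a minimal sketch (not from the book itself): weight memory is parameter count times bytes per parameter, and training compute is commonly approximated by the 6ND rule (6 FLOPs per parameter per training token). It ignores KV cache, activations, and framework overhead, so treat the numbers as floors, not estimates of real deployments.

```python
def serving_memory_gib(num_params: float, bytes_per_param: int = 2) -> float:
    """First-order weight memory needed to serve a model.

    Assumes weights only (no KV cache, activations, or runtime overhead).
    bytes_per_param: 4 for fp32, 2 for bf16/fp16, 1 for int8.
    """
    return num_params * bytes_per_param / 2**30


def training_flops(num_params: float, num_tokens: float) -> float:
    """Standard 6ND approximation for total training compute."""
    return 6 * num_params * num_tokens


# A hypothetical 70B-parameter model served in bf16:
print(round(serving_memory_gib(70e9), 1))  # ~130.4 GiB of weights alone

# Training that model on 1.4T tokens:
print(training_flops(70e9, 1.4e12))  # ~5.9e23 FLOPs
```

Dividing the FLOP count by a chip's sustained FLOP/s (and the weight memory by per-chip HBM) gives the rough chip-count and cost envelopes the book's questions are asking about.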

Mathematical model of memory suggests seven senses are optimal
Skoltech scientists have devised a mathematical model of memory. By analyzing the new model, the team came to surprising conclusions that could prove useful for robot design, artificial intelligence, and a better understanding of human memory. Published in Scientific Reports, the study suggests there may be an optimal number of senses—if so, those of us with five senses could use a couple more.
“Our conclusion is, of course, highly speculative in application to human senses, although you never know: It could be that humans of the future would evolve a sense of radiation or magnetic field. But in any case, our findings may be of practical importance for robotics and the theory of artificial intelligence,” said study co-author Professor Nikolay Brilliantov of Skoltech AI.
“It appears that when each concept retained in memory is characterized in terms of seven features—as opposed to, say, five or eight—the number of distinct objects held in memory is maximized.”


Atom-thin crystals provide new way to power the future of computer memory
Picture the smartphone in your pocket, the data centers powering artificial intelligence, or the wearable health monitors that track your heartbeat. All of them rely on energy-hungry memory chips to store and process information. As demand for computing resources continues to soar, so does the need for memory devices that are smaller, faster, and far more efficient.
A new study by Auburn physicists has taken an important step toward meeting this challenge.
The study, “Electrode-Assisted Switching in Memristors Based on Single-Crystal Transition Metal Dichalcogenides,” published in ACS Applied Materials & Interfaces, shows how memristors (ultra-thin memory devices that “remember” past electrical signals) switch their state with the help of electrodes and subtle atomic changes inside the material.

Brain cells simulated in the electronic brain
Europe now has an exascale supercomputer which runs entirely on renewable energy. Of particular interest: one of the 30 inaugural projects for the machine focuses on realistic simulations of biological neurons (see https://www.fz-juelich.de/en/news/effzett/2024/brain-research)
https://www.nature.com/articles/d41586-025-02981-1
Large language models (LLMs) work with artificial neural networks inspired by the way the brain works. Dr. Thorsten Hater (JSC) focuses on the biological originals of those networks: the neurons that communicate with each other in the human brain. He wants to use the exascale computer JUPITER to perform even more realistic simulations of the behaviour of individual neurons.
Many models treat a neuron merely as a point that is connected to other points. The spikes, or electrical signals, travel along these connections. “Of course, this is overly simplified,” says Hater. “In our model, the neurons have a spatial extension, as they do in reality. This allows us to describe many processes in detail on the molecular level. We can calculate the electric field across the entire cell. And we can thus show how signal transmission varies right down to the individual neuron. This gives us a much more realistic picture of these processes.”
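The "spatial extension" Hater describes is, in standard multi-compartment modelling, governed by the cable equation for the membrane potential along a neurite. As a sketch (the passive form only; realistic simulations add active ion-channel currents), it reads:

```latex
% Passive cable equation for the membrane potential V(x, t):
%   lambda = space constant, tau = membrane time constant.
\lambda^{2} \frac{\partial^{2} V}{\partial x^{2}}
  = \tau \frac{\partial V}{\partial t} + V
```

Discretizing each cell into many compartments turns this PDE into a large system of coupled ODEs, one per compartment, which is why simulating millions of spatially resolved neurons is an exascale-sized problem.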
For the simulations, Hater uses a program called Arbor. This allows more than two million individual cells to be interconnected computationally. Such models of natural neural networks are useful, for example, in the development of drugs to combat neurodegenerative diseases like Alzheimer’s. The physicist and software developer would like to simulate and study the changes that take place in the neurons in the brain on the exascale computer.
The man who triggered the AI explosion
A NewsPicks crew flew to Canada in search of the AI revolution and unexpectedly came across the true story behind the birth of AI.
The AI Takeover Is Closer Than You Think
AI experts from around the world believe that, at its current rate of progress, AI may reach the most dangerous milestone in human history by 2027: the point of no return, when AI could stop being a tool and start improving itself beyond our control, a moment when humanity may never catch up.
00:00 The AI Takeover Is Closer Than You Think.
01:05 The rise of AI in text, art & video.
02:00 What is the Technological Singularity?
04:06 AI’s impact on jobs & economy.
05:31 What happens when AI surpasses human intellect.
08:36 AlphaGo vs world champion Lee Sedol.
11:10 Can we really “turn off” AI?
12:12 Narrow AI vs Artificial General Intelligence (AGI).
16:39 AGI (Artificial General Intelligence)
18:01 From AGI to Superintelligence.
20:18 Ethical concerns & defining intelligence.
22:36 Neuralink and human-AI integration.
25:54 Experts warning of 2027 AGI.