
AI helps explain how covert attention works and uncovers new neuron types

Shifting focus on a visual scene without moving our eyes—think driving, or reading a room for the reaction to your joke—is a behavior known as covert attention. We do it all the time, but little is known about its neurophysiological foundation.

Now, using convolutional neural networks (CNNs), UC Santa Barbara researchers Sudhanshu Srivastava, Miguel Eckstein and William Wang have uncovered the underpinnings of covert attention, and in the process, have found new, emergent neuron types, which they confirmed in real life using data from mouse brain studies.

“This is a clear case of AI advancing neuroscience, cognitive sciences and psychology,” said Srivastava, a former graduate student in Eckstein’s lab who is now a postdoctoral researcher at UC San Diego.

Sub-millimeter-sized robots can sense, ‘think’ and act on their own

Robots small enough to travel autonomously through the human body to repair damaged sites may seem the stuff of science fiction dreams. But this vision of surgery on a microscale is a step closer to reality, with news that researchers from the University of Pennsylvania and the University of Michigan have built a robot smaller than a millimeter that has an onboard computer and sensors.

Scientists have been trying for decades to develop microscopic robots, not only for medical applications but also for environmental monitoring and manufacturing. However, they have faced formidable challenges. Existing microbots typically require large, external control systems, such as powerful magnets and lasers, and cannot make autonomous decisions in unfamiliar environments.

AI helps solve decades-old maze in frustrated magnet physics

The study, conducted by Brookhaven theoretical physicist Weiguo Yin and described in a recent paper published in Physical Review B, is the first publication to emerge from the “AI Jam Session” held earlier this year, a first-of-its-kind event hosted by DOE in cooperation with OpenAI to push the limits of general-purpose large language models applied to science research. The event brought together approximately 1,600 scientists across nine host locations within the DOE national laboratory complex. At Brookhaven, more than 120 scientists challenged and evaluated the capabilities of OpenAI’s latest step-based logical reasoning AI model built for complex problem solving.

Yin’s AI study focused on a class of advanced materials known as frustrated magnets. In these systems, the electron spins—the tiny magnetic moments carried by each electron—cannot settle on an orientation because competing interactions pull them in different directions. These materials have unique and fascinating properties that could translate to novel applications in the energy and information technology industries.
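The competing interactions described above can be made concrete with the textbook toy case of frustration (an illustrative example, not taken from Yin’s paper): three Ising spins on a triangle with antiferromagnetic coupling. Every bond wants its two spins antiparallel, but no arrangement of three spins can satisfy all three bonds at once.

```python
from itertools import product

# Toy illustration: three Ising spins (values +1 or -1) on a triangle,
# antiferromagnetic coupling J > 0. Energy E = J * sum of s_i * s_j over
# the three bonds; each bond is "happiest" when s_i * s_j = -1.
J = 1.0
bonds = [(0, 1), (1, 2), (2, 0)]

configs = []
for spins in product([-1, 1], repeat=3):
    energy = J * sum(spins[i] * spins[j] for i, j in bonds)
    configs.append((energy, spins))

min_energy = min(e for e, _ in configs)
ground_states = [s for e, s in configs if e == min_energy]

# Fully satisfying all three bonds would give E = -3J, but the best any
# configuration achieves is E = -J: one bond is always frustrated, and
# the ground state is six-fold degenerate.
print(min_energy)          # -1.0
print(len(ground_states))  # 6
```

This degeneracy, with no single configuration the spins can settle into, is the hallmark of the frustrated magnets the study analyzes.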

New agentic AI platform accelerates advanced optics design

Stanford engineers debuted a new framework introducing computational tools and self-reflective AI assistants, potentially advancing fields like optical computing and astronomy.

Hyper-realistic holograms, next-generation sensors for autonomous robots, and slim augmented reality glasses are among the applications of metasurfaces, emerging photonic devices constructed from nanoscale building blocks.

Now, Stanford engineers have developed an AI framework that rapidly accelerates metasurface design, with potential widespread technological applications. The framework, called MetaChat, introduces new computational tools and self-reflective AI assistants, enabling rapid solving of optics-related problems. The findings were reported recently in the journal Science Advances.

Genie 3: Creating dynamic worlds that you can navigate in real-time

Genie 3 is a world builder powered by generative AI, and it appears that it could, in principle, be built into a game engine.

One thing I’d like to do is have procedural generation as the backbone, and have generative AI modify things further that regular proc-gen textures just are not able to accomplish.


Introducing Genie 3, a general purpose world model that can generate an unprecedented diversity of interactive environments. Given a text prompt, Genie 3 can generate dynamic worlds that you can navigate in real time at 24 frames per second, retaining consistency for a few minutes at a resolution of 720p.

Watch the Google DeepMind episode on Genie 3 with Hannah Fry here: “Genie 3: An infinite world model.”

Our team has been pioneering research in simulated environments for over a decade, from training agents to master real-time strategy games to developing simulated environments for open-ended learning and robotics. This work motivated our development of world models, which are AI systems that can use their understanding of the world to simulate aspects of it, enabling agents to predict both how an environment will evolve and how their actions will affect it.
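The core loop described here, learning a model of how an environment evolves and then using it to predict the effect of actions, can be sketched in miniature. This is a deliberately tiny caricature of the world-model idea, not DeepMind’s method: the linear toy dynamics, variable names, and the `imagine` helper are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" environment dynamics, unknown to the agent: s' = A s + B a.
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
B_true = np.array([[0.0], [1.0]])

def step_env(s, a):
    return A_true @ s + B_true @ a

# 1) Collect experience: (state, action, next state) transitions.
S, A_act, S_next = [], [], []
s = np.zeros(2)
for _ in range(200):
    a = rng.normal(size=1)
    s_next = step_env(s, a)
    S.append(s); A_act.append(a); S_next.append(s_next)
    s = s_next

# 2) Fit a world model: least-squares map from [state, action] to next state.
X = np.hstack([np.array(S), np.array(A_act)])   # (200, 3)
Y = np.array(S_next)                            # (200, 2)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)       # learned dynamics, (3, 2)

# 3) The agent can now "imagine" rollouts inside the model, predicting how
# its actions will affect the environment without touching the real one.
def imagine(s, actions):
    for a in actions:
        s = np.hstack([s, a]) @ W
    return s

predicted = imagine(np.zeros(2), [np.array([1.0]), np.array([0.5])])
print(predicted)
```

Because the toy dynamics are linear, the least-squares model recovers them essentially exactly; real world models like Genie 3 replace step 2 with a large generative network and step 3 with rendered, navigable video.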

Taming the chaos gently: a predictive alignment learning rule in recurrent neural networks

The study presents Predictive Alignment, a local learning rule for recurrent neural networks that aligns internal network predictions with feedback. This biologically inspired method tames chaos and enables robust learning of complex patterns.
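A heavily simplified caricature of that idea can be sketched as follows. This is not the paper’s exact rule; the network sizes, learning rates, and update equations are all illustrative assumptions. The sketch shows the general flavor: a chaotic rate network receives a feedback signal, a delta rule trains the readout, and each unit’s recurrent weights are updated using only locally available quantities so that its recurrent input comes to align with (predict) its feedback input, suppressing the chaos.

```python
import numpy as np

rng = np.random.default_rng(1)
N, g, dt, tau, T = 200, 1.5, 0.05, 1.0, 2000

J = rng.normal(0.0, g / np.sqrt(N), (N, N))  # strong, initially chaotic recurrence
w_out = np.zeros(N)                          # linear readout weights
w_fb = rng.uniform(-1.0, 1.0, N)             # fixed feedback weights

t_axis = np.arange(T) * dt
target = np.sin(2 * np.pi * 0.1 * t_axis)    # pattern the network should produce

x = rng.normal(0.0, 0.5, N)
eta_out, eta_J = 0.005, 5e-4
align_first, align_last = [], []             # misalignment early vs. late
for t in range(T):
    r = np.tanh(x)
    z = w_out @ r                 # network output
    fb = w_fb * target[t]         # feedback (teacher) input to each unit
    rec = J @ r                   # recurrent input to each unit
    err = np.linalg.norm(rec - fb)
    if t < 100:
        align_first.append(err)
    elif t >= T - 100:
        align_last.append(err)
    x += dt / tau * (-x + rec + w_fb * z)
    # Local updates: readout follows a delta rule on the output error;
    # recurrent weights move each unit's recurrent input toward its
    # feedback input (the "alignment" step).
    w_out += eta_out * (target[t] - z) * r
    J += eta_J * np.outer(fb - rec, r)
```

In this toy version, the recurrent input’s mismatch with the feedback shrinks over training, so the initially chaotic recurrence is gently reshaped rather than clamped, which is the intuition behind the title’s “taming the chaos gently.”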
