
Zero-shot strategy enables robots to traverse complex environments without extra sensors or rough terrain training

Figuring out certain aspects of a material’s electronic structure can take a lot out of a computer, up to a million CPU hours in fact. A team of Yale researchers, though, is using a type of artificial intelligence to make these calculations much faster and more accurate. Among other benefits, this makes it much easier to discover new materials. Their results are published in Nature Communications.

In the field of materials science, exploring the excited states of real materials is of particular interest, since it allows for a better understanding of the physics of larger and more complex systems, such as moiré systems and defect states. Researchers typically use a method known as density functional theory (DFT) to explore electronic structure, and for the most part it works well.

“But the issue is that if you’re looking at excited state properties, like how materials behave when they interact with light or when they conduct electricity, then DFT really isn’t sufficient to understand the properties of the material,” said Prof. Diana Qiu, who led the study.
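
The general pattern behind this kind of speed-up is a surrogate model: an inexpensive learned map from quantities DFT already provides to the excited-state property that would otherwise cost the million CPU hours. The sketch below illustrates that idea on synthetic data with a generic kernel ridge regressor; the descriptors, target, and model choice are placeholders, not the architecture used in the Yale study.

```python
# Minimal sketch: learn a surrogate that maps cheap ground-state descriptors
# (e.g. DFT eigenvalues, structural features) to an expensive excited-state
# quantity such as a GW-corrected band gap. Synthetic data; names hypothetical.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_materials, n_features = 500, 16

# Stand-in descriptors and a stand-in "expensive" target (in practice these
# would come from DFT and from excited-state calculations, respectively).
X = rng.normal(size=(n_materials, n_features))
y = X @ rng.normal(size=n_features) + 0.1 * rng.normal(size=n_materials)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Kernel ridge regression as a simple surrogate; the published work may use a
# very different model.
model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.05)
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))
```

The payoff comes from the fact that, once fit to a modest number of expensive reference calculations, the surrogate can be queried thousands of times at negligible cost.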

Spatial Transcriptomic Clocks Reveal Local Cellular Interactions Influence Brain Aging

Certain cells in the brain create a nurturing environment, enhancing the health and resilience of their neighbors, while others promote stress and damage. Using spatial transcriptomics and AI, researchers at Stanford’s Knight Initiative for Brain Resilience observed these interactions playing out across the lifespan, suggesting that local cellular interactions may significantly influence brain aging and resilience.

The study was published in Nature in an article titled “Spatial transcriptomic clocks reveal cell proximity effects in brain aging.”

“What was exciting to us was finding that some cells have a pro-aging effect on neighboring cells while others appear to have a rejuvenating effect on their neighbors,” said Anne Brunet, the Michele and Timothy Barakett Endowed Professor in Stanford’s department of genetics and co-senior investigator of the new study.
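
At its core, a transcriptomic “clock” is a regression model that predicts age from gene expression; neighborhood effects then show up as systematic shifts in the predicted age of cells sitting next to particular cell types. The toy example below captures only that logic, on synthetic data with hypothetical variable names, and is not the pipeline from the Nature paper.

```python
# Toy "transcriptomic clock": predict a cell's age from its gene expression,
# then ask whether proximity to a given cell type shifts the predicted age.
# Entirely synthetic data; flags and effect sizes are hypothetical.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_cells, n_genes = 2000, 50

age = rng.uniform(3, 28, n_cells)                  # animal age in months
age_loading = rng.normal(size=n_genes)             # genes whose expression tracks age
expr = rng.normal(size=(n_cells, n_genes)) + 0.05 * np.outer(age, age_loading)

near_pro_aging = rng.random(n_cells) < 0.1         # hypothetical neighbor flag
expr[near_pro_aging] += 0.3 * age_loading          # neighbors push expression "older"

clock = Ridge(alpha=1.0).fit(expr, age)            # fit the clock
pred_age = clock.predict(expr)

# Cells next to the hypothetical pro-aging neighbor look transcriptionally older.
print("mean predicted age near pro-aging cells:", round(pred_age[near_pro_aging].mean(), 1))
print("mean predicted age elsewhere:           ", round(pred_age[~near_pro_aging].mean(), 1))
```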

Novel Physical Reservoir Computing Device from Tokyo University of Science Mimics Human Synaptic Behavior for Efficient Edge AI Processing

Artificial intelligence (AI) is becoming increasingly useful for the prediction of emergency events such as heart attacks, natural disasters, and pipeline failures. This requires state-of-the-art technologies that can rapidly process data. In this regard, reservoir computing, specially designed for time-series data processing with low power consumption, is a promising option.

It can be implemented in various frameworks, among which physical reservoir computing (PRC) is the most popular. PRC with optoelectronic artificial synapses that mimic human synaptic elements is expected to offer unparalleled recognition and real-time processing capabilities akin to the human visual system.

However, PRC based on existing self-powered optoelectronic synaptic devices cannot handle time-series data spanning multiple timescales, such as the signals used to monitor infrastructure, the natural environment, and health conditions.
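
Reservoir computing is easiest to see in its software form: a fixed, randomly connected dynamical system (the reservoir) transforms an input stream into a rich state trajectory, and only a linear readout is trained. The sketch below is a generic echo state network on a toy signal; in the device described above, an optoelectronic synaptic element plays the role of the reservoir instead of a random matrix.

```python
# Generic echo state network (software reservoir) predicting the next step of a
# time series. Only the linear readout is trained; the reservoir stays fixed.
import numpy as np

rng = np.random.default_rng(42)
n_res, leak = 200, 0.3

# Fixed random input and reservoir weights, scaled for a stable "echo state".
W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

u = np.sin(np.linspace(0, 20 * np.pi, 2000))      # toy input signal
target = np.roll(u, -1)                           # predict the next sample

# Drive the reservoir and collect its states.
states = np.zeros((len(u), n_res))
x = np.zeros(n_res)
for t, u_t in enumerate(u):
    x = (1 - leak) * x + leak * np.tanh(W_in[:, 0] * u_t + W @ x)
    states[t] = x

# Train only the linear readout (ridge regression), discarding a warm-up period.
warm = 100
A, b = states[warm:-1], target[warm:-1]
W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(n_res), A.T @ b)

print("predicted next sample:", states[-2] @ W_out, "true:", target[-2])
```

Because training touches only the readout weights, the approach stays cheap enough for low-power, edge-style deployment, which is the appeal of implementing the reservoir directly in hardware.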

Laser-based artificial neuron mimics nerve cell functions at lightning speed

In-plane magnetic fields are responsible for inducing an anomalous Hall effect in EuCd2Sb2 films, report researchers from the Institute of Science Tokyo. By studying how these fields change the electronic structure, the team discovered a large in-plane anomalous Hall effect.

These findings, published in Physical Review Letters on December 3, 2024, pave the way for new strategies for controlling electronic transport under magnetic fields, potentially advancing applications in .

The Hall effect is a fundamental phenomenon in material science. It occurs when a material carrying an electric current is exposed to a magnetic field, producing a voltage perpendicular to both the current and the magnetic field. This effect has been extensively studied in materials under out-of-plane magnetic fields. However, research on how in-plane magnetic fields induce this phenomenon has been very limited.
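
As textbook background (not the specific analysis in the Physical Review Letters paper), the Hall resistivity of a magnetic conductor is usually decomposed into an ordinary term set by the applied field and an anomalous term set by the magnetization:

```latex
% Ordinary + anomalous contributions to the Hall resistivity:
%   R_0 : ordinary Hall coefficient (Lorentz-force term)
%   R_s : anomalous Hall coefficient (magnetization term)
\rho_{xy} = R_0 H + R_s M
```

In the conventional geometry both H and M point out of the film plane; the result above is notable precisely because a sizable Hall response survives when the field is rotated into the plane.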

Agents are the ‘third wave’ of the AI revolution

“The challenge is applying agentic AI in the enterprise setting or in innovation-driven industries, like materials science R&D or pharma, where there is higher uncertainty and risk,” said Connell. “These more complex environments require a very nuanced understanding by the agent in order to make trustworthy, reliable decisions.”


As with analytical and gen AI, data — particularly real-time data — is at the core of agentic AI success. It’s important “to have an understanding of how agentic AI will be used and the data that is powering the agent, as well as a system for testing,” said Connell. “To build AI agents, you need clean and, for some applications, labeled data that accurately represents the problem domain, along with sufficient volume to train and validate your models.”

AI system can envision an entire world from a single picture

Johns Hopkins computer scientists have created an artificial intelligence system capable of “imagining” its surroundings without having to physically explore them, bringing AI closer to humanlike reasoning.

The new system—called Generative World Explorer, or GenEx—needs only a single still image to conjure an entire world, giving it a significant advantage over previous systems that required a robot or agent to physically move through a scene to map the surrounding environment, which can be costly, unsafe, and time-consuming. The team’s results are posted to the arXiv preprint server.

“Say you’re in an area you’ve never been before—as a human, you use environmental cues, past experiences, and your knowledge of the world to imagine what might be around the corner,” says senior author Alan Yuille, the Bloomberg Distinguished Professor of Computational Cognitive Science at Johns Hopkins.
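
The “imagine before you move” idea can be caricatured as a loop in which a generative world model hallucinates unseen views from a single image and a planner reasons over them. The sketch below is purely illustrative, with hypothetical class and function names, and does not reflect the actual GenEx architecture.

```python
# Illustrative "imagine, then decide" loop. All names are hypothetical
# placeholders for (1) a generative world model that predicts views beyond the
# input image and (2) a policy that plans over those imagined views.
from dataclasses import dataclass
from typing import List

@dataclass
class ImaginedView:
    heading_deg: float        # direction the imagined view faces
    description: str          # what the world model predicts is there
    confidence: float         # how much the model trusts its own prediction

def imagine_views(image_path: str, headings: List[float]) -> List[ImaginedView]:
    """Stand-in for a generative world model: given one image, predict what
    lies in each unobserved direction. A real system would return images."""
    return [ImaginedView(h, f"predicted scene toward {h} deg", 0.5) for h in headings]

def choose_heading(views: List[ImaginedView], goal: str) -> float:
    """Stand-in planner: pick the imagined direction judged most useful.
    This toy version ignores `goal`; a real planner would condition on it."""
    return max(views, key=lambda v: v.confidence).heading_deg

views = imagine_views("street_corner.jpg", headings=[0, 90, 180, 270])
print("move toward:", choose_heading(views, goal="find the entrance"))
```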