
Advancing Physical Understanding with Interpretable Machine Learning

A new artificial neural-network architecture opens a window into the workings of a tool previously regarded as a black box.

Thanks to the extremely large datasets and computing power that have become available in recent years, a new paradigm in scientific discovery has emerged. This new approach is purely data driven, using large amounts of data to train machine-learning models (typically neural networks) to predict the behavior of the natural world [1]. The most prominent achievement of this new methodology has arguably been the AlphaFold model for predicting protein folding (see Research News: Chemistry Nobel Awarded for an AI System That Predicts Protein Structures) [2]. But despite such successes, these data-driven approaches suffer a major drawback in that they are generally “black boxes” that offer no human-accessible understanding of how they make their predictions. This shortcoming also extends to the models’ inputs: It is often desirable to build known domain knowledge into these models, but the data-driven approach excludes that option.

Scientists unravel neural networks that guide guilt and shame-driven behaviors

Feelings of guilt and shame can lead us to behave in a variety of different ways, including trying to make amends or save face, cooperating more with others or avoiding people altogether. Now, researchers have shed light on how the two emotions emerge from cognitive processes and in turn guide how we respond to them.

Their study is published in eLife. The editors say it provides compelling behavioral, computational and neural evidence to explain the cognitive link between emotions and compensatory actions. They add that the findings have broad theoretical and practical implications across a range of disciplines concerned with human behavior, including psychology, neuroscience, public policy and psychiatry.

‘AI advisor’ helps self-driving labs share control in creation of next-generation materials

“Self-driving” or “autonomous” labs are an emerging technology in which artificial intelligence guides the discovery process, helping to design experiments and refine decision strategies.

While these labs have generated heated debate about whether humans or machines should lead scientific research, a new paper from Argonne National Laboratory and the University of Chicago Pritzker School of Molecular Engineering (UChicago PME) has proposed a novel answer: Both.

In the paper published in Nature Chemical Engineering, the team led by UChicago PME Asst. Prof. Jie Xu, who has a joint appointment at Argonne, outlined an “AI advisor” model that helps humans and machines share the driver’s seat in self-driving labs.

Scientists Identify Promising New Magnetic Material for the AI Era

A newly validated magnetic state could open a path toward ultra-fast, high-density memory for future AI and data-center technologies. A collaborative team of researchers from NIMS, the University of Tokyo, Kyoto Institute of Technology, and Tohoku University has shown that thin films of ruthenium

ThreatsDay Bulletin: WhatsApp Hijacks, MCP Leaks, AI Recon, React2Shell Exploit and 15 More Stories

This week’s ThreatsDay Bulletin tracks how attackers keep reshaping old tools and finding new angles in familiar systems. Small changes in tactics are stacking up fast, and each one hints at where the next big breach could come from.

From shifting infrastructures to clever social hooks, the week’s activity shows just how fluid the threat landscape has become.

Here’s the full rundown of what moved in the cyber world this week.

New password spraying attacks target Cisco, PAN VPN gateways

An automated campaign is targeting multiple VPN platforms, with credential-based attacks observed against Palo Alto Networks GlobalProtect and Cisco SSL VPN.

On December 11, threat-monitoring platform GreyNoise observed that login attempts aimed at GlobalProtect portals peaked at 1.7 million over a 16-hour period.

Collected data showed that the attacks originated from more than 10,000 unique IP addresses and were aimed at infrastructure located in the United States, Mexico, and Pakistan.
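Password spraying leaves a distinctive signature in authentication logs: a single source tries a small set of passwords against many different accounts, producing a high count of distinct usernames per IP with almost no successful logins. A minimal sketch of how a defender might flag that pattern follows; the log record format, function name, and thresholds are illustrative assumptions, not GreyNoise's actual detection methodology:

```python
from collections import defaultdict

def flag_spraying(events, min_users=5, max_success_rate=0.1):
    """Flag source IPs that try many distinct usernames with few successes,
    the classic password-spraying signature.

    `events` is an iterable of (source_ip, username, success) tuples,
    where success is 1 for a successful login and 0 for a failure.
    """
    by_ip = defaultdict(lambda: {"users": set(), "attempts": 0, "successes": 0})
    for ip, user, success in events:
        rec = by_ip[ip]
        rec["users"].add(user)
        rec["attempts"] += 1
        rec["successes"] += success

    flagged = []
    for ip, rec in by_ip.items():
        rate = rec["successes"] / rec["attempts"]
        # Many usernames + near-zero success rate -> likely spraying.
        if len(rec["users"]) >= min_users and rate <= max_success_rate:
            flagged.append(ip)
    return flagged

# One IP cycling through 20 usernames, all failures (spraying pattern),
# plus a legitimate user logging in repeatedly from another address.
events = [("10.0.0.1", f"user{i}", 0) for i in range(20)]
events += [("192.168.1.5", "alice", 1)] * 3
print(flag_spraying(events))  # → ['10.0.0.1']
```

At the scale reported here (10,000+ source IPs), real detection pipelines aggregate the same per-source statistics across distributed infrastructure, but the underlying signature is the same.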

Robotic arm successfully learns 1,000 manipulation tasks in one day

Over the past decades, roboticists have introduced a wide range of systems that can effectively tackle some real-world problems. Most of these robots, however, perform poorly on tasks they were not trained on, particularly those that entail manipulating previously unseen objects or handling familiar objects in new ways.

Researchers at the Robot Learning Lab at Imperial College London recently developed a new imitation learning approach that could allow robots to successfully learn new tasks faster and without requiring substantial training data. Using this method, which was introduced in a paper published in Science Robotics, they were able to train a robotic arm to complete 1,000 different tasks in a single day.

“The research was initially inspired by our prior work on trajectory transfer, where we introduced a method that proved robust and efficient for teaching robots single tasks,” Kamil Dreczkowski and Pietro Vitiello, co-authors of the paper, told Tech Xplore.

Subsystem resetting: Researchers discover a new route to control phase transitions in complex systems

Researchers in the Department of Theoretical Physics at Tata Institute of Fundamental Research (TIFR), Mumbai, have discovered that instead of manipulating every component or modifying interactions in a many-body system, occasionally resetting just a small fraction can reshape how the entire system behaves, including how it transitions from one phase to another.

This counterintuitive approach, called subsystem resetting, offers a powerful, universal control strategy to tune collective behavior in complex systems ranging from magnets to neural networks.

This work by Anish Acharya, Rupak Majumder, and Prof. Shamik Gupta has been published in Physical Review Letters.
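The flavor of the result can be illustrated with a toy simulation: in a mean-field spin model held below its ordering transition, the system on its own stays disordered, but periodically resetting just a fraction of the spins to a fixed state drives a net magnetization in the whole system. The model, the Glauber update rule, and all parameters below are illustrative assumptions chosen for a simple demonstration, not the specific system analyzed in the Physical Review Letters paper:

```python
import math
import random

def simulate(n=200, sweeps=300, beta_j=0.6, reset_frac=0.2, reset_every=1, seed=7):
    """Average late-time magnetization of a mean-field Ising-like model in
    which a fraction `reset_frac` of spins is reset to +1 every
    `reset_every` Monte Carlo sweeps."""
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    subsystem = range(int(n * reset_frac))  # the small subsystem we reset
    mags = []
    for sweep in range(sweeps):
        if reset_frac > 0 and sweep % reset_every == 0:
            for i in subsystem:
                spins[i] = 1  # reset the subsystem to a fixed state
        for _ in range(n):  # one Monte Carlo sweep = n single-spin updates
            i = rng.randrange(n)
            m = sum(spins) / n  # mean field felt by every spin
            p_up = 1.0 / (1.0 + math.exp(-2.0 * beta_j * m))  # Glauber rule
            spins[i] = 1 if rng.random() < p_up else -1
        mags.append(sum(spins) / n)
    tail = mags[sweeps // 2:]  # discard the first half as transient
    return sum(tail) / len(tail)

# Below the mean-field critical point (beta_j < 1) the unperturbed system
# stays disordered (magnetization near zero); resetting a fifth of the
# spins drives a clear net magnetization in the full system.
print(round(simulate(reset_frac=0.0), 2), round(simulate(reset_frac=0.2), 2))
```

The point of the toy model matches the paper's claim in spirit: the control acts only on a subsystem, yet the collective order parameter of the entire system shifts.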
