
Chemical networks can mimic nervous systems to power movement in soft materials

What if a soft material could move on its own, guided not by electronics or motors, but by the kind of rudimentary chemical signaling that powers the simplest organisms? Researchers at the University of Pittsburgh Swanson School of Engineering have modeled just that: a synthetic system that autonomously transforms chemical reactions into mechanical motion, without the complex biochemical machinery present in our bodies.

Some of the simplest organisms, such as jellyfish, do not have a centralized brain or nervous system. Instead, they have a “nerve net”: dispersed nerve cells interconnected by active junctions that emit and receive signals. Even without a central “processor,” chemical signals spontaneously travel through the net and trigger the autonomous motion needed for the organism’s survival.

In a study published in PNAS Nexus, Oleg E. Shklyaev, research assistant, and Anna C. Balazs, Distinguished Professor of Chemical and Petroleum Engineering and the John A. Swanson Chair of Engineering, developed computer simulations to design a material with a “nerve net” that links chemical and mechanical networks in a way that mimics how the earliest and simplest living systems coordinate motion.
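The paper itself relies on detailed simulations, but the basic idea of a chemical “nerve net” driving motion can be sketched in a few lines. The toy model below is purely illustrative (not the authors’ method, and all parameters are invented): a chemical pulse diffuses and decays across a ring of interconnected nodes, and any node whose local concentration crosses a threshold produces a mechanical displacement, so chemistry alone propagates a wave of movement.

```python
# Toy sketch (not the authors' model): a chemical signal spreads through a
# ring-shaped "nerve net" of nodes; nodes whose concentration crosses a
# threshold respond mechanically, so the chemical wave drives motion.

def step(conc, diffusion=0.2, decay=0.02):
    """One explicit diffusion-and-decay step on a ring of nodes."""
    n = len(conc)
    new = []
    for i in range(n):
        left, right = conc[(i - 1) % n], conc[(i + 1) % n]
        laplacian = left + right - 2 * conc[i]
        new.append(max(0.0, conc[i] + diffusion * laplacian - decay * conc[i]))
    return new

def displacement(conc, threshold=0.05, gain=1.0):
    """Mechanical response: nodes above the threshold contract in proportion."""
    return [gain * (c - threshold) if c > threshold else 0.0 for c in conc]

# Inject a chemical pulse at one node and let it propagate.
conc = [0.0] * 10
conc[0] = 1.0
for _ in range(20):
    conc = step(conc)

# Nodes near the injection site have been chemically "told" to move.
moved = [i for i, d in enumerate(displacement(conc)) if d > 0]
```

Even this crude sketch shows the key feature the article describes: no central controller exists anywhere in the loop; motion emerges from local chemistry and local connections.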

Optical system achieves terabit-per-second capacity and integrates quantum cryptography for long-term security

The artificial intelligence (AI) boom has created unprecedented demand for data traffic. But the infrastructure needed to support it faces mounting challenges. AI data centers must deliver faster, more reliable communication than ever before, while also confronting their soaring electricity use and a looming quantum security threat, which could one day break today’s encryption methods.

To address these challenges, a recent study published in Advanced Photonics proposes a quantum-secured architecture that involves minimal digital signal processing (DSP) consumption and meets the stringent requirements of AI-driven data center optical interconnect (AI–DCI) scenarios. The system enables data to move at terabit-per-second speeds while defending against future quantum threats.

“Our work paves the way for the next generation of secure, scalable, and cost-efficient optical interconnects, protecting AI-driven data centers against quantum security threats while meeting the high demands of modern data-driven applications,” the researchers state in their paper.

How a human ‘jumping gene’ targets structured DNA to reshape the genome

Long interspersed nuclear element-1 (LINE-1 or L1) is the only active, self-copying genetic element in the human genome, comprising about 17% of human DNA. It is commonly called a “jumping gene” or “retrotransposon” because it can “retrotranspose” (move) from one genomic location to another.

Researchers from the Institute of Biophysics of the Chinese Academy of Sciences have now unveiled the molecular mechanisms that underlie L1’s retrotransposition and integration into genomic DNA. Their study was published in Science on October 9.

L1 is the only autonomously active retrotransposon in the human genome and serves as the primary vehicle for the mobilization of most other retrotransposons. Its retrotransposition process is mediated by the reverse transcriptase ORF2p through a mechanism known as target-primed reverse transcription (TPRT). Until now, the manner in which ORF2p recognizes DNA targets and mediates integration had remained unclear.

131 Chrome Extensions Caught Hijacking WhatsApp Web for Massive Spam Campaign

Cybersecurity researchers have uncovered a coordinated campaign that leveraged 131 rebranded clones of a WhatsApp Web automation extension for Google Chrome to spam Brazilian users at scale.

The 131 spamware extensions share the same codebase, design patterns, and infrastructure, according to supply chain security company Socket. The browser add-ons collectively have some 20,905 active users.

“They are not classic malware, but they function as high-risk spam automation that abuses platform rules,” security researcher Kirill Boychenko said. “The code injects directly into the WhatsApp Web page, running alongside WhatsApp’s own scripts, and automates bulk outreach and scheduling in ways that aim to bypass WhatsApp’s anti-spam enforcement.”

AI model could boost robot intelligence via object recognition

Stanford researchers have developed an innovative computer vision model that recognizes the real-world functions of objects, potentially allowing autonomous robots to select and use tools more effectively.

In the field of AI known as computer vision, researchers have successfully trained models that can identify objects in images. It is a skill critical to a future of robots able to navigate the world autonomously. But recognition is only a first step. AI must also understand the function of the parts of an object—to know a spout from a handle, or the blade of a bread knife from that of a butter knife.

Computer vision experts call such utility overlaps “functional correspondence.” It is one of the most difficult challenges in computer vision. But now, in a paper to be presented at the International Conference on Computer Vision (ICCV 2025), Stanford scholars will debut a new AI model that can not only recognize various parts of an object and discern their real-world purposes, but also map those correspondences at pixel-by-pixel granularity between objects.
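To make the idea of pixel-level functional correspondence concrete, here is a deliberately tiny sketch, unrelated to the Stanford model’s actual architecture: given hypothetical per-pixel function labels (e.g., “pour,” “grip”) for two objects, it maps each labeled pixel of one object to the pixels carrying the same function in the other. The 3×3 label grids and label names are invented for illustration.

```python
# Toy sketch of "functional correspondence" (illustrative only): map each
# functionally labeled pixel in object A to same-function pixels in object B.

# Hypothetical 3x3 label maps; 0 = background.
teapot = [
    ["pour", 0,      0],
    [0,      "grip", "grip"],
    [0,      "grip", 0],
]
kettle = [
    [0,      0,      "pour"],
    ["grip", "grip", 0],
    [0,      0,      0],
]

def label_index(grid):
    """Map each function label to the set of (row, col) pixels carrying it."""
    index = {}
    for r, row in enumerate(grid):
        for c, label in enumerate(row):
            if label != 0:
                index.setdefault(label, set()).add((r, c))
    return index

def functional_correspondence(a, b):
    """Pixel-level map: each labeled pixel of `a` -> same-function pixels of `b`."""
    idx_b = label_index(b)
    return {
        pixel: idx_b.get(label, set())
        for label, pixels in label_index(a).items()
        for pixel in pixels
    }

corr = functional_correspondence(teapot, kettle)
```

The hard part the research addresses, of course, is producing those functional labels from raw images in the first place; the mapping step here simply illustrates what the model’s output means for a robot choosing where to grasp or pour.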

Shapeshifting soft robot uses electric fields to swing like a gymnast

Researchers have invented a highly agile new robot that can change shape thanks to amorphous characteristics akin to those of the popular Marvel anti-hero Venom.

The unique soft morphing creation, developed by the University of Bristol and Queen Mary University of London, is much more adaptable than current soft robots. The study, published in the journal Advanced Materials, showcases a jelly-like humanoid gymnast, made of an electro-morphing gel, that can move from one place to another using its flexible body and limbs.

Researchers used a special material called electro-morphing gel (e-MG), which gives the robot its shapeshifting abilities, allowing it to bend, stretch, and move in ways that were previously difficult or impossible, through manipulation of electric fields from ultralightweight electrodes.

Size doesn’t matter: Just a small number of malicious files can corrupt LLMs of any size

Large language models (LLMs), which power sophisticated AI chatbots, are more vulnerable than previously thought. According to research by Anthropic, the UK AI Security Institute and the Alan Turing Institute, it only takes 250 malicious documents to compromise even the largest models.

The vast majority of data used to train LLMs is scraped from the public internet. While this helps them to build knowledge and generate natural responses, it also puts them at risk from data poisoning attacks. It had been thought that the risk shrank as models grew, because attackers were assumed to need a fixed percentage of poisoned training data; in other words, corrupting the largest models would require massive amounts of poisoned material. But in this study, published on the arXiv preprint server, the researchers showed that an attacker needs only a small number of poisoned documents to potentially wreak havoc.

To assess how easily large AI models can be compromised, the researchers built several LLMs from scratch, ranging from small systems (600 million parameters) to very large ones (13 billion parameters). Each model was trained on vast amounts of clean public data, but the team inserted a fixed number of malicious files (100 to 500) into each training set.
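A quick back-of-the-envelope calculation shows why a fixed document count is so surprising. The corpus sizes below are invented for illustration (the paper reports its own dataset scales); the point is only that 250 poisoned documents become a vanishing *fraction* of the data as the training corpus grows, whereas the old assumption implied the attack cost should grow in proportion.

```python
# Illustrative arithmetic (corpus sizes are assumptions, not paper figures):
# a fixed count of poisoned documents shrinks as a fraction of training data.

POISONED_DOCS = 250

corpora = {
    "small model (600M params)": 10_000_000,    # assumed training documents
    "large model (13B params)": 500_000_000,    # assumed training documents
}

fractions = {
    name: POISONED_DOCS / total_docs
    for name, total_docs in corpora.items()
}

# Under the old "fixed percentage" assumption, a corpus 50x larger would need
# 50x more poisoned documents to be equally at risk; the study found otherwise.
scale_up = (corpora["large model (13B params)"]
            / corpora["small model (600M params)"])
```

With these assumed numbers, the poisoned share drops from 25 per million documents to 0.5 per million, yet the study found the fixed count remained effective, which is what makes the result alarming.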

Method teaches generative AI models to locate personalized objects

Say a person takes their French Bulldog, Bowser, to the dog park. Identifying Bowser as he plays among the other canines is easy for the dog owner to do while onsite.

But if someone wants to use a generative AI model like GPT-5 to monitor their pet while they are at work, the model could fail at this basic task. Vision-language models like GPT-5 often excel at recognizing general objects, like a dog, but they perform poorly at locating personalized objects, like Bowser the French Bulldog.

To address this shortcoming, researchers from MIT and the MIT-IBM Watson AI Lab have introduced a new training method that teaches vision-language models to localize personalized objects in a scene.
