BrainBody-LLM algorithm helps robots mimic human-like planning and movement

Large language models (LLMs), such as the model underpinning OpenAI’s ChatGPT platform, are now widely used for tasks ranging from sourcing information to generating text in different languages and even writing code. Many scientists and engineers have also started using these models to conduct research or advance other technologies.

In the context of robotics, LLMs have shown promise for creating robot policies from a user’s instructions. Policies are essentially the “rules” a robot must follow to correctly perform desired actions.

Researchers at NYU Tandon School of Engineering recently introduced a new algorithm called BrainBody-LLM, which leverages LLMs to plan and refine the execution of a robot’s actions. The new algorithm, presented in a paper published in Advanced Robotics Research, draws inspiration from how the human brain plans actions and fine-tunes the body’s movements over time.
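
In rough terms, the idea is a closed loop: the LLM (“brain”) drafts a step-by-step plan, the robot (“body”) executes it, and execution errors are fed back to the LLM for correction. The Python sketch below illustrates that loop; the function names (query_llm, execute_step) and the feedback format are illustrative assumptions, not the paper’s actual interface.

```python
# Minimal sketch of an LLM plan-and-refine loop in the spirit of
# BrainBody-LLM. All function names and the feedback format are
# illustrative assumptions, not the paper's interface.

def query_llm(prompt: str) -> str:
    """Placeholder for a call to any LLM completion API."""
    raise NotImplementedError

def execute_step(step: str):
    """Placeholder: send one action to the robot; return an error string or None."""
    return None

def plan_and_refine(instruction: str, max_rounds: int = 3) -> list[str]:
    # "Brain": ask the LLM to decompose the instruction into action steps.
    plan = query_llm(f"Decompose into robot actions:\n{instruction}").splitlines()
    for _ in range(max_rounds):
        errors = []
        for step in plan:
            feedback = execute_step(step)   # "body": run one action
            if feedback is not None:        # e.g. "gripper missed the object"
                errors.append(f"{step}: {feedback}")
        if not errors:
            break                           # plan executed cleanly
        # Feed execution errors back to the LLM so it can repair the plan.
        plan = query_llm("Revise this plan given these errors:\n"
                         + "\n".join(errors)).splitlines()
    return plan
```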

Researchers pioneer pathway to mechanical intelligence by breaking symmetry in soft composite materials

A research team has developed soft composite systems with highly programmable, asymmetric mechanical responses. By integrating “shear-jamming transitions” into compliant polymeric solids, this innovative work enhances key material functionalities essential for engineering mechano-intelligent systems—a major step toward the development of next-generation smart materials and devices.

The work is published in the journal Nature Materials.

In engineering fields such as soft robotics, synthetic tissues, and flexible electronics, materials that exhibit direction-dependent responses to external stimuli are crucial for realizing intelligent functions.
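
As a toy illustration of such direction-dependent behavior, the sketch below models a material that is stiff when sheared in one direction (jammed) and compliant in the other; the piecewise-linear form and the stiffness values are illustrative assumptions, not the paper’s constitutive model.

```python
# Toy piecewise model of a direction-dependent (asymmetric) mechanical
# response, loosely analogous to a shear-jamming composite: stiff when
# sheared one way, compliant the other. The piecewise-linear form and
# the stiffness values are illustrative assumptions only.

def shear_stress(strain: float,
                 k_jammed: float = 50.0,    # stiff (jammed) direction, arbitrary units
                 k_compliant: float = 1.0   # soft (unjammed) direction
                 ) -> float:
    return (k_jammed if strain > 0 else k_compliant) * strain

# The response to +1% strain differs sharply from -1% strain:
print(shear_stress(0.01), shear_stress(-0.01))   # 0.5 vs -0.01
```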

Intelligent photodetectors ‘sniff and seek’ like retriever dogs to recognize materials directly from light spectra

Researchers at the University of California, Los Angeles (UCLA), in collaboration with UC Berkeley, have developed a new type of intelligent image sensor that can perform machine-learning inference during the act of photodetection itself.

Reported in Science, the breakthrough redefines how spectral imaging, machine vision and AI can be integrated within a single semiconductor device.

Traditionally, spectral cameras capture a dense stack of images, each image corresponding to a different wavelength, and then transfer this large dataset to digital processors for computation and scene analysis. This workflow, while powerful, creates a severe bottleneck: the hardware must move and process massive amounts of data, which limits speed, power efficiency, and the achievable spatial–spectral resolution.
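
To see why computing during detection helps, compare the two dataflows in the sketch below: a conventional pipeline moves the full wavelength datacube off-chip before classifying, while in-sensor inference reduces each pixel’s spectrum to a few class scores before readout. The per-pixel linear classifier is a deliberate simplification, and the array shapes and weights are illustrative assumptions, not the device’s actual physics.

```python
# Conceptual contrast between a conventional spectral pipeline and
# in-sensor inference. Shapes and weights are illustrative assumptions.
import numpy as np

H, W, L, C = 256, 256, 64, 4   # height, width, wavelength channels, classes

# Conventional: move the full H x W x L datacube off-chip, then classify.
datacube = np.random.rand(H, W, L)          # stand-in for a captured stack
weights = np.random.rand(L, C)              # per-wavelength class weights
scores_offchip = datacube.reshape(-1, L) @ weights   # heavy data movement

# In-sensor idea: each pixel accumulates the weighted sum during
# photodetection, so only H x W x C class scores ever leave the sensor,
# shrinking readout by a factor of L / C (here 16x).
scores_insensor = np.einsum('hwl,lc->hwc', datacube, weights)
assert np.allclose(scores_offchip.reshape(H, W, C), scores_insensor)
```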

Public GitLab repositories exposed more than 17,000 secrets

After scanning all 5.6 million public repositories on GitLab Cloud, a security engineer discovered more than 17,000 exposed secrets across over 2,800 unique domains.

Luke Marshall used the TruffleHog open-source tool to check the code in the repositories for sensitive credentials like API keys, passwords, and tokens.

The researcher previously scanned Bitbucket, where he found 6,212 secrets spread across 2.6 million repositories. He also checked the Common Crawl dataset, which is used to train AI models, and uncovered 12,000 valid secrets there.
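
The core of such a scan is pattern matching over file contents. The sketch below shows a minimal regex-based detector of the kind TruffleHog applies at far larger scale; the two patterns are real, well-known key formats, but the snippet is illustrative and omits the live-service verification step real tools perform.

```python
# Minimal sketch of regex-based secret detection. TruffleHog does this at
# far larger scale and also verifies hits against the issuing service;
# these two patterns are illustrative, not exhaustive.
import re
from pathlib import Path

PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "GitHub token":      re.compile(r"\bghp_[0-9A-Za-z]{36}\b"),
}

def scan_file(path: Path) -> list[tuple[int, str]]:
    """Return (line_number, pattern_name) for every match in one file."""
    hits = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits

# Walk a checkout and report candidate leaks in Python source files.
for f in Path(".").rglob("*.py"):
    for lineno, name in scan_file(f):
        print(f"{f}:{lineno}: possible {name}")
```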

Your body may already have a molecule that helps fight Alzheimer’s

Spermine, a small but powerful molecule in the body, helps neutralize harmful protein accumulations linked to Alzheimer’s and Parkinson’s. It encourages these misfolded proteins to gather into manageable clumps that cells can more efficiently dispose of through autophagy. Experiments in nematodes show that spermine also enhances longevity and cellular energy production. These insights open the door to targeted therapies powered by polyamines and advanced AI-driven molecular design.

A nonsurgical brain implant enabled through a cell–electronics hybrid for focal neuromodulation

MIT researchers have taken a major step toward nonsurgical brain implants. They developed microscopic, wireless bioelectronics that could travel through the body’s circulatory system and autonomously self-implant in a target region of the brain, where they would provide focused treatment.

In a study in mice, the researchers show that after injection, these minuscule implants can identify and travel to a specific brain region without the need for human guidance. Once there, they can be wirelessly powered to deliver electrical stimulation to that precise area. Such stimulation, known as neuromodulation, has shown promise as a way to treat brain tumors and diseases like Alzheimer’s and multiple sclerosis.

Moreover, because the electronic devices are integrated with living, biological cells before being injected, they are not attacked by the body’s immune system and can cross the blood-brain barrier while leaving it intact. This maintains the barrier’s crucial protection of the brain.

Photovoltaic devices attached to immune cells travel through the blood to inflamed brain regions.

Laude × CSGE: Bill Joy — 50 Years of Advancements: Computing and Technology 1975–2025 (and beyond)

From the rise of numerical and symbolic computing to the future of AI, this talk traces five decades of breakthroughs and the challenges ahead.


Bill is the author of Berkeley UNIX, cofounder of Sun Microsystems, author of “Why the Future Doesn’t Need Us” (Wired, 2000), ex-cleantech VC at Kleiner Perkins, and investor in and unpaid advisor to Nodra.AI.

Talk Details
50 Years of Advancements: Computing and Technology 1975–2025 (and beyond)

I came to UC Berkeley CS in 1975 as a graduate student expecting to do computer theory: Berkeley CS didn’t have a proper departmental computer, and I was tired of coding, having written a lot of numerical code for early supercomputers.

But it’s hard to make predictions, especially about the future. Berkeley soon had a VAX superminicomputer, I installed a port of UNIX and began upgrading the operating system, and the Internet and microprocessor booms beckoned.

Robots combine AI learning and control theory to perform advanced movements

When it comes to training robots to perform agile, single-task motor skills, such as handstands or backflips, artificial intelligence methods can be very useful. But if you want to train your robot to perform multiple tasks—say, performing a backward flip into a handstand—things get a little more complicated.

“We often want to train our robots to learn new skills by compounding existing skills with one another,” said Ian Abraham, assistant professor of mechanical engineering. “Unfortunately, AI models trained to allow robots to perform complex skills across many tasks tend to have worse performance than training on an individual task.”

To address that, Abraham’s lab is using techniques from optimal control—that is, taking a mathematical approach to help robots perform movements as efficiently as possible. In particular, they’re employing hybrid control theory, which involves deciding when an autonomous system should switch between control modes to solve a task. The research is published on the arXiv preprint server.
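
Conceptually, a hybrid controller pairs two or more continuous control laws with a guard condition that decides when to switch between them. The sketch below shows that structure on a toy swing-up/stabilize problem; the modes, guard thresholds, and gains are illustrative assumptions, not the lab’s actual controller.

```python
# Minimal sketch of hybrid control: a supervisor picks which continuous
# controller ("mode") runs based on the current state. The two modes,
# guard condition, and gains are illustrative assumptions.

def swing_up(state: dict) -> float:
    # Energy-pumping mode used far from the goal posture.
    return 2.0 * state["velocity"]

def stabilize(state: dict) -> float:
    # Linear feedback mode used near the goal posture (e.g. a handstand).
    return -8.0 * state["angle"] - 1.5 * state["velocity"]

def hybrid_controller(state: dict) -> float:
    # Guard: switch modes once the system enters the stabilizable region.
    near_goal = abs(state["angle"]) < 0.3 and abs(state["velocity"]) < 1.0
    return stabilize(state) if near_goal else swing_up(state)

print(hybrid_controller({"angle": 1.2, "velocity": 0.5}))   # swing-up mode
print(hybrid_controller({"angle": 0.1, "velocity": 0.2}))   # stabilize mode
```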

Study finds AI can safely assist with some software annotation tasks

A dystopian future where advanced artificial intelligence (AI) systems replace human decision-making has long been a trope of science fiction. The malevolent computer HAL, which takes control of the spaceship in Stanley Kubrick’s film 2001: A Space Odyssey, is a chilling example.

But rather than being fearful of automation, a more useful response is to consider what types of repetitive human tasks could be safely offloaded to AI, particularly with the advances of large language models (LLMs) that can sort through vast amounts of data, see patterns and make predictions.

That question is the focus of research co-authored by Christoph Treude, an Associate Professor of Computer Science at Singapore Management University (SMU). The team explores potential roles for LLMs in annotating software engineering artifacts, a process that is expensive and time-consuming when done manually.
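
One simple way such a pipeline can work is to have the LLM propose a label for each artifact and then measure agreement against human annotators before trusting it unsupervised. The sketch below shows that pattern; query_llm is a placeholder for any completion API, and the label set and agreement gate are illustrative assumptions, not the study’s actual protocol.

```python
# Sketch of LLM-assisted artifact annotation with a human-agreement check.
# query_llm is a placeholder; the label set is an illustrative assumption.

LABELS = ["bug report", "feature request", "question", "documentation"]

def query_llm(prompt: str) -> str:
    raise NotImplementedError  # plug in a real LLM client here

def annotate(artifact: str) -> str:
    prompt = (f"Classify this issue as one of {LABELS}.\n"
              f"Answer with the label only.\n---\n{artifact}")
    label = query_llm(prompt).strip().lower()
    return label if label in LABELS else "needs human review"

def agreement(llm_labels: list[str], human_labels: list[str]) -> float:
    """Fraction of items where the LLM and human annotators agree; tasks
    with high agreement are candidates for safe offloading to the model."""
    matches = sum(a == b for a, b in zip(llm_labels, human_labels))
    return matches / len(human_labels)
```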
