
Turning robotic ensembles into smart materials that mimic life

Researchers have engineered groups of robots that behave as smart materials with tunable shape and strength, mimicking living systems. “We’ve figured out a way for robots to behave more like a material,” said Matthew Devlin, a former doctoral researcher in the lab of University of California, Santa Barbara (UCSB) mechanical engineering professor Elliot Hawkes, and the lead author of the article published in the journal Science.

Composed of individual, disk-shaped robots that look like small hockey pucks, the members of the collective are programmed to assemble themselves into various forms with different material strengths.

One challenge of particular interest to the research team was creating a robotic material that could be stiff and strong, yet still able to flow when a new form is needed. “Robotic materials should be able to take a shape and hold it,” Hawkes explained, “but also able to selectively flow themselves into a new shape.” Until now, however, when robots were strongly bound to one another in a group, it was not possible to reconfigure the collective so that it could flow and change shape at will.

Scientists Tested AI For Cognitive Decline. The Results Were a Shock

It’s barely been two years since OpenAI’s ChatGPT was released for public use, inviting anyone on the internet to collaborate with an artificial mind on anything from poetry to school assignments to letters to their landlord.

Today, the famous large language model (LLM) is just one of several leading programs that appear convincingly human in their responses to basic queries.

That uncanny resemblance may extend further than intended, with researchers from Israel now finding that LLMs suffer a form of cognitive decline that increases with age, just as humans do.

Overview: Mind uploading is my favorite!

The Carboncopies Foundation is starting The Brain Emulation Challenge.


With the availability of high-throughput electron microscopy (EM), expansion microscopy (ExM), calcium and voltage imaging, co-registered combinations of these techniques, and further advancements, high-resolution data sets that span multiple brain regions or entire small-animal brains, such as that of the fruit fly Drosophila melanogaster, may now offer inroads to expansive neuronal circuit analysis. Results of such analysis represent a paradigm change in the conduct of neuroscience.

So far, almost all investigations in neuroscience have relied on correlational studies, in which a modicum of insight gleaned from observational data leads to the formulation of mechanistic hypotheses, corresponding computational modeling, and predictions made using those models, so that experimental testing of the predictions offers support or modification of hypotheses. These are indirect methods for the study of a black box system of highly complex internal structure, methods that have received published critique as being unlikely to lead to a full understanding of brain function (Jonas and Kording, 2017).

Large scale, high resolution reconstruction of brain circuitry may instead lead to mechanistic explanations and predictions of cognitive function with meaningful descriptions of representations and their transformation along the full trajectory of stages in neural processing. Insights that come from circuit reconstructions of this kind, a reverse engineering of cognitive processes, will lead to valuable advances in neuroprosthetic medicine, understanding of the causes and effects of neurodegenerative disease, possible implementations of similar processes in artificial intelligence, and in-silico emulations of brain function, known as whole-brain emulation (WBE).

Figure’s humanoids start doing tasks they weren’t trained for

Only weeks after Figure.ai ended its collaboration deal with OpenAI, the Silicon Valley startup has announced Helix – a commercial-ready, AI “hive-mind” humanoid robot that can do almost anything you tell it to.

Figure has made headlines in the past with its Figure 01 humanoid robot. The company is now on version 2 of its premier robot, but it has received more than just a few design changes: it has been given an entirely new AI brain called Helix VLA.

It’s not just any ordinary AI, either. Helix is the first of its kind to be put into a humanoid robot: a generalist Vision-Language-Action model. The key word is “generalist.” It can see the world around it, understand natural language, interact with the real world, and, according to Figure, learn almost any new task.
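To make the idea of a Vision-Language-Action policy concrete, the sketch below shows a minimal control loop that feeds a camera frame and a natural-language instruction into a policy and sends the predicted joint targets to a robot. The names here (VLAPolicy, control_loop, get_camera_frame, send_to_robot) are hypothetical placeholders for illustration only, not Figure’s actual Helix interface.

```python
# Hypothetical sketch of a Vision-Language-Action (VLA) control loop.
# All names are illustrative placeholders, not Figure's Helix API.
from dataclasses import dataclass
from typing import List

@dataclass
class Observation:
    image: bytes          # raw camera frame
    instruction: str      # natural-language command, e.g. "pick up the cup"

@dataclass
class Action:
    joint_targets: List[float]   # desired joint positions for the next control tick

class VLAPolicy:
    """A generalist policy: maps (vision, language) -> low-level robot actions."""
    def predict(self, obs: Observation) -> Action:
        # A real model would run a vision-language backbone plus an action head here.
        raise NotImplementedError

def control_loop(policy: VLAPolicy, get_camera_frame, send_to_robot, instruction: str, steps: int) -> None:
    for _ in range(steps):
        obs = Observation(image=get_camera_frame(), instruction=instruction)
        action = policy.predict(obs)        # one forward pass per control tick
        send_to_robot(action.joint_targets) # hand the result to the motor controller
```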

Alibaba makes artificial general intelligence its focus

Alibaba’s CEO emphasized that enhancing intelligence models is key to the company’s long-term vision as it shifts towards AI technologies.

This aligns with Alibaba’s declaration that it is an AI-driven company.

While e-commerce remains central, Alibaba’s cloud services saw strong growth, with revenue rising 13% last quarter. AI-related products within the cloud division posted triple-digit growth.

Advancing game ideation with Muse: the first World and Human Action Model (WHAM)

Microsoft introduces the first World and Human Action Model (WHAM). The WHAM, which we’ve named “Muse,” is a generative AI model of a video game that can generate game visuals, controller actions, or both.


Today Nature published Microsoft’s research detailing our WHAM, an AI model that generates video game visuals & controller actions. We are releasing the model weights, sample data, & WHAM Demonstrator on Azure AI Foundry, enabling researchers to build on the work.
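For readers unfamiliar with world models, the sketch below illustrates the general idea of rolling out a model that jointly continues game visuals and controller inputs. The names (WorldActionModel, predict_next, rollout) are hypothetical placeholders, not the published WHAM/Muse API or the WHAM Demonstrator.

```python
# Illustrative sketch of rolling out a world-and-human-action model.
# The class and methods are hypothetical, not Microsoft's actual WHAM interface.
from typing import List, Tuple

class WorldActionModel:
    def predict_next(self, frames: List[bytes], actions: List[int]) -> Tuple[bytes, int]:
        """Given the history of game frames and controller inputs,
        jointly predict the next frame and the next controller action."""
        raise NotImplementedError

def rollout(model: WorldActionModel, seed_frames: List[bytes], seed_actions: List[int], steps: int):
    frames, actions = list(seed_frames), list(seed_actions)
    for _ in range(steps):
        next_frame, next_action = model.predict_next(frames, actions)
        frames.append(next_frame)    # the model continues the visual sequence...
        actions.append(next_action)  # ...and the human-like controller inputs
    return frames, actions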

AI can now model and design the genetic code for all domains of life with Evo 2

Very excellent.


Arc Institute researchers have developed a machine learning model called Evo 2 that is trained on the DNA of over 100,000 species across the entire tree of life. Its deep understanding of biological code means that Evo 2 can identify patterns in gene sequences across disparate organisms that experimental researchers would need years to uncover. The model can accurately identify disease-causing mutations in human genes and is capable of designing new genomes that are as long as the genomes of simple bacteria.
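One common way a genomic language model can flag potentially disease-causing mutations is to compare how likely it finds a reference sequence versus a mutated one. The sketch below is a minimal illustration of that idea, assuming such a likelihood interface; GenomicLM and its methods are hypothetical placeholders, not the actual Evo 2 or BioNeMo API.

```python
# Minimal sketch of variant-effect scoring with a genomic language model,
# in the spirit of Evo 2. GenomicLM is a hypothetical placeholder.
class GenomicLM:
    def log_likelihood(self, sequence: str) -> float:
        """Return the model's log-probability of a DNA sequence."""
        raise NotImplementedError

def variant_effect_score(model: GenomicLM, reference: str, position: int, alt_base: str) -> float:
    """Score a single-nucleotide variant: more negative suggests more disruptive."""
    variant = reference[:position] + alt_base + reference[position + 1:]
    # A large drop in likelihood for the variant relative to the reference
    # suggests the mutation disrupts a functional pattern the model has learned.
    return model.log_likelihood(variant) - model.log_likelihood(reference)
```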

Evo 2’s developers—made up of scientists from Arc Institute and NVIDIA, convening collaborators across Stanford University, UC Berkeley, and UC San Francisco—will post details about the model as a preprint on February 19, 2025, accompanied by a user-friendly interface called Evo Designer. The Evo 2 code is publicly accessible from Arc’s GitHub, and is also integrated into the NVIDIA BioNeMo framework, as part of a collaboration between Arc Institute and NVIDIA to accelerate scientific research. Arc Institute also worked with AI research lab Goodfire to develop a mechanistic interpretability visualizer that uncovers the key biological features and patterns the model learns to recognize in genomic sequences. The Evo team is sharing its training data, training and inference code, and model weights to release the largest-scale, fully open source AI model to date.

Building on its predecessor Evo 1, which was trained entirely on the genomes of single-celled organisms, Evo 2 is the largest artificial intelligence model in biology to date, trained on over 9.3 trillion nucleotides—the building blocks that make up DNA or RNA—from over 128,000 whole genomes as well as metagenomic data. In addition to an expanded collection of bacterial, archaeal, and phage genomes, Evo 2 includes information from humans, plants, and other single-celled and multicellular species in the eukaryotic domain of life.