

Researchers at UC Santa Barbara and TU Dresden are pioneering a new approach to robotics by creating a collective of small robots that function like a smart material.


Researchers have engineered groups of robots that behave as smart materials with tunable shape and strength, mimicking living systems. “We’ve figured out a way for robots to behave more like a material,” said Matthew Devlin, a former doctoral researcher in the lab of University of California, Santa Barbara (UCSB) mechanical engineering professor Elliot Hawkes and the lead author of the paper published in the journal Science.

Composed of individual, disk-shaped units that look like small hockey pucks, the members of the collective are programmed to assemble themselves into various forms with different material strengths.

One challenge of particular interest to the research team was creating a robotic material that could be stiff and strong, yet able to flow when a new form is needed. “Robotic materials should be able to take a shape and hold it,” Hawkes explained, “but also be able to selectively flow themselves into a new shape.” However, when robots are strongly bonded to one another in a group, reconfiguring the collective so that it can flow and change shape at will has not been possible. Until now.

It’s barely been two years since OpenAI’s ChatGPT was released for public use, inviting anyone on the internet to collaborate with an artificial mind on anything from poetry to school assignments to letters to their landlord.

Today, the famous large language model (LLM) is just one of several leading programs that appear convincingly human in their responses to basic queries.

That uncanny resemblance may extend further than intended, with researchers from Israel now finding LLMs suffer a form of cognitive decline that increases with age just as we do.

The Carboncopies Foundation is starting the Brain Emulation Challenge.
With the availability of high-throughput electron microscopy (EM), expansion microscopy (ExM), calcium and voltage imaging, co-registered combinations of these techniques, and further advancements, high-resolution data sets that span multiple brain regions, or entire small animal brains such as that of the fruit fly Drosophila melanogaster, may now offer inroads to expansive neuronal circuit analysis. The results of such analysis represent a paradigm change in the conduct of neuroscience.

So far, almost all investigations in neuroscience have relied on correlational studies, in which a modicum of insight gleaned from observational data leads to the formulation of mechanistic hypotheses, corresponding computational modeling, and predictions made using those models, so that experimental testing of the predictions offers support or modification of hypotheses. These are indirect methods for the study of a black box system of highly complex internal structure, methods that have received published critique as being unlikely to lead to a full understanding of brain function (Jonas and Kording, 2017).

Large-scale, high-resolution reconstruction of brain circuitry may instead lead to mechanistic explanations and predictions of cognitive function, with meaningful descriptions of representations and their transformation along the full trajectory of stages in neural processing. Insights that come from circuit reconstructions of this kind, a reverse engineering of cognitive processes, will lead to valuable advances in neuroprosthetic medicine, understanding of the causes and effects of neurodegenerative disease, possible implementations of similar processes in artificial intelligence, and in silico emulations of brain function, known as whole-brain emulation (WBE).

Only weeks after Figure.ai announced ending its collaboration deal with OpenAI, the Silicon Valley startup has announced Helix – a commercial-ready, AI “hive-mind” humanoid robot that can do almost anything you tell it to.

Figure has made headlines in the past with its Figure 01 humanoid robot. The company is now on version 2 of its premier robot; however, it has received more than just a few design changes: it has been given an entirely new AI brain called Helix.

It’s not just any ordinary AI, either. Helix is the first of its kind to be put into a humanoid robot: a generalist Vision-Language-Action (VLA) model. The key word is “generalist.” It can see the world around it, understand natural language, interact with the real world, and learn new tasks.