
LLMs develop their own understanding of reality as their language abilities improve

Does the lack of eyes mean that language models can’t ever “understand” that a lion is “larger” than a house cat? Philosophers and scientists alike have long considered the ability to assign meaning to language a hallmark of human intelligence — and pondered what essential ingredients enable us to do so.

Peering into this enigma, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have uncovered intriguing results suggesting that language models may develop their own understanding of reality as a way to improve their generative abilities. The team first developed a set of small Karel puzzles, which consisted of coming up with instructions to control a robot in a simulated environment. They then trained an LLM on the solutions, but without demonstrating how the solutions actually worked. Finally, using a machine learning technique called “probing,” they looked inside the model’s “thought process” as it generated new solutions.

After training on over 1 million random puzzles, they found that the model spontaneously developed its own conception of the underlying simulation, despite never being exposed to this reality during training. Such findings call into question our intuitions about what types of information are necessary for learning linguistic meaning — and whether LLMs may someday understand language at a deeper level than they do today.
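The “probing” technique mentioned above can be illustrated with a minimal sketch: a simple linear classifier is trained to read a property of interest out of a model’s hidden states, and if it generalizes to held-out states, that property is linearly decodable from the model’s internal representations. The hidden states, dimensions, and the probed property below are synthetic stand-ins, not data from the study.

```python
# Minimal sketch of linear probing: can a property (here, a hypothetical
# binary state bit) be decoded from a model's hidden-state vectors?
# The "hidden states" are synthetic, not real LLM activations.
import numpy as np

rng = np.random.default_rng(0)

n, d = 1000, 256
labels = rng.integers(0, 2, size=n)      # hypothetical property to probe for
signal = rng.normal(size=d)              # direction weakly encoding it
hidden = rng.normal(size=(n, d)) + 2.0 * np.outer(labels - 0.5, signal)

# Logistic-regression probe trained by gradient descent on a train split.
split = 800
w, b = np.zeros(d), 0.0
for _ in range(500):
    z = np.clip(hidden[:split] @ w + b, -30, 30)
    p = 1.0 / (1.0 + np.exp(-z))
    grad = p - labels[:split]
    w -= 0.1 * (hidden[:split].T @ grad) / split
    b -= 0.1 * grad.mean()

# High held-out accuracy means the property is linearly decodable --
# evidence that the representation encodes it.
test_pred = (hidden[split:] @ w + b) > 0
accuracy = (test_pred == labels[split:]).mean()
print(f"probe accuracy: {accuracy:.2f}")
```

In the actual study the probe reads robot/world state from the LLM’s activations; here the same mechanics are shown on synthetic vectors where the encoded direction is planted by construction.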

New Study Suggests Mars Has Large Underground Ocean

A new study provides evidence that Mars contains a large ocean deep beneath its surface.

The finding is based on data collected by the InSight lander, a robotic explorer operated by the American space agency NASA. InSight, which landed in 2018, was designed to capture data from within the planet’s interior. The lander ended its operations on Mars in late 2022.

For the current study, researchers used seismic data collected by InSight. The team examined the data to study Martian quake activity. Seismic activity on Mars happens in the form of “marsquakes.” NASA says InSight had recorded more than 1,300 marsquakes.

‘AI Scientist’ model designed to conduct scientific research autonomously

A team of AI researchers at Sakana AI, in Japan, working with colleagues from the University of Oxford and the University of British Columbia, has developed an AI system that can conduct scientific research autonomously.

The group has posted a paper to the arXiv preprint server describing their system, which they call “The AI Scientist”. They have also posted an overview of their system on Sakana’s corporate website.

Scientific research is generally a long and involved process. It tends to start with a simple idea, such as, “Is there a way to stop the buildup of plaque on human teeth?” Scientists then research other studies to determine what research has been done on the topic.

Flexible multi-task computation in recurrent neural networks relies on dynamical motifs, study shows

Cognitive flexibility, the ability to rapidly switch between different thoughts and mental concepts, is a highly advantageous human capability. This salient capability supports multi-tasking, the rapid acquisition of new skills and the adaptation to new situations.

While artificial intelligence (AI) systems have become increasingly advanced over the past few decades, they currently do not exhibit the same flexibility as humans in learning new skills and switching between tasks. A better understanding of how biological neural circuits support cognitive flexibility, particularly how they support multi-tasking, could inform future efforts aimed at developing more flexible AI.

Recently, some computer scientists and neuroscientists have been studying neural computations using artificial neural networks. Most of these networks, however, were trained to tackle tasks individually as opposed to multiple tasks.
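The multi-task setup in this line of work can be sketched as follows: a single recurrent network receives the stimulus concatenated with a one-hot “task cue,” so one set of weights serves many tasks and the cue steers the network’s dynamics. All sizes, task names, and weights below are illustrative assumptions (the weights are random and untrained), not details from the study.

```python
# Sketch of a shared multi-task RNN: the task identity enters as a
# one-hot cue alongside the stimulus, so the same weights implement
# different dynamics for different tasks. Sizes and task names are
# hypothetical; weights are random, untrained placeholders.
import numpy as np

rng = np.random.default_rng(1)

TASKS = ["memory", "decision", "anti"]   # hypothetical task names
stim_dim, hidden_dim = 4, 32
in_dim = stim_dim + len(TASKS)           # stimulus + task cue

W_in = rng.normal(scale=0.3, size=(hidden_dim, in_dim))
W_rec = rng.normal(scale=1.0 / np.sqrt(hidden_dim),
                   size=(hidden_dim, hidden_dim))

def run_trial(stimulus, task, steps=20):
    """Run the shared RNN on one trial of the named task."""
    cue = np.zeros(len(TASKS))
    cue[TASKS.index(task)] = 1.0         # task identity as a one-hot cue
    x = np.concatenate([stimulus, cue])
    h = np.zeros(hidden_dim)
    states = []
    for _ in range(steps):
        h = np.tanh(W_in @ x + W_rec @ h)   # same weights for every task
        states.append(h.copy())
    return np.array(states)

stim = rng.normal(size=stim_dim)
# The same stimulus drives different trajectories depending on the cue --
# the kind of flexibility that shared "dynamical motifs" are thought
# to implement.
traj_a = run_trial(stim, "memory")
traj_b = run_trial(stim, "anti")
print("trajectories differ:", not np.allclose(traj_a, traj_b))
```

In studies of this kind the network is then trained on all tasks jointly, and the learned dynamics are analyzed for recurring motifs shared across tasks; here only the input scheme and forward pass are shown.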

Fully 3D-printed shape memory mini-actuators can move small soft robots

Researchers from North Carolina State University have demonstrated miniature soft hydraulic actuators that can be used to control the deformation and motion of soft robots that are less than a millimeter thick. The researchers have also demonstrated that this technique works with shape memory materials, allowing users to repeatedly lock the soft robots into a desired shape and return to the original shape as needed.

“Soft robotics holds promise for many applications, but it is challenging to design the actuators that drive the motion of soft robots on a small scale,” says Jie Yin, corresponding author of a paper on the work (Advanced Materials, “Fully 3D-Printed Miniature Soft Hydraulic Actuators with Shape Memory Effect for Morphing and Manipulation”) and an associate professor of mechanical and aerospace engineering at NC State. “Our approach makes use of commercially available multi-material 3D printing technologies and shape memory polymers to create soft actuators on a microscale that allow us to control very small soft robots, which allows for exceptional control and delicacy.”

The new technique relies on creating soft robots that consist of two layers. The first layer is a flexible polymer that is created using 3D printing technologies and incorporates a pattern of microfluidic channels – essentially very small tubes running through the material. The second layer is a flexible shape memory polymer. Altogether, the soft robot is only 0.8 millimeters thick.
