
Roboticists develop a bird-like robot that can jump into the air to launch itself into flight

A team of roboticists at École Polytechnique Fédérale de Lausanne, working with a colleague from the University of California, has designed, built and demonstrated a bird-like robot that can launch itself into flight using spring-like legs.

The group describes their work in a paper published in the journal Nature. Aimy Wissa, a researcher at Princeton University, has published a News & Views piece in the same journal issue suggesting possible ways the innovation could be used in real-world applications.

Some types of drones, such as those with rotors, can rise straight up off the ground. Others, powered by forward-facing propellers or engines that push exhaust out the back, must either race along a runway or be catapulted to get airborne. For this new project, the research team developed a new approach for getting such craft into the air: jumping using spring-like legs.

Giant cyborg cockroaches could be the search and rescue workers of the future

Fitzgerald says cyborg search and rescue beetles or cockroaches might be able to help in disaster situations by finding and reporting the location of survivors and delivering lifesaving drugs to them before human rescuers can get there.

But first, the Australian researchers must master the ability to direct the movements of the insects, which could take a while. Fitzgerald says that although the work might seem futuristic now, in a few decades, cyborg insects could be saving lives.

He’s not the only roboticist creating robots from living organisms. Academics at the California Institute of Technology (Caltech), for example, are implanting electronic pacemakers into jellyfish to control their swimming speed. They hope the bionic jellies could help collect data about the ocean far below the surface.

Brain Age Models Offer Insights into Early Development Trajectories

Summary: A new study highlights how brain age models can track healthy infant development and reveal environmental influences. Using MRI data from over 600 term and preterm infants, researchers trained machine learning models to predict brain age and identify gaps between predicted and actual ages.

These brain age gaps can indicate whether an infant’s development is faster or slower than expected, with maternal age emerging as a significant influencing factor. Advanced brain development was linked to better cognitive abilities but poorer emotional regulation, suggesting that following normative developmental trajectories may be ideal.
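The "brain age gap" described above is simply the difference between the age a trained model predicts from an MRI scan and the infant's actual chronological age. A minimal sketch of that bookkeeping in plain Python (the age values and the one-week threshold below are illustrative assumptions, not figures from the study):

```python
def brain_age_gap(predicted_weeks: float, actual_weeks: float) -> float:
    """Positive gap: the brain appears older than chronological age."""
    return predicted_weeks - actual_weeks

def classify(gap: float, threshold: float = 1.0) -> str:
    """Label development relative to the normative trajectory.
    The 1-week threshold is an illustrative assumption."""
    if gap > threshold:
        return "advanced"
    if gap < -threshold:
        return "delayed"
    return "typical"

# Illustrative (predicted, actual) postmenstrual ages in weeks
pairs = [(41.5, 40.0), (38.0, 40.0), (40.3, 40.0)]
labels = [classify(brain_age_gap(p, a)) for p, a in pairs]
print(labels)  # ['advanced', 'delayed', 'typical']
```

In the study itself the predicted ages come from machine learning models trained on MRI features; here the predictions are supplied directly, since the point is only how the gap is defined and interpreted.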

Tiny 15 mm robot from China zips past speed records in robotics

Chinese researchers have created the BHMbot-B, a 15 mm-long microrobot capable of quick forward and backward movements, making it ideal for navigating confined spaces.

The robot effectively switches between forward and backward movement by aligning the vibratory motions of its magnet, cantilever, and linkages using vibration mode transition control.

The Beihang University team says the device combines a battery, a control circuit for wireless operation, and two electromagnetic actuators that give it a high load capacity.

Building a “Google Maps” for Biology: Human Cell Atlas Revolutionizes Medicine

New research from the Human Cell Atlas offers insights into cell development, disease mechanisms, and genetic influences, enhancing our understanding of human biology and health.

The Human Cell Atlas (HCA) consortium has made significant progress in its mission to better understand the cells of the human body in health and disease, with a recent publication of a Collection of more than 40 peer-reviewed papers in Nature and other Nature Portfolio journals.

The Collection showcases a range of large-scale datasets, artificial intelligence algorithms, and biomedical discoveries from the HCA that are enhancing our understanding of the human body. The studies reveal insights into how the placenta and skeleton form, changes during brain maturation, new gut and vascular cell states, lung responses to COVID-19, and the effects of genetic variation on disease, among others.

DeepMind’s Genie 2 can generate interactive worlds that look like video games

DeepMind, Google’s AI research org, has unveiled a model that can generate an “endless” variety of playable 3D worlds.

Called Genie 2, the model — the successor to DeepMind’s Genie, which was released earlier this year — can generate an interactive, real-time scene from a single image and text description (e.g. “A cute humanoid robot in the woods”). In this way, it’s similar to models under development by Fei-Fei Li’s company, World Labs, and Israeli startup Decart.

DeepMind claims that Genie 2 can generate a “vast diversity of rich 3D worlds,” including worlds in which users can take actions like jumping and swimming by using a mouse or keyboard. Trained on videos, the model’s able to simulate object interactions, animations, lighting, physics, reflections, and the behavior of “NPCs.”

Genie 2: A large-scale foundation world model


Today we introduce Genie 2, a foundation world model capable of generating an endless variety of action-controllable, playable 3D environments for training and evaluating embodied agents. Based on a single prompt image, it can be played by a human or AI agent using keyboard and mouse inputs.

Games play a key role in the world of artificial intelligence (AI) research. Their engaging nature, unique blend of challenges, and measurable progress make them ideal environments to safely test and advance AI capabilities.