
A Norwegian startup is building massive AI robots to help airlines reduce their carbon emissions, save water, and inspect their planes in a fraction of the time it usually takes.

The challenge: The aviation industry is responsible for about 2.5% of global carbon emissions, and while sustainable jet fuels or electric propulsion systems could one day slash that figure, airlines can reduce their emissions right now — simply by cleaning their planes more often.

Washing an airplane’s exterior reduces air resistance, which can cut the jet fuel it burns by up to 2%. That’s not a huge difference for a single plane, but it adds up across the roughly 28,000 commercial jets in the global fleet.
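To get a feel for the scale, here is a back-of-the-envelope calculation. The fleet size and 2% figure come from the article; the per-jet annual fuel burn is an illustrative assumption, not a number from the source:

```python
# Back-of-the-envelope estimate of fleet-wide fuel savings.
fleet_size = 28_000            # commercial jets worldwide (from the article)
fuel_per_jet_tonnes = 5_000    # ASSUMED annual fuel burn per jet, in tonnes
savings_rate = 0.02            # up to 2% less fuel after a wash

fleet_savings = fleet_size * fuel_per_jet_tonnes * savings_rate
print(f"Potential savings: {fleet_savings:,.0f} tonnes of jet fuel per year")
# → Potential savings: 2,800,000 tonnes of jet fuel per year
```

Even under conservative assumptions, shaving a small percentage off every plane's fuel burn compounds into a large absolute number, which is the article's point.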

If users wish to “freeze” the soft robot’s shape, they can apply moderate heat (64°C, or 147°F), and then let the robot cool briefly. This prevents the soft robot from reverting to its original shape, even after the liquid in the microfluidic channels is pumped out. If users want to return the soft robot to its original shape, they simply apply the heat again after pumping out the liquid, and the robot relaxes to its original configuration.

“A key factor here is fine-tuning the thickness of the shape memory layer relative to the layer that contains the microfluidic channels,” says Yinding Chi, co-lead author of the paper and a former Ph.D. student at NC State. “You need the shape memory layer to be thin enough to bend when the actuator’s pressure is applied, but thick enough to get the soft robot to retain its shape even after the pressure is removed.”

To demonstrate the technique, the researchers created a soft robot “gripper,” capable of picking up small objects. The researchers applied hydraulic pressure, causing the gripper to pinch closed on an object. By applying heat, the researchers were able to fix the gripper in its “closed” position, even after releasing pressure from the hydraulic actuator.

There are a lot of other ways that AI could really take things in a bad direction.

One of the OpenAI directors who worked to oust CEO Sam Altman is issuing some stark warnings about the future of unchecked artificial intelligence.

In an interview during Axios’ AI+ summit, former OpenAI board member Helen Toner suggested that the risks AI poses to humanity aren’t just worst-case scenarios from science fiction.

“I just think sometimes people hear the phrase ‘existential risk’ and they just think Skynet, and robots shooting humans,” Toner said, referencing the evil AI technology from the “Terminator” films that’s often used as a metaphor for worst-case-scenario AI predictions.

Quantum computers have the potential to solve complex problems in human health, drug discovery, and artificial intelligence millions of times faster than some of the world’s fastest supercomputers. A network of quantum computers could advance these discoveries even faster. But before that can happen, the computer industry will need a reliable way to string together billions of qubits—or quantum bits—with atomic precision.

Powerful new artificial intelligence models sometimes, quite famously, get things wrong—whether hallucinating false information or memorizing others’ work and offering it up as their own. To address the latter, researchers led by a team at The University of Texas at Austin have developed a framework to train AI models on images corrupted beyond recognition.

DALL-E, Midjourney and Stable Diffusion are among the text-to-image diffusion generative AI models that can turn arbitrary user text into highly realistic images. All three are now facing lawsuits from artists who allege generated samples replicate their work. Trained on billions of image-text pairs that are not publicly available, the models are capable of generating high-quality imagery from textual prompts but may draw on copyrighted images that they then replicate.

The newly proposed framework, called Ambient Diffusion, gets around this problem by training diffusion models through access only to corrupted image-based data. Early efforts suggest the framework is able to continue to generate high-quality samples without ever seeing anything that’s recognizable as the original source images.