
Scientists have invented an artificial plant that can clean indoor air while generating enough electricity to power a smartphone.

A team from Binghamton University in New York created an artificial leaf “for fun” using five biological solar cells and their photosynthetic bacteria, before realizing that the device could be used for practical applications.

A proof-of-concept plant with five artificial leaves was capable of generating electricity and oxygen while removing CO2 far more efficiently than natural plants.

When cars, planes, ships or computers are built from a material that functions as both a battery and a load-bearing structure, weight and energy consumption drop radically. A research group at Chalmers University of Technology in Sweden is now presenting a world-leading advance in so-called massless energy storage: a structural battery that could halve the weight of a laptop, make a mobile phone as thin as a credit card, or increase the driving range of an electric car by up to 70% on a single charge.

“We have succeeded in creating a battery made of carbon fiber composite that is as stiff as aluminum and energy-dense enough to be used commercially. Just like a human skeleton, the battery has several functions at the same time,” says Chalmers researcher Richa Chaudhary, who is the first author of an article recently published in Advanced Materials.

Research on structural batteries has been underway at Chalmers for many years, at times in collaboration with researchers at the KTH Royal Institute of Technology in Stockholm, Sweden. When Professor Leif Asp and colleagues published their first results in 2018, showing how stiff, strong carbon fibers could store electrical energy chemically, the advance attracted massive attention.
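The range claim above rests on simple mass arithmetic: every kilogram of dedicated battery pack that the load-bearing structure absorbs is a kilogram the car no longer has to haul. Below is a minimal sketch of that argument, in which all vehicle figures are hypothetical assumptions, not numbers from the Chalmers study:

```python
# Back-of-envelope model: an EV's energy use per km scales roughly with
# total mass, so range scales inversely with it. All numbers below are
# illustrative assumptions, not figures from the Chalmers paper.

def range_gain(vehicle_kg: float, saved_kg: float) -> float:
    """Fractional range gain when saved_kg of dead mass is eliminated."""
    return vehicle_kg / (vehicle_kg - saved_kg) - 1.0

VEHICLE_KG = 2000   # hypothetical mid-size EV, total mass
PACK_KG = 500       # hypothetical dedicated battery pack within that total

# If the chassis itself stores energy, some fraction of the pack's mass
# effectively disappears from the vehicle.
for fraction in (0.25, 0.50, 0.75):
    saved = PACK_KG * fraction
    print(f"absorb {saved:5.1f} kg into structure -> "
          f"~{range_gain(VEHICLE_KG, saved):.0%} more range")
```

This toy model only removes pack mass; figures as high as the article’s 70% presumably also credit the structural material the battery replaces and second-order savings, since a lighter car needs smaller motors and brakes in turn.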

Whether it’s the smartphone in your pocket or the laptop on your desk, all current computing devices are based on electronics. But electronics has some inherent drawbacks: in particular, devices necessarily generate a lot of heat, especially as their performance increases, and fabrication technologies are approaching the fundamental limits of what is theoretically possible.

As a result, researchers are exploring alternative ways to perform computation that can tackle these problems and, ideally, offer new functionality or features too.

One possibility is optical computing, an idea that has existed for several decades but has yet to break through and become commercially viable.

With the recent release of the iPhone 16, which Apple has promised is optimized for artificial intelligence, it’s clear that AI is once again front of mind for the average consumer. Yet the technology remains rather limited compared with the vast abilities that the most forward-thinking AI technologists anticipate will be achievable in the near future.

For all the excitement around the technology, many still fear the potentially negative consequences of integrating it so deeply into society. One common concern is that a sufficiently advanced AI could determine humanity to be a threat and turn against us all, a scenario imagined in many science fiction stories. However, according to a leading AI researcher, most people’s concerns can be alleviated by decentralizing and democratizing AI’s development.

On Episode 46 of The Agenda podcast, hosts Jonathan DeYoung and Ray Salmond separate fact from fiction by speaking with Ben Goertzel, the computer scientist and researcher who first popularized the term “artificial general intelligence,” or AGI. Goertzel currently serves as the CEO of SingularityNET and the ASI Alliance, where he leads the projects’ efforts to develop the world’s first AGI.

In recent years, the limitations of conventional computing have become far more pressing. Deep neural networks have radically expanded the capabilities of artificial intelligence, but they have also created a monstrous demand for computational resources, and these resources present an enormous financial and environmental burden. Training GPT-3, a text predictor so accurate that it easily tricks people into thinking its words were written by a human, cost $4.6 million and emitted a sobering volume of carbon dioxide, as much as 1,300 cars, according to Boahen.
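That $4.6 million figure is a cloud-pricing estimate that can be reproduced with one line of arithmetic. In the sketch below, the total training compute is the figure reported in the GPT-3 paper (about 3.14 × 10^23 floating-point operations); the sustained GPU throughput and hourly price are assumptions typical of that style of estimate, not numbers from Boahen’s article:

```python
# Reconstructing the ballpark cost of training GPT-3 from its compute
# budget. TOTAL_FLOPS is from the GPT-3 paper; the throughput and price
# are illustrative cloud-GPU assumptions (roughly a V100 instance).

TOTAL_FLOPS = 3.14e23        # total training compute, floating-point ops
SUSTAINED_FLOPS = 28e12      # assumed sustained throughput per GPU, ops/s
USD_PER_GPU_HOUR = 1.50      # assumed cloud price

gpu_hours = TOTAL_FLOPS / SUSTAINED_FLOPS / 3600
print(f"{gpu_hours / 8760:,.0f} GPU-years of compute")    # ~356
print(f"~${gpu_hours * USD_PER_GPU_HOUR:,.0f} to train")  # ~$4.7 million
```

Under those assumptions the arithmetic lands within a few percent of the widely quoted $4.6 million.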

With the free time afforded by the pandemic, Boahen, who is a faculty affiliate at the Wu Tsai Neurosciences Institute at Stanford and the Stanford Institute for Human-Centered AI (HAI), applied himself single-mindedly to this problem. “Every 10 years, I realize some blind spot that I have or some dogma that I’ve accepted,” he says. “I call it ‘raising my consciousness.’”

This time around, raising his consciousness meant looking toward dendrites, the spindly protrusions that neurons use to detect signals, for a completely novel way of thinking about computer chips. And, as he writes in Nature, he thinks he’s figured out how to make chips so efficient that the enormous GPT-3 language model could one day run on a cell phone. Just as Feynman posited the “quantum supremacy” of quantum computers over traditional computers, Boahen wants to work toward a “neural supremacy.”