
In a megascience-scale collaboration with French researchers from the Collège de France and the University of Montpellier, Skoltech scientists have shown that a much-publicized problem with next-generation lithium-ion batteries was induced by the very experiments that sought to investigate it. Published in Nature Materials, the team’s findings suggest that the deterioration of lithium-rich cathode materials should be approached from a different angle, giving hope for more efficient lithium-ion batteries that would store some 30% more energy.

Efficient energy storage is critical for the transition to a low-carbon economy, whether in grid-scale applications, electric vehicles, or portable devices. Lithium-ion batteries remain the best-developed electrochemical storage technology and promise further improvements. In particular, next-generation batteries with so-called lithium-rich cathodes could store about one-third more energy than their state-of-the-art counterparts with cathodes made of lithium nickel manganese cobalt oxide, or NMC.

A key challenge hindering the commercialization of lithium-rich batteries is voltage fade and capacity drop. As the battery is repeatedly charged and discharged in the course of normal use, its cathode material degrades in a way that is not fully understood, causing gradual voltage and capacity loss. The problem is known to be associated with reduction and oxidation processes in the NMC material, but the precise nature of this redox chemistry is not understood. This theoretical gap undermines attempts to overcome voltage fade and bring next-generation batteries to the market.

Many companies invest heavily in hiring talent to create the high-performance library code that underpins modern artificial intelligence systems. NVIDIA, for instance, developed some of the most advanced high-performance computing (HPC) libraries, creating a competitive moat that has proven difficult for others to breach.

But what if a couple of students, within a few months, could compete with state-of-the-art HPC libraries with a few hundred lines of code, instead of tens or hundreds of thousands?

That’s what researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have shown with a new programming language called Exo 2.

Moving from fossil fuels to renewable energy sources like wind and solar will require better ways to store energy for use when the sun is not shining or the wind is not blowing. A new study by researchers at Penn State has found that taking advantage of natural geothermal heat in depleted oil and gas wells can improve the efficiency of one proposed energy storage solution: compressed-air energy storage (CAES).

The researchers recently published their findings in the Journal of Energy Storage.

CAES plants compress air and store it underground when demand is low and then extract the air to create electricity when demand is high. But startup costs currently limit commercial development of these projects, the scientists said.
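As a rough back-of-the-envelope sketch of the storage principle (not figures or methods from the Penn State study), the ideal isothermal work needed to compress air bounds the energy a CAES cavern can hold. The cavern volume and pressure below are made-up illustrative values.

```python
import math

def isothermal_storage_energy_j(volume_m3, p_store_pa, p_ambient_pa=101_325.0):
    """Ideal-gas isothermal compression work, W = p*V*ln(p/p0), for air
    filling a cavern of volume_m3 at pressure p_store_pa."""
    return p_store_pa * volume_m3 * math.log(p_store_pa / p_ambient_pa)

# Hypothetical example: a 100,000 m^3 cavern charged to 70 bar
energy_j = isothermal_storage_energy_j(100_000, 70e5)
energy_mwh = energy_j / 3.6e9  # joules to megawatt-hours
```

Real plants recover only part of this figure: compression heats the air, and that heat is lost unless captured, which is exactly why reusing geothermal heat from depleted wells can raise round-trip efficiency.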

Some of the world’s most advanced AI systems struggle to tell the time and work out dates on calendars, a study suggests.

While AI models can perform tasks such as writing essays and generating art, they have yet to master some skills that humans carry out with ease, researchers say.

A team from the University of Edinburgh has shown that state-of-the-art AI models are unable to reliably interpret clock-hand positions or correctly answer questions about dates on calendars.
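To make the two tasks concrete, here is a minimal sketch of what a correct answer looks like; this is illustrative only, not the Edinburgh team's benchmark, and the helper names are mine.

```python
from datetime import date

def hand_angles(hour, minute):
    """Angles of the hour and minute hands in degrees clockwise from 12."""
    minute_angle = minute * 6.0                       # 360 deg / 60 min
    hour_angle = (hour % 12) * 30.0 + minute * 0.5    # hour hand drifts 0.5 deg/min
    return hour_angle, minute_angle

def weekday_of(year, month, day):
    """Name the weekday for a calendar date."""
    return date(year, month, day).strftime("%A")

# Reading "3:30" off a clock face means locating the hands at these angles:
angles = hand_angles(3, 30)   # hour hand at 105 deg, minute hand at 180 deg
weekday = weekday_of(2025, 1, 1)
```

Both computations are trivial arithmetic and calendar lookups, which is what makes the models' unreliability on the equivalent visual and textual questions notable.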

Fueling excitement, Tesla’s Cybercab was spotted navigating the expansive grounds of Gigafactory Texas autonomously. Tesla Cybercab, also labeled as the Robotaxi, was unveiled by CEO Elon Musk in October 2024, during the ‘We Robot’ event in California. The two-seat vehicle has no steering wheel or pedals – it represents Tesla’s end goal for a completely autonomous transportation network.

The Cybercab has butterfly doors that open automatically, a hatchback layout for the cargo room, and an inductive charging technique that eliminates the need for conventional charging ports. Tesla expects to start production of the Cybercabs before 2027, and the price is estimated at $30,000.

The Tesla CyberCab is an autonomous vehicle that Tesla plans to use in its upcoming ride-hailing system. The CyberCab is a distinct vehicle type in Tesla’s lineup: it is purpose-built without any human driver controls, with the aim of combining efficiency, passenger comfort, and an extended product life span.

Recent methods applying deep learning to model neural activity often rely on “black-box” approaches that lack an interpretable connection between neural activity and network parameters. Here, building on advances in interpretable AI, Tolooshams et al. propose a new method, deconvolutional unrolled neural learning (DUNL), and apply it to several datasets.
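To illustrate the general idea of algorithm unrolling that such methods build on (a generic sketch of unrolled sparse deconvolution, not the actual DUNL model), each "layer" of the network corresponds to one iteration of an iterative solver, so the learned parameters keep an interpretable meaning as kernels and sparsity settings:

```python
def soft_threshold(x, t):
    """Shrink each entry toward zero by t (the sparsity-inducing step)."""
    return [max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0) for v in x]

def convolve(kernel, code):
    """Full 1-D convolution of a sparse code with a kernel."""
    out = [0.0] * (len(code) + len(kernel) - 1)
    for i, c in enumerate(code):
        for j, k in enumerate(kernel):
            out[i + j] += c * k
    return out

def correlate(kernel, signal):
    """Adjoint of convolve: cross-correlation, trimmed to code length."""
    n = len(signal) - len(kernel) + 1
    return [sum(kernel[j] * signal[i + j] for j in range(len(kernel)))
            for i in range(n)]

def unrolled_ista(signal, kernel, n_layers=50, step=0.1, sparsity=0.05):
    """Each loop pass is one 'layer': a gradient step plus soft-thresholding."""
    code = [0.0] * (len(signal) - len(kernel) + 1)
    for _ in range(n_layers):
        residual = [s - r for s, r in zip(signal, convolve(kernel, code))]
        grad = correlate(kernel, residual)
        code = soft_threshold([c + step * g for c, g in zip(code, grad)],
                              step * sparsity)
    return code

# Toy example: recover a single spike of amplitude 2 from its convolution
kernel = [1.0, 0.5]
observed = convolve(kernel, [0.0, 2.0, 0.0, 0.0])
recovered = unrolled_ista(observed, kernel)
```

Because every parameter maps onto a solver quantity (kernel, step size, sparsity level), unrolled networks avoid the "black-box" problem the passage describes.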

Artificial Intelligence (AI) is, without a doubt, the defining technological breakthrough of our time. It represents not only a quantum leap in our ability to solve complex problems but also a mirror reflecting our ambitions, fears, and ethical dilemmas. As we witness its exponential growth, we cannot ignore the profound impact it is having on society. But are we heading toward a bright future or a dangerous precipice?

This opinion piece aims to foster critical reflection on AI’s role in the modern world and what it means for our collective future.

AI is no longer the stuff of science fiction. It is embedded in nearly every aspect of our lives, from the virtual assistants on our smartphones to the algorithms that recommend what to watch on Netflix or determine our eligibility for a bank loan. In medicine, AI is revolutionizing diagnostics and treatments, enabling the early detection of cancer and the personalization of therapies based on a patient’s genome. In education, adaptive learning platforms are democratizing access to knowledge by tailoring instruction to each student’s pace.

These advancements are undeniably impressive. AI promises a more efficient, safer, and fairer world. But is this promise being fulfilled? Or are we inadvertently creating new forms of inequality, where the benefits of technology are concentrated among a privileged few while others are left behind?

One of AI’s most pressing challenges is its impact on employment. Automation is eliminating jobs across various sectors, including manufacturing, services, and even traditionally “safe” fields such as law and accounting. Meanwhile, workforce reskilling is not keeping pace with technological disruption. The result? A growing divide between those equipped with the skills to thrive in the AI-driven era and those displaced by machines.

Another urgent concern is privacy. AI relies on vast amounts of data, and the massive collection of personal information raises serious questions about who controls these data and how they are used. We live in an era where our habits, preferences, and even emotions are continuously monitored and analyzed. This not only threatens our privacy but also opens the door to subtle forms of manipulation and social control.

Then, there is the issue of algorithmic bias. AI is only as good as the data it is trained on. If these data reflect existing biases, AI can perpetuate and even amplify societal injustices. We have already seen examples of this, such as facial recognition systems that fail to accurately identify individuals from minority groups or hiring algorithms that inadvertently discriminate based on gender. Far from being neutral, AI can become a tool of oppression if not carefully regulated.

Who Decides What Is Right?

AI forces us to confront profound ethical questions. When a self-driving car must choose between hitting a pedestrian or colliding with another vehicle, who decides the “right” choice? When AI is used to determine parole eligibility or distribute social benefits, how do we ensure these decisions are fair and transparent?

The reality is that AI is not just a technical tool—it is also a moral one. The choices we make today about how we develop and deploy AI will shape the future of humanity. But who is making these decisions? Currently, AI’s development is largely in the hands of big tech companies and governments, often without sufficient oversight from civil society. This is concerning because AI has the potential to impact all of us, regardless of our individual consent.

A Utopia or a Dystopia?

The future of AI remains uncertain. On one hand, we have the potential to create a technological utopia, where AI frees us from mundane tasks, enhances productivity, and allows us to focus on what truly matters: creativity, human connection, and collective well-being. On the other hand, there is the risk of a dystopia where AI is used to control, manipulate, and oppress—dividing society between those who control technology and those who are controlled by it.

The key to avoiding this dark scenario lies in regulation and education. We need robust laws that protect privacy, ensure transparency, and prevent AI’s misuse. But we also need to educate the public on the risks and opportunities of AI so they can make informed decisions and demand accountability from those in power.

Artificial Intelligence is, indeed, the Holy Grail of Technology. But unlike the medieval legend, this Grail is not hidden in a distant castle—it is in our hands, here and now. It is up to us to decide how we use it. Will AI be a tool for building a more just and equitable future, or will it become a weapon that exacerbates inequalities and threatens our freedom?

The answer depends on all of us. As citizens, we must demand transparency and accountability from those developing and implementing AI. As a society, we must ensure that the benefits of this technology are shared by all, not just a technocratic elite. And above all, we must remember that technology is not an end in itself but a means to achieve human progress.

The future of AI is the future we choose to build. And at this critical moment in history, we cannot afford to get it wrong. The Holy Grail is within our reach—but its true value will only be realized if we use it for the common good.

__
Copyright © 2025, Henrique Jorge

[ This article was originally published in Portuguese in SAPO’s technology section at: https://tek.sapo.pt/opiniao/artigos/o-santo-graal-da-tecnologia ]

Researchers at the University of Turku, Finland, have succeeded in producing sensors from single-wall carbon nanotubes that could enable major advances in health care, such as continuous health monitoring. Single-wall carbon nanotubes are a nanomaterial consisting of a single atomic layer of graphene rolled into a cylinder.

A long-standing challenge in developing the material has been that the nanotube manufacturing process produces a mix of conductive and semi-conductive nanotubes which differ in their chirality, i.e., in the way the graphene sheet is rolled to form the cylindrical structure of the nanotube. The electrical and chemical properties of nanotubes are largely dependent on their chirality.
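The link between chirality and conductivity follows a well-known textbook rule (stated here as general background, not as the Turku paper's method): a nanotube described by chiral indices (n, m) is metallic when n − m is divisible by 3, and semiconducting otherwise.

```python
def nanotube_type(n, m):
    """Classify a single-wall carbon nanotube by its chiral indices (n, m):
    metallic if (n - m) is a multiple of 3, otherwise semiconducting."""
    return "metallic" if (n - m) % 3 == 0 else "semiconducting"

examples = {
    (5, 5): nanotube_type(5, 5),  # armchair tube: always metallic
    (6, 5): nanotube_type(6, 5),  # semiconducting
    (9, 0): nanotube_type(9, 0),  # zigzag tube with n - m = 9: metallic
}
```

Since roughly one in three possible chiralities is metallic, an unsorted synthesis batch is inevitably a mix, which is why separating tubes by chirality matters for sensor applications.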

Han Li, Collegium Researcher in materials engineering at the University of Turku, has developed methods to separate nanotubes with different chirality. In the current study, published in Physical Chemistry Chemical Physics, the researchers succeeded in distinguishing between two carbon nanotubes with very similar chirality and identifying their typical electrochemical properties.