Researchers use AI and modeling to improve EV battery safety:

One of the most critical safety concerns for electric vehicles is keeping their batteries cool, since temperature spikes can lead to dangerous consequences.

New research led by a University of Arizona doctoral student proposes a way to predict and prevent temperature spikes in the lithium-ion batteries commonly used to power such vehicles.

The paper “Advancing Battery Safety,” led by College of Engineering doctoral student Basab Goswami, is published in the Journal of Power Sources.

Butterflies can see more of the world than humans, including more colors and the field oscillation direction, or polarization, of light. This special ability enables them to navigate with precision, forage for food and communicate with one another. Other species, like the mantis shrimp, can sense an even wider spectrum of light, as well as the circular polarization, or spinning states, of light waves. They use this capability to signal a “love code,” which helps them find and be discovered by mates.

Inspired by these abilities in the animal kingdom, a team of researchers at the Penn State College of Engineering has developed an ultrathin optical element known as a metasurface, which can attach to a conventional camera and encode the spectral and polarization data of images captured in a snapshot or video. The metasurface's tiny, antenna-like nanostructures tailor light properties, and a machine learning framework, also developed by the team, then decodes this multi-dimensional visual information in real time on a standard laptop.

The researchers have published their work in Science Advances.

A clever use of machine learning guides researchers to a missing term that’s needed to accurately describe the dynamics of a complex fluid system.

Physical theories and machine-learning (ML) models are both judged on their ability to predict results in unseen scenarios. However, the bar for the former is much higher. To become accepted knowledge, a theory must conform to known physical laws and—crucially—be interpretable. An interpretable theory is capable of explaining why phenomena occur rather than simply predicting their form. Having such an interpretation can inform the scope of a new theory, allowing it to be applied in new contexts, while also connecting it to and incorporating prior knowledge. To date, researchers have largely struggled to get ML models (or any automated optimization process) to produce new theories that meet these standards. Jonathan Colen and Vincenzo Vitelli of the University of Chicago and their colleagues have now shown success in harnessing ML not as a stand-in for a researcher but rather as a guide to aid in building a model of a complex system [1].
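The idea of using data-driven fitting to point a researcher toward a missing term can be illustrated with a minimal sketch. The toy system, term library, and thresholds below are illustrative assumptions, not the authors' actual method: a candidate library of terms is fitted to measured derivatives by least squares, and sparsity thresholding reveals which terms a simpler model was missing.

```python
# Hedged sketch (not the paper's method): sparse regression over a
# candidate term library, in the spirit of ML-guided model discovery.
import numpy as np

rng = np.random.default_rng(0)

# Toy dynamics: dx/dt = -x + 0.5*x**3. The cubic term plays the role
# of the "missing term" that a naive linear model would omit.
x = np.linspace(-2.0, 2.0, 200)
dxdt = -x + 0.5 * x**3 + rng.normal(0.0, 1e-3, x.shape)

# Candidate term library: [x, x^2, x^3].
library = np.column_stack([x, x**2, x**3])

# Least-squares fit, then threshold tiny coefficients to enforce sparsity.
coeffs, *_ = np.linalg.lstsq(library, dxdt, rcond=None)
coeffs[np.abs(coeffs) < 0.05] = 0.0

print(coeffs)  # roughly [-1.0, 0.0, 0.5]: the cubic term is identified
```

The surviving nonzero coefficient on the cubic term is the kind of signal that tells a modeler which term their hand-built equation is missing, which they can then interpret physically.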

Billions of dollars' worth of investment rounds later, the Financial Times is now reporting that OpenAI is finally looking to shed its nonprofit status once and for all.

The company is reportedly in talks to raise further funds at a valuation north of $100 billion, potentially making it one of the most valuable Silicon Valley firms ever.

OpenAI has since denied the reporting, arguing in a statement to the FT that “the nonprofit is core to our mission and will continue to exist.”