
DeepSeek jolts AI industry: Why AI’s next leap may not come from more data, but more compute at inference

The AI landscape continues to evolve at a rapid pace, with recent developments challenging established paradigms. Early in 2025, Chinese AI lab DeepSeek unveiled a new model that sent shockwaves through the AI industry, triggering a 17% drop in Nvidia’s stock and declines in other stocks tied to AI data center demand. This market reaction was widely reported to stem from DeepSeek’s apparent ability to deliver high-performance models at a fraction of the cost of its U.S. rivals, sparking discussion about the implications for AI data centers.

To contextualize DeepSeek’s disruption, we think it’s useful to consider a broader shift in the AI landscape driven by the scarcity of additional training data. Because the major AI labs have already trained their models on much of the available public data on the internet, data scarcity is slowing further improvements in pre-training. As a result, model providers are turning to “test-time compute” (TTC), where reasoning models (such as OpenAI’s “o” series) “think” before responding to a question at inference time, as an alternative way to improve overall model performance. The current thinking is that TTC may exhibit scaling-law improvements similar to those that once propelled pre-training, potentially enabling the next wave of transformative AI advancements.
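To make the idea concrete, here is a minimal sketch of one well-known flavor of test-time compute: majority-vote (“self-consistency”) sampling. The noisy_solver stand-in, the probabilities and the sample counts are illustrative assumptions, not DeepSeek’s or OpenAI’s actual method; the point is only that spending more compute per question at inference time can buy accuracy, much as more pre-training data once did.

```python
# Minimal sketch (illustrative assumptions, not any lab's actual method):
# majority-vote "self-consistency" as a toy example of test-time compute.
# A noisy solver answers correctly with some probability; sampling it N
# times and voting turns extra inference compute into extra accuracy.
import random
from collections import Counter

def noisy_solver(correct_answer: int, p_correct: float = 0.6) -> int:
    """Stand-in for a single model sample: right with probability p_correct."""
    if random.random() < p_correct:
        return correct_answer
    return correct_answer + random.choice([-2, -1, 1, 2])  # plausible wrong answer

def answer_with_ttc(correct_answer: int, n_samples: int) -> int:
    """Spend more inference compute: sample N times, return the majority vote."""
    votes = Counter(noisy_solver(correct_answer) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

def accuracy(n_samples: int, trials: int = 2000) -> float:
    return sum(answer_with_ttc(42, n_samples) == 42 for _ in range(trials)) / trials

for n in (1, 5, 25, 125):
    print(f"samples per question: {n:>3}  accuracy: {accuracy(n):.2f}")
```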

These developments point to two significant shifts: first, labs operating on smaller (reported) budgets are now capable of releasing state-of-the-art models; second, TTC is emerging as the next potential driver of AI progress. Below we unpack both trends and their potential implications for the competitive landscape and the broader AI market.

Kawasaki unveils a hydrogen-powered, ride-on robot horse

Kawasaki Heavy Industries has pulled the covers off perhaps the most outrageous concept vehicle we’ve ever seen. The Corleo is a two-seater quadruped robot you steer with your body, capable of picking its way through rough terrain thanks to AI vision.

I mean look, sure, you could go buy yourself a horse. But some horses are very cheeky, and have you seen what comes out the back end of those things? All that’ll come out the back of the Corleo robot is fresh, clean water as a combustion product from its clean-burning, 150cc, hydrogen-fueled generator engine. Possibly chilled water, from an underslung dispenser – that’d be nice on a mountain picnic.

This is the latest concept vehicle from Kawasaki Heavy Industries – not the motorcycle division specifically, and I’m OK with that. I think the parent company would be wise to keep this machine far from any “Ninja” stickers that might give its AI brain the idea that it should learn martial arts.

Aurora: A Machine Learning GWAS Tool for Analyzing Microbial Habitat Adaptation

A primary goal of microbial genome-wide association studies is identifying genomic variants associated with a particular habitat. Existing tools fail to identify known causal variants if the analyzed trait shaped the phylogeny. Furthermore, due to inclusion of allochthonous strains or metadata errors, the stated sources of strains in public databases are often incorrect, and strains may not be adapted to the habitat from which they were isolated. We describe a new tool, aurora, that identifies autochthonous strains and the genes associated with habitats while acknowledging the potential role of the habitat adaptation trait in shaping phylogeny.

Novel memristors to overcome AI’s ‘catastrophic forgetting’

So-called “memristors” consume extremely little power and behave similarly to brain cells. Researchers from Jülich, led by Ilia Valov, have now introduced novel memristive components that offer significant advantages over previous versions: they are more robust, function across a wider voltage range, and can operate in both analog and digital modes. These properties could help address the problem of “catastrophic forgetting,” where artificial neural networks abruptly forget previously learned information.

The problem of catastrophic forgetting occurs when deep neural networks are trained on a new task: the new optimization simply overwrites the previous one. The brain does not have this problem because it can apparently adjust the degree of synaptic change, a property experts call “metaplasticity.”

The researchers suspect that it is only through these different degrees of plasticity that our brain can permanently learn new tasks without forgetting old content. The new memristor accomplishes something similar.
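The forgetting effect itself is easy to reproduce in a toy model. The sketch below is an assumed, deliberately simple linear example, not the Jülich experiment or a memristive circuit: a model is trained with plain SGD on task A, then on task B, and because the second optimization overwrites the shared weights, its error on task A climbs back up.

```python
# Toy illustration (assumed setup): a tiny linear model trained with plain
# SGD on task A, then on task B. Because the second optimization simply
# overwrites the shared weights, performance on task A collapses --
# catastrophic forgetting.
import numpy as np

rng = np.random.default_rng(0)
w_true_a, w_true_b = np.array([2.0, -1.0]), np.array([-3.0, 0.5])

def make_task(w_true, n=200):
    x = rng.normal(size=(n, 2))
    return x, x @ w_true

def train(w, x, y, lr=0.05, epochs=50):
    for _ in range(epochs):
        grad = 2 * x.T @ (x @ w - y) / len(y)   # gradient of mean squared error
        w = w - lr * grad
    return w

def mse(w, x, y):
    return float(np.mean((x @ w - y) ** 2))

xa, ya = make_task(w_true_a)
xb, yb = make_task(w_true_b)

w = np.zeros(2)
w = train(w, xa, ya)
print("after task A  | task A error:", round(mse(w, xa, ya), 3))

w = train(w, xb, yb)                            # no protection of old weights
print("after task B  | task A error:", round(mse(w, xa, ya), 3),
      "| task B error:", round(mse(w, xb, yb), 3))
```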

New materials could deliver ultrathin solar panels

A race is on in solar engineering to create almost impossibly thin, flexible solar panels. Engineers imagine them used in mobile applications, from self-powered wearable devices and sensors to lightweight aircraft and electric vehicles. Against that backdrop, researchers at Stanford University have achieved record efficiencies in a promising group of photovoltaic materials.

Chief among the benefits of these transition metal dichalcogenides – or TMDs – is that they absorb ultrahigh levels of the sunlight that strikes their surface compared to other solar materials.

“Imagine an autonomous drone that powers itself with a solar array atop its wing that is 15 times thinner than a piece of paper,” said Koosha Nassiri Nazif, a doctoral scholar in electrical engineering at Stanford and co-lead author of a study published in the Dec. 9 edition of Nature Communications. “That is the promise of TMDs.”

The search for new materials is necessary because the reigning king of solar materials, silicon, is much too heavy, bulky and rigid for applications where flexibility, light weight and high power are paramount, such as wearable devices and sensors, or aerospace and electric vehicles.


