
The Cognitive Doppelgänger: AI Understanding and the Mirror of Human Consciousness

Perhaps the most profound insight to emerge from this uncanny mirror is that understanding itself may be less mysterious and more mechanical than we have traditionally believed. The capabilities we associate with mind — pattern recognition, contextual awareness, reasoning, metacognition — appear increasingly replicable through purely algorithmic means. This suggests that consciousness, rather than being a prerequisite for understanding, may be a distinct phenomenon that typically accompanies understanding in biological systems but is not necessary for it.

At the same time, the possibility of quantum effects in neural processing reminds us that the mechanistic view of mind may be incomplete. If quantum retrocausality plays a role in consciousness, then our subjective experience may be neither a simple product of neural processing nor an epiphenomenal observer, but an integral part of a temporally complex causal system that escapes simple deterministic description.

What emerges from this consideration is not a definitive conclusion about the nature of mind but a productive uncertainty — an invitation to reconsider our assumptions about what constitutes understanding, agency, and selfhood. AI systems function as conceptual tools that allow us to explore these questions in new ways, challenging us to develop more sophisticated frameworks for understanding both artificial and human cognition.

AI picked Cyclone Alfred hitting Qld before traditional weather models

A week before Tropical Cyclone Alfred reached the east coast of Australia, most forecasts favoured a path either well offshore or near the central Queensland coast.

There was a curious anomaly, though: an AI model from Google DeepMind, called GraphCast, was predicting the centre of Alfred would be just 200 kilometres off the coast of Brisbane.

That forecast, made 12 days before ex-Tropical Cyclone Alfred crossed the south-east Queensland coast, was far more accurate than those of the leading weather models used by meteorological organisations around the world, including Australia's Bureau of Meteorology (BOM).
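
The accuracy comparison here is, at bottom, a distance calculation: cyclone track error is conventionally measured as the great-circle distance between a forecast storm centre and the observed one. A minimal sketch of that metric in Python, with invented coordinates purely for illustration (these are not the real Alfred track data):

```python
from math import radians, sin, cos, asin, sqrt

def track_error_km(pred, obs):
    """Great-circle (haversine) distance in km between a forecast
    cyclone centre and the observed centre, each given as (lat, lon)."""
    lat1, lon1 = map(radians, pred)
    lat2, lon2 = map(radians, obs)
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))  # 6371 km: mean Earth radius

# Invented positions, not real forecast data: a run that keeps the
# storm near the coast vs. one that keeps it well offshore.
observed = (-27.5, 153.0)
print(track_error_km((-27.3, 153.5), observed))  # near-coast run: ~54 km
print(track_error_km((-24.0, 157.0), observed))  # offshore run: ~560 km
```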

DeepSeek jolts AI industry: Why AI’s next leap may not come from more data, but more compute at inference

The AI landscape continues to evolve at a rapid pace, with recent developments challenging established paradigms. Early in 2025, Chinese AI lab DeepSeek unveiled a new model that sent shockwaves through the AI industry and resulted in a 17% drop in Nvidia’s stock, along with other stocks related to AI data center demand. This market reaction was widely reported to stem from DeepSeek’s apparent ability to deliver high-performance models at a fraction of the cost of rivals in the U.S., sparking discussion about the implications for AI data centers.

To contextualize DeepSeek’s disruption, we think it’s useful to consider a broader shift in the AI landscape driven by the scarcity of additional training data. Because the major AI labs have already trained their models on much of the available public data on the internet, data scarcity is slowing further improvements in pre-training. As a result, model providers are turning to “test-time compute” (TTC), in which reasoning models (such as OpenAI’s “o” series) “think” before responding to a question at inference time, as an alternative way to improve overall model performance. The current thinking is that TTC may exhibit scaling-law improvements similar to those that once propelled pre-training, potentially enabling the next wave of transformative AI advancements.
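
One common recipe for test-time compute is self-consistency: sample several independent answers at inference time and take a majority vote, trading extra compute per query for accuracy. The sketch below simulates that idea in Python; the `generate` stub and its error rate are invented stand-ins for a real model API, not DeepSeek’s or OpenAI’s actual method:

```python
import random
from collections import Counter

def generate(prompt: str) -> str:
    """Stand-in for one sampled reasoning path from a model API.
    Simulated here: right 60% of the time, a near-miss otherwise.
    (Purely illustrative; not a real model.)"""
    return "408" if random.random() < 0.6 else random.choice(["398", "418"])

def answer_with_ttc(prompt: str, n_samples: int = 16) -> str:
    """Self-consistency, one simple form of test-time compute: spend
    more inference-time compute by sampling several independent
    answers, then return the majority vote."""
    votes = Counter(generate(prompt) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

print(answer_with_ttc("What is 17 * 24?"))  # almost always "408"
```

The knob is `n_samples` (or, in reasoning models, the length of the hidden chain of thought): performance scales with compute spent per query rather than with additional training data.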

These developments indicate two significant shifts: first, labs operating on smaller (reported) budgets can now release state-of-the-art models; second, TTC is emerging as the next potential driver of AI progress. Below we unpack both trends and their potential implications for the competitive landscape and the broader AI market.

UiPath CEO Daniel Dines on AI agents replacing our jobs

The company was founded to sell automation software. That entire market is being upended by AI, particularly agentic AI, which is supposed to click around on the internet and do things for you.


After stepping down from and returning to the CEO role, Daniel Dines is preparing for the agentic AI era.

Kawasaki unveils a hydrogen-powered, ride-on robot horse

Kawasaki Heavy Industries has pulled the covers off perhaps the most outrageous concept vehicle we’ve ever seen. The Corleo is a two-seater quadruped robot you steer with your body, capable of picking its way through rough terrain thanks to AI vision.

I mean look, sure, you could go buy yourself a horse. But some horses are very cheeky, and have you seen what comes out the back end of those things? All that’ll come out the back of the Corleo robot is fresh, clean water as a combustion product from its clean-burning, 150cc, hydrogen-fueled generator engine. Possibly chilled water, from an underslung dispenser – that’d be nice on a mountain picnic.

This is the latest concept vehicle from Kawasaki Heavy Industries – not the motorcycle division specifically, and I’m OK with that. I think the parent company would be wise to keep this machine far from any “Ninja” stickers that might give its AI brain the idea that it should learn martial arts.

aurora: a machine learning GWAS tool for analyzing microbial habitat adaptation

A primary goal of microbial genome-wide association studies is identifying genomic variants associated with a particular habitat. Existing tools fail to identify known causal variants if the analyzed trait shaped the phylogeny. Furthermore, due to inclusion of allochthonous strains or metadata errors, the stated sources of strains in public databases are often incorrect, and strains may not be adapted to the habitat from which they were isolated. We describe a new tool, aurora, that identifies autochthonous strains and the genes associated with habitats while acknowledging the potential role of the habitat adaptation trait in shaping phylogeny.
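
For context, the naive baseline such tools improve on is a per-gene association test between gene presence/absence and habitat labels. Here is a toy Python sketch of that baseline (this is the confounded approach the abstract criticizes, not aurora’s algorithm):

```python
from scipy.stats import fisher_exact

def naive_gwas_test(gene_presence, habitat):
    """Per-variant association between gene presence/absence (0/1)
    and habitat label (0/1) across strains, via Fisher's exact test.
    A real study repeats this for every gene with multiple-testing
    correction; nothing here corrects for shared ancestry."""
    a = b = c = d = 0
    for g, h in zip(gene_presence, habitat):
        if g and h:
            a += 1      # gene present, habitat of interest
        elif g:
            b += 1      # gene present, other habitat
        elif h:
            c += 1      # gene absent, habitat of interest
        else:
            d += 1      # gene absent, other habitat
    return fisher_exact([[a, b], [c, d]])

# Toy data for 8 strains. Closely related strains tend to share both
# gene content and habitat, so this test cannot distinguish genuine
# habitat adaptation from phylogenetic relatedness.
gene    = [1, 1, 1, 1, 0, 0, 0, 0]
habitat = [1, 1, 1, 0, 1, 0, 0, 0]
print(naive_gwas_test(gene, habitat))  # (odds ratio, p-value)
```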

Novel memristors to overcome AI’s ‘catastrophic forgetting’

So-called “memristors” consume extremely little power and behave similarly to brain cells. Researchers from Jülich, led by Ilia Valov, have now introduced novel memristive components that offer significant advantages over previous versions: they are more robust, function across a wider voltage range, and can operate in both analog and digital modes. These properties could help address the problem of “catastrophic forgetting,” where artificial neural networks abruptly forget previously learned information.

Catastrophic forgetting occurs when a deep neural network is trained on a new task: the new optimization simply overwrites the previous one. The brain does not have this problem because it can apparently adjust the degree of synaptic change, a capacity experts call “metaplasticity.”

The researchers suspect that it is only through these varying degrees of plasticity that the brain can keep learning new tasks without forgetting old content. The new memristors accomplish something similar.
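
The overwriting mechanism is easy to reproduce in a few lines. The sketch below, a plain-NumPy toy (the tasks are illustrative, not the Jülich experiments), trains one tiny classifier on task A, then on task B with the same weights; accuracy on task A collapses because the second optimization overwrites the first:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(shift):
    """Two Gaussian blobs on a line; `shift` moves both blobs so that
    task A and task B need different decision boundaries."""
    x0 = rng.normal([-2.0 + shift, 0.0], 1.0, size=(200, 2))  # class 0
    x1 = rng.normal([ 2.0 + shift, 0.0], 1.0, size=(200, 2))  # class 1
    return np.vstack([x0, x1]), np.array([0] * 200 + [1] * 200)

def train(w, b, X, y, lr=0.1, steps=2000):
    """Plain logistic-regression gradient descent; every update hits
    the same weights, with no protection for previously learned tasks."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w = w - lr * X.T @ (p - y) / len(y)
        b = b - lr * (p - y).mean()
    return w, b

def accuracy(w, b, X, y):
    return float(((X @ w + b > 0).astype(int) == y).mean())

XA, yA = make_task(0.0)   # task A: boundary near x = 0
XB, yB = make_task(6.0)   # task B: boundary near x = 6

w, b = np.zeros(2), 0.0
w, b = train(w, b, XA, yA)
print("task A accuracy after training on A:", accuracy(w, b, XA, yA))  # ~0.98

w, b = train(w, b, XB, yB)  # keep training the SAME weights on task B
print("task A accuracy after training on B:", accuracy(w, b, XA, yA))  # ~0.5
print("task B accuracy after training on B:", accuracy(w, b, XB, yB))  # ~0.98
```

A metaplastic device would, in effect, let some weights resist the second round of updates instead of being freely overwritten.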

New materials could deliver ultrathin solar panels

A race is on in solar engineering to create almost impossibly thin, flexible solar panels. Engineers imagine them used in mobile applications, from self-powered wearable devices and sensors to lightweight aircraft and electric vehicles. Against that backdrop, researchers at Stanford University have achieved record efficiencies in a promising group of photovoltaic materials.

Chief among the benefits of these transition metal dichalcogenides – or TMDs – is that they absorb ultrahigh levels of the sunlight that strikes their surface compared to other solar materials.

“Imagine an autonomous drone that powers itself with a solar array atop its wing that is 15 times thinner than a piece of paper,” said Koosha Nassiri Nazif, a doctoral scholar in electrical engineering at Stanford and co-lead author of a study published in the Dec. 9 edition of Nature Communications. “That is the promise of TMDs.”

The search for new materials is necessary because the reigning king of solar materials, silicon, is much too heavy, bulky and rigid for applications where flexibility, light weight and high power are paramount, such as wearable devices and sensors or aerospace and electric vehicles.
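
“High power” in mobile settings really means specific power, watts per gram: output power (efficiency × irradiance × area) divided by panel mass. A back-of-the-envelope Python sketch, where the efficiency and mass figures are hypothetical illustrations rather than the study’s measured values:

```python
def specific_power_w_per_g(efficiency, areal_mass_g_per_m2,
                           irradiance_w_per_m2=1000.0):
    """Specific power = electrical output per unit panel mass.
    1000 W/m^2 is the standard solar test irradiance."""
    return efficiency * irradiance_w_per_m2 / areal_mass_g_per_m2

# Hypothetical numbers for illustration only: a conventional rigid
# silicon module vs. an ultrathin flexible TMD film.
print(specific_power_w_per_g(0.20, 12_000))  # silicon module: ~0.017 W/g
print(specific_power_w_per_g(0.05, 50))      # ultrathin film: ~1.0 W/g
```

The point of the comparison: even at much lower efficiency, an ultrathin film can win by orders of magnitude on a per-gram basis, which is the figure of merit for drones and wearables.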

