
Perhaps the most profound insight to emerge from this uncanny mirror is that understanding itself may be less mysterious and more mechanical than we have traditionally believed. The capabilities we associate with mind — pattern recognition, contextual awareness, reasoning, metacognition — appear increasingly replicable through purely algorithmic means. This suggests that consciousness, rather than being a prerequisite for understanding, may be a distinct phenomenon that typically accompanies understanding in biological systems but is not necessary for it.

At the same time, the possibility of quantum effects in neural processing reminds us that the mechanistic view of mind may be incomplete. If quantum retrocausality plays a role in consciousness, then our subjective experience may be neither a simple product of neural processing nor an epiphenomenal observer, but an integral part of a temporally complex causal system that escapes simple deterministic description.

What emerges from this consideration is not a definitive conclusion about the nature of mind but a productive uncertainty — an invitation to reconsider our assumptions about what constitutes understanding, agency, and selfhood. AI systems function as conceptual tools that allow us to explore these questions in new ways, challenging us to develop more sophisticated frameworks for understanding both artificial and human cognition.

The return of the dire wolves?


Colossal Biosciences’ project to revive the once-extinct dire wolf could also help prevent endangered animals from slipping into extinction themselves.


The world is littered with trillions of micro- and nanoscopic pieces of plastic. These can be smaller than a virus—just the right size to disrupt cells and even alter DNA. Researchers find them almost everywhere they’ve looked, from Antarctic snow to human blood.

A week before Tropical Cyclone Alfred neared the east coast of Australia, most forecasts favoured a path either well offshore or near the central Queensland coast.

There was a curious anomaly, though: an AI model from Google DeepMind, called GraphCast, was predicting that the centre of Alfred would be just 200 kilometres off the coast of Brisbane.

That forecast, made 12 days before ex-Tropical Cyclone Alfred crossed the south-east Queensland coast, was far more accurate than the leading weather models used by meteorological organisations around the world, including Australia’s Bureau of Meteorology (BOM).
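For readers curious how that accuracy is quantified: forecast skill for cyclone position is typically summarised as the great-circle distance between the predicted and the observed storm centre at a given lead time. Below is a minimal Python sketch of that track-error calculation using the haversine formula; the coordinates are hypothetical stand-ins, not actual GraphCast or BOM output.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical predicted vs. observed cyclone-centre positions (lat, lon);
# illustrative values only, not real forecast data.
predicted = (-27.3, 153.9)
observed = (-27.5, 153.4)
print(f"Track error: {haversine_km(*predicted, *observed):.0f} km")
```

A lower track error at a given lead time means a more useful forecast, and it is on this kind of measure that the GraphCast prediction stood out.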

Northwestern Medicine investigators have discovered previously unknown metabolic changes that may contribute to the development of estrogen receptor–negative (ERneg) breast cancer, according to recent findings published in Science Advances.

The study, led by Susan Clare, ’90 MD, ’88 Ph.D., research associate professor of Surgery, and Seema Khan, MD, the Bluhm Family Professor of Cancer, has the potential to inform new targeted preventives and therapeutics for patients who currently have limited treatment options.

Mariana Bustamante Eduardo, Ph.D., a postdoctoral fellow in the Khan/Clare laboratory, was lead author of the study.

Suming Huang and team show that the HoxBlinc long non-coding RNA serves as an oncogenic regulator that controls 3D nuclear organization, chromatin accessibility, and gene transcription related to leukemogenesis.

The figure shows H&E staining of sternum and spleen from WT and B-ALL HoxBlinc Tg mice.



The AI landscape continues to evolve at a rapid pace, with recent developments challenging established paradigms. Early in 2025, Chinese AI lab DeepSeek unveiled a new model that sent shockwaves through the AI industry, triggering a 17% drop in Nvidia’s stock along with declines in other stocks tied to AI data center demand. This market reaction was widely reported to stem from DeepSeek’s apparent ability to deliver high-performance models at a fraction of the cost of U.S. rivals, sparking discussion about the implications for AI data centers.

To contextualize DeepSeek’s disruption, we think it’s useful to consider a broader shift in the AI landscape driven by the scarcity of additional training data. Because the major AI labs have already trained their models on much of the available public data on the internet, data scarcity is slowing further improvements in pre-training. As a result, model providers are turning to “test-time compute” (TTC), in which reasoning models (such as OpenAI’s “o” series) “think” before responding at inference time, as an alternative way to improve overall model performance. The current thinking is that TTC may exhibit scaling-law improvements similar to those that once propelled pre-training, potentially enabling the next wave of transformative AI advancements.
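To make the idea concrete, one simple published flavour of test-time compute is self-consistency sampling: draw several candidate answers from the same model and return the majority vote, trading extra inference for reliability. The sketch below is a toy illustration of that general idea, not a description of how any particular lab’s reasoning models work; the generate function is a simulated stand-in for a real LLM API call.

```python
import random
from collections import Counter

def generate(prompt: str, temperature: float = 0.8) -> str:
    """Toy stand-in for an LLM call that returns a final answer.
    Simulates a model that is right 70% of the time; a real
    implementation would call an actual model API here."""
    return "42" if random.random() < 0.7 else random.choice(["41", "43"])

def self_consistency(prompt: str, n_samples: int = 8) -> str:
    """Sample several answers and return the majority vote.
    More samples means more test-time compute and, often,
    a more reliable final answer."""
    answers = [generate(prompt) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistency("What is 6 x 7?"))
```

The scaling intuition is that accuracy tends to improve as n_samples grows, which is the inference-time analogue of the pre-training scaling laws mentioned above.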

These developments point to two significant shifts: first, labs operating on smaller (reported) budgets can now release state-of-the-art models; second, TTC is emerging as the next potential driver of AI progress. Below we unpack both trends and their potential implications for the competitive landscape and the broader AI market.