
Amazon has upgraded its AI video model, Nova Reel, with the ability to generate videos up to two minutes in length.

Nova Reel, announced in December 2024, was Amazon’s first foray into the generative video space. It competes with models from OpenAI, Google, and others in what’s fast becoming a crowded market.

The latest Nova Reel, Nova Reel 1.1, can generate “multi-shot” videos with “consistent style” across shots, explained AWS developer advocate Elizabeth Fuentes in a blog post. Users can provide a prompt up to 4,000 characters long to generate up to a two-minute video composed of six-second shots.
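For developers, Nova Reel is invoked asynchronously through Amazon Bedrock. The sketch below shows roughly what that looks like with boto3; the model ID, task type, and payload field names are assumptions from memory rather than details taken from Fuentes' post, so check the Bedrock documentation for the exact request schema.

```python
# Hedged sketch: generating a multi-shot video with Nova Reel 1.1 via Amazon
# Bedrock's asynchronous invocation API. The model ID, task type, and payload
# field names below are assumptions -- verify them against the Bedrock docs.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

prompt = "A slow dolly shot through a rain-soaked neon city at night..."  # up to 4,000 characters

response = bedrock.start_async_invoke(          # async invocation used for video models
    modelId="amazon.nova-reel-v1:1",            # assumed ID for Nova Reel 1.1
    modelInput={
        "taskType": "MULTI_SHOT_AUTOMATED",     # assumed task type for multi-shot generation
        "multiShotAutomatedParams": {"text": prompt},
        "videoGenerationConfig": {
            "durationSeconds": 120,             # up to two minutes, in six-second shots
            "fps": 24,
            "dimension": "1280x720",
        },
    },
    outputDataConfig={"s3OutputDataConfig": {"s3Uri": "s3://my-bucket/nova-reel/"}},
)
print(response["invocationArn"])                # poll get_async_invoke() with this ARN
```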

The study's results show that MindGlide can accurately identify and measure important brain tissues and lesions even with limited MRI data and single types of scans not usually used for this purpose, such as T2-weighted MRI without FLAIR (a scan on which fluid appears bright; without FLAIR's fluid suppression, that bright signal makes plaques harder to see). “Our results demonstrate that clinically meaningful tissue segmentation and lesion quantification are achievable even with limited MRI data and single contrasts not typically used for these tasks (e.g., T2-weighted MRI without FLAIR),” they stated. “Importantly, our findings generalized across datasets and MRI contrasts. Our training used only FLAIR and T1 images, yet the model successfully processed new contrasts (like PD and T2) from different scanners and periods encountered during external validation.”

As well as performing better at detecting changes in the brain’s outer layer, MindGlide also performed well in deeper brain areas. The findings were valid and reliable both at one point in time and over longer periods (i.e., at annual scans attended by patients). Additionally, MindGlide was able to corroborate previous high-quality research regarding which treatments were most effective. “In clinical trials, MindGlide detected treatment effects on T2-lesion accrual and cortical and deep grey matter volume loss. In routine-care data, T2-lesion volume increased with moderate-efficacy treatment but remained stable with high-efficacy treatment,” the investigators wrote in summary.
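The lesion-volume comparisons the investigators describe rest on a simple quantity: the volume of a segmented lesion mask. As a generic illustration (not MindGlide's actual code; the file names and threshold here are hypothetical), a binary mask can be turned into a volume by counting voxels and multiplying by the voxel size:

```python
# Illustrative only: turning a lesion segmentation mask into a volume.
# This is NOT MindGlide's code; file names and the threshold are hypothetical.
import numpy as np
import nibabel as nib

def lesion_volume_ml(mask_path: str, threshold: float = 0.5) -> float:
    """Return total lesion volume in millilitres from a NIfTI lesion mask."""
    img = nib.load(mask_path)                         # load the segmentation (NIfTI)
    mask = img.get_fdata() > threshold                # binarise a probabilistic output
    voxel_mm3 = np.prod(img.header.get_zooms()[:3])   # voxel volume in mm^3
    return float(mask.sum() * voxel_mm3 / 1000.0)     # mm^3 -> mL

# Example: change in T2-lesion volume between two annual scans
baseline = lesion_volume_ml("lesions_year0.nii.gz")   # hypothetical file
follow_up = lesion_volume_ml("lesions_year1.nii.gz")  # hypothetical file
print(f"T2-lesion volume change: {follow_up - baseline:+.1f} mL")
```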

The researchers now hope that MindGlide can be used to evaluate MS treatments in real-world settings, overcoming previous limitations of relying solely on high-quality clinical trial data, which often did not capture the full diversity of people with MS.

Perhaps the most profound insight to emerge from this uncanny mirror is that understanding itself may be less mysterious and more mechanical than we have traditionally believed. The capabilities we associate with mind — pattern recognition, contextual awareness, reasoning, metacognition — appear increasingly replicable through purely algorithmic means. This suggests that consciousness, rather than being a prerequisite for understanding, may be a distinct phenomenon that typically accompanies understanding in biological systems but is not necessary for it.

At the same time, the possibility of quantum effects in neural processing reminds us that the mechanistic view of mind may be incomplete. If quantum retrocausality plays a role in consciousness, then our subjective experience may be neither a simple product of neural processing nor an epiphenomenal observer, but an integral part of a temporally complex causal system that escapes simple deterministic description.

What emerges from this consideration is not a definitive conclusion about the nature of mind but a productive uncertainty — an invitation to reconsider our assumptions about what constitutes understanding, agency, and selfhood. AI systems function as conceptual tools that allow us to explore these questions in new ways, challenging us to develop more sophisticated frameworks for understanding both artificial and human cognition.

A week before Tropical Cyclone Alfred neared the east coast of Australia, most forecasts were favouring a path either well offshore or near the central Queensland coast.

There was a curious anomaly, though: an AI model from Google's DeepMind, called GraphCast, was predicting the centre of Alfred would be just 200 kilometres off the coast of Brisbane.

That forecast, made 12 days before ex-Tropical Cyclone Alfred crossed the south-east Queensland coast, was far more accurate than leading weather models used by meteorological organisations around the world, including our own Bureau of Meteorology (BOM).

The AI landscape continues to evolve at a rapid pace, with recent developments challenging established paradigms. Early in 2025, Chinese AI lab DeepSeek unveiled a new model that sent shockwaves through the AI industry and resulted in a 17% drop in Nvidia’s stock, along with other stocks related to AI data center demand. This market reaction was widely reported to stem from DeepSeek’s apparent ability to deliver high-performance models at a fraction of the cost of rivals in the U.S., sparking discussion about the implications for AI data centers.

To contextualize DeepSeek’s disruption, we think it’s useful to consider a broader shift in the AI landscape being driven by the scarcity of additional training data. Because the major AI labs have already trained their models on much of the available public data on the internet, data scarcity is slowing further improvements from pre-training. As a result, model providers are turning to “test-time compute” (TTC), in which reasoning models (such as OpenAI’s “o” series) “think” before responding to a question at inference time, as an alternative way to improve overall model performance. The current thinking is that TTC may exhibit scaling-law improvements similar to those that once propelled pre-training, potentially enabling the next wave of transformative AI advancements.
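One simple way to picture spending extra compute at inference time is majority-vote ("self-consistency") sampling: draw several candidate answers and return the most common one. The sketch below is an illustration of that general idea only, not how DeepSeek or OpenAI implement their reasoning models; `generate_answer` and `toy_model` are hypothetical stand-ins for a model call.

```python
# Minimal illustration of scaling test-time compute via majority voting
# ("self-consistency"). `generate_answer` is a hypothetical stand-in for a
# call to any language model that samples one reasoned answer per invocation.
import random
from collections import Counter
from typing import Callable

def answer_with_ttc(question: str,
                    generate_answer: Callable[[str], str],
                    n_samples: int = 16) -> str:
    """Spend more inference-time compute by sampling several candidate
    answers and returning the most common one."""
    candidates = [generate_answer(question) for _ in range(n_samples)]
    answer, _count = Counter(candidates).most_common(1)[0]
    return answer

# Toy stand-in model: individual samples are unreliable, but the majority is right.
def toy_model(question: str) -> str:
    return "42" if random.random() < 0.6 else str(random.randint(0, 99))

print(answer_with_ttc("What is 6 x 7?", toy_model))  # usually prints "42"
```

More samples cost more inference compute but raise the chance the majority answer is correct, which is the scaling-law-like behaviour the article describes.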

These developments point to two significant shifts: first, labs operating on smaller (reported) budgets can now release state-of-the-art models; second, TTC is emerging as the next potential driver of AI progress. Below we unpack both of these trends and the potential implications for the competitive landscape and broader AI market.

The company was founded to sell automation software. That entire market is being upended by AI, particularly agentic AI, which is supposed to click around on the internet and do things for you.


After stepping down from and returning to the CEO role, Daniel Dines is preparing for the agentic AI era.

Kawasaki Heavy Industries has pulled the covers off perhaps the most outrageous concept vehicle we’ve ever seen. The Corleo is a two-seater quadruped robot you steer with your body, capable of picking its way through rough terrain thanks to AI vision.

I mean look, sure, you could go buy yourself a horse. But some horses are very cheeky, and have you seen what comes out the back end of those things? All that’ll come out the back of the Corleo robot is fresh, clean water as a combustion product from its clean-burning, 150cc, hydrogen-fueled generator engine. Possibly chilled water, from an underslung dispenser – that’d be nice on a mountain picnic.

This is the latest concept vehicle from Kawasaki Heavy Industries – not the motorcycle division specifically, and I’m OK with that. I think the parent company would be wise to keep this machine far from any “Ninja” stickers that might give its AI brain the idea that it should learn martial arts.

A primary goal of microbial genome-wide association studies is identifying genomic variants associated with a particular habitat. Existing tools fail to identify known causal variants if the analyzed trait shaped the phylogeny. Furthermore, due to inclusion of allochthonous strains or metadata errors, the stated sources of strains in public databases are often incorrect, and strains may not be adapted to the habitat from which they were isolated. We describe a new tool, aurora, that identifies autochthonous strains and the genes associated with habitats while acknowledging the potential role of the habitat adaptation trait in shaping phylogeny.