GAIA-1 is a cutting-edge generative world model built for autonomous driving. A world model learns representations of the environment and its future dynamics, providing a structured understanding of the surroundings that can be leveraged to make informed decisions while driving. Predicting future events is a fundamental capability for autonomous systems: accurate future prediction enables autonomous vehicles to anticipate and plan their actions, enhancing safety and efficiency on the road. Incorporating world models into driving systems has the potential to help them understand human decisions better and ultimately generalise to more real-world situations.

GAIA-1 leverages video, text and action inputs to generate realistic driving videos, offering fine-grained control over ego-vehicle behaviour and scene features. Thanks to its multi-modal nature, it can generate videos from many prompt modalities and combinations.


Examples of prompts GAIA-1 can use to generate videos. GAIA-1 generates videos by performing a future rollout starting from a video prompt. These rollouts can be further conditioned on actions to influence particular behaviours of the ego-vehicle (e.g. steer left), or on text to drive a change in some aspect of the scene (e.g. change the colour of the traffic light). For speed and curvature, GAIA-1 is conditioned on the sequence of future speed and/or curvature values. It can also generate realistic videos from text prompts alone, or simply by drawing samples from its prior distribution (fully unconditional generation).
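To make the conditioning concrete, the sketch below shows one plausible way such multi-modal prompts could be fed to an autoregressive world model: text tokens, (speed, curvature) action values, and discrete image tokens are embedded into a shared space, and a causal transformer predicts the next image token. Every name, size, and architectural detail here is an illustrative assumption, not GAIA-1's published implementation.

```python
# Minimal, hypothetical sketch of multi-modal conditioning for a
# GAIA-1-style world model. All modules and dimensions are assumptions.
import torch
import torch.nn as nn

VOCAB = 1024          # size of the discrete image-token vocabulary (assumed)
D = 256               # shared embedding width (assumed)
TOKENS_PER_FRAME = 16 # image tokens per video frame (assumed)

image_tok_emb = nn.Embedding(VOCAB, D)   # discrete image tokens -> vectors
text_emb = nn.Embedding(32_000, D)       # text token ids -> vectors
action_proj = nn.Linear(2, D)            # (speed, curvature) pair -> vector

layer = nn.TransformerEncoderLayer(d_model=D, nhead=8, batch_first=True)
backbone = nn.TransformerEncoder(layer, num_layers=4)
to_logits = nn.Linear(D, VOCAB)          # predict the next image token

def rollout_step(text_ids, actions, past_image_tokens):
    """One autoregressive step: condition on text, future actions, past frames."""
    seq = torch.cat([
        text_emb(text_ids),                # (B, T_text, D)
        action_proj(actions),              # (B, T_act, D)
        image_tok_emb(past_image_tokens),  # (B, T_img, D)
    ], dim=1)
    # causal mask: each position attends only to earlier tokens
    L = seq.size(1)
    mask = torch.triu(torch.ones(L, L, dtype=torch.bool), diagonal=1)
    h = backbone(seq, mask=mask)
    return to_logits(h[:, -1])             # logits for the next image token

# Toy usage: a short text prompt, three future (speed, curvature) values,
# and the tokens of two already-generated frames.
logits = rollout_step(
    text_ids=torch.randint(0, 32_000, (1, 5)),
    actions=torch.tensor([[[12.0, 0.0], [12.0, 0.1], [11.5, 0.1]]]),
    past_image_tokens=torch.randint(0, VOCAB, (1, 2 * TOKENS_PER_FRAME)),
)
print(logits.shape)  # torch.Size([1, 1024])
```

In a setup like this, swapping in a different action sequence or text prompt only changes the prefix tokens, which is what steers the rollout.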

Hurricane Lee wasn’t bothering anyone in early September, churning far out at sea somewhere between Africa and North America. A wall of high pressure stood in its westward path, poised to deflect the storm away from Florida and send it in a grand arc to the northeast. Heading where, exactly? It was 10 days out from the earliest possible landfall—eons in weather forecasting—but meteorologists at the European Centre for Medium-Range Weather Forecasts, or ECMWF, were watching closely. The tiniest uncertainties could make the difference between a rainy day in Scotland and serious trouble for the US Northeast.

Typically, weather forecasters would rely on models of atmospheric physics to make that call. This time, they had another tool: a new generation of AI-based weather models developed by chipmaker Nvidia, Chinese tech giant Huawei, and Google’s AI unit DeepMind.

The National Security Agency is starting an artificial intelligence security center — a crucial mission as AI capabilities are increasingly acquired, developed and integrated into U.S. defense and intelligence systems, the agency’s outgoing director announced Thursday.

Army Gen. Paul Nakasone said the center would be incorporated into the NSA’s Cybersecurity Collaboration Center, where it works with private industry and international partners to harden the U.S. defense-industrial base against threats from adversaries led by China and Russia.

“We maintain an advantage in AI in the United States today. That AI advantage should not be taken for granted,” Nakasone said at the National Press Club, emphasizing the threat from Beijing in particular.


CAMBRIDGE, Mass. — Researchers at MIT have achieved a significant breakthrough in quantum computing, bringing the potential of these incredible thinking machines closer to realization. Quantum computers promise to handle calculations far too complex for current supercomputers, but many hurdles remain. A primary challenge is addressing computational errors faster than they arise.

In a nutshell, quantum computers can find better and quicker ways to solve certain problems. Scientists believe quantum technology could solve extremely complex problems in seconds, while today’s traditional supercomputers could need months or even years to crack the same codes.

What makes these next-generation supercomputers different from your everyday smartphone and laptop is how they process data. Quantum computers harness the properties of quantum physics to store data and perform their functions. While traditional computers use “bits” (either a 1 or a 0) to encode information, quantum technology uses “qubits,” which can exist in a superposition of both states at once.
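To make the bit/qubit distinction concrete, here is a minimal sketch (unrelated to the MIT work) that models a qubit as a normalized pair of complex amplitudes; measuring it yields 0 or 1 with probabilities given by the squared magnitudes of those amplitudes.

```python
# Illustrative only: a classical bit is definitely 0 or 1, while a qubit's
# state is a normalized pair of complex amplitudes (alpha, beta).
import numpy as np

bit = 1  # a classical bit holds exactly one of two values

# an equal-superposition qubit: alpha = beta = 1/sqrt(2)
qubit = np.array([1, 1], dtype=complex) / np.sqrt(2)
probs = np.abs(qubit) ** 2
print(probs)  # [0.5 0.5] -> 50/50 chance of measuring 0 or 1

# simulate ten measurements; each one yields a definite classical bit
samples = np.random.choice([0, 1], size=10, p=probs)
print(samples)
```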

Kristalyn Gallagher, DO, Kevin Chen, MD, and Shawn Gomez, EngScD, in the UNC School of Medicine have developed an AI model that can predict whether or not cancerous tissue has been fully removed from the body during breast cancer surgery.

Artificial intelligence (AI) and machine learning tools have received a lot of attention recently, with the majority of discussions focusing on proper use. However, this technology has a wide range of practical applications, from predicting natural disasters to addressing racial inequalities and now, assisting in cancer surgery.

A new clinical and research partnership between the UNC Department of Surgery, the Joint UNC-NCSU Department of Biomedical Engineering, and the UNC Lineberger Comprehensive Cancer Center has created an AI model that can predict whether or not cancerous tissue has been fully removed from the body during breast cancer surgery. Their findings were published in Annals of Surgical Oncology.

Summary: A pioneering artificial intelligence (AI) has synthesized the design of a functional walking robot in a matter of seconds, a rapid-fire form of evolution in stark contrast to nature’s billion-year journey.

This AI, which runs on a modest personal computer, crafts entirely new structures from scratch, distinguishing it from other AI models that rely on colossal datasets and high-powered computing. The robot, produced from a straightforward “design a walker” prompt, evolved from an immobile block into a bizarre, hole-riddled, three-legged entity capable of slow, steady locomotion.

Representing more than mere mechanical achievement, this AI-designed organism may mark a paradigm shift, offering a novel, unconstrained perspective on design, innovation, and potential applications in fields ranging from search-and-rescue to medical nanotechnology.

With Honda’s EV offensive finally starting, the Japanese automaker is already previewing what could be its next-gen electric SUV and sedan in its latest concept video.

After releasing new details on its first electric SUV, the 2024 Prologue, Honda is showing off two new EV concepts.

The Honda Prologue is being co-developed with General Motors. Built on GM’s Ultium platform (the same one powering upcoming EVs, including the Blazer, Equinox, and Silverado), Honda’s electric SUV is expected to offer over 300 miles of range.