More human than human? How the future of video game AI will change the way that we play

EA, Ubisoft, Warner Bros, and more explore how artificial intelligence innovations will lead to more believable open worlds and personal adventures within them.


Most NPCs simply patrol a specific area until the player interacts with them, at which point they try to become a more challenging target to hit. That’s fine in confined spaces, but in big worlds where NPCs have the freedom to roam, it just doesn’t scale. More advanced AI techniques such as machine learning – which uses algorithms to study incoming data, interpret it, and decide on a course of action in real-time – give AI agents much more flexibility and freedom. But developing them is time-consuming, computationally expensive, and a risk because it makes NPCs less predictable – hence the Assassin’s Creed Valhalla stalking situation.
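The classic patrol-then-engage behavior described above is usually a small hand-written state machine. A minimal sketch (all names and values here are illustrative, not from any particular game engine):

```python
# Minimal sketch of the classic NPC pattern: a hand-written finite-state
# machine that walks a patrol route until the player comes within sight
# range, then switches to an "engage" state. Values are illustrative.

from dataclasses import dataclass

@dataclass
class NPC:
    waypoints: list          # patrol route, e.g. [(0, 0), (10, 0)]
    pos: tuple = (0, 0)
    state: str = "patrol"
    target_idx: int = 0

    def update(self, player_pos, sight_radius=5.0):
        dx = player_pos[0] - self.pos[0]
        dy = player_pos[1] - self.pos[1]
        if (dx * dx + dy * dy) ** 0.5 <= sight_radius:
            self.state = "engage"    # player spotted: become a harder target
        else:
            self.state = "patrol"
            # cycle to the next waypoint once the current one is reached
            if self.pos == self.waypoints[self.target_idx]:
                self.target_idx = (self.target_idx + 1) % len(self.waypoints)
        return self.state

npc = NPC(waypoints=[(0, 0), (10, 0)])
print(npc.update(player_pos=(50, 50)))  # far away -> "patrol"
print(npc.update(player_pos=(1, 1)))    # nearby   -> "engage"
```

A machine-learning agent replaces the hard-coded `if` branches with a learned policy, which is exactly what makes its behavior harder to predict and test.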

However, as open-world and narrative-based games become more complex, and as modern PCs and consoles display ever more authentic and detailed environments, the need for more advanced AI techniques is growing. It’s going to be weird and alienating to be thrust into an almost photorealistic world filled with intricate systems and narrative possibilities, only to discover that non-player characters still act like soulless robots.

This is something the developers pushing the boundaries of open-world game design understand. Ubisoft, for example, has dedicated AI research teams at its Chengdu, Mumbai, Pune, and Montpellier studios, as well as a Strategic Innovation Lab in Paris and the Montreal studio’s La Forge lab, and is working with tech firms and universities on academic AI research topics.

Using artificial intelligence to generate 3D holograms in real-time

Holograms deliver an exceptional representation of the 3D world around us. Plus, they’re beautiful. (Go ahead — check out the holographic dove on your Visa card.) Holograms offer a shifting perspective based on the viewer’s position, and they allow the eye to adjust focal depth to alternately focus on foreground and background.

Researchers have long sought to make computer-generated holograms, but the process has traditionally required a supercomputer to churn through physics simulations, which is time-consuming and can yield less-than-photorealistic results. Now, MIT researchers have developed a new way to produce holograms almost instantly — and the deep learning-based method is so efficient that it can run on a laptop in the blink of an eye, the researchers say.

Remotely Piloted Underwater Glider Crosses the Atlantic

Circa 2010


About 48 kilometers off the eastern coast of the United States, scientists from Rutgers, the State University of New Jersey, peered over the side of a small research vessel, the Arabella. They had just launched RU27, a 2-meter-long oceanographic probe shaped like a torpedo with wings. Although it sported a bright yellow paint job for good visibility, it was unclear whether anyone would ever see this underwater robot again. Its mission, simply put, was to cross the Atlantic before its batteries gave out.

Unlike other underwater drones, RU27 and its kin are able to travel without the aid of a propeller. Instead, they move up and down through the top 100 to 200 meters of seawater by adjusting their buoyancy while gliding forward using their swept-back wings. With this strategy, they can go a remarkably long way on a remarkably small amount of energy.

When submerged and thus out of radio contact, RU27 steered itself with the aid of sensors that registered depth, heading, and angle from the horizontal. From those inputs, it could dead reckon about where it had glided since its last GPS navigational fix: Every 8 hours the probe broke the surface and briefly stuck its tail in the air, which exposed its GPS antenna as well as the antenna of an Iridium satellite modem. This allowed the vehicle to contact its operators, who were located in New Brunswick, N.J., in the Rutgers Coastal Ocean Observation Lab, or COOL Room.
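Between GPS fixes, the navigation described above amounts to integrating an assumed forward speed along the measured heading. A toy sketch of that dead-reckoning step (the speed and leg durations below are made-up illustrative values, not RU27's actual parameters):

```python
# Toy dead-reckoning sketch: with no GPS underwater, the glider estimates
# its horizontal drift by integrating an assumed speed along each heading
# leg since the last fix. All numbers are illustrative.

import math

def dead_reckon(start, legs):
    """start: (x, y) in meters east/north of the last GPS fix.
    legs: list of (heading_deg, speed_m_s, duration_s) segments."""
    x, y = start
    for heading_deg, speed, dt in legs:
        # compass convention: heading measured clockwise from north
        rad = math.radians(heading_deg)
        x += speed * dt * math.sin(rad)   # east component
        y += speed * dt * math.cos(rad)   # north component
    return x, y

# Two half-hour legs at 0.35 m/s: due east, then due north
est = dead_reckon((0.0, 0.0), [(90.0, 0.35, 1800), (0.0, 0.35, 1800)])
print(est)  # roughly 630 m east and 630 m north of the last fix
```

Errors from unmodeled currents accumulate between fixes, which is why the periodic surfacing for a fresh GPS position matters.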

Scientists use 3D-printed rocks, machine learning to detect unexpected earthquakes

Geoscientists at Sandia National Laboratories used 3D-printed rocks and an advanced, large-scale computer model of past earthquakes to understand and prevent earthquakes triggered by energy exploration.

Injecting water underground — whether after unconventional oil and gas extraction (commonly known as fracking), for geothermal energy stimulation, or for carbon dioxide sequestration — can trigger earthquakes. Of course, energy companies do their due diligence to check for faults—breaks in the earth’s upper crust that are prone to earthquakes—but sometimes earthquakes, even swarms of earthquakes, strike unexpectedly.

Sandia geoscientists studied how pressure from injecting water can transfer through pores in rocks down to fault lines, including previously hidden ones. They also crushed rocks with specially engineered weak points to hear the sounds of different types of fault failures, which will aid in early detection of induced earthquakes.

Successful trial shows way forward on quieter drone propellers

Researchers have published a study revealing their successful approach to designing much quieter propellers.

The Australian research team used machine learning to design their propellers, then 3D printed several of the most promising prototypes for experimental acoustic testing in the Commonwealth Scientific and Industrial Research Organisation’s specialized anechoic (‘echo-free’) chamber.

Results now published in Aerospace Research Central show the prototypes made around 15 dB less noise than commercially available propellers, validating the team’s design methodology.
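Because decibels are logarithmic, a 15 dB drop is much larger than it might sound. Converting it to a linear ratio with the standard decibel formula (general acoustics, not a calculation from the study itself):

```python
# Convert a decibel reduction to a linear sound-power ratio using the
# standard definition dB = 10 * log10(P1 / P2).

def db_to_power_ratio(db):
    return 10 ** (db / 10)

print(round(db_to_power_ratio(15), 1))  # 31.6 -> about 1/32 the sound power
```

In other words, a 15 dB quieter propeller radiates roughly one thirty-second of the acoustic power of the baseline.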

A Dyson Sphere Could Bring Humans Back From the Dead, Researchers Say

Imagine this: In the far, far future, long after you’ve died, you’ll eventually come back to life. So will everyone else who ever had a hand in the history of human civilization. But in this scenario, returning from the dead is the relatively normal part. The journey home will be a hell of a lot weirder than the destination.

Here’s how it will go down: A megastructure called a Dyson Sphere will provide a superintelligent artificial intelligence (AI) agent with the enormous amounts of power it needs to collect as much historical and personal data about you as possible, so it can rebuild your exact digital copy. Once it’s finished, you’ll live your whole life (again) in a simulated reality, and when the time comes for you to die (again), you’ll be transported into a simulated afterlife, à la Black Mirror’s “San Junipero,” where you’ll get to hang out with your friends, family, and favorite celebrities forever.

Yes, this is mind-boggling. But someday, it might also be very real.

Neural network CLIP mirrors human brain neurons in image recognition

OpenAI, the research company co-founded by Elon Musk, has found that its artificial neural network CLIP shows behavior strikingly similar to that of a human brain. The finding has scientists hopeful for the future of AI networks’ ability to identify images in a symbolic, conceptual, and literal capacity.

While the human brain processes images by correlating a series of abstract concepts with an overarching theme, the first biological neuron recorded operating in a similar fashion was the “Halle Berry” neuron. This neuron proved capable of recognizing photographs and sketches of the actress and connecting those images with the name “Halle Berry.”

Now, OpenAI’s multimodal vision system continues to outperform existing systems, notably with traits such as the “Spider-Man” neuron, an artificial neuron that can identify not only the text “spider” but also the comic book character in both illustrated and live-action form. This ability to recognize a single concept represented in various contexts demonstrates CLIP’s abstraction capabilities. Similar to a human brain, the capacity for abstraction allows a vision system to tie a series of images and text to a central theme.
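Mechanically, a multimodal model like CLIP embeds images and captions into the same vector space and scores a match by cosine similarity. A toy illustration of that scoring step (the vectors below are made up for illustration; real CLIP embeddings come from the trained model):

```python
# Toy sketch of multimodal matching: an image and candidate captions are
# embedded in one shared vector space, and cosine similarity ranks how well
# each caption fits the image. Vectors here are invented for illustration.

import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

image_embedding   = [0.9, 0.1, 0.4]   # hypothetical "Spider-Man photo" vector
matching_caption  = [0.8, 0.2, 0.5]   # hypothetical "a photo of Spider-Man"
unrelated_caption = [0.1, 0.9, 0.0]   # hypothetical "a bowl of soup"

print(cosine_similarity(image_embedding, matching_caption) >
      cosine_similarity(image_embedding, unrelated_caption))  # True
```

Because text and images land in the same space, the same "Spider-Man" direction can be activated by a drawing, a film still, or the written word, which is the abstraction the article describes.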

In a leap for battery research, machine learning gets scientific smarts

Scientists have taken a major step forward in harnessing machine learning to accelerate the design for better batteries: Instead of using it just to speed up scientific analysis by looking for patterns in data, as researchers generally do, they combined it with knowledge gained from experiments and equations guided by physics to discover and explain a process that shortens the lifetimes of fast-charging lithium-ion batteries.

It was the first time this approach, known as “scientific machine learning,” has been applied to battery cycling, said Will Chueh, an associate professor at Stanford University and investigator with the Department of Energy’s SLAC National Accelerator Laboratory who led the study. He said the results overturn long-held assumptions about how lithium-ion batteries charge and discharge and give researchers a new set of rules for engineering longer-lasting batteries.
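The core idea of scientific machine learning is that the model's form comes from physics and only its parameters are learned from data. A minimal sketch of that pattern, fitting the classic physics-motivated capacity-fade law Q(n) = Q0 − k·√n by least squares (the data points are synthetic; the study's actual models are far more detailed):

```python
# Sketch of physics-guided fitting: rather than a black-box model, the
# functional form Q(n) = Q0 - k*sqrt(n) is taken from battery physics and
# only Q0 and k are estimated from (synthetic, illustrative) data.

import math

cycles   = [0, 100, 400, 900]
capacity = [1.00, 0.95, 0.90, 0.85]   # made-up normalized capacities

# Q is linear in sqrt(n), so ordinary least squares gives the parameters.
xs = [math.sqrt(n) for n in cycles]
mean_x = sum(xs) / len(xs)
mean_y = sum(capacity) / len(capacity)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, capacity))
         / sum((x - mean_x) ** 2 for x in xs))
k = -slope
q0 = mean_y + k * mean_x

print(f"Q0 = {q0:.3f}, k = {k:.4f}")  # Q0 = 1.000, k = 0.0050
```

Because the fitted parameters map back onto physical quantities, a fit like this can be interpreted and checked against experiments, which is what distinguishes the approach from pure pattern-finding.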

The research, reported today in Nature Materials, is the latest result from a collaboration between Stanford, SLAC, the Massachusetts Institute of Technology and Toyota Research Institute (TRI). The goal is to bring together foundational research and industry know-how to develop a long-lived electric vehicle battery that can be charged in 10 minutes.