Warehouse automation company Nimble Robotics today announced that it has raised a $50 million Series A. Led by DNS Capital and GSR Ventures and featuring Accel and Reinvent Capital, the round will go toward helping the company essentially double its headcount this year.

Founded by former Stanford PhD student Simon Kalouche, Nimble has built a system that uses deep imitation learning – a popular approach in robotics research in which a system learns a task by observing and mimicking human demonstrations, improving as it accumulates more of them.
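Stripped to its core, imitation learning is supervised learning on demonstrations: fit a policy that maps observed states to the actions a demonstrator took. Here is a minimal sketch of that idea with a linear "policy" and made-up data – a real warehouse system would use a deep network and camera images, not this toy setup:

```python
import numpy as np

# Toy behavioral cloning: learn a pick-point policy from demonstrations.
# Hypothetical data: each "observation" is a 4-D feature vector for an
# item in a bin; each "action" is the 2-D grasp point the expert chose.
rng = np.random.default_rng(0)
true_W = np.array([[0.5, 0.0], [0.0, 0.5], [0.2, -0.1], [0.0, 0.3]])

observations = rng.normal(size=(500, 4))   # demonstrated states
actions = observations @ true_W            # expert grasp points

# Imitation learning reduces to supervised learning: fit a policy that
# reproduces the expert's actions (least squares here; a deep network
# would replace this linear model in practice).
W_hat, *_ = np.linalg.lstsq(observations, actions, rcond=None)

# Roll out the learned policy on a new, unseen item.
new_item = rng.normal(size=4)
predicted_grasp = new_item @ W_hat
```

Because the toy demonstrations are noise-free and exactly linear, least squares recovers the expert mapping; with real, noisy demonstrations the policy only approximates it, which is where the human-in-the-loop fallback described below comes in.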

“Instead of letting it sit in a lab for five years and creating this robotic application before it’s finally ready to deploy to the real world, we deployed it today,” says Kalouche. “It’s not fully autonomous – it’s autonomous maybe 90, 95% of the time. The other 5–10% is assisted by remote human operators, but it’s reliable on day one, and it’s reliable on day 10000.”

EA, Ubisoft, Warner Bros, and more explore how artificial intelligence innovations will lead to more believable open worlds and personal adventures within them.


Most NPCs simply patrol a specific area until the player interacts with them, at which point they try to become a more challenging target to hit. That’s fine in confined spaces, but in big worlds where NPCs have the freedom to roam, it just doesn’t scale. More advanced AI techniques such as machine learning – which uses algorithms to study incoming data, interpret it, and decide on a course of action in real time – give AI agents much more flexibility and freedom. But developing them is time-consuming, computationally expensive, and a risk because it makes NPCs less predictable – hence the Assassin’s Creed Valhalla stalking situation.
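The classic "patrol until the player shows up" behavior described above is usually implemented as a finite state machine. A minimal sketch – all names and numbers are made up for illustration, and real games layer pathfinding and animation on top of this:

```python
from dataclasses import dataclass

SIGHT_RANGE = 5.0  # how close the player must be to trigger a reaction

@dataclass
class Guard:
    """A 1-D NPC that patrols a fixed route until the player gets close."""
    position: float = 0.0
    state: str = "patrol"
    _direction: int = 1

    def update(self, player_position: float) -> None:
        # State transition: switch to "evade" when the player is in range.
        distance = abs(player_position - self.position)
        self.state = "evade" if distance <= SIGHT_RANGE else "patrol"

        if self.state == "patrol":
            # Walk back and forth along a fixed 10-unit route.
            self.position += self._direction
            if not 0.0 <= self.position <= 10.0:
                self._direction *= -1
                self.position += 2 * self._direction
        else:
            # Move away from the player to become a harder target to hit.
            self.position += 1 if self.position >= player_position else -1

guard = Guard()
guard.update(player_position=100.0)  # player far away: keeps patrolling
guard.update(player_position=2.0)    # player close: switches to evading
```

The scaling problem the article points to is visible even here: every behavior is a hand-authored branch, so richer worlds mean combinatorially more states and transitions to write and debug – which is what learned policies aim to replace.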

However, as open-world and narrative-based games become more complex, and as modern PCs and consoles display ever more authentic and detailed environments, the need for more advanced AI techniques is growing. It’s going to be weird and alienating to be thrust into an almost photorealistic world filled with intricate systems and narrative possibilities, only to discover that non-player characters still act like soulless robots.

This is something the developers pushing the boundaries of open-world game design understand. Ubisoft, for example, has dedicated AI research teams at its Chengdu, Mumbai, Pune, and Montpellier studios, as well as a Strategic Innovation Lab in Paris and the Montreal studio’s La Forge lab, and is working with tech firms and universities on academic AI research topics.

Holograms deliver an exceptional representation of the 3D world around us. Plus, they’re beautiful. (Go ahead — check out the holographic dove on your Visa card.) Holograms offer a shifting perspective based on the viewer’s position, and they allow the eye to adjust focal depth to alternately focus on foreground and background.

Researchers have long sought to make computer-generated holograms, but the process has traditionally required a supercomputer to churn through physics simulations, which is time-consuming and can yield less-than-photorealistic results. Now, MIT researchers have developed a new way to produce holograms almost instantly — and the deep learning-based method is so efficient that it can run on a laptop in the blink of an eye, the researchers say.
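The brute-force physics approach that the learned method approximates is conceptually simple: sum the complex light wave arriving from every scene point at every pixel of the hologram plane, then interfere the result with a reference beam. A toy version of that point-source summation – scene, resolution, and parameters are all made up for illustration, and are far smaller than anything photorealistic:

```python
import numpy as np

wavelength = 633e-9               # red laser, metres
k = 2 * np.pi / wavelength        # wavenumber

# Hologram plane: 256x256 pixels at an 8-micrometre pitch.
n, pitch = 256, 8e-6
coords = (np.arange(n) - n / 2) * pitch
X, Y = np.meshgrid(coords, coords)

# Three scene points: (x, y, depth) in metres.
points = [(0.0, 0.0, 0.05), (2e-4, -1e-4, 0.06), (-1e-4, 1e-4, 0.07)]

# Sum the spherical wave from each scene point at every pixel.
field = np.zeros((n, n), dtype=complex)
for px, py, pz in points:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    field += np.exp(1j * k * r) / r

# Interfering with a unit plane reference wave gives the pattern
# that would be recorded as the hologram.
hologram = np.abs(field + 1.0) ** 2
```

The cost scales as pixels × scene points, which is why dense, photorealistic scenes traditionally demanded supercomputer time – exactly the computation a trained network can shortcut.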

Circa 2010


About 48 kilometers off the eastern coast of the United States, scientists from Rutgers, the State University of New Jersey, peered over the side of a small research vessel, the Arabella. They had just launched RU27, a 2-meter-long oceanographic probe shaped like a torpedo with wings. Although it sported a bright yellow paint job for good visibility, it was unclear whether anyone would ever see this underwater robot again. Its mission, simply put, was to cross the Atlantic before its batteries gave out.

Unlike other underwater drones, RU27 and its kin are able to travel without the aid of a propeller. Instead, they move up and down through the top 100 to 200 meters of seawater by adjusting their buoyancy while gliding forward using their swept-back wings. With this strategy, they can go a remarkably long way on a remarkably small amount of energy.

When submerged and thus out of radio contact, RU27 steered itself with the aid of sensors that registered depth, heading, and pitch angle from the horizontal. From those inputs, it could dead-reckon roughly where it had glided since its last GPS navigational fix. Every 8 hours the probe broke the surface and briefly stuck its tail in the air, exposing its GPS antenna as well as the antenna of an Iridium satellite modem. This allowed the vehicle to contact its operators in the Rutgers Coastal Ocean Observation Lab, or COOL Room, in New Brunswick, N.J.
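Those three sensor readings are enough for a rough position estimate: the depth change divided by the tangent of the pitch angle gives horizontal distance travelled along the glide, and the compass heading resolves it into north and east components. A sketch of that dead-reckoning step – the function name, arguments, and flat-Earth conversion are illustrative assumptions, not RU27’s actual navigation code:

```python
import math

def dead_reckon(lat, lon, heading_deg, pitch_deg, depth_change_m):
    """Estimate the new (lat, lon) after one gliding leg.

    Without a speed sensor, horizontal distance is inferred from how
    far the glider descended (or climbed) and its glide angle.
    """
    # Horizontal distance covered for the given depth change.
    horizontal = abs(depth_change_m) / math.tan(math.radians(abs(pitch_deg)))

    # Resolve into north/east components from the compass heading.
    north = horizontal * math.cos(math.radians(heading_deg))
    east = horizontal * math.sin(math.radians(heading_deg))

    # Convert metres to degrees (spherical-Earth approximation).
    dlat = north / 111_320.0
    dlon = east / (111_320.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

# Example: descend 100 m at a 26-degree glide angle, heading due east.
new_lat, new_lon = dead_reckon(39.5, -73.0, heading_deg=90.0,
                               pitch_deg=26.0, depth_change_m=100.0)
```

Small errors in this estimate accumulate between fixes – which is why the 8-hourly GPS surfacing mattered: each fix reset the accumulated drift.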

Geoscientists at Sandia National Laboratories used 3D-printed rocks and an advanced, large-scale computer model of past earthquakes to understand and prevent earthquakes triggered by energy exploration.

Injecting water underground – as happens in unconventional oil and gas extraction (commonly known as fracking), geothermal energy stimulation, and carbon dioxide sequestration – can trigger earthquakes. Of course, energy companies do their due diligence to check for faults – breaks in the earth’s upper crust that are prone to earthquakes – but sometimes earthquakes, even swarms of earthquakes, strike unexpectedly.

Sandia geoscientists studied how pressure from injecting water can transfer through pores in rocks down to fault lines, including previously hidden ones. They also crushed rocks with specially engineered weak points to hear the sound of different types of fault failures, which will aid in early detection of an induced earthquake.
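The pore-pressure mechanism can be summarized with the standard Coulomb failure criterion: a fault slips when shear stress exceeds cohesion plus friction times the *effective* normal stress (normal stress minus pore pressure). Injected water raises pore pressure, lowering the effective stress clamping the fault shut. A worked example with this textbook relation – all stress values below are illustrative, not Sandia’s data:

```python
def coulomb_stress(shear_mpa, normal_mpa, pore_pressure_mpa,
                   mu=0.6, cohesion_mpa=0.0):
    """Coulomb failure function: positive means the fault is at or past
    failure. mu is the friction coefficient (~0.6 is a common value)."""
    return shear_mpa - cohesion_mpa - mu * (normal_mpa - pore_pressure_mpa)

# Same fault, before and after injection raises pore pressure by 10 MPa.
before = coulomb_stress(shear_mpa=30.0, normal_mpa=60.0, pore_pressure_mpa=5.0)
after = coulomb_stress(shear_mpa=30.0, normal_mpa=60.0, pore_pressure_mpa=15.0)
# before is -3.0 MPa (stable); after is +3.0 MPa (slip: an induced quake).
```

Note that the shear stress never changed – pressure diffusing into the fault’s pores alone pushed it over the failure threshold, which is why even distant, hidden faults matter.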

Researchers have published a study revealing their successful approach to designing much quieter propellers.

The Australian research team used machine learning to design their propellers, then 3D printed several of the most promising prototypes for experimental acoustic testing at the Commonwealth Scientific and Industrial Research Organisation’s specialized ‘echo-free’ chamber.

Results now published in Aerospace Research Central show the prototypes made around 15 dB less noise than commercially available propellers, validating the team’s design methodology.
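For context, decibels are logarithmic, so a 15 dB reduction is a much bigger drop than it might sound: the power ratio is 10^(dB/10), roughly a 32-fold reduction in acoustic power.

```python
# Convert a decibel reduction to a sound-power ratio: ratio = 10^(dB/10).
reduction_db = 15
power_ratio = 10 ** (reduction_db / 10)   # about 31.6x less acoustic power
```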

Imagine this: In the far, far future, long after you’ve died, you’ll eventually come back to life. So will everyone else who ever had a hand in the history of human civilization. But in this scenario, returning from the dead is the relatively normal part. The journey home will be a hell of a lot weirder than the destination.

Here’s how it will go down: A megastructure called a Dyson Sphere will provide a superintelligent artificial agent (AI) with the enormous amounts of power it needs to collect as much historical and personal data about you as possible, so it can rebuild your exact digital copy. Once it’s finished, you’ll live your whole life (again) in a simulated reality, and when the time comes for you to die (again), you’ll be transported into a simulated afterlife, à la Black Mirror’s “San Junipero,” where you’ll get to hang out with your friends, family, and favorite celebrities forever.

Yes, this is mind-boggling. But someday, it might also be very real.

OpenAI, the research company co-founded by Elon Musk, has just discovered that its artificial neural network CLIP shows behavior strikingly similar to that of a human brain. This finding has scientists hopeful for the future of AI networks’ ability to identify images in a symbolic, conceptual, and literal capacity.

While the human brain processes information by correlating a series of abstract concepts to an overarching theme, the first biological neuron recorded to operate in a similar fashion was the “Halle Berry” neuron. This neuron proved capable of recognizing photographs and sketches of the actress and connecting those images with the name “Halle Berry.”

Now, OpenAI’s multimodal vision system continues to outperform existing systems, notably with traits such as the “Spider-Man” neuron, an artificial neuron that responds not only to images of the text “spider” but also to the comic book character in both illustrated and live-action form. This ability to recognize a single concept represented in various contexts demonstrates CLIP’s abstraction capabilities. Similar to a human brain, the capacity for abstraction allows a vision system to tie a series of images and text to a central theme.
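The mechanism behind this is that a CLIP-style model embeds both images and text into one shared vector space, where different renderings of the same concept land close together. A sketch with hand-made stand-in vectors – these are not real CLIP embeddings, which are 512-dimensional and produced by trained image and text encoders:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-D "embeddings" for several inputs (illustrative values only).
photo_spiderman   = np.array([0.80, 0.20, 0.10])  # live-action still
drawing_spiderman = np.array([0.85, 0.15, 0.05])  # comic panel
photo_dove        = np.array([0.00, 0.20, 0.90])  # unrelated image

# Different renderings of one concept point the same way in the space...
sim_same = cosine(photo_spiderman, drawing_spiderman)
# ...while unrelated images do not.
sim_diff = cosine(photo_spiderman, photo_dove)
```

A single "Spider-Man" direction in this space then fires for photos, drawings, and text alike – the abstraction the article describes, analogous to the biological “Halle Berry” neuron.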