
Yesterday, Boston Dynamics announced it was retiring its hydraulic Atlas robot. Atlas has long been the standard bearer of advanced humanoid robots. Over the years, the company was known as much for its research robots as it was for slick viral videos of them working out in military fatigues, forming dance mobs, and doing parkour. Fittingly, the company put together a send-off video of Atlas’s greatest hits and blunders.

But there were clues this wasn’t really the end, not least of which were the specific inclusion of the word “hydraulic” and the video’s last line, “‘Til we meet again, Atlas.” It wasn’t a long hiatus. Today, the company released hydraulic Atlas’s successor: electric Atlas.

Summary: Researchers leveraged deep reinforcement learning (DRL) to enable a robot to adaptively switch gaits, mimicking animal movements like trotting and pronking, to traverse complex terrains effectively. Their study explores the concept of viability—or fall prevention—as a primary motivator for such gait transitions, challenging previous beliefs that energy efficiency is the key driver.

This novel approach not only enhances the robot’s ability to handle challenging terrains but also provides deeper insights into animal locomotion. The team’s findings suggest that prioritizing fall prevention may lead to more agile and efficient robotic and biological movement across uneven surfaces.
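As a rough illustration of the idea, the sketch below shows how a reward that penalizes falling far more heavily than energy use nudges a reinforcement-learning agent toward gait switches that preserve viability. The environment fields, names, and weights are hypothetical stand-ins chosen for this sketch, not the authors’ implementation.

```python
# Minimal sketch of a viability-weighted reward for learned gait switching.
# StepInfo fields and the weights below are hypothetical, illustrative choices;
# they only show how fall prevention can dominate energy efficiency in the
# reward signal a DRL agent optimizes.

from dataclasses import dataclass

@dataclass
class StepInfo:
    fell: bool            # did the robot lose viability (fall) this step?
    base_velocity: float  # forward speed of the torso (m/s)
    target_velocity: float
    joint_power: float    # summed actuator power this step (W)

def gait_reward(info: StepInfo,
                w_viability: float = 10.0,   # large: falling is the worst outcome
                w_tracking: float = 1.0,
                w_energy: float = 0.01) -> float:
    """Reward that prioritizes staying upright over saving energy."""
    if info.fell:
        return -w_viability                  # dominant penalty for losing viability
    tracking = -abs(info.base_velocity - info.target_velocity)
    energy = -info.joint_power               # mild efficiency pressure
    return w_tracking * tracking + w_energy * energy

# Example: on rough terrain the agent learns that switching gait (e.g. trot -> pronk)
# avoids the large fall penalty even if the new gait costs a bit more energy.
steady_gait = StepInfo(fell=False, base_velocity=0.9, target_velocity=1.0, joint_power=40.0)
risky_gait  = StepInfo(fell=True,  base_velocity=1.0, target_velocity=1.0, joint_power=35.0)
print(gait_reward(steady_gait))  # -0.5: small tracking + energy cost
print(gait_reward(risky_gait))   # -10.0: viability term dominates
```

Under a reward shaped this way, an agent that can choose among gaits will trade a little efficiency for a large drop in fall risk, which is consistent with the fall-prevention framing described above.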

According to files accessed by journalist Jack Poulson, Microsoft presented OpenAI’s DALL-E as a tool to conduct Advanced Computer Vision Training of Battle Management Systems (BMS).

A BMS is a software suite that provides military leaders with an overview of a combat situation and helps them plan troop movements, artillery fire, and air strike targets. According to Microsoft’s presentation, the DALL-E tool could generate artificial images and train BMS to visualize the ground situation better and identify appropriate strike targets.

A smartphone shutting down on a sweltering day is an all-too-common annoyance that may accompany a trip to the beach on a sunny afternoon. Electronic memory within these devices isn’t built to handle extreme heat.

As temperatures climb, the electrons that store data become unstable and begin to escape, leading to device failure and loss of information. But what if gadgets could withstand not just a hot summer day but the searing conditions of a jet engine or the harsh surface of Venus?

In a paper published in the journal Nature Electronics, Deep Jariwala and Roy Olsson of the University of Pennsylvania and their teams at the School of Engineering and Applied Science demonstrated a memory device capable of enduring temperatures as high as 600° Celsius, more than twice the tolerance of any commercial drive on the market. These characteristics were maintained for more than 60 hours, indicating exceptional stability and reliability.

For the first time, an AI fighter pilot faced off against a human pilot in a “dogfight” using actual jets in the air — a huge milestone in autonomous flight and military automation.

“The X-62A team demonstrated that cutting-edge machine learning-based autonomy could be safely used to fly dynamic combat maneuvers,” said Frank Kendall, secretary of the Air Force. “The team accomplished this while complying with American norms for safe and ethical use of autonomous technology.”

The challenge: AI-controlled planes could be a boon to the military. Not only could they reduce pilot injuries and accidents, but AIs also have the potential to rapidly analyze large amounts of data, allowing for more informed decisions, more quickly.

Theoretical physicists employ their imaginations and their deep understanding of mathematics to decipher the underlying laws of the universe that govern particles, forces and everything in between. More and more often, theorists are doing that work with the help of machine learning.

As might be expected, the group of theorists using machine learning includes people classified as “computational” theorists. But it also includes “formal” theorists, the people interested in the self-consistency of theoretical frameworks, like string theory or quantum gravity. And it includes “phenomenologists,” the theorists who sit next to experimentalists, hypothesizing about new particles or interactions that could be tested by experiments; analyzing the data the experiments collect; and using results to construct new models and dream up how to test them experimentally.

In all areas of theory, machine-learning algorithms are speeding up processes, performing previously impossible calculations, and even causing theorists to rethink the way theoretical physics research is done.

A new kind of transistor allows AI hardware to remember and process information more like the human brain does.

By Anna Mattson

Artificial intelligence and human thought both run on electricity, but that’s about where the physical similarities end. AI’s output arises from silicon and metal circuitry; human cognition arises from a mass of living tissue. And the architectures of these systems are fundamentally different, too. Conventional computers store and compute information in distinct parts of the hardware, shuttling data back and forth between memory and microprocessor. The human brain, on the other hand, entangles memory with processing, helping to make it more efficient.
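To make that architectural contrast concrete, here is a toy sketch of the difference between fetching weights from a separate memory before computing and computing directly in the structure that stores them, as analog crossbar or synaptic devices do. The classes and names are illustrative only and are not a model of the transistor described in the article.

```python
# Toy contrast: separate memory + processor vs. compute-in-memory.
# Both produce the same multiply-accumulate result; the difference is where
# the arithmetic conceptually happens relative to where the weights live.

class VonNeumannMAC:
    """Weights live in a separate store; each compute step fetches them first."""
    def __init__(self, weights):
        self.memory = {i: row for i, row in enumerate(weights)}  # separate memory

    def forward(self, x):
        outputs = []
        for i in range(len(self.memory)):
            row = self.memory[i]            # explicit fetch: data moves to the "processor"
            outputs.append(sum(w * xi for w, xi in zip(row, x)))
        return outputs

class CrossbarMAC:
    """Weights act like conductances in an array; sums form where they are stored."""
    def __init__(self, weights):
        self.conductance = weights          # storage and compute share the same structure

    def forward(self, voltages):
        # Each output is a sum of G * V contributions, accumulated in place.
        return [sum(g * v for g, v in zip(row, voltages)) for row in self.conductance]

W = [[0.2, 0.5], [0.8, 0.1]]
x = [1.0, 0.5]
print(VonNeumannMAC(W).forward(x))  # [0.45, 0.85] after shuttling weights to the processor
print(CrossbarMAC(W).forward(x))    # [0.45, 0.85] with no memory-to-processor round trip
```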