
Technique enables real-time rendering of scenes in 3D

Humans are pretty good at looking at a single two-dimensional image and understanding the full three-dimensional scene that it captures. Artificial intelligence agents are not.

Yet a machine that needs to interact with objects in the world, like a robot designed to harvest crops or assist with surgery, must be able to infer properties of a 3D scene from the 2D images it's trained on.

While scientists have had success using neural networks to infer representations of 3D scenes from images, these machine learning methods aren't fast enough for many real-world applications.
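To make the idea concrete, the pipeline such methods follow can be sketched as an encoder that compresses a 2D image into a latent scene code, plus a decoder that renders that code from a queried camera pose. The PyTorch toy below is a hypothetical illustration of that structure, not the technique reported here; every module, dimension, and name in it is invented for the sketch.

```python
import torch
import torch.nn as nn

class SceneEncoder(nn.Module):
    """Maps a single 2D image to a latent code summarizing the 3D scene."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, latent_dim)

    def forward(self, image):                 # image: (B, 3, H, W)
        h = self.conv(image).flatten(1)       # (B, 128)
        return self.fc(h)                     # (B, latent_dim)

class NovelViewDecoder(nn.Module):
    """Renders an RGB prediction for a queried camera pose from the code."""
    def __init__(self, latent_dim=256, pose_dim=12, out_hw=64):
        super().__init__()
        self.out_hw = out_hw
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + pose_dim, 512), nn.ReLU(),
            nn.Linear(512, 3 * out_hw * out_hw),
        )

    def forward(self, z, pose):               # pose: flattened 3x4 camera matrix
        x = self.mlp(torch.cat([z, pose], dim=-1))
        return x.view(-1, 3, self.out_hw, self.out_hw)

# Training signal: photometric loss between rendered and ground-truth views.
enc, dec = SceneEncoder(), NovelViewDecoder()
img, pose = torch.rand(2, 3, 64, 64), torch.rand(2, 12)
loss = nn.functional.mse_loss(dec(enc(img), pose), torch.rand(2, 3, 64, 64))
```

The speed problem mentioned above typically lives in the decoder: rendering a view per ray query is what fast methods try to avoid.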

Player of Games

Games have a long history of serving as a benchmark for progress in artificial intelligence. Recently, approaches using search and learning have shown strong performance across a set of perfect information games, and approaches using game-theoretic reasoning and learning have shown strong performance for specific imperfect information poker variants. We introduce Player of Games, a general-purpose algorithm that unifies previous approaches, combining guided search, self-play learning, and game-theoretic reasoning. Player of Games is the first algorithm to achieve strong empirical performance in large perfect and imperfect information games, an important step towards…
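As a loose illustration of the self-play-plus-guided-search recipe (and emphatically not DeepMind's algorithm itself, which relies on learned counterfactual value and policy networks), here is a toy sketch on single-pile Nim: a one-ply search guided by a learned value table, trained from the outcomes of its own games. All names and constants are invented for the example.

```python
import random

def legal_moves(pile):
    return [t for t in (1, 2, 3) if t <= pile]

def search(pile, value):
    # Negamax one-ply lookahead: a successor's value is from the opponent's
    # perspective, so pick the move that leaves the opponent worst off.
    return min(legal_moves(pile), key=lambda t: value.get(pile - t, 0.0))

def self_play(games=5000, lr=0.1, eps=0.2):
    value = {0: -1.0}   # empty pile: the player to move has already lost
    for _ in range(games):
        pile, history = 15, []
        while pile > 0:
            take = (search(pile, value) if random.random() > eps
                    else random.choice(legal_moves(pile)))  # exploration
            history.append(pile)
            pile -= take
        # Last mover wins. Walking history backwards, the perspective of
        # the player to move alternates between winner and loser.
        target = 1.0
        for s in reversed(history):
            value[s] = value.get(s, 0.0) + lr * (target - value.get(s, 0.0))
            target = -target
    return value

print({pile: round(v, 2) for pile, v in sorted(self_play().items())})
```

After training, piles that are multiples of 4 score near -1 for the player to move, which matches the known optimal strategy for this game, showing search and self-play learning reinforcing each other even at toy scale.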

UC Berkeley’s Sergey Levine Says Combining Self-Supervised and Offline RL Could Enable Algorithms That Understand the World Through Actions

The idiom “actions speak louder than words” first appeared in print almost 300 years ago. A new study echoes this view, arguing that combining self-supervised and offline reinforcement learning (RL) could lead to a new class of algorithms that understand the world through actions and enable scalable representation learning.

Machine learning (ML) systems have achieved outstanding performance in domains ranging from computer vision to speech recognition and natural language processing, yet still struggle to match the flexibility and generality of human reasoning. This has led ML researchers to search for the “missing ingredient” that might boost these systems’ ability to understand, reason and generalize.

In the paper Understanding the World Through Action, Sergey Levine, an assistant professor in UC Berkeley's Department of Electrical Engineering and Computer Sciences, suggests that a general, principled, and powerful framework for utilizing unlabelled data could be derived from RL, enabling ML systems that leverage large datasets to better understand the real world.
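The offline half of that recipe can be sketched in a few lines: learn a value function purely from a fixed dataset of logged transitions, with no further environment interaction. The tabular fitted Q-iteration below is a minimal, hypothetical illustration of that setting (synthetic data, invented sizes), not Levine's proposed framework.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 2, 0.9

# Synthetic logged experience standing in for a large unlabeled dataset
# of (state, action, reward, next_state) transitions.
dataset = [(rng.integers(n_states), rng.integers(n_actions),
            float(rng.random() < 0.3), rng.integers(n_states))
           for _ in range(10_000)]

Q = np.zeros((n_states, n_actions))
for _ in range(50):                          # fitted Q-iteration sweeps
    targets = np.zeros_like(Q)
    counts = np.zeros_like(Q)
    for s, a, r, s2 in dataset:
        targets[s, a] += r + gamma * Q[s2].max()   # bootstrap from current Q
        counts[s, a] += 1
    # Average targets where we have data; keep old estimates elsewhere.
    Q = np.where(counts > 0, targets / np.maximum(counts, 1), Q)

policy = Q.argmax(axis=1)   # a policy extracted from logged data alone
print(policy)
```

The point of the exercise: no new environment samples were collected, which is exactly what makes offline RL a candidate for learning from large pre-existing datasets.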

Robots Evolve Bodies and Brains Like Animals in MIT’s New AI Training Simulator

To set some benchmarks for their simulator, the researchers tried out three different design algorithms working in conjunction with a deep reinforcement learning algorithm that learned to control the robots through many rounds of trial and error.

The co-designed bots performed well on the simpler tasks, like walking or carrying things, but struggled with tougher challenges, like catching and lifting, suggesting there's plenty of scope for advances in co-design algorithms. Nonetheless, the AI-designed bots outperformed ones designed by humans on almost every task.

Intriguingly, many of the co-design bots took on similar shapes to real animals. One evolved to resemble a galloping horse, while another, set the task of climbing up a chimney, evolved arms and legs and clambered up somewhat like a monkey.
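The co-design setup used for those benchmarks (an outer search over body designs, with an inner reinforcement-learning loop scoring each candidate) can be sketched as follows. This is a schematic with a stubbed-in scoring function, not the simulator's actual API; every function and constant here is hypothetical.

```python
import random

def random_body(cells=25):
    # Encode a body as a grid of voxel types: 0 empty, 1 rigid, 2 actuator.
    return [random.choice([0, 1, 2]) for _ in range(cells)]

def mutate(body, rate=0.1):
    return [random.choice([0, 1, 2]) if random.random() < rate else c
            for c in body]

def train_and_score(body):
    # Placeholder for the inner loop: in the real system, a deep RL
    # controller is trained for this body in simulation and the task
    # reward is returned. Here, a toy proxy that rewards a moderate
    # number of actuators.
    actuators = body.count(2)
    return -abs(actuators - len(body) // 3)

# Outer evolutionary loop over designs: keep the elite, mutate to refill.
population = [random_body() for _ in range(20)]
for _ in range(30):
    elite = sorted(population, key=train_and_score, reverse=True)[:5]
    population = elite + [mutate(random.choice(elite)) for _ in range(15)]

print(train_and_score(max(population, key=train_and_score)))
```

The expensive part in practice is `train_and_score`: each fitness evaluation hides an entire RL training run, which is why progress in co-design algorithms matters so much.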

Human Brain Project

Researchers at Human Brain Project partner University of Granada in Spain have designed a new artificial neural network that mimics the structure of the cerebellum, one of the evolutionarily older parts of the brain, which plays an important role in motor coordination. When linked to a robotic arm, their system learned to perform precise movements and interact with humans in different circumstances, surpassing performance of previous AI-based robotic steering systems. The results have been published in the journal Science Robotics.
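One classic way to cast cerebellum-like motor learning in code is feedback-error learning, where a crude feedback controller's output serves as the teaching signal for a predictive feedforward module. The sketch below illustrates that general idea on a toy one-dimensional "arm"; it is not the Granada group's spiking cerebellar model, and all dynamics and gains are invented.

```python
import numpy as np

w = np.zeros(2)                       # weights of the linear feedforward module
kp, lr, dt = 2.0, 0.05, 0.01

pos, vel = 0.0, 0.0
for step in range(20_000):
    t = step * dt
    target = np.sin(t)                # desired arm trajectory
    features = np.array([target, np.cos(t)])   # target and its derivative

    feedback = kp * (target - pos)    # crude corrective feedback torque
    feedforward = w @ features        # cerebellar-like predictive torque
    torque = feedback + feedforward

    # The feedback signal doubles as the teaching signal: the feedforward
    # module learns to make the feedback correction unnecessary.
    w += lr * feedback * features

    vel += dt * (torque - 0.5 * vel)  # toy second-order arm dynamics
    pos += dt * vel

print(abs(target - pos))              # tracking error after learning
```

As learning proceeds, the predictive module takes over from the reactive one, which is broadly how cerebellar circuits are thought to sharpen motor coordination.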

Hopkins to use Artificial Intelligence to Promote Healthy Aging

The National Institute on Aging has awarded Johns Hopkins over $20M to carry out its plans to use artificial intelligence to promote healthy aging.

The work is expected to considerably improve the lifestyle and living experience of senior citizens. Johns Hopkins will use the allocated funds over five years to build an AI and technology collaboratory (AITC).

The new collaboratory will have members from the Johns Hopkins University schools of medicine and nursing, the Whiting School of Engineering, and the Carey Business School. It will also include members from various industries, technology developers, and senior citizens from across the country.

Kamikaze drones: A new weapon brings power and peril to the U.S. military

Americans have become accustomed to images of Hellfire missiles raining down from Predator and Reaper drones to hit terrorist targets in Pakistan or Yemen. But that was yesterday’s drone war.

A revolution in unmanned aerial vehicles is unfolding, and the U.S. has lost its monopoly on the technology.

Some experts believe the spread of the semi-autonomous weapons will change ground warfare as profoundly as the machine gun did.

SEIHAI: The hierarchical AI that won the NeurIPS-2020 MineRL competition

In recent years, computational tools based on machine learning have achieved remarkable results in numerous tasks, including image classification and robotic object manipulation. Meanwhile, computer scientists have also been training reinforcement learning models to play specific games and videogames.

To challenge research teams working on reinforcement learning techniques, the Neural Information Processing Systems (NeurIPS) annual conference introduced the MineRL competition, a contest in which different algorithms are tested on the same task in Minecraft, the renowned computer game developed by Mojang Studios. More specifically, contestants are asked to create algorithms that learn to obtain a diamond from raw pixel observations in the game.

The algorithms can only be trained for four days and on 8,000,000 samples created by the MineRL simulator, using a single GPU machine. In addition to the training dataset, participants are also provided with a large collection of human demonstrations (i.e., video frames in which the task is solved by human players).
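For orientation, the competition task boils down to a standard Gym-style interaction loop. The snippet below shows that loop with a random policy as a stand-in for a learned agent; it assumes the minerl package's classic MineRLObtainDiamond-v0 environment id and the older four-tuple Gym step API (the 2020 competition actually used obfuscated variants of the environment).

```python
import gym
import minerl  # registers the MineRL environments with Gym  # noqa: F401

env = gym.make("MineRLObtainDiamond-v0")

obs = env.reset()
done, total_reward, steps = False, 0.0, 0
while not done and steps < 1000:
    action = env.action_space.sample()   # stand-in for a learned policy
    obs, reward, done, info = env.step(action)
    total_reward += reward               # milestone rewards on the path to a diamond
    steps += 1
env.close()
print(total_reward)
```

The sample budget described above (8,000,000 environment steps, four days, one GPU) is counted against calls to `env.step`, which is what makes the provided human demonstrations so valuable to contestants.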
