Quantum computing is inevitable, cryptography prepares for the future

Quantum computing began in the early 1980s. Because it operates on the principles of quantum physics rather than the constraints of classical circuits and electricity, it can process certain highly complex mathematical problems with extraordinary efficiency, and it could one day achieve things that classical computing simply cannot. The evolution of quantum computers has been slow, but progress is accelerating, thanks to the efforts of academic institutions such as Oxford, MIT, and the University of Waterloo, as well as companies like IBM, Microsoft, Google, and Honeywell.
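
For readers new to the field, a minimal sketch of the core principle may help: a qubit is a two-dimensional complex state vector, and n qubits span a 2^n-dimensional space, which is where the efficiency on certain problems comes from. The example below uses only NumPy and is purely illustrative; it is not tied to any vendor's hardware.

```python
import numpy as np

# A classical bit is either 0 or 1. A qubit is a unit vector in C^2:
# |psi> = a|0> + b|1>, with |a|^2 + |b|^2 = 1.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# The Hadamard gate puts a qubit into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
psi = H @ ket0  # (|0> + |1>) / sqrt(2)

# n qubits live in a 2^n-dimensional space: the state of just 50 qubits
# needs 2^50 amplitudes to describe classically, which hints at why some
# problems that are intractable on classical machines may not be on
# quantum hardware.
two_qubits = np.kron(psi, psi)  # 4 amplitudes, each 0.5

print(np.round(np.abs(psi) ** 2, 3))         # [0.5 0.5] measurement probabilities
print(np.round(np.abs(two_qubits) ** 2, 3))  # [0.25 0.25 0.25 0.25]
```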

IBM has held a leadership role in this innovation push and has named optimization as the most likely application for consumers and organizations alike.

Honeywell expects to release what it calls the “world’s most powerful quantum computer” for applications like fraud detection, optimization for trading strategies, security, machine learning, and chemistry and materials science.

DeepMind says reinforcement learning is enough to reach general AI

In their decades-long chase to create artificial intelligence, computer scientists have designed and developed all kinds of complicated mechanisms and technologies to replicate vision, language, reasoning, motor skills, and other abilities associated with intelligent life. While these efforts have resulted in AI systems that can efficiently solve specific problems in limited environments, they fall short of developing the kind of general intelligence seen in humans and animals.

In a new paper submitted to the peer-reviewed Artificial Intelligence journal, scientists at U.K.-based AI lab DeepMind argue that intelligence and its associated abilities will emerge not from formulating and solving complicated problems but by sticking to a simple but powerful principle: reward maximization.

Titled “Reward is Enough,” the paper, still in pre-proof as of this writing, draws inspiration from the evolution of natural intelligence as well as from recent achievements in artificial intelligence. The authors suggest that reward maximization and trial-and-error experience are enough to develop behavior that exhibits the kinds of abilities associated with intelligence. From this, they conclude that reinforcement learning, a branch of AI based on reward maximization, can lead to the development of artificial general intelligence.
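
To make the reward-maximization principle concrete, here is a minimal tabular Q-learning sketch on a toy “chain” world: an agent that only ever receives a reward signal, through trial and error, ends up with goal-directed behavior. The environment, reward, and hyperparameters are invented for illustration and are not from the DeepMind paper.

```python
import random

# Toy chain world: states 0..4, actions 0 (left) and 1 (right);
# the only reward is 1.0 for reaching state 4.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    done = nxt == GOAL
    return nxt, (1.0 if done else 0.0), done

for _ in range(2000):                       # trial-and-error experience
    state, done = 0, False
    while not done:
        # Epsilon-greedy: explore occasionally, otherwise exploit estimates
        # (ties broken at random so early episodes still wander).
        if random.random() < EPSILON:
            action = random.randrange(2)
        else:
            best = max(Q[state])
            action = random.choice([a for a in (0, 1) if Q[state][a] == best])
        nxt, r, done = step(state, action)
        # Update toward the reward-maximizing target: the core of RL.
        target = r if done else r + GAMMA * max(Q[nxt])
        Q[state][action] += ALPHA * (target - Q[state][action])
        state = nxt

print([q.index(max(q)) for q in Q[:GOAL]])  # learned policy: [1, 1, 1, 1]
```

Nothing in the update rule mentions “navigate right”; the behavior emerges solely from maximizing reward, which is the paper's central claim scaled down to a toy.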

Scientists invent AI that creates COVID vaccine candidates within seconds

A team at the Viterbi School of Engineering at the University of Southern California has created something that could turn the tide in how fast vaccines come into existence.

They created an AI framework that can significantly speed up the analysis of COVID vaccine candidates and identify the best preventive medical therapies. This comes at a time when more and more COVID mutations are emerging, calling the efficacy of existing vaccines into question.
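
The paper's actual methods are not reproduced here, but the general pattern such screening frameworks follow can be sketched: enumerate candidate peptide fragments from a viral protein, score each with a predictive model, and keep the top-ranked candidates. Everything below, including the sequence fragment, window size, and toy scoring rule, is hypothetical.

```python
from itertools import islice

# An illustrative stretch of viral protein sequence (placeholder, not the
# team's data) scanned with an epitope-sized sliding window.
PROTEIN = "MFVFLVLLPLVSSQCVNLTTRTQLPPAYTNSFTRGVYYPDKVFRSSVLHS"
WINDOW = 9

def score(peptide):
    """Stand-in for a trained predictor; here, a trivial hydrophobicity proxy."""
    hydrophobic = set("AVLIMFWY")
    return sum(aa in hydrophobic for aa in peptide) / len(peptide)

# Enumerate every window-sized fragment, then rank by predicted score.
candidates = [PROTEIN[i:i + WINDOW] for i in range(len(PROTEIN) - WINDOW + 1)]
ranked = sorted(candidates, key=score, reverse=True)

for peptide in islice(ranked, 3):  # report the top 3 candidates
    print(peptide, round(score(peptide), 2))
```

The speed-up from such a framework comes from replacing slow laboratory screening of each candidate with a model evaluation that takes a fraction of a second.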

Virologists are concerned that the mutations will evolve past the first vaccines. The UK has even set up a genomics consortium to track where these mutations are cropping up. Globally, poorer countries still waiting for access to a vaccine are left as sitting ducks for highly infectious mutations.

MIT is Building a Dynamic, Acrobatic Humanoid Robot

This small-scale humanoid is designed to do parkour over challenging terrain.


For a long time, having a bipedal robot that could walk on a flat surface without falling over (and that could also maybe occasionally climb stairs or something) was a really big deal. But we’re more or less past that now. Thanks to the talented folks at companies like Agility Robotics and Boston Dynamics, we now expect bipedal robots to meet or exceed actual human performance for at least a small subset of dynamic tasks. The next step seems to be to find ways of pushing the limits of human performance, which it turns out means acrobatics. We know that IHMC has been developing their own child-size acrobatic humanoid named Nadia, and now it sounds like researchers from Sangbae Kim’s lab at MIT are working on a new acrobatic robot of their own.

We’ve seen a variety of legged robots from MIT’s Biomimetic Robotics Lab, including Cheetah and HERMES. Recently, they’ve been doing a bunch of work with their spunky little Mini Cheetahs (developed with funding and support from Naver Labs), which are designed for some dynamic stuff like gait exploration and some low-key four-legged acrobatics.

In a paper recently posted to arXiv (to be presented at Humanoids 2020 in July), Matthew Chignoli, Donghyun Kim, Elijah Stanger-Jones, and Sangbae Kim describe “a new humanoid robot design, an actuator-aware kino-dynamic motion planner, and a landing controller as part of a practical system design for highly dynamic motion control of the humanoid robot.” So it’s not just the robot itself, but all of the software infrastructure necessary to get it to do what they want it to do.
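
As a rough, hypothetical illustration of the landing-control idea mentioned above: during flight the body is essentially a projectile, and at touchdown a feedback law on the leg turns the impact velocity into a smooth deceleration. The 1-D point-mass model, PD gains, and numbers below are all invented for this sketch; the actuator-aware planner and controller in the paper are far more sophisticated.

```python
# Drop a 1-D point-mass "robot" and absorb the landing with a PD law on
# leg length. All parameters are illustrative.
M, G, DT = 24.0, 9.81, 0.001      # mass (kg), gravity (m/s^2), timestep (s)
KP, KD = 4000.0, 400.0            # PD gains on leg compression
L_REST = 0.5                      # nominal leg length (m)

z, vz = 1.0, 0.0                  # drop the body from 1 m
heights = []
for _ in range(2000):             # simulate 2 s
    if z > L_REST:                # flight phase: purely ballistic
        f_leg = 0.0
    else:                         # stance phase: PD force cushions the impact
        f_leg = KP * (L_REST - z) - KD * vz
        f_leg = max(f_leg, 0.0)   # the ground can only push, never pull
    az = (f_leg - M * G) / M      # net vertical acceleration
    vz += az * DT                 # explicit Euler integration
    z += vz * DT
    heights.append(z)

print(f"lowest point: {min(heights):.3f} m, settled height: {heights[-1]:.3f} m")
```

Even this toy version shows the two regimes the paper's controller must stitch together: a flight phase the robot cannot influence, and a stance phase where leg forces must be planned within actuator limits.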

Army researchers develop innovative framework for training AI

Army researchers have developed a pioneering framework that provides a baseline for the development of collaborative multi-agent systems.

The framework is detailed in the survey paper “Survey of recent multi-agent learning algorithms utilizing centralized training,” featured in the SPIE Digital Library. The researchers said the work will support the study of reinforcement learning approaches for developing collaborative multi-agent systems, such as teams of robots that could work side by side with future soldiers.

“We propose that the underlying information sharing mechanism plays a critical role in centralized learning for multi-agent systems, but there is limited study of this phenomenon within the research community,” said Army researcher and computer scientist Dr. Piyush K. Sharma of the U.S. Army Combat Capabilities Development Command, known as DEVCOM, Army Research Laboratory. “We conducted this survey of the state of the art in reinforcement learning algorithms and their information sharing paradigms as a basis for asking fundamental questions on centralized learning for multi-agent systems that would improve their ability to work together.”
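
As a rough illustration of what an information-sharing mechanism in centralized learning looks like, here is a toy sketch of centralized training with decentralized execution (CTDE), one common pattern in this literature: a single value table is learned over the agents' joint actions, and each agent then distills its own policy for independent execution. The cooperative matrix game and hyperparameters are illustrative, not from the survey.

```python
import random

# Two agents are rewarded only when their actions match; they must coordinate.
ACTIONS = (0, 1)
ALPHA, EPSILON = 0.1, 0.2

# Centralized training: one value table indexed by the JOINT action, so the
# learner sees both agents' choices at once (the information sharing step).
Q_joint = {(a1, a2): 0.0 for a1 in ACTIONS for a2 in ACTIONS}

def reward(a1, a2):
    return 1.0 if a1 == a2 else 0.0

for _ in range(5000):
    if random.random() < EPSILON:           # joint exploration
        a1, a2 = random.choice(ACTIONS), random.choice(ACTIONS)
    else:                                   # exploit the shared table
        a1, a2 = max(Q_joint, key=Q_joint.get)
    r = reward(a1, a2)
    Q_joint[(a1, a2)] += ALPHA * (r - Q_joint[(a1, a2)])

# Decentralized execution: each agent marginalizes the shared table into its
# own policy, then acts using only local information at run time.
policy1 = max(ACTIONS, key=lambda a: max(Q_joint[(a, b)] for b in ACTIONS))
policy2 = max(ACTIONS, key=lambda b: max(Q_joint[(a, b)] for a in ACTIONS))
print("joint values:", Q_joint, "-> agents choose:", policy1, policy2)
```

Without the shared joint table, each agent would see the other's choices only as noise in its own rewards, which is exactly the coordination failure that centralized training is meant to avoid.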