
The field of robotics took one step forward—followed by another, then several more—when a robot called Rainbow Dash recently taught itself to walk. The four-legged machine required only a few hours to learn to walk backward and forward, and to turn right and left while doing so.

Researchers from Google, UC Berkeley and the Georgia Institute of Technology published a paper on the arXiv preprint server describing the statistical AI technique, known as deep reinforcement learning, that they used to produce this accomplishment, which is significant for several reasons.

Most reinforcement learning deployments take place in computer-simulated environments. Rainbow Dash, however, used this technology to learn to walk in an actual physical environment.

DARPA has established a new partnership with U.S. industry to jointly develop and deploy advanced robotic capabilities in space. The agency has signed an Other Transactions for Prototypes agreement with Space Logistics, LLC, a wholly-owned subsidiary of Northrop Grumman Corporation, as its commercial partner for the Robotic Servicing of Geosynchronous Satellites (RSGS) program.

The RSGS program’s objective is to create a dexterous robotic operational capability in geosynchronous orbit that can extend satellite life spans, enhance resilience, and improve reliability for current U.S. space infrastructure. The first step is the RSGS program’s development of a dexterous robotic servicer, which a commercial enterprise will then operate.

“DARPA remains committed to a commercial partnership for the execution of the RSGS mission,” said Dr. Michael Leahy, director of DARPA’s Tactical Technology Office. “Building upon the successes of the DARPA Orbital Express mission and the recent successful docking of Space Logistics’ Mission Extension Vehicle-1, the agency seeks to bring dexterous on-orbit servicing to spacecraft in geosynchronous orbit (GEO), and to establish that inspection, repair, life extension, and improvement of our valuable GEO assets can be made possible and even routine.”

Autonomous and semi-autonomous systems need active illumination to navigate at night or underground. Switching on visible headlights or some other emitting system like lidar, however, has a significant drawback: It allows adversaries to detect a vehicle’s presence, in some cases from long distances away.

To eliminate this vulnerability, DARPA announced the Invisible Headlights program. The fundamental research effort seeks to discover and quantify information contained in ambient thermal emissions in a wide variety of environments and to create new passive 3D sensors and algorithms to exploit that information.

“We’re aiming to make completely passive navigation in pitch dark conditions possible,” said Joe Altepeter, program manager in DARPA’s Defense Sciences Office. “In the depths of a cave or in the dark of a moonless, starless night with dense fog, current autonomous systems can’t make sense of the environment without radiating some signal—whether it’s a laser pulse, radar or visible light beam—all of which we want to avoid. If it involves emitting a signal, it’s not invisible for the sake of this program.”

The news: A new type of artificial eye, made by combining light-sensing electronics with a neural network on a single tiny chip, can make sense of what it’s seeing in just a few nanoseconds, far faster than existing image sensors.

Why it matters: Computer vision is integral to many applications of AI—from driverless cars to industrial robots to smart sensors that act as our eyes in remote locations—and machines have become very good at responding to what they see. But most image recognition needs a lot of computing power to work. Part of the problem is a bottleneck at the heart of traditional sensors, which capture a huge amount of visual data, regardless of whether or not it is useful for classifying an image. Crunching all that data slows things down.
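To put that bottleneck in rough numbers, here is a back-of-the-envelope sketch; the resolution, bit depth and frame rate below are illustrative assumptions, not figures from the paper.

```python
# Back-of-the-envelope illustration of the sensor readout bottleneck.
# All figures below are illustrative assumptions, not from the Nature paper.
width, height = 1920, 1080      # a conventional full-HD sensor
bits_per_pixel = 10             # a typical raw ADC depth
frames_per_second = 60

raw_rate_bits = width * height * bits_per_pixel * frames_per_second
print(f"raw readout: {raw_rate_bits / 8 / 1e6:.0f} MB/s")  # ~156 MB/s

# Every one of those bits must be digitized and moved off-chip before a
# downstream network ever sees it, even if the scene could be summarized
# by a handful of class scores computed at the sensor itself.
```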

A sensor that captures and processes an image at the same time, without converting or passing around data, makes image recognition much faster using much less power. The design, published in Nature today by researchers at the Institute of Photonics in Vienna, Austria, mimics the way animals’ eyes pre-process visual information before passing it on to the brain.
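As a loose illustration of that idea, the sketch below simulates a single-layer classifier in which each photodiode's responsivity plays the role of a network weight, so the matrix-vector product is computed by the act of sensing itself. The array size, weights and input are invented for the example and are not the chip's actual parameters.

```python
import numpy as np

# Illustrative sketch (not the authors' code): an in-sensor classifier where
# each photodiode's responsivity acts as a network weight. Photocurrents
# generated by incident light are summed along per-class output wires, so
# the sensor computes a matrix-vector product while capturing the image.

rng = np.random.default_rng(0)

N_PIXELS = 9          # a toy 3x3 sensor
N_CLASSES = 3         # number of output wires / classes

# Responsivity of each subpixel (amps per watt). In practice the weights
# would be trained offline and programmed into the device; here they are
# random placeholders. Signed values stand in for paired excitatory and
# inhibitory photodiodes.
responsivity = rng.normal(0.0, 0.5, size=(N_CLASSES, N_PIXELS))

def sense_and_classify(light_intensity):
    """Sum photocurrents on each output wire; the largest current wins."""
    currents = responsivity @ light_intensity  # analog summation, no ADC step
    return int(np.argmax(currents))

# A toy "image": a bright vertical bar on the 3x3 sensor.
image = np.array([0, 1, 0,
                  0, 1, 0,
                  0, 1, 0], dtype=float)

print("predicted class:", sense_and_classify(image))
```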

The scientific revolution was ushered in at the beginning of the 17th century with the development of two of the most important inventions in history — the telescope and the microscope. With the telescope, Galileo turned his attention skyward, and advances in optics led Robert Hooke and Antonie van Leeuwenhoek toward the first use of the compound microscope as a scientific instrument, circa 1665. Today, we are witnessing an information technology-era revolution in microscopy, supercharged by deep learning algorithms that have propelled artificial intelligence to transform industry after industry.

One of the major breakthroughs in deep learning came in 2012, when the performance superiority of a deep convolutional neural network combined with GPUs for image classification was revealed by Hinton and colleagues [1] for the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). In AI’s current innovation and implementation phase, deep learning algorithms are propelling nearly all computer vision-intensive applications, including autonomous vehicles (transportation, military), facial recognition (retail, IT, communications, finance), biomedical imaging (healthcare), autonomous weapons and targeting systems (military), and automation and robotics (military, manufacturing, heavy industry, retail).

It should come as no surprise that the field of microscopy is ripe for transformation by artificial intelligence-aided image processing, analysis and interpretation. In biological research, microscopy generates prodigious amounts of image data; a single experiment with a transmission electron microscope can generate a data set containing over 100 terabytes' worth of images [2]. The myriad instruments and image processing techniques available today can resolve structures ranging in size across nearly 10 orders of magnitude, from single molecules to entire organisms, and capture spatial (3D) as well as temporal (4D) dynamics on time scales of femtoseconds to seconds.

Circa 2015


University of Utah engineers have taken a step forward in creating the next generation of computers and mobile devices capable of speeds millions of times faster than current machines.

The Utah engineers have developed an ultracompact beamsplitter—the smallest on record—for dividing light waves into two separate channels of information. The device brings researchers closer to producing silicon photonic chips that compute and shuttle data with light instead of electrons. Electrical and computer engineering associate professor Rajesh Menon and colleagues describe their invention today in the journal Nature Photonics.

Silicon photonics could significantly increase the power and speed of machines such as supercomputers, data center servers and the specialized computers that direct autonomous cars and drones with collision detection. Eventually, the technology could reach home computers and mobile devices and improve applications from gaming to video streaming.

A new robot has overcome a fundamental challenge of locomotion by teaching itself how to walk.

Researchers from Google developed algorithms that helped the four-legged bot to learn how to walk across a range of surfaces within just hours of practice, annihilating the record times set by its human overlords.

Their system uses deep reinforcement learning, a form of AI that teaches through trial and error by providing rewards for certain actions.
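The sketch below is not the Google team's system, which trains neural-network policies on a physical robot, but a minimal tabular Q-learning example showing the same reward-driven trial-and-error loop on a toy task; the states, actions and hyperparameters are all illustrative.

```python
import random

# Minimal tabular Q-learning sketch (illustrative only; deep RL replaces the
# table with a neural network, but the reward-driven loop is the same).
# Toy task: an agent on a 1-D track learns that stepping "forward" pays off.

ACTIONS = ["forward", "backward"]
N_STATES = 5                      # positions on the track
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration

def step(state, action):
    """Environment: +1 reward for reaching the end of the track."""
    nxt = min(state + 1, N_STATES - 1) if action == "forward" else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Trial and error: usually exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward = step(state, action)
        # The reward signal nudges the action-value estimate toward its target.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

# After training, the learned policy prefers "forward" in every state.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)})
```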

AI/Humans: our brave new world, happening now.


Are we facing a golden digital age or will robots soon run the world? We need to establish ethical standards in dealing with artificial intelligence — and to answer the question: What still makes us as human beings unique?

Mankind is still decades away from self-learning machines that are as intelligent as humans. But already today, chatbots, robots, digital assistants and other artificially intelligent entities exist that can emulate certain human abilities. Scientists and AI experts agree that we are in a race against time: we need to establish ethical guidelines before technology catches up with us. While AI Professor Jürgen Schmidhuber predicts artificial intelligence will be able to control robotic factories in space, the Swedish-American physicist Max Tegmark warns against a totalitarian AI surveillance state, and the philosopher Thomas Metzinger predicts a deadly AI arms race. But Metzinger also believes that Europe in particular can play a pioneering role on the threshold of this new era: creating a binding international code of ethics.