Four years ago, Google began to see the real potential of deploying neural networks to support a large number of new services. It also became clear that, on the existing hardware, if people used voice search for just three minutes per day or dictated to their phones for short periods, Google would have had to double its number of datacenters just to run the resulting machine learning models.

The need for a new architectural approach was clear, Google distinguished hardware engineer Norman Jouppi tells The Next Platform, but it required some radical thinking. As it turns out, that’s exactly what he is known for. One of the chief architects of the MIPS processor, Jouppi has pioneered new technologies in memory systems and is one of the most recognized names in microprocessor design. When he joined Google over three years ago, there were several options on the table for an inference chip to serve up results from models trained on Google’s hybrid CPU and GPU machines for deep learning, but ultimately Jouppi says he never expected to return to what is essentially a CISC device.

We are, of course, talking about Google’s Tensor Processing Unit (TPU), which had not been described in much detail or benchmarked thoroughly until this week. Today, Google released an exhaustive comparison of the TPU’s performance and efficiency against Haswell CPUs and Nvidia Tesla K80 GPUs. We will cover that in more detail in a separate article so we can devote time here to an in-depth exploration of just what’s inside the Google TPU that gives it such a leg up on other hardware for deep learning inference. You can take a look at the full paper, which was just released, and read on for what we were able to glean from Jouppi that the paper doesn’t reveal.
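The comparison centers on inference, where an already-trained model is run with fixed weights, often in reduced precision. As a rough, hedged illustration of why fixed-point arithmetic maps well onto a dedicated matrix unit, here is a minimal NumPy sketch of 8-bit quantized matrix multiplication; the layer shapes, scale factors, and function names are illustrative assumptions on our part, not details taken from Google’s paper or hardware.

```python
import numpy as np

def quantize(x, scale):
    """Map float32 values to int8 using a per-tensor scale (illustrative)."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def int8_dense(x_q, w_q, x_scale, w_scale):
    """8-bit multiply with 32-bit accumulation, then rescale to float.

    This mirrors the general pattern of fixed-point inference (narrow
    multiplies, wide accumulators); it is not the TPU's actual dataflow.
    """
    acc = x_q.astype(np.int32) @ w_q.astype(np.int32)    # wide accumulator
    return acc.astype(np.float32) * (x_scale * w_scale)  # dequantize

# Illustrative shapes and scales
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 256)).astype(np.float32)    # one input vector
w = rng.standard_normal((256, 256)).astype(np.float32)  # one dense layer
y = int8_dense(quantize(x, 0.05), quantize(w, 0.02), 0.05, 0.02)
print(y.shape)  # (1, 256)
```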

Read more

Big Dog marches again.
The US Marine Corps is preparing to resume testing on its four-legged robot, “Spot.”

A project of the Corps’ Warfighting Lab, the dog-sized device is slated to re-enter developmental testing in the fall.

Capt. Mike Malandra, who heads the Warfighting Lab’s science and technology branch, said that Spot’s hydraulic legs may make it more maneuverable than the small, unmanned Modular Advanced Armed Robotic System, which features treads similar to a tank rather than limbs.

Taking a cue from the Marvel Universe, researchers report that they have developed a self-healing polymeric material with an eye toward electronics and soft robotics that can repair themselves. The material is stretchable and transparent, conducts ions to generate current and could one day help your broken smartphone go back together again.

The researchers will present their work today at the 253rd National Meeting & Exposition of the American Chemical Society (ACS).

“When I was young, my idol was Wolverine from the X-Men,” Chao Wang, Ph.D., says. “He could save the world, but only because he could heal himself. A self-healing material, when carved into two parts, can go back together like nothing has happened, just like our human skin. I’ve been researching making a self-healing lithium ion battery, so when you drop your cell phone, it could fix itself and last much longer.”

Read more

A team of researchers has developed artificial synapses that are capable of learning autonomously and can improve how fast artificial neural networks learn.

Developments and advances in artificial intelligence (AI) have been due in large part to technologies that mimic how the human brain works. In the world of information technology, such AI systems are called neural networks. These contain algorithms that can be trained, among other things, to imitate how the brain recognizes speech and images. However, running an artificial neural network consumes a lot of time and energy.

Now, researchers from the French National Center for Scientific Research (CNRS), Thales, and the Universities of Bordeaux, Paris-Sud, and Evry have developed an artificial synapse, called a memristor, directly on a chip. It paves the way for intelligent systems that require less time and energy to learn and that can learn autonomously.
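The announcement does not spell out the device’s learning rule, but the general idea of a synapse whose strength changes with the history of its activity can be sketched with a toy Hebbian-style update; the function, learning rate, and conductance bounds below are illustrative assumptions, not the CNRS team’s model.

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.01, w_min=0.0, w_max=1.0):
    """Toy synapse model: conductance-like weights strengthen when pre- and
    post-synaptic activity coincide, loosely analogous to a memristor whose
    resistance depends on the history of current through it.
    Learning rate and bounds are illustrative assumptions."""
    w = w + lr * np.outer(post, pre)   # Hebbian co-activity term
    return np.clip(w, w_min, w_max)    # device conductance is bounded

# Illustrative use: 4 input neurons connected to 3 output neurons
rng = np.random.default_rng(1)
w = rng.uniform(0.0, 1.0, size=(3, 4))
pre = np.array([1.0, 0.0, 1.0, 0.0])   # presynaptic activity
post = np.array([1.0, 1.0, 0.0])       # postsynaptic response
w = hebbian_update(w, pre, post)
```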

Read more

It isn’t easy to capture the best shots in a golf tournament that is being televised. And that’s why IBM is applying the artificial intelligence of its Watson platform to the task of identifying the best shots at The Masters golf tournament.

For the first time at a sporting event, IBM is harnessing Watson’s ability to see, hear, and learn to identify great shots based on crowd noise, player gestures, and other indicators. IBM Watson will create its own highlight reels.

With 90 golfers playing multiple rounds over four days, video from every tee, every hole, and multiple camera angles can quickly add up to thousands of hours of footage.
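IBM has not published how the system weighs its inputs, but the general pattern of fusing several weak indicators into a single highlight score can be sketched as a simple weighted combination; the signal names, weights, and clip labels below are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class ClipSignals:
    """Per-clip indicators, each normalized to [0, 1] upstream (illustrative)."""
    crowd_noise: float     # audio energy / cheering detector
    player_gesture: float  # e.g. a fist pump spotted by a vision model
    other_indicator: float # stand-in for additional cues

def highlight_score(s: ClipSignals, weights=(0.5, 0.3, 0.2)) -> float:
    """Fuse the indicators into one score; weights are illustrative, not IBM's."""
    w_noise, w_gesture, w_other = weights
    return (w_noise * s.crowd_noise
            + w_gesture * s.player_gesture
            + w_other * s.other_indicator)

# Rank candidate clips and keep the highest scoring ones for a reel
clips = {
    "hole_16_tee_shot": ClipSignals(0.9, 0.8, 0.7),
    "hole_02_putt": ClipSignals(0.4, 0.1, 0.3),
}
reel = sorted(clips, key=lambda name: highlight_score(clips[name]), reverse=True)
print(reel)
```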

Read more

Deep learning owes its rising popularity to its vast applications across an increasing number of fields. From healthcare to finance, automation to e-commerce, the RE•WORK Deep Learning Summit (27–28 April) will showcase the deep learning landscape and its impact on business and society.

Of notable interest is speaker Jeffrey De Fauw, Research Engineer at DeepMind. Prior to joining DeepMind, De Fauw developed a deep learning model to detect Diabetic Retinopathy (DR) in fundus images, which he will be presenting at the Summit. DR is a leading cause of blindness in the developed world, and diagnosing it is a time-consuming process. De Fauw’s model was designed to reduce diagnosis time and to accurately identify patients at risk, helping them receive treatment as early as possible.

Joining De Fauw will be Brian Cheung, a PhD student at UC Berkeley who is currently working at Google Brain. At the event, he will explain how neural network models are able to extract relevant features from data with minimal feature engineering. Applied to the study of physiology, his research aims to use a retinal lattice model to examine retinal images.
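Neither speaker’s architecture is described here, but the general approach both talks touch on, learning image features directly from raw fundus or retinal images with minimal hand engineering, can be sketched with a small convolutional network; the input size, layer widths, and five-grade DR labeling below are illustrative assumptions, not either researcher’s actual model.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_dr_classifier(input_shape=(512, 512, 3), num_classes=5):
    """Minimal CNN sketch: convolutions learn features from raw pixels,
    a final dense layer predicts an (assumed) five-grade DR severity label."""
    model = tf.keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_dr_classifier()
model.summary()
```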

Read more