
See Spot Run, Again: Marines Resume Testing on Quadruped Robot

Big Dog marches again.


The US Marine Corps is preparing to resume testing on its four-legged robot, “Spot.”

A project of the Corps’ Warfighting Lab, the dog-sized device is slated to re-enter developmental testing in the fall.

Capt. Mike Malandra, who heads the Warfighting Lab’s science and technology branch, said that Spot’s hydraulic legs may make it more maneuverable than the small, unmanned Modular Advanced Armed Robotic System, which moves on tank-like treads rather than legs.

Materials may lead to self-healing smartphones

Taking a cue from the Marvel Universe, researchers report that they have developed a self-healing polymeric material with an eye toward electronics and soft robotics that can repair themselves. The material is stretchable and transparent, conducts ions to generate current, and could one day help a broken smartphone put itself back together.

The researchers will present their work today at the 253rd National Meeting & Exposition of the American Chemical Society (ACS).

“When I was young, my idol was Wolverine from the X-Men,” Chao Wang, Ph.D., says. “He could save the world, but only because he could heal himself. A self-healing material, when carved into two parts, can go back together like nothing has happened, just like our human skin. I’ve been researching making a self-healing lithium ion battery, so when you drop your cell phone, it could fix itself and last much longer.”

Ray Kurzweil responds to fears

Ray is not worried about A.I., though he does not dismiss the dangers.


James Bedsol interviewed Ray Kurzweil, one of the world’s leading minds on artificial intelligence, technology and futurism, in his Google office in Mountain View, CA, February 15, 2017.

Who is Raymond “Ray” Kurzweil?

Kurzweil is one of the world’s leading minds on artificial intelligence, technology and futurism. He is the author of five national best-selling books, including “The Singularity is Near” and “How to Create a Mind.”

We Just Created an Artificial Synapse That Can Learn Autonomously

A team of researchers has developed artificial synapses that are capable of learning autonomously and can improve how fast artificial neural networks learn.

Developments and advances in artificial intelligence (AI) have been due in large part to technologies that mimic how the human brain works. In the world of information technology, such AI systems are called neural networks: algorithms that can be trained, among other things, to imitate how the brain recognizes speech and images. However, training and running an artificial neural network consumes a lot of time and energy.
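To make the idea of a "trainable algorithm" concrete, here is a minimal sketch of a single artificial neuron (a perceptron) learning the logical AND function in plain Python. It is a toy illustration, not any system described in this article, but the repeated weight-update loop it runs is exactly the step that becomes time- and energy-hungry when scaled to millions of weights.

```python
import random

def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a two-input perceptron by repeated weight nudges."""
    random.seed(0)
    w = [random.uniform(-0.5, 0.5) for _ in range(2)]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out            # 0 when the neuron is right
            w[0] += lr * err * x1         # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Truth table for logical AND: ((inputs), expected output)
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w, b = train_perceptron(AND)
for (x1, x2), _ in AND:
    pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    print((x1, x2), "->", pred)   # correctly classifies all four cases
```

Real networks stack thousands of such units and repeat the update loop over huge datasets, which is where the time and energy costs come from.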

Now, researchers from the French National Center for Scientific Research (CNRS), Thales, and the universities of Bordeaux, Paris-Sud, and Évry have developed an artificial synapse, called a memristor, directly on a chip. It paves the way for intelligent systems that require less time and energy to learn, and it can learn autonomously.
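A memristor behaves roughly like a resistor whose conductance can be nudged up or down by voltage pulses, and that conductance plays the role of a synaptic weight. The toy model below is an illustrative sketch of that general behavior, with invented parameter values; it is not a model of the CNRS/Thales device itself.

```python
class Memristor:
    """Toy memristive synapse: conductance acts as a trainable weight."""

    def __init__(self, g=0.5, g_min=0.0, g_max=1.0, step=0.05):
        self.g, self.g_min, self.g_max, self.step = g, g_min, g_max, step

    def pulse(self, polarity):
        """Apply one write pulse: +1 potentiates, -1 depresses.

        The conductance change is clamped to the device's physical range.
        """
        self.g = min(self.g_max, max(self.g_min, self.g + polarity * self.step))

    def read(self, v):
        """Current through the device at read voltage v (simple Ohmic model)."""
        return self.g * v

syn = Memristor()
for _ in range(4):        # four potentiating pulses strengthen the synapse
    syn.pulse(+1)
print(round(syn.g, 2))    # 0.7
```

Because the weight is stored and updated in the device itself rather than shuttled to and from separate memory, learning in hardware like this can be far cheaper in time and energy than the software loop sketched earlier.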

IBM Watson AI will help spot great shots at The Masters golf tournament

It isn’t easy to capture the best shots in a televised golf tournament. That’s why IBM is applying the artificial intelligence of its Watson platform to the task of identifying the best shots at The Masters.

For the first time at a sporting event, IBM is harnessing Watson’s ability to see, hear, and learn to identify great shots based on crowd noise, player gestures, and other indicators. IBM Watson will create its own highlight reels.
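The general idea of ranking clips by a weighted combination of indicator signals can be sketched in a few lines. Everything below, including the signal names, weights, and clip data, is invented for illustration; this is not IBM's actual Watson model.

```python
# Invented weights for two normalized (0..1) indicator signals per clip.
WEIGHTS = {"crowd_noise": 0.6, "player_gesture": 0.4}

def highlight_score(clip):
    """Weighted sum of a clip's indicator signals."""
    return sum(w * clip[signal] for signal, w in WEIGHTS.items())

clips = [
    {"id": "hole12_tee", "crowd_noise": 0.9, "player_gesture": 0.8},
    {"id": "hole3_putt", "crowd_noise": 0.4, "player_gesture": 0.2},
    {"id": "hole16_ace", "crowd_noise": 1.0, "player_gesture": 1.0},
]

# Keep the two highest-scoring clips for the highlight reel.
reel = sorted(clips, key=highlight_score, reverse=True)[:2]
print([c["id"] for c in reel])   # ['hole16_ace', 'hole12_tee']
```

The hard part in practice is producing those normalized signals in the first place, which is where Watson's ability to "see" and "hear" raw footage comes in.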

With 90 golfers playing multiple rounds over four days, video from every tee, every hole, and multiple camera angles can quickly add up to thousands of hours of footage.

Where is Deep Learning Being Applied? More from RE•WORK Global Summits

Deep learning owes its rising popularity to its vast applications across an increasing number of fields. From healthcare to finance, automation to e-commerce, the RE•WORK Deep Learning Summit (27–28 April) will showcase the deep learning landscape and its impact on business and society.

Of notable interest is speaker Jeffrey De Fauw, Research Engineer at DeepMind. Prior to joining DeepMind, De Fauw developed a deep learning model to detect Diabetic Retinopathy (DR) in fundus images, which he will be presenting at the Summit. DR is a leading cause of blindness in the developed world and diagnosing it is a time-consuming process. De Fauw’s model was designed to reduce diagnostics time and to accurately identify patients at risk, to help them receive treatment as early as possible.

Joining De Fauw will be Brian Cheung, a PhD student at UC Berkeley currently working at Google Brain. At the event, he will explain how neural network models can extract relevant features from data with minimal feature engineering. In the study of physiology, his research uses a retinal lattice model to examine retinal images.

Enlitic To Partner With Paiyipai To Deploy Deep Learning In Health Check Centers Across China

SAN FRANCISCO, April 4, 2017 /PRNewswire/ — Enlitic, a medical deep learning company, is pleased to announce that it has executed a Memorandum of Understanding (“MOU”) with Beijing Hao Yun Dao Information & Technology Co., Ltd (“Paiyipai”) to provide Enlitic’s deep learning solution to Paiyipai for diagnostic imaging in Health Check centers across China.

Paiyipai is a medical big data company. The company is a market leader in China in the analysis of individual laboratory medical test results, and the storage and distribution of user medical records.

The MOU forms the basis of collaboration for the first large-scale commercial deployment of Enlitic’s deep learning technology in China. It was executed following a successful 10,000 chest x-ray trial of Enlitic’s patient triage platform.