A group of researchers at the University of Tokyo and their collaborators showed that a virtual reality system can ease phantom limb pain: by creating the illusion that patients are moving their absent limbs at will, and having them repeat this exercise, the system helped reduce their perceived pain.
Intel has planted some solid stakes in the ground for the future of deep learning over the last month with its acquisition of deep learning chip startup, Nervana Systems, and most recently, mobile and embedded machine learning company, Movidius.
These new pieces will snap into Intel’s still-forming puzzle for capturing the supposed billion-plus dollar market ahead for deep learning, which is complemented by its own Knights Mill effort and software optimization work on machine learning codes and tooling. At the same time, just down the coast, Nvidia is firming up the market for its own GPU training and inference chips as well as its own hardware outfitted with the latest Pascal GPUs and requisite deep learning libraries.
While Intel’s efforts have garnered significant headlines recently with that surprising pair of acquisitions, a move that is pushing Nvidia to make a harder case for GPU acceleration (thus far the dominant compute engine for model training), the company still has work to do to capture mindshare in this emerging market. Further complicating matters, the last two years have brought a number of newcomers to the field: deep learning chip upstarts touting the idea that general purpose architectures (including GPUs) cannot compare to a low-precision, fixed-point, specialized approach. In fact, we could be moving into a “Cambrian explosion” for computer architecture, one brought about by the new requirements of deep learning. That assumes, of course, that there are enough applications and users in a short enough window that the chip startups don’t fall over waiting for their big bang.
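To make the low-precision, fixed-point idea concrete, here is a minimal sketch, not any vendor’s actual implementation, of what these upstarts are betting on: storing weights as small integers with a shared scale factor, trading a little accuracy for far cheaper arithmetic and memory traffic. The function names and the 8-bit choice are illustrative assumptions, not drawn from any specific chip.

```python
# Illustrative sketch of symmetric fixed-point quantization: floats are
# mapped onto signed integers (e.g. 8-bit) sharing one scale factor.

def quantize_fixed_point(values, bits=8):
    """Map a list of floats onto signed fixed-point integers with one shared scale."""
    qmax = 2 ** (bits - 1) - 1          # e.g. 127 for 8 bits
    scale = max(abs(v) for v in values) / qmax or 1.0  # avoid zero scale
    quantized = [round(v / scale) for v in values]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate floats from the integer representation."""
    return [q * scale for q in quantized]

weights = [0.91, -0.42, 0.07, -1.30]
q, s = quantize_fixed_point(weights)
approx = dequantize(q, s)
```

Each recovered value differs from the original by at most half the scale step, which is the kind of bounded error deep learning inference has proven tolerant of, and the reason specialized chips can lean on narrow integer arithmetic instead of 32-bit floating point.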
In an attempt to bring the next generation of computers to life, teams around the globe have been working with carbon nanotubes, one of the most conductive materials ever discovered. Now, for the first time, scientists have built a carbon nanotube transistor that beats silicon, running almost twice as fast as its silicon counterparts.

This is a big deal because for decades scientists have been trying to figure out how to build the next generation of computers using carbon nanotube components, whose unique properties could form the basis of faster devices that consume far less power.