
Stanford University PhD candidate Song Han, who works under advisor and networking pioneer Dr. Bill Dally, responded in a soft-spoken and thoughtful way to the question of whether the coupled software and hardware architecture he developed might change the world.

Instead of answering the question directly, he pointed to the range of present and future applications that depend on near real-time inference for complex deep neural networks. It was a roundabout way of showing not only why the work he is pursuing is revolutionary, but also why the missing pieces he is filling in have so far held neural network-driven services back.

There is one large barrier to the future Han considers imminent, a future pushed forward by an existing range of neural network-driven applications powering the consumer economy and, over time, the enterprise. That barrier is less broadly technical than it is a matter of efficiency. These applications are typically delivered through lightweight, power-aware devices, which raises the question of how much computation can effectively be packed into the memory of such devices, and at what cost to battery life or overall power. Beyond the devices themselves, the same concerns apply at a far larger scale in the datacenter, where the bulk of the inference is handled.
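To make the memory question concrete, here is a back-of-the-envelope sketch, not Han's actual architecture, of how pruning away weights and storing the survivors at lower precision shrinks the memory a network needs on a device. The layer sizes, pruning ratio, and bit widths below are illustrative assumptions.

```python
# Illustrative arithmetic only: how a network's weight storage changes when
# the model is pruned and quantized. All numbers below are hypothetical.

# (in_features, out_features) for a few hypothetical fully connected layers
LAYERS = [(224 * 224, 4096), (4096, 4096), (4096, 1000)]

BYTES_FP32 = 4    # dense 32-bit floating-point weights
BYTES_INT8 = 1    # 8-bit quantized weights
PRUNE_KEEP = 0.1  # assume only 10% of weights survive pruning

def weight_count(layers):
    """Total number of weights across all layers (biases ignored)."""
    return sum(n_in * n_out for n_in, n_out in layers)

def footprint_mb(n_weights, bytes_per_weight, keep_fraction=1.0):
    """Approximate storage in megabytes for the surviving weights."""
    return n_weights * keep_fraction * bytes_per_weight / (1024 ** 2)

n = weight_count(LAYERS)
dense = footprint_mb(n, BYTES_FP32)
compressed = footprint_mb(n, BYTES_INT8, PRUNE_KEEP)

print(f"weights: {n:,}")
print(f"dense fp32 footprint:    {dense:8.1f} MB")
print(f"pruned + int8 footprint: {compressed:8.1f} MB")
print(f"reduction factor:        {dense / compressed:8.1f}x")
```

Under these assumptions the compressed model fits in a fraction of the memory, which is the kind of gap that decides whether inference can run on the device at all or must be shipped off to the datacenter.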

Read more

Researchers from the University of St. Andrews and Harvard Medical School have discovered a new method of closing wounds.

In the future, your wounds might not be closed by stitches or staples. Instead, they could be sealed with lasers.

Rose Bengal, a common dye used by optometrists, can be used in tandem with a laser to seal wounds. Notably, the dye only works as deep as the laser reaches, which makes the technique less effective for wounds that penetrate many layers of skin; for a range of other injuries, however, it eliminates the need for staples and traditional sutures or stitches.

Read more

What if computers could recognize objects as well as the human brain can? Electrical engineers at the University of California, San Diego have taken an important step toward that goal by developing a pedestrian detection system that runs in near real-time (2 to 4 frames per second) with close to half the error rate of existing systems. The technology, which incorporates deep learning models, could be used in "smart" vehicles, robotics, and image and video search systems.
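For a sense of what a pedestrian detection pipeline looks like in practice, here is a minimal sketch, not the UCSD system described above, that runs a stock pretrained detector from torchvision over a single image and keeps only the "person" detections. The model choice, confidence threshold, and image filename are assumptions made for illustration.

```python
# Generic pedestrian-detection sketch using a pretrained torchvision model.
# Not the UCSD cascade system; thresholds and file names are hypothetical.
import time

import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

PERSON_CLASS_ID = 1    # "person" in the COCO label set used by this model
SCORE_THRESHOLD = 0.7  # arbitrary confidence cutoff for this illustration

# Pretrained COCO detector (assumes torchvision >= 0.13 for the weights API).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Load a hypothetical street scene and scale pixel values to [0, 1] floats.
image = convert_image_dtype(read_image("street_scene.jpg"), torch.float)

start = time.perf_counter()
with torch.no_grad():
    predictions = model([image])[0]  # dict with 'boxes', 'labels', 'scores'
elapsed = time.perf_counter() - start

# Keep only confident "person" detections.
keep = (predictions["labels"] == PERSON_CLASS_ID) & (predictions["scores"] > SCORE_THRESHOLD)
boxes = predictions["boxes"][keep]
scores = predictions["scores"][keep]

print(f"found {len(boxes)} pedestrians in {elapsed:.2f} s ({1 / elapsed:.1f} FPS)")
for (x1, y1, x2, y2), score in zip(boxes.tolist(), scores.tolist()):
    print(f"  box=({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}) score={score:.2f}")
```

On CPU a heavyweight detector like this typically runs well below the 2 to 4 frames per second the UCSD team reports, which is exactly the gap that purpose-built, efficiency-minded systems aim to close.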

Read more