Rice University computer scientists have adapted a widely used technique for rapid data lookup to slash the amount of computation — and thus energy and time — required for deep learning, a computationally intense form of machine learning.
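The article does not spell out the mechanics, but the rapid-lookup technique being adapted is hashing. A minimal, hypothetical sketch of the general idea, using locality-sensitive hash tables to restrict a layer's computation to a few candidate neurons, might look like the following; the dimensions, hash parameters, and function names are illustrative assumptions, not the researchers' implementation:

```python
import numpy as np

# Illustrative sketch (not the authors' code): signed-random-projection
# (SimHash) tables pick a small set of candidate neurons whose weight
# vectors are likely to align with the input, so only those neurons are
# actually computed instead of the full layer.

rng = np.random.default_rng(0)

D, N = 128, 4096              # input dimension, neurons in a wide layer
K, L = 12, 8                  # hash bits per table, number of tables

weights = rng.standard_normal((N, D))         # one weight vector per neuron
projections = rng.standard_normal((L, K, D))  # random hyperplanes per table

def hash_codes(vectors):
    """Map vectors to one integer bucket id per hash table (SimHash)."""
    # Sign of each random projection -> K bits -> packed into an integer.
    bits = (np.einsum('lkd,nd->lnk', projections, vectors) > 0).astype(np.uint32)
    return (bits << np.arange(K, dtype=np.uint32)).sum(axis=2)  # shape (L, n)

# Build L hash tables mapping bucket id -> neuron ids (done once, offline).
tables = [dict() for _ in range(L)]
codes = hash_codes(weights)                   # shape (L, N)
for l in range(L):
    for neuron, bucket in enumerate(codes[l]):
        tables[l].setdefault(int(bucket), []).append(neuron)

def active_neurons(x):
    """Look up the input in every table and return the union of candidates."""
    x_codes = hash_codes(x[None, :])          # shape (L, 1)
    candidates = set()
    for l in range(L):
        candidates.update(tables[l].get(int(x_codes[l, 0]), []))
    return np.fromiter(candidates, dtype=np.int64)

x = rng.standard_normal(D)
idx = active_neurons(x)
# Compute only the selected neurons instead of all N of them.
sparse_out = weights[idx] @ x
print(f"computed {idx.size} of {N} neurons")
```

Because the lookup cost grows far more slowly than the layer size, the fraction of neurons that must be computed shrinks as the network gets wider, which is the sublinear scaling the researchers describe.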
“This applies to any deep-learning architecture, and the technique scales sublinearly, which means that the larger the deep neural network to which this is applied, the more the savings in computations there will be,” said lead researcher Anshumali Shrivastava, an assistant professor of computer science at Rice.
The research will be presented in August at the KDD 2017 conference in Halifax, Nova Scotia. It addresses one of the biggest issues facing tech giants like Google, Facebook and Microsoft as they race to build, train and deploy massive deep-learning networks for a growing range of products as diverse as self-driving cars, language translators and intelligent replies to emails.