{"id":125720,"date":"2021-08-01T05:23:12","date_gmt":"2021-08-01T12:23:12","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2021\/08\/the-future-of-deep-learning-is-photonic"},"modified":"2021-08-01T05:23:12","modified_gmt":"2021-08-01T12:23:12","slug":"the-future-of-deep-learning-is-photonic","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2021\/08\/the-future-of-deep-learning-is-photonic","title":{"rendered":"The Future of Deep Learning Is Photonic"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/the-future-of-deep-learning-is-photonic.jpg\"><\/a><\/p>\n<p>Over the years, deep learning has required an ever-growing number of these multiply-and-accumulate operations. Consider LeNet, a pioneering deep neural network, designed to do image classification. In 1998 it was shown to outperform other machine techniques for recognizing handwritten letters and numerals. But by 2012 AlexNet, a neural network that crunched through about 1600 times as many multiply-and-accumulate operations as LeNet, was able to recognize thousands of different types of objects in images.<\/p>\n<p>Advancing from LeNet\u2019s initial success to AlexNet required almost 11 doublings of computing performance. During the 14 years that took, Moore\u2019s law provided much of that increase. The challenge has been to keep this trend going now that Moore\u2019s law is running out of steam. The usual solution is simply to throw more computing resources\u2014along with time, money, and energy\u2014at the problem.<\/p>\n<p>As a result, training today\u2019s large neural networks often has a significant environmental footprint. 
One <a href=\"https:\/\/arxiv.org\/abs\/1906.02243\">2019 study<\/a> found, for example, that training a certain deep neural network for natural-language processing produced five times the CO<sub>2<\/sub> emissions typically associated with driving an automobile over its lifetime.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Over the years, deep learning has required an ever-growing number of multiply-and-accumulate operations. Consider LeNet, a pioneering deep neural network designed for image classification. In 1998 it was shown to outperform other machine-learning techniques for recognizing handwritten letters and numerals. But by 2012 AlexNet, a neural network that crunched through about 1,600 times [\u2026]<\/p>\n","protected":false},"author":396,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6,1491],"tags":[],"class_list":["post-125720","post","type-post","status-publish","format-standard","hentry","category-robotics-ai","category-transportation"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/125720","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/396"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=125720"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/125720\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=125720"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=125720"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=125720"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}