{"id":132490,"date":"2021-12-16T00:24:53","date_gmt":"2021-12-16T08:24:53","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2021\/12\/data-frugal-deep-learning-optimizes-microstructure-imaging"},"modified":"2021-12-16T00:24:53","modified_gmt":"2021-12-16T08:24:53","slug":"data-frugal-deep-learning-optimizes-microstructure-imaging","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2021\/12\/data-frugal-deep-learning-optimizes-microstructure-imaging","title":{"rendered":"Data-frugal deep learning optimizes microstructure imaging"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/data-frugal-deep-learning-optimizes-microstructure-imaging3.jpg\"><\/a><\/p>\n<p>Most often, we recognize deep learning as the magic behind self-driving cars and facial recognition, but what about its ability to safeguard the quality of the materials that make up these advanced devices? Professor of Materials Science and Engineering Elizabeth Holm and Ph.D. student Bo Lei have adopted computer vision methods for microstructural images that not only require a fraction of the data deep learning typically relies on but can save materials researchers an abundance of time and money.<\/p>\n<p>Quality control in materials processing requires the analysis and classification of complex material microstructures. For instance, the properties of some high strength steels depend on the amount of lath-type bainite in the material. However, the process of identifying bainite in microstructural images is time-consuming and expensive as researchers must first use two types of <a href=\"https:\/\/techxplore.com\/tags\/microscopy\/\" rel=\"tag\" class=\"\">microscopy<\/a> to take a closer look and then rely on their own expertise to identify bainitic regions. 
\u201cIt\u2019s not like identifying a person crossing the street when you\u2019re driving a car,\u201d Holm explained. \u201cIt\u2019s very difficult for humans to categorize, so we will benefit a lot from integrating a <a href=\"https:\/\/techxplore.com\/tags\/deep+learning+approach\/\" rel=\"tag\" class=\"\">deep learning approach<\/a>.\u201d<\/p>\n<p>Their approach is very similar to that of the wider computer-vision community that drives facial recognition. The model is trained on existing material microstructure images to evaluate new images and interpret their classification. While companies like Facebook and Google train their models on millions or billions of images, materials scientists rarely have access to even ten thousand images. Therefore, it was vital that Holm and Lei use a \u201cdata-frugal method\u201d and train their model using only 30\u201350 microscopy images. \u201cIt\u2019s like learning how to read,\u201d Holm explained. \u201cOnce you\u2019ve learned the alphabet, you can apply that knowledge to any book. We are able to be data-frugal in part because these systems have already been trained on a large database of natural images.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Most often, we recognize deep learning as the magic behind self-driving cars and facial recognition, but what about its ability to safeguard the quality of the materials that make up these advanced devices? Professor of Materials Science and Engineering Elizabeth Holm and Ph.D. 
student Bo Lei have adopted computer vision methods for microstructural images that [\u2026]<\/p>\n","protected":false},"author":427,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6,1491],"tags":[],"class_list":["post-132490","post","type-post","status-publish","format-standard","hentry","category-robotics-ai","category-transportation"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/132490","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/427"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=132490"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/132490\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=132490"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=132490"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=132490"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}