{"id":192098,"date":"2024-06-30T12:25:48","date_gmt":"2024-06-30T17:25:48","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2024\/06\/like-a-child-this-brain-inspired-ai-can-explain-its-reasoning"},"modified":"2024-06-30T12:25:48","modified_gmt":"2024-06-30T17:25:48","slug":"like-a-child-this-brain-inspired-ai-can-explain-its-reasoning","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2024\/06\/like-a-child-this-brain-inspired-ai-can-explain-its-reasoning","title":{"rendered":"Like a Child, This Brain-Inspired AI Can Explain Its Reasoning"},"content":{"rendered":"<p style=\"padding-right: 20px\"><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/like-a-child-this-brain-inspired-ai-can-explain-its-reasoning3.jpg\"><\/a><\/p>\n<p>But deep learning has a massive drawback: The algorithms can\u2019t justify their answers. Often called the \u201cblack box\u201d problem, this opacity stymies their use in high-risk situations, such as in medicine. Patients want an explanation when diagnosed with a life-changing disease. For now, deep learning-based algorithms\u2014even if they have high diagnostic accuracy\u2014can\u2019t provide that information.<\/p>\n<p>To open the black box, a team from the University of Texas Southwestern Medical Center tapped the human mind for inspiration. In <a href=\"https:\/\/www.nature.com\/articles\/s43588-024-00593-9\">a study<\/a> in <em>Nature Computational Science<\/em>, they combined principles from the study of brain networks with a more traditional AI approach that relies on explainable building blocks.<\/p>\n<p>The resulting AI acts a bit like a child. It condenses different types of information into \u201chubs.\u201d Each hub is then transcribed into coding guidelines for humans to read\u2014CliffsNotes for programmers that explain the algorithm\u2019s conclusions about patterns it found in the data in plain English. 
It can also generate fully executable programming code for users to test.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>But deep learning has a massive drawback: The algorithms can\u2019t justify their answers. Often called the \u201cblack box\u201d problem, this opacity stymies their use in high-risk situations, such as in medicine. Patients want an explanation when diagnosed with a life-changing disease. For now, deep learning-based algorithms\u2014even if they have high diagnostic accuracy\u2014can\u2019t provide that information. [\u2026]<\/p>\n","protected":false},"author":513,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[11,41,6],"tags":[],"class_list":["post-192098","post","type-post","status-publish","format-standard","hentry","category-biotech-medical","category-information-science","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/192098","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/513"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=192098"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/192098\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=192098"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=192098"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=192098"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}