{"id":137790,"date":"2022-04-07T03:02:50","date_gmt":"2022-04-07T08:02:50","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2022\/04\/new-method-compares-machine-learning-models-reasoning-to-that-of-a-human"},"modified":"2022-04-07T03:02:50","modified_gmt":"2022-04-07T08:02:50","slug":"new-method-compares-machine-learning-models-reasoning-to-that-of-a-human","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2022\/04\/new-method-compares-machine-learning-models-reasoning-to-that-of-a-human","title":{"rendered":"New method compares machine-learning model\u2019s reasoning to that of a human"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/new-method-compares-machine-learning-models-reasoning-to-that-of-a-human.jpg\"><\/a><\/p>\n<p>In machine learning, understanding why a model makes certain decisions is often just as important as whether those decisions are correct. For instance, a machine-learning model might correctly predict that a skin lesion is cancerous, but it could have done so using an unrelated blip on a clinical photo.<\/p>\n<p>While tools exist to help experts make sense of a model\u2019s reasoning, often these methods only provide insights on one decision at a time, and each must be manually evaluated. Models are commonly trained using millions of data inputs, making it almost impossible for a human to evaluate enough decisions to identify patterns.<\/p>\n<p>Now, researchers at MIT and IBM Research have created a method that enables a user to aggregate, sort, and rank these individual explanations to rapidly analyze a <a href=\"https:\/\/techxplore.com\/tags\/machine-learning+model\/\" rel=\"tag\" class=\"\">machine-learning model<\/a>\u2019s behavior. Their technique, called Shared Interest, incorporates quantifiable metrics that compare how well a model\u2019s reasoning matches that of a human.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In machine learning, understanding why a model makes certain decisions is often just as important as whether those decisions are correct. For instance, a machine-learning model might correctly predict that a skin lesion is cancerous, but it could have done so using an unrelated blip on a clinical photo. While tools exist to help experts [\u2026]<\/p>\n","protected":false},"author":662,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[11,6],"tags":[],"class_list":["post-137790","post","type-post","status-publish","format-standard","hentry","category-biotech-medical","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/137790","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/662"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=137790"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/137790\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=137790"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=137790"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=137790"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}