{"id":215080,"date":"2025-05-30T13:05:19","date_gmt":"2025-05-30T18:05:19","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2025\/05\/team-teaches-ai-models-to-spot-misleading-scientific-reporting"},"modified":"2025-05-30T13:05:19","modified_gmt":"2025-05-30T18:05:19","slug":"team-teaches-ai-models-to-spot-misleading-scientific-reporting","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2025\/05\/team-teaches-ai-models-to-spot-misleading-scientific-reporting","title":{"rendered":"Team teaches AI models to spot misleading scientific reporting"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/team-teaches-ai-models-to-spot-misleading-scientific-reporting2.jpg\"><\/a><\/p>\n<p>Artificial intelligence isn\u2019t always a reliable source of information: large language models (LLMs) like Llama and ChatGPT can be prone to \u201challucinating\u201d and inventing bogus facts. But what if AI could be used to detect mistaken or distorted claims, and help people find their way more confidently through a sea of potential distortions online and elsewhere?<\/p>\n<p>As presented at a workshop at the annual conference of the Association for the Advancement of Artificial Intelligence, researchers at Stevens Institute of Technology <a href=\"https:\/\/openreview.net\/pdf\/17a3c9632a6f71e59171f7a8f245c9dce44cf559.pdf\" target=\"_blank\">present an AI architecture<\/a> designed to do just that, using open-source LLMs and free versions of commercial LLMs to identify potential misleading narratives in <a href=\"https:\/\/techxplore.com\/tags\/news\/\" rel=\"tag\" class=\"\">news<\/a> reports on <a href=\"https:\/\/techxplore.com\/tags\/scientific+discoveries\/\" rel=\"tag\" class=\"\">scientific discoveries<\/a>.<\/p>\n<p>\u201cInaccurate information is a big deal, especially when it comes to scientific content\u2014we hear all the time from doctors who worry about their patients reading things online that aren\u2019t accurate, for instance,\u201d said K.P. Subbalakshmi, the paper\u2019s co-author and a professor in the Department of Electrical and Computer Engineering at Stevens.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial intelligence isn\u2019t always a reliable source of information: large language models (LLMs) like Llama and ChatGPT can be prone to \u201challucinating\u201d and inventing bogus facts. 
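The paper's full architecture isn't reproduced here, but the core idea it describes, prompting an LLM to judge whether popular-press coverage faithfully reflects the underlying study, can be sketched in a few lines. The sketch below is illustrative only: the model name, prompt wording, and FAITHFUL/MISLEADING labels are assumptions, not the pipeline presented at the AAAI workshop.

```python
# Minimal sketch (assumed setup, not the authors' method): ask an open-source
# instruction-tuned LLM whether a news claim overstates a study's finding.
# Requires a recent `transformers` release that accepts chat-format inputs.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-1.5B-Instruct",  # illustrative choice; any open chat LLM works
)

def flag_misleading(claim: str, abstract: str) -> str:
    """Return the model's one-word verdict (FAITHFUL or MISLEADING) plus a short reason."""
    messages = [
        {"role": "system",
         "content": ("You compare a news claim against a study abstract. "
                     "Answer with one word, FAITHFUL or MISLEADING, then a brief reason.")},
        {"role": "user",
         "content": f"Abstract: {abstract}\n\nNews claim: {claim}"},
    ]
    out = generator(messages, max_new_tokens=60)
    # The pipeline returns the full chat; the last message is the model's reply.
    return out[0]["generated_text"][-1]["content"]

print(flag_misleading(
    claim="Scientists prove coffee cures cancer.",
    abstract="We observe a weak correlation between coffee intake and tumor markers in mice.",
))
```

A single prompted classification like this is only a starting point; the appeal of the approach described in the article is that it can be built entirely from freely available models, so the detection step costs little to run over large volumes of science news.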