{"id":176567,"date":"2023-11-23T10:22:29","date_gmt":"2023-11-23T16:22:29","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2023\/11\/scientists-warn-that-ai-threatens-science-itself"},"modified":"2023-11-23T10:22:29","modified_gmt":"2023-11-23T16:22:29","slug":"scientists-warn-that-ai-threatens-science-itself","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2023\/11\/scientists-warn-that-ai-threatens-science-itself","title":{"rendered":"Scientists Warn That AI Threatens Science Itself"},"content":{"rendered":"<p>What role should text-generating large language models (LLMs) have in the scientific research process? According to a team of Oxford scientists, the answer \u2014 at least for now \u2014 is: pretty much none.<\/p>\n<p>In a <a href=\"https:\/\/www.nature.com\/articles\/s41562-023-01744-0\" class=\"\">new essay<\/a>, researchers from the Oxford Internet Institute argue that scientists should abstain from using LLM-powered tools like chatbots to assist in scientific research, on the grounds that AI\u2019s penchant for hallucinating and fabricating facts, combined with the human tendency to anthropomorphize these human-mimicking word engines, could lead to larger information breakdowns \u2014 a fate that could ultimately threaten the fabric of science itself.<\/p>\n<p>\u201cOur tendency to anthropomorphize machines and trust models as human-like truth-tellers, consuming and spreading the bad information that they produce in the process,\u201d the researchers write in the essay, which was published this week in the journal <em>Nature Human Behaviour<\/em>, \u201cis uniquely worrying for the future of science.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>What role should text-generating large language models (LLMs) have in the scientific research process? 
According to a team of Oxford scientists, the answer \u2014 at least for now \u2014 is: pretty much none. In a new essay, researchers from the Oxford Internet Institute argue that scientists should abstain from using LLM-powered tools like chatbots to [\u2026]<\/p>\n","protected":false},"author":705,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[418,6,224],"tags":[],"class_list":["post-176567","post","type-post","status-publish","format-standard","hentry","category-internet","category-robotics-ai","category-science"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/176567","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/705"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=176567"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/176567\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=176567"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=176567"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=176567"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}