{"id":179206,"date":"2023-12-26T15:44:50","date_gmt":"2023-12-26T21:44:50","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2023\/12\/large-language-models-repeat-conspiracy-theories-and-other-forms-of-misinformation-research-finds"},"modified":"2023-12-26T15:44:50","modified_gmt":"2023-12-26T21:44:50","slug":"large-language-models-repeat-conspiracy-theories-and-other-forms-of-misinformation-research-finds","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2023\/12\/large-language-models-repeat-conspiracy-theories-and-other-forms-of-misinformation-research-finds","title":{"rendered":"Large language models repeat conspiracy theories and other forms of misinformation, research finds"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/large-language-models-repeat-conspiracy-theories-and-other-forms-of-misinformation-research-finds3.jpg\"><\/a><\/p>\n<p>New research into large language models shows that they repeat conspiracy theories, harmful stereotypes, and other forms of misinformation.<\/p>\n<p>In a recent study, researchers at the University of Waterloo systematically tested an early version of ChatGPT\u2019s understanding of statements in six categories: facts, conspiracies, controversies, misconceptions, stereotypes, and fiction. This was part of Waterloo researchers\u2019 efforts to investigate human-technology interactions and explore how to mitigate risks.<\/p>\n<p>They discovered that GPT-3 frequently made mistakes, contradicted itself within the course of a single answer, and repeated harmful misinformation. The study, \u201cReliability Check: An Analysis of GPT-3\u2019s Response to Sensitive Topics and Prompt Wording,\u201d was published in Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>New research into large language models shows that they repeat conspiracy theories, harmful stereotypes, and other forms of misinformation. In a recent study, researchers at the University of Waterloo systematically tested an early version of ChatGPT\u2019s understanding of statements in six categories: facts, conspiracies, controversies, misconceptions, stereotypes, and fiction. 