{"id":178445,"date":"2023-12-16T10:26:59","date_gmt":"2023-12-16T16:26:59","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2023\/12\/dictionary-com-2023-word-of-the-year-hallucinate-is-an-ai-health-issue"},"modified":"2023-12-16T10:26:59","modified_gmt":"2023-12-16T16:26:59","slug":"dictionary-com-2023-word-of-the-year-hallucinate-is-an-ai-health-issue","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2023\/12\/dictionary-com-2023-word-of-the-year-hallucinate-is-an-ai-health-issue","title":{"rendered":"Dictionary.com 2023 Word Of The Year \u2018Hallucinate\u2019 Is An AI Health Issue"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/dictionary-com-2023-word-of-the-year-hallucinate-is-an-ai-health-issue2.jpg\"><\/a><\/p>\n<p>Bad things can happen when you hallucinate. If you are human, you can end up doing things like putting your underwear in the oven. If you happen to be a chatbot or some other type of artificial intelligence (AI) tool, you can spew out false and misleading information, which\u2014depending on the info\u2014could affect many, many people in a bad-for-your-health-and-well-being type of way. And this latter type of hallucinating has become increasingly common in 2023 with the continuing proliferation of AI. That\u2019s why <a href=\"http:\/\/Dictionary.com\">Dictionary.com<\/a> has an AI-specific definition of \u201challucinate\u201d and <a href=\"https:\/\/content.dictionary.com\/word-of-the-year-2023\/?adobe_mc=MCMID%3D63253621474619423594826067209075272732%7CMCORGID%3DAA9D3B6A630E2C2A0A495C40%2540AdobeOrg%7CTS%3D1702474366\" target=\"_blank\" class=\"\" title=\"https:\/\/content.dictionary.com\/word-of-the-year-2023\/?adobe_mc=MCMID%3D63253621474619423594826067209075272732%7CMCORGID%3DAA9D3B6A630E2C2A0A495C40%2540AdobeOrg%7CTS%3D1702474366\" rel=\"nofollow noopener noreferrer\" aria-label=\"has named the word as its 2023 Word of the Year\">has named the word as its 2023 Word of the Year<\/a>.<\/p>\n<p><a href=\"http:\/\/Dictionary.com\">Dictionary.com<\/a> noticed a 46% jump in dictionary lookups for the word \u201challucinate\u201d from 2022 to 2023 with a comparable increase in searches for \u201challucination\u201d as well. Meanwhile, there was a 62% jump in searches for AI-related words like \u201cchatbot\u201d, \u201cGPT\u201d, \u201cgenerative AI\u201d, and \u201cLLM.\u201d So the increases in searches for \u201challucinate\u201d is likely due more to the following AI-specific definition of the word from <a href=\"http:\/\/Dictionary.com\">Dictionary.com<\/a> rather than the traditional human definition:<\/p>\n<p><strong>hallucinate<\/strong> [ h <em>uh<\/em>-<strong>loo<\/strong>-s <em>uh<\/em>-neyt ]-verb-(of artificial <a href=\"https:\/\/www.psychologytoday.com\/us\/basics\/intelligence\" target=\"_blank\" class=\"\" title=\"https:\/\/www.psychologytoday.com\/us\/basics\/intelligence\" rel=\"nofollow noopener noreferrer\" aria-label=\"i\">i <\/a>ntelligence) to produce false information contrary to the intent of the user and present it as if true and factual. Example: <em>When chatbots hallucinate, the result is often not just inaccurate but completely fabricated.<\/em><\/p>\n<p>Here\u2019s a non-AI-generated new flash: AI can lie, just like humans. Not all AI, of course. But AI tools can be programmed to serve like little political animals or snake oil salespeople, generating false information while making it seem like it\u2019s all about facts. 
The difference from humans is that AI can churn out this misinformation and disinformation at even greater speeds. For example, a study published in JAMA Internal Medicine last month showed how OpenAI's GPT Playground could generate 102 different blog articles "that contained more than 17,000 words of disinformation related to vaccines and vaping" within just 65 minutes. Yes, just 65 minutes. That's about how long it takes to watch the TV show 60 Minutes and then make a quick, uncomplicated bathroom trip that doesn't involve texting on the toilet. Moreover, the study demonstrated how "additional generative AI tools created an accompanying 20 realistic images in less than 2 minutes."