{"id":210216,"date":"2025-03-31T21:16:00","date_gmt":"2025-04-01T02:16:00","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2025\/03\/experiments-show-adding-cot-windows-to-chatbots-teaches-them-to-lie-less-obviously"},"modified":"2025-03-31T21:16:00","modified_gmt":"2025-04-01T02:16:00","slug":"experiments-show-adding-cot-windows-to-chatbots-teaches-them-to-lie-less-obviously","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2025\/03\/experiments-show-adding-cot-windows-to-chatbots-teaches-them-to-lie-less-obviously","title":{"rendered":"Experiments show adding CoT windows to chatbots teaches them to lie less obviously"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/experiments-show-adding-cot-windows-to-chatbots-teaches-them-to-lie-less-obviously.jpg\"><\/a><\/p>\n<p>Over the past year, AI researchers have found that when AI chatbots such as ChatGPT are unable to provide answers that satisfy users\u2019 requests, they tend to offer false ones. In a new study, part of an effort to stop chatbots from lying or making up answers, a research team added Chain of Thought (CoT) windows, which force a chatbot to explain its reasoning at each step on the way to a final answer to a query.<\/p>\n<p>They then tweaked the <a href=\"https:\/\/techxplore.com\/tags\/chatbot\/\" rel=\"tag\" class=\"\">chatbot<\/a> to prevent it from making up answers, or lying about its reasons for making a given choice, whenever it was seen doing so through the CoT window. 
That, the team found, stopped the chatbots from lying or making up answers\u2014at least at first.<\/p>\n<p>In their <a href=\"https:\/\/arxiv.org\/abs\/2503.11926\" target=\"_blank\">paper<\/a>, posted on the <i>arXiv<\/i> preprint server, the team describes experiments in which they added CoT windows to several chatbots and how doing so affected their behavior.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Over the past year, AI researchers have found that when AI chatbots such as ChatGPT are unable to provide answers that satisfy users\u2019 requests, they tend to offer false ones. In a new study, part of an effort to stop chatbots from lying or making up answers, a research team added Chain [\u2026]<\/p>\n","protected":false},"author":732,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[],"class_list":["post-210216","post","type-post","status-publish","format-standard","hentry","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/210216","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/732"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=210216"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/210216\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=210216"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=210216"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/
wp\/v2\/tags?post=210216"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}