{"id":224870,"date":"2025-11-11T00:18:19","date_gmt":"2025-11-11T06:18:19","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2025\/11\/microsoft-finds-security-flaw-in-ai-chatbots-that-could-expose-conversation-topics"},"modified":"2025-11-11T00:18:19","modified_gmt":"2025-11-11T06:18:19","slug":"microsoft-finds-security-flaw-in-ai-chatbots-that-could-expose-conversation-topics","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2025\/11\/microsoft-finds-security-flaw-in-ai-chatbots-that-could-expose-conversation-topics","title":{"rendered":"Microsoft finds security flaw in AI chatbots that could expose conversation topics"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/microsoft-finds-security-flaw-in-ai-chatbots-that-could-expose-conversation-topics3.jpg\"><\/a><\/p>\n<p>Your conversations with AI assistants such as ChatGPT and Google Gemini may not be as private as you think they are. Microsoft has revealed a serious flaw in the large language models (LLMs) that power these AI services, potentially exposing the topics of your conversations with them. Researchers dubbed the vulnerability \u201cWhisper Leak\u201d and found that it affects nearly all the models they tested.<\/p>\n<p>When you chat with AI assistants built into major search engines or apps, the information is protected by TLS (Transport Layer Security), the same <a href=\"https:\/\/techxplore.com\/tags\/encryption\/\" rel=\"tag\" class=\"\">encryption<\/a> used for online banking. These secure connections stop would-be eavesdroppers from reading the words you type. However, Microsoft discovered that the metadata (how your messages travel across the internet) remains visible. Whisper Leak doesn\u2019t break encryption, but it takes advantage of what encryption cannot hide.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Your conversations with AI assistants such as ChatGPT and Google Gemini may not be as private as you think they are. Microsoft has revealed a serious flaw in the large language models (LLMs) that power these AI services, potentially exposing the topic of your conversations with them. Researchers dubbed the vulnerability \u201cWhisper Leak\u201d and found [\u2026]<\/p>\n","protected":false},"author":427,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1625,45,418,6,1492],"tags":[],"class_list":["post-224870","post","type-post","status-publish","format-standard","hentry","category-encryption","category-finance","category-internet","category-robotics-ai","category-security"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/224870","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/427"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=224870"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/224870\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=224870"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=224870"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=224870"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}