
Microsoft finds security flaw in AI chatbots that could expose conversation topics

Your conversations with AI assistants such as ChatGPT and Google Gemini may not be as private as you think. Microsoft has revealed a serious flaw in the large language models (LLMs) that power these AI services, one that can expose the topic of your conversations. Researchers dubbed the vulnerability “Whisper Leak” and found that it affects nearly all of the models they tested.

When you chat with AI assistants built into major search engines or apps, the content is protected by TLS (Transport Layer Security), the same encryption protocol used for online banking. These secure connections stop would-be eavesdroppers from reading the words you type. However, Microsoft discovered that the metadata, the size and timing of the encrypted packets as they travel across the internet, remains visible. Whisper Leak doesn’t break the encryption; it exploits what encryption cannot hide. Because these services stream their replies back piece by piece, the pattern of packet sizes and arrival times correlates with what is being discussed, and a classifier trained on that pattern can infer the topic of a conversation without ever reading its contents.
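
To make the idea concrete, here is a minimal sketch of the kind of traffic-analysis classifier such an attack implies. It is illustrative only: the synthetic “packet traces” stand in for real captured TLS record sizes, the topic labels and size distributions are invented, and the simple nearest-centroid classifier is a stand-in for the machine-learning models the researchers actually trained.

    import random
    import statistics

    # A "trace" is the sequence of encrypted-packet sizes (in bytes) an
    # eavesdropper could observe while an AI service streams its reply.
    # These sizes are synthetic stand-ins for real network captures.
    def synthetic_trace(topic, n_packets=40):
        # Invented assumption for illustration: different topics produce
        # replies whose packet sizes follow slightly different patterns.
        base = 80 if topic == "sensitive" else 60
        return [base + random.randint(-15, 25) for _ in range(n_packets)]

    def features(trace):
        # Features recoverable without decrypting anything:
        # packet count, mean packet size, and size variability.
        return (len(trace), statistics.mean(trace), statistics.pstdev(trace))

    def centroid(rows):
        # Average each feature across many observed traces.
        return tuple(statistics.mean(col) for col in zip(*rows))

    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    # "Train" on labeled traces the attacker can collect by querying the
    # service themselves, then classify an unseen victim trace.
    train = {
        topic: centroid([features(synthetic_trace(topic)) for _ in range(200)])
        for topic in ("sensitive", "benign")
    }

    victim_trace = synthetic_trace("sensitive")
    guess = min(train, key=lambda t: distance(features(victim_trace), train[t]))
    print("Inferred topic:", guess)  # inferred without seeing any plaintext

Note that the attacker in this sketch never decrypts anything; the classification works purely on the shape of the traffic. That is why defenses against this class of attack focus on obscuring packet sizes and timing, rather than on stronger encryption.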
