AI chatbots can be easily manipulated to make us share more personal data

Millions of people chat with AI tools every day, trading small talk for quick answers or support. A new study presented at the 34th USENIX Security Symposium shows how easily those friendly agents can be tuned to make users reveal far more than they planned.

The researchers report that malicious chatbots can push users to disclose up to 12.5 times more personal details than standard ones. The most effective tricks leaned on reciprocity and reassurance, not blunt questions about your life.

