Nov 5, 2023
Chatbots are so gullible, they’ll take directions from hackers
Posted by Gemechu Taye in categories: cybercrime/malcode, robotics/AI
‘Prompt injection’ attacks haven’t caused giant problems yet. But it’s a matter of time, researchers say.
Imagine a chatbot is applying for a job as your personal assistant. The pros: This chatbot is powered by a cutting-edge large language model. It can write your emails, search your files, summarize websites and converse with you.
The con: It will take orders from absolutely anyone.
Continue reading “Chatbots are so gullible, they’ll take directions from hackers” »