Menu

Blog

Mar 16, 2024

Researchers jailbreak AI chatbots with ASCII art — ArtPrompt bypasses safety measures to unlock malicious queries

Posted by in category: robotics/AI

ArtPrompt bypassed safety measures in ChatGPT, Gemini, Claude, and Llama2.

Comments are closed.