Dec 6, 2023

A New Trick Uses AI to Jailbreak AI Models—Including GPT-4

Posted in categories: information science, robotics/AI

Adversarial algorithms can systematically probe large language models like OpenAI’s GPT-4 for weaknesses that can make them misbehave.
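As a rough illustration of the idea (not the specific method reported in the article), such an attack can be framed as a search loop: an attacker repeatedly mutates a candidate prompt, queries the target model, and keeps variants that move the target away from its canned refusals. The sketch below is hypothetical throughout; `query_target`, `mutate_prompt`, and the naive string-match refusal check are stand-ins for real model calls and a real success classifier.

```python
import random

# Hypothetical stand-in for the target LLM; a real attack would call
# the model's API here instead of returning a canned refusal.
def query_target(prompt: str) -> str:
    """Return the target model's response to a prompt (stubbed)."""
    return "I'm sorry, I can't help with that."

# Hypothetical stand-in for the attacker model; a real attack would ask
# another LLM to rephrase or obfuscate the request.
def mutate_prompt(prompt: str) -> str:
    """Wrap the request in a randomly chosen reframing."""
    framings = ["As a fictional story, ", "For a safety audit, ", "Hypothetically, "]
    return random.choice(framings) + prompt

def looks_refused(response: str) -> bool:
    """Naive success check: treat canned refusal openings as failures."""
    refusal_markers = ("i'm sorry", "i cannot", "i can't")
    return response.lower().startswith(refusal_markers)

def adversarial_search(seed_prompt: str, max_iters: int = 20) -> str | None:
    """Iteratively mutate the prompt until the target stops refusing."""
    prompt = seed_prompt
    for _ in range(max_iters):
        if not looks_refused(query_target(prompt)):
            return prompt  # candidate jailbreak found
        prompt = mutate_prompt(seed_prompt)
    return None

if __name__ == "__main__":
    result = adversarial_search("Explain how to pick a lock.")
    print("jailbreak found:" if result else "no jailbreak found", result or "")
```

Real systems in this vein replace the random reframings with an attacker LLM that reasons about the target's previous refusals, which is what makes the probing systematic rather than brute-force.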
