
Asking AI to act like an expert can make it less reliable

To get the best out of AI, some users tell it to provide answers as if it were an expert. Others ask it to adopt a persona, such as a safety monitor, to guide its responses. However, this approach can sometimes hurt performance, according to a study available on the arXiv preprint server.
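
As an illustration, persona prompting is typically implemented by prepending a system message that assigns the role before the user's question. Below is a minimal sketch in Python, assuming the common chat-message format used by most LLM APIs; the study's exact prompts and models are not reproduced here.

```python
# Minimal sketch of persona prompting (assumed chat-message format,
# not the study's actual prompts).

def build_messages(question: str, persona: str | None = None) -> list[dict]:
    """Return a chat message list, optionally prefixed with a persona system prompt."""
    messages = []
    if persona:
        # e.g. "an expert mathematician" or "a safety monitor"
        messages.append({
            "role": "system",
            "content": f"You are {persona}. Answer accordingly.",
        })
    messages.append({"role": "user", "content": question})
    return messages

# The same factual question, with and without a persona.
plain = build_messages("In what year was the arXiv preprint server launched?")
expert = build_messages(
    "In what year was the arXiv preprint server launched?",
    persona="an expert in the history of scientific publishing",
)
print(plain)
print(expert)
```

The study's comparison amounts to running matched prompts like these across models and scoring the answers, which is how a persona's effect on factual recall can be measured.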

To see how large language models (LLMs) behave when told to act as someone else, researchers from the University of California ran a large-scale evaluation using 12 different personas across six language models. These included experts in fields such as math, coding and STEM (science, technology, engineering and mathematics), as well as general roles such as creative writer or safety monitor.

The team found that adopting a persona is something of a double-edged sword. While it makes the AI sound more professional and keeps it safer (more likely to follow rules and less likely to generate harmful content), it can also make the model worse at recalling facts.
