Oct 12, 2023

Meta shows how to reduce hallucinations in ChatGPT & Co with prompt engineering

Posted in category: robotics/AI

When ChatGPT & Co. are prompted to check their own answers, they make fewer mistakes, according to a new study by Meta.

ChatGPT and other language models repeatedly generate false information, even when they have learned the correct facts. There are several approaches to reducing such hallucinations. Researchers at Meta AI now present Chain-of-Verification (CoVe), a prompt-based method that significantly reduces the problem.

The new method relies on self-verification by the language model: the model drafts a baseline answer, plans fact-checking questions about its own claims, answers those questions independently of the draft, and then writes a corrected final answer.
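Below is a minimal sketch of that loop. It assumes the OpenAI Python client (openai>=1.0), and the prompt wordings are illustrative rather than the exact prompts from the Meta paper:

```python
# Minimal sketch of the Chain-of-Verification (CoVe) loop.
# Assumes the OpenAI Python client (openai>=1.0); the prompt texts
# below are illustrative, not the paper's exact prompts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def chain_of_verification(question: str) -> str:
    # 1. Draft a baseline answer.
    baseline = ask(question)

    # 2. Plan verification questions that probe the draft's factual claims.
    plan = ask(
        f"Question: {question}\nDraft answer: {baseline}\n"
        "List short fact-checking questions, one per line, that would "
        "verify the claims made in the draft answer."
    )
    verification_questions = [q for q in plan.splitlines() if q.strip()]

    # 3. Answer each verification question independently, without showing
    #    the draft, so errors in the draft cannot leak into the checks.
    checks = [(q, ask(q)) for q in verification_questions]

    # 4. Produce a final, revised answer conditioned on the check results.
    evidence = "\n".join(f"Q: {q}\nA: {a}" for q, a in checks)
    return ask(
        f"Question: {question}\nDraft answer: {baseline}\n"
        f"Verification results:\n{evidence}\n"
        "Write a corrected final answer, fixing any claims that the "
        "verification results contradict."
    )


print(chain_of_verification("Name some politicians who were born in New York."))
```

Answering the verification questions in isolation is the key design choice: the paper reports that this "factored" setup, where the model cannot see its own draft while checking it, prevents it from simply repeating its earlier mistakes.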
