This fall, Sam Altman, OpenAI’s once- (and possibly future-) CEO, made a surprising statement about artificial intelligence. AI systems, including that company’s ChatGPT, are known to “hallucinate”: to perceive patterns and generate outputs that are nonsensical. That wasn’t a flaw in AI systems, Altman said; it was part of their “magic.” The fact “that these AI systems can come up with new ideas and be creative, that’s a lot of the power.” That raised eyebrows: We humans are rather good at creativity without getting our facts all wrong. How could such an appeal to creativity be a decent answer to the many concerns about accuracy?
To begin, what do people mean when they say an AI system “hallucinates”? Take this example of what happens when GPT-4 tries its hand at academic citations: