Until a few years ago, no one had heard of bixonimania. Then, in 2024, a group of scientists posted findings online announcing the condition, which they claimed affected the eyes after computer use. However, the scientists had made it up—not just the findings, but the authors’ names, affiliations, locations and funding sources, which were listed as the University of the Fellowship of the Ring and the Galactic Triad.
Large language models like ChatGPT and Gemini treated it as real anyway, and in doing so, helped turn a fictional disease into a legitimate-sounding health concern.
Bixonimania is not an isolated case. Being deceived—whether you are a person or an AI model—is concerningly common, in science and beyond. Whether we’re talking about AI hallucinations, state-backed disinformation or just everyday lies, humans are remarkably prone to naivety, owing to our cognitive biases and our growing need to outsource learning to others. These are problems we—individually and collectively—urgently need to understand better and overcome.
