Both humans and AI hallucinate — but not in the same way

Large language models have been shown to 'hallucinate' entirely false information, but aren't humans guilty of the same thing? So what's the difference between the two?