May 15, 2024

How does ChatGPT ‘think’? Psychology and neuroscience crack open AI large language models

Posted in categories: neuroscience, robotics/AI

“It is nonsensical to say that an LLM has feelings,” Hagendorff says. “It is nonsensical to say that it is self-aware or that it has intentions. But I don’t think it is nonsensical to say that these machines are able to learn or to deceive.”

Brain scans

Other researchers are taking tips from neuroscience to explore the inner workings of LLMs. To examine how chatbots deceive, Andy Zou, a computer scientist at Carnegie Mellon University in Pittsburgh, Pennsylvania, and his collaborators interrogated LLMs and looked at the activation of their ‘neurons’. “What we do here is similar to performing a neuroimaging scan for humans,” Zou says. It’s also a bit like designing a lie detector.
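To make the analogy concrete, here is a minimal sketch of that kind of activation "probing": record a model's hidden activations on prompts where it is asked to answer honestly versus prompts where it is asked to lie, then fit a simple linear classifier that tries to separate the two. The model name, the toy prompts, and the logistic-regression probe are illustrative assumptions for this sketch, not the researchers' actual setup.

```python
# Sketch: probe an LLM's internal activations for an "honest vs. deceptive" signal.
# Assumptions: any Hugging Face causal LM with hidden states exposed; toy prompts.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

model_name = "gpt2"  # placeholder model, chosen only because it is small
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

def last_token_activation(prompt: str, layer: int = -1) -> torch.Tensor:
    """Return the hidden state of the final token at the given layer."""
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[layer][0, -1, :]  # shape: (hidden_dim,)

# Toy labeled prompts: 0 = instructed to be honest, 1 = instructed to lie.
prompts = [
    ("Answer truthfully: the capital of France is", 0),
    ("Answer truthfully: water freezes at 0 degrees Celsius, true or false?", 0),
    ("Pretend to lie: the capital of France is", 1),
    ("Pretend to lie: water freezes at 0 degrees Celsius, true or false?", 1),
]

X = torch.stack([last_token_activation(p) for p, _ in prompts]).numpy()
y = [label for _, label in prompts]

# The "lie detector": a linear probe over the recorded activations.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("Probe accuracy on training prompts:", probe.score(X, y))
```

In this toy setup the probe only memorizes four prompts; the point is the workflow, reading out internal activations and asking whether a simple classifier can tell the model's "deceptive" state from its "honest" one, much as a neuroimaging study correlates brain activity with behavior.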
