Large Language Models (LLMs) both threaten the uniqueness of human social intelligence and promise opportunities to better understand it. In this talk, I evaluate the extent to which the distributional information learned by LLMs allows them to approximate human behavior on tasks that appear to require social intelligence. In the first half, I will compare human and LLM responses in experiments designed to measure theory of mind: the ability to represent and reason about the mental states of other agents. In the second half, I will present the results of evaluating LLMs with the Turing test, which measures a machine's ability to imitate humans in a multi-turn social interaction.
Cameron Jones recently graduated with a PhD in Cognitive Science from the Language and Cognition Lab at UC San Diego. His work focuses on comparing humans and Large Language Models (LLMs) to learn more about how each of those systems works. He is interested in the extent to which LLMs can explain human behavior that appears to rely on world knowledge, reasoning, and social intelligence. In particular, he is interested in whether LLMs can approximate human social behavior, for instance in the Turing test, or by persuading or deceiving human interlocutors.
https://scholar.google.com/citations?… / camrobjones.