Altruism, the tendency to behave in ways that benefit others even at a cost to oneself, is a valuable human quality that can facilitate cooperation and promote meaningful social relationships. Behavioral scientists have been studying human altruism for decades, typically using tasks or games rooted in economics.
Two researchers based at Willamette University and the Laureate Institute for Brain Research recently set out to explore the possibility that large language models (LLMs), such as the model underpinning the conversational platform ChatGPT, can simulate the altruistic behavior observed in humans. Their findings, published in Nature Human Behaviour, suggest that LLMs do in fact simulate altruism in specific social experiments, and they offer a possible explanation for this behavior.
“My paper with Nick Obradovich emerged from my longstanding interest in altruism and cooperation,” Tim Johnson, co-author of the paper, told Tech Xplore. “Over the course of my career, I have used computer simulation to study models in which agents in a population interact with each other and can incur a cost to benefit another party. In parallel, I have studied how people make decisions about altruism and cooperation in laboratory settings.