Dec 12, 2024

Generative language models exhibit social identity biases

Posted in category: futurism

Researchers show that large language models exhibit social identity biases similar to those of humans, displaying favoritism toward ingroups and hostility toward outgroups. These biases persist across models, training data, and real-world human–LLM conversations.
