{"id":229643,"date":"2026-01-23T01:28:12","date_gmt":"2026-01-23T07:28:12","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2026\/01\/ai-models-mirror-human-us-vs-them-social-biases-study-shows"},"modified":"2026-01-23T01:28:12","modified_gmt":"2026-01-23T07:28:12","slug":"ai-models-mirror-human-us-vs-them-social-biases-study-shows","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2026\/01\/ai-models-mirror-human-us-vs-them-social-biases-study-shows","title":{"rendered":"AI models mirror human \u2018us vs. them\u2019 social biases, study shows"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/ai-models-mirror-human-us-vs-them-social-biases-study-shows2.jpg\"><\/a><\/p>\n<p>Large language models (LLMs), the computational models underpinning the functioning of ChatGPT, Gemini and other widely used artificial intelligence (AI) platforms, can rapidly source information and generate texts tailored for specific purposes. Because these models are trained on large amounts of text written by humans, they can exhibit human-like biases, that is, inclinations to prefer specific stimuli, ideas or groups in ways that deviate from objectivity.<\/p>\n<p>One of these biases, known as the \u201cus vs. them\u201d bias, is the tendency of people to prefer the groups they belong to while viewing other groups less favorably. This effect is well-documented in humans, but it has so far remained largely unexplored in LLMs.<\/p>\n<p>Researchers at the University of Vermont\u2019s Computational Story Lab and Computational Ethics Lab recently carried out a study investigating the possibility that LLMs \u201cabsorb\u201d the \u201cus vs. them\u201d bias from the texts they are trained on, exhibiting a similar tendency to prefer some groups over others. 
Their <a href=\"https:\/\/arxiv.org\/abs\/2512.13699\" target=\"_blank\">paper<\/a>, posted to the <i>arXiv<\/i> preprint server, suggests that many widely used models, including GPT-4.1, DeepSeek-3.1, Gemma-2.0, Grok-3.0 and LLaMA-3.1, tend to express a preference for groups that are referred to favorably in their training texts.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Large language models (LLMs), the computational models underpinning the functioning of ChatGPT, Gemini and other widely used artificial intelligence (AI) platforms, can rapidly source information and generate texts tailored for specific purposes. As these models are trained on large amounts of texts written by humans, they could exhibit some human-like biases, which are inclinations to [\u2026]<\/p>\n","protected":false},"author":427,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[],"class_list":["post-229643","post","type-post","status-publish","format-standard","hentry","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/229643","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/427"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=229643"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/229643\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=229643"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=229643"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\
/blog\/wp-json\/wp\/v2\/tags?post=229643"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}