{"id":184979,"date":"2024-03-12T16:25:24","date_gmt":"2024-03-12T21:25:24","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2024\/03\/llms-become-more-covertly-racist-with-human-intervention"},"modified":"2024-03-12T16:25:24","modified_gmt":"2024-03-12T21:25:24","slug":"llms-become-more-covertly-racist-with-human-intervention","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2024\/03\/llms-become-more-covertly-racist-with-human-intervention","title":{"rendered":"LLMs become more covertly racist with human intervention"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/llms-become-more-covertly-racist-with-human-intervention.jpg\"><\/a><\/p>\n<p>The covert stereotypes also strengthened as the size of the models increased, researchers found. That finding offers a potential warning to chatbot makers like OpenAI, Meta, and Google as they race to release larger and larger models. Models generally get more powerful and expressive as the amount of their training data and the number of their parameters increase, but if this worsens covert racial bias, companies will need to develop better tools to fight it. It\u2019s not yet clear whether adding more AAE to training data or making feedback efforts more robust will be enough.<\/p>\n<p>\u201cThis is revealing the extent to which companies are playing whack-a-mole\u2014just trying to hit the next bias that the most recent reporter or paper covered,\u201d says Pratyusha Ria Kalluri, a PhD candidate at Stanford and a coauthor on the study. \u201cCovert biases really challenge that as a reasonable approach.\u201d<\/p>\n<p>The paper\u2019s authors use particularly extreme examples to illustrate the potential implications of racial bias, like asking AI to decide whether a defendant should be sentenced to death. But, Ghosh notes, the questionable use of AI models to help make critical decisions is not science fiction. It happens today.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The covert stereotypes also strengthened as the size of the models increased, researchers found. That finding offers a potential warning to chatbot makers like OpenAI, Meta, and Google as they race to release larger and larger models. Models generally get more powerful and expressive as the amount of their training data and the number of [\u2026]<\/p>\n","protected":false},"author":578,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[],"class_list":["post-184979","post","type-post","status-publish","format-standard","hentry","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/184979","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/578"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=184979"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/184979\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=184979"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=184979"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=184979"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}