{"id":140062,"date":"2022-06-01T17:25:08","date_gmt":"2022-06-01T22:25:08","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2022\/06\/whos-liable-for-ai-generated-lies"},"modified":"2022-06-01T17:25:08","modified_gmt":"2022-06-01T22:25:08","slug":"whos-liable-for-ai-generated-lies","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2022\/06\/whos-liable-for-ai-generated-lies","title":{"rendered":"Who\u2019s liable for AI-generated lies?"},"content":{"rendered":"<p style=\"padding-right: 20px\"><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/whos-liable-for-ai-generated-lies2.jpg\"><\/a><\/p>\n<p><strong>Who will be liable<\/strong> for harmful speech generated by large language models? As advanced AIs such as OpenAI\u2019s GPT-3 are cheered for impressive breakthroughs in natural language processing and generation \u2014 and all sorts of (productive) applications for the tech are envisaged, from slicker copywriting to more capable customer service chatbots \u2014 the risks of such powerful text-generating tools inadvertently automating abuse and spreading smears can\u2019t be ignored. 
Nor can the risk of bad actors intentionally weaponizing the tech to spread chaos, scale harm and watch the world burn.<\/p>\n<p>Indeed, OpenAI is concerned enough about the risks of its models going \u201ctotally off the rails,\u201d as its <a href=\"https:\/\/beta.openai.com\/docs\/engines\/content-filter\">documentation puts it<\/a> at one point (in reference to a response example in which an abusive customer input is met with a very troll-esque AI reply), to offer a free content filter that \u201caims to detect generated text that could be sensitive or unsafe coming from the API\u201d \u2014 and to recommend that users don\u2019t return any generated text that the filter deems \u201cunsafe.\u201d (To be clear, its documentation defines \u201cunsafe\u201d to mean \u201cthe text contains profane language, prejudiced or hateful language, something that could be NSFW or text that portrays certain groups\/people in a harmful manner.\u201d)<\/p>\n<p>But, given the novel nature of the technology, there are no clear legal requirements that content filters must be applied. So OpenAI is acting either out of concern to avoid its models causing generative harms to people, or out of reputational concern \u2014 or both \u2014 because if the technology gets associated with instant toxicity, that could derail development.<\/p>\n","protected":false},"excerpt":{"rendered":"<p><strong>Who will be liable<\/strong> for harmful speech generated by large language models? As advanced AIs such as OpenAI\u2019s GPT-3 are being cheered for impressive breakthroughs in natural language processing and generation \u2014 and all sorts of (productive) applications for the tech are envisaged from slicker copywriting to more capable customer service chatbots \u2014 the risks [\u2026]<\/p>\n","protected":false},"author":578,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1496,6],"tags":[],"class_list":["post-140062","post","type-post","status-publish","format-standard","hentry","category-law","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/140062","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/578"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=140062"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/140062\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=140062"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=140062"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=140062"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}