{"id":144105,"date":"2022-08-16T00:10:30","date_gmt":"2022-08-16T05:10:30","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2022\/08\/new-and-improved-content-moderation-tooling"},"modified":"2022-08-16T00:10:30","modified_gmt":"2022-08-16T05:10:30","slug":"new-and-improved-content-moderation-tooling","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2022\/08\/new-and-improved-content-moderation-tooling","title":{"rendered":"New-and-Improved Content Moderation Tooling"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/new-and-improved-content-moderation-tooling2.jpg\"><\/a><\/p>\n<p>To help developers protect their applications against possible misuse, we are introducing the faster and more accurate <a href=\"https:\/\/beta.openai.com\/docs\/api-reference\/moderations\">Moderation endpoint<\/a>. This endpoint provides OpenAI API developers with free access to <a href=\"https:\/\/openai.com\/blog\/customized-gpt-3\/\">GPT-based<\/a> classifiers that detect undesired content \u2014 an instance of <a href=\"https:\/\/openai.com\/blog\/critiques\/\">using AI systems<\/a> to assist with human supervision of these systems. We have also released both a <a href=\"https:\/\/arxiv.org\/abs\/2208.03274\">technical paper<\/a> describing our methodology and the <a href=\"https:\/\/github.com\/openai\/moderation-api-release\">dataset<\/a> used for evaluation.<\/p>\n<p>When given a text input, the Moderation endpoint assesses whether the content is sexual, hateful, violent, or promotes self-harm \u2014 content prohibited by our <a href=\"https:\/\/beta.openai.com\/docs\/usage-guidelines\/content-policy\">content policy<\/a>. The endpoint has been trained to be quick, accurate, and to perform robustly across a range of applications. Importantly, this reduces the chances of products \u201csaying\u201d the wrong thing, even when deployed to users at scale. As a consequence, AI can unlock benefits in sensitive settings, like education, where it could not otherwise be used with confidence.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>To help developers protect their applications against possible misuse, we are introducing the faster and more accurate Moderation endpoint. This endpoint provides OpenAI API developers with free access to GPT-based classifiers that detect undesired content \u2014 an instance of using AI systems to assist with human supervision of these systems. We have also released both [\u2026]<\/p>\n","protected":false},"author":556,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[32,13,31,6],"tags":[],"class_list":["post-144105","post","type-post","status-publish","format-standard","hentry","category-education","category-open-access","category-policy","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/144105","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/556"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=144105"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/144105\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=144105"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=144105"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=144105"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}