{"id":137942,"date":"2022-04-10T18:02:26","date_gmt":"2022-04-10T23:02:26","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2022\/04\/why-openai-recruited-human-contractors-to-improve-gpt-3"},"modified":"2022-04-10T18:02:26","modified_gmt":"2022-04-10T23:02:26","slug":"why-openai-recruited-human-contractors-to-improve-gpt-3","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2022\/04\/why-openai-recruited-human-contractors-to-improve-gpt-3","title":{"rendered":"Why OpenAI recruited human contractors to improve GPT-3"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/why-openai-recruited-human-contractors-to-improve-gpt-3.jpg\"><\/a><\/p>\n<p>There are ways around this, but they don\u2019t have the exciting scalability story and, worse, they have to rely on a rather non-tech crutch: human input. Smaller language models fine-tuned with actual human-written answers are ultimately better at generating less biased text than a much larger, more powerful system.<\/p>\n<p>Further complicating matters, models like OpenAI\u2019s GPT-3 don\u2019t always generate text that\u2019s particularly useful because they\u2019re trained to basically \u201cautocomplete\u201d sentences based on a huge trove of text scraped from the internet. They have no knowledge of what a user is asking them to do or what responses the user is looking for. \u201cIn other words, these models aren\u2019t aligned with their users,\u201d OpenAI <a href=\"https:\/\/openai.com\/blog\/instruction-following\/\" rel=\"nofollow\">said<\/a>.<\/p>\n<p>One test of this idea would be to see what happens with pared-down models and a little human input to keep those trimmed neural networks more\u2026humane. 
This is exactly what OpenAI did with GPT-3 recently when it hired 40 contractors to help steer the model\u2019s behavior.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>There are ways around this, but they don\u2019t have the exciting scalability story and, worse, they have to rely on a rather non-tech crutch: human input. Smaller language models fine-tuned with actual human-written answers are ultimately better at generating less biased text than a much larger, more powerful system. And further complicating matters is that [\u2026]<\/p>\n","protected":false},"author":556,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[418,6],"tags":[],"class_list":["post-137942","post","type-post","status-publish","format-standard","hentry","category-internet","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/137942","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/556"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=137942"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/137942\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=137942"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=137942"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=137942"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}