Apr 10, 2022

Why OpenAI recruited human contractors to improve GPT-3

Posted in categories: internet, robotics/AI

There are ways around this, but they lack the exciting scalability story and, worse, they rely on a rather non-tech crutch: human input. Smaller language models fine-tuned with actual human-written answers end up generating less biased text than a much larger, more powerful system does.
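
To make that concrete, here is a minimal sketch of fine-tuning a small, publicly available model on human-written answers with Hugging Face's transformers and datasets libraries. The gpt2 checkpoint, the human_written_answers.jsonl file with its prompt/human_answer fields, and the hyperparameters are illustrative assumptions, not OpenAI's actual setup.

```python
# Sketch: supervised fine-tuning of a small causal LM on human-written answers.
# Assumes transformers, datasets, and torch are installed; the data file and
# hyperparameters are hypothetical.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical JSONL file where each record has "prompt" and "human_answer".
raw = load_dataset("json", data_files="human_written_answers.jsonl", split="train")

def tokenize(example):
    # Concatenate the prompt and the human-written answer into one training text.
    text = example["prompt"] + "\n" + example["human_answer"] + tokenizer.eos_token
    tokens = tokenizer(text, truncation=True, max_length=512, padding="max_length")
    tokens["labels"] = tokens["input_ids"].copy()  # standard causal-LM labels
    return tokens

train_ds = raw.map(tokenize, remove_columns=raw.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="sft-demo",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        logging_steps=10,
    ),
    train_dataset=train_ds,
)
trainer.train()
```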

And further complicating matters, models like OpenAI’s GPT-3 don’t always generate text that’s particularly useful, because they are trained essentially to “autocomplete” sentences based on a huge trove of text scraped from the internet. They have no knowledge of what a user is asking them to do or what responses the user is looking for. “In other words, these models aren’t aligned with their users,” OpenAI said.
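
That “autocomplete” behavior is easy to see with any off-the-shelf causal language model. The sketch below uses the publicly available GPT-2 as a stand-in for GPT-3 (an assumption for illustration; it requires the transformers and torch packages): the model simply continues the prompt token by token, and nothing in its training objective rewards it for actually doing what the prompt asks.

```python
# Sketch: the "autocomplete" objective in action with GPT-2 as a small stand-in.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Explain the moon landing to a six-year-old:"
inputs = tokenizer(prompt, return_tensors="pt")

# The model continues the text token by token; it is not optimized to
# follow the instruction, only to produce plausible next words.
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=40,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```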

One test of this idea would be to see what happens with pared-down models and a little human input to keep those trimmed neural networks more…humane. This is exactly what OpenAI did with GPT-3 recently, recruiting 40 human contractors to help steer the model’s behavior.
