Dec 9, 2022

DeepMind’s AlphaCode Can Outcompete Human Coders

Posted in category: robotics/AI

In simulated evaluations on recent contests from the Codeforces competitive programming platform, AlphaCode achieved an average ranking in the top 54.3% when limited to generating 10 submissions per problem. 66% of the problems it solved, however, were solved with its first submission.
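For context, the 10-submission limit amounts to a best-of-10 budget: a problem counts as solved if any of up to ten submitted programs passes the hidden tests. The sketch below is a minimal illustration of scoring under such a budget, not AlphaCode's actual evaluation harness; the candidate list, the passes_hidden_tests judge callback, and the score_with_budget and summarize helpers are all hypothetical names introduced here for illustration.

from typing import Callable, List, Tuple

def score_with_budget(
    candidates: List[str],
    passes_hidden_tests: Callable[[str], bool],
    budget: int = 10,
) -> Tuple[bool, int]:
    """Submit up to `budget` candidate programs in order; the problem
    counts as solved if any submission passes the hidden tests.
    Returns (solved, number_of_submissions_used)."""
    for i, program in enumerate(candidates[:budget], start=1):
        if passes_hidden_tests(program):
            return True, i
    return False, min(len(candidates), budget)

def summarize(results: List[Tuple[bool, int]]) -> None:
    """Report the overall solve rate and, among solved problems,
    the fraction solved on the very first submission."""
    if not results:
        return
    solved = [used for ok, used in results if ok]
    print(f"solve rate: {len(solved) / len(results):.1%}")
    if solved:
        first_try = sum(1 for used in solved if used == 1)
        print(f"solved on first submission: {first_try / len(solved):.1%}")

Under this kind of scoring, a run would collect one (solved, submissions_used) pair per problem and pass the list to summarize, yielding figures analogous to the solve rate and first-submission percentage quoted above.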

That might not sound all that impressive, particularly compared to models' seemingly stronger performances against humans in complex board games, though the researchers note that succeeding at coding competitions is uniquely difficult. To succeed, AlphaCode first had to understand complex coding problems posed in natural language and then “reason” about unforeseen problems rather than simply memorize code snippets. AlphaCode was able to solve problems it hadn’t seen before, and the researchers claim they found no evidence that the model simply copied core logic from the training data. Combined, the researchers say, those factors make AlphaCode’s performance a “big step forward.”
