San Francisco-based AI research laboratory OpenAI has added another member to its popular GPT (Generative Pre-trained Transformer) family. In a new paper, OpenAI researchers introduce GPT-f, an automated prover and proof assistant for the Metamath formalization language.
While artificial neural networks have made considerable advances in computer vision, natural language processing, robotics, and other domains, OpenAI believes they also have potential in the relatively underexplored area of reasoning tasks. The new research explores this potential by applying a transformer language model to automated theorem proving.
Automated theorem proving tends to require general and flexible reasoning, while the correctness of proofs can be checked efficiently. This makes it an appealing domain for testing the reasoning capabilities of language models and for the study of reasoning in general. The ability to verify proofs also benefits researchers, as it enables the automatic generation of new problems that can be used as training data.
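To make the idea concrete, the sketch below shows a minimal generate-then-verify loop in Python, in which candidate proofs are kept as training examples only if a verifier accepts them. This is an illustration of the general concept only, not OpenAI's GPT-f implementation or the Metamath toolchain; the functions propose_proof and verify_proof are hypothetical stand-ins.

```python
# Illustrative sketch of a generate-verify loop. All names here are
# hypothetical placeholders, not GPT-f's or Metamath's actual APIs.

import random
from typing import List, Optional, Tuple


def propose_proof(theorem: str) -> Optional[str]:
    """Stand-in for a language model sampling a candidate proof."""
    # A real system would sample proof steps from a transformer;
    # this stub fabricates a placeholder string some of the time.
    return f"proof-of({theorem})" if random.random() > 0.5 else None


def verify_proof(theorem: str, proof: str) -> bool:
    """Stand-in for a formal verifier mechanically checking a proof."""
    # A real verifier (e.g. a Metamath kernel) checks every step;
    # this stub simply accepts any non-empty candidate.
    return bool(proof)


def collect_training_data(theorems: List[str]) -> List[Tuple[str, str]]:
    """Keep only machine-verified (theorem, proof) pairs as new training examples."""
    dataset = []
    for theorem in theorems:
        candidate = propose_proof(theorem)
        if candidate is not None and verify_proof(theorem, candidate):
            dataset.append((theorem, candidate))
    return dataset


if __name__ == "__main__":
    toy_theorems = ["t1", "t2", "t3"]
    print(collect_training_data(toy_theorems))
```

Because the verifier acts as an automatic correctness filter, any proofs the model finds can be added back into the training set without manual review, which is what makes theorem proving attractive as a source of self-generated training data.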