Logical reasoning is still a major challenge for language models. A DeepMind study identifies one factor that strongly affects it: the order of the premises.
A study by Google’s AI division DeepMind shows that the order of the premises in a task has a significant impact on the logical reasoning performance of language models.
Models perform best when the premises are presented in the same order in which they are used in the chain of reasoning that leads to the conclusion. According to the researchers, the same effect holds for mathematical word problems. The researchers make their systematically reordered test problems available as the R-GSM benchmark for further investigation.
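To make the premise-order effect concrete, here is a minimal illustrative sketch (not DeepMind's code, and the example sentences are invented): the same deduction task is turned into two prompts that differ only in the order of the premises. The study's finding is that a model answers the forward-ordered variant more reliably, even though both prompts are logically identical.

```python
# Three premises that form a simple forward inference chain:
# Alice -> Bob -> Carol.
premises_forward = [
    "If Alice goes to the park, then Bob goes to the park.",
    "If Bob goes to the park, then Carol goes to the park.",
    "Alice goes to the park.",
]
question = "Does Carol go to the park?"

def build_prompt(premises, question):
    """Join premises and the question into a single reasoning prompt."""
    return " ".join(premises) + " " + question

# Forward order: premises appear in the order the proof uses them.
forward_prompt = build_prompt(premises_forward, question)

# Reversed order: same premises, same logical content, but the
# study found that such reorderings degrade model accuracy.
backward_prompt = build_prompt(list(reversed(premises_forward)), question)

print(forward_prompt)
print(backward_prompt)
```

Both prompts contain exactly the same information; only the presentation order differs, which is what makes the benchmark a clean test of order sensitivity.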