
A team of computer scientists at Google DeepMind in the U.K., working with a colleague from the University of Wisconsin-Madison and another from the Université de Lyon, has developed a computer program that combines a pretrained large language model (LLM) with an automated “evaluator” to produce solutions to problems in the form of computer code.

In their paper published in the journal Nature, the group describes their ideas, how they were implemented and the types of output produced by the new system.

Researchers throughout the scientific community have taken note of what people are doing with LLMs such as ChatGPT, and it has occurred to many of them that LLMs might be used to help speed up the process of scientific discovery. But they have also noted that for that to happen, a method is needed to prevent confabulations: answers that seem reasonable but are wrong. The output must be verifiable. To address this problem, the team working in the U.K. used what they call an automated evaluator to assess the answers given by an LLM.
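The generate-and-evaluate loop described above can be sketched in a few lines. This is a minimal illustration, not DeepMind's code: the `mutate` stub stands in for the pretrained LLM (which in the real system proposes changes to actual source code), and the toy `evaluate` function plays the role of the automated evaluator, keeping only candidates whose scores verifiably improve.

```python
import random

def evaluate(program):
    """Automated evaluator: score a candidate 'program' (here, just a pair
    of coefficients) by how well it reproduces a target sequence.
    Higher is better; the score is objectively verifiable."""
    target = [1, 4, 9, 16, 25]  # squares we want the program to reproduce
    preds = [program[0] * x * x + program[1] for x in range(1, 6)]
    return -sum((p - t) ** 2 for p, t in zip(preds, target))

def mutate(program):
    """Stand-in for the LLM: propose a modified candidate. In FunSearch
    this step is a pretrained LLM rewriting real code."""
    i = random.randrange(len(program))
    out = list(program)
    out[i] += random.choice([-1, 1])
    return out

def search(steps=2000, seed=0):
    """Keep proposing candidates; accept only verified improvements."""
    random.seed(seed)
    best = [0, 0]
    best_score = evaluate(best)
    for _ in range(steps):
        cand = mutate(best)
        score = evaluate(cand)
        if score > best_score:  # the evaluator, not the 'LLM', decides
            best, best_score = cand, score
    return best, best_score

print(search())
```

The key design point mirrors the article: the generator may confabulate freely, because nothing it proposes is kept unless the evaluator can verify that it scores better.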

ICYMI: DeepSouth uses a neuromorphic computing system that mimics biological processes, using hardware to efficiently emulate large networks of spiking neurons at 228 trillion synaptic operations per second – rivalling the estimated rate of operations in the human brain.


Australian researchers are putting together a supercomputer designed to emulate the world’s most efficient learning machine – a neuromorphic monster capable of the same estimated 228 trillion synaptic operations per second that human brains handle.
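The “spiking neurons” such a machine emulates can be sketched as leaky integrate-and-fire units: a membrane potential accumulates weighted inputs (each one a “synaptic operation”), leaks back toward rest, and emits a spike when it crosses a threshold. A toy sketch follows; the parameter values are illustrative and are not DeepSouth's.

```python
def lif_run(inputs, threshold=1.0, leak=0.9, weight=0.3):
    """Leaky integrate-and-fire neuron: each input adds weight * input to
    the membrane potential v, which decays by `leak` every time step;
    crossing `threshold` emits a spike (1) and resets v to zero."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = leak * v + weight * x  # one 'synaptic operation' per input
        if v >= threshold:
            spikes.append(1)
            v = 0.0                # reset after spiking
        else:
            spikes.append(0)
    return spikes

# Constant drive: the neuron charges up, fires, resets, and repeats.
print(lif_run([1] * 10))
```

The neuron's state (its membrane potential) is both its memory and its computation, which is exactly the property neuromorphic hardware exploits for efficiency.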

As the age of AI dawns upon us, it’s clear that this wild technological leap is one of the most significant in the planet’s history, and will very soon be deeply embedded in every part of our lives. But it all relies on absolutely gargantuan amounts of computing power. Indeed, on current trends, the AI servers NVIDIA sells alone will likely be consuming more energy annually than many small countries. In a world desperately trying to decarbonize, that kind of energy load is a massive drag.

But as often happens, nature has already solved this problem. Our own necktop computers are still the state of the art, capable of learning super quickly from small amounts of messy, noisy data, or processing the equivalent of a billion billion mathematical operations every second – while consuming a paltry 20 watts of energy.
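The efficiency gap above is easy to make concrete: a billion billion (10^18) operations per second at roughly 20 watts works out to 5 × 10^16 operations per joule. A quick check of the arithmetic:

```python
ops_per_second = 1e18   # "a billion billion" operations per second
power_watts = 20.0      # approximate power draw of the human brain
ops_per_joule = ops_per_second / power_watts
print(f"{ops_per_joule:.1e} operations per joule")
```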

Artificial intelligence researchers claim to have made the world’s first genuine scientific discovery using a large language model (LLM), the technology that underpins ChatGPT and similar programs – a major breakthrough.

The discovery was made by Google DeepMind, an AI research laboratory where scientists are investigating whether LLMs can do more than just repackage information learned in training and actually generate new insights.

It turns out that they can, and the implications are potentially huge. DeepMind said in a blog post that its FunSearch, a method to search for new solutions in mathematics and computer science, made “the first discoveries in open problems in mathematical sciences using LLMs.”

No one knows who might get there first. The United States and China are considered the leaders in the field; many experts believe America still holds an edge.

As the race to master quantum computing continues, a scramble is on to protect critical data. Washington and its allies are working on new encryption standards known as post-quantum cryptography – essentially codes that are much harder to crack, even for a quantum computer. Beijing is trying to pioneer quantum communications networks, a technology theoretically impossible to hack, according to researchers. The scientist spearheading Beijing’s efforts has become a minor celebrity in China.

Quantum computing is radically different. Conventional computers process information as bits – either 1 or 0, one number at a time. Quantum computers process information in quantum bits, or “qubits,” which can represent 0 and 1 simultaneously in what is called a superposition – though physicists say this is only an approximate way of describing a complex mathematical concept.
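The “complex mathematical concept” being approximated is superposition. A qubit's state is a unit vector over the two basis states:

```latex
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1,
```

where measuring the qubit yields 0 with probability $|\alpha|^2$ and 1 with probability $|\beta|^2$. So a qubit is not “a number between 0 and 1”; it is a weighted combination of both outcomes that collapses to one of them when measured.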

The teacher shortage crisis is a major concern, casting a shadow on educational quality across the globe. In this academic climate, the rise of AI in the classroom sparks both hope and skepticism. Alpha school is leading the way, devoid of traditional teachers and reliant on its AI-powered curriculum and “guide” system. This innovative approach offers a glimpse of a promising future where technology and human ingenuity merge to redefine education.

AI has become a game-changer in education by customizing learning experiences according to students’ individual learning styles and paces. Alpha’s app-based tutoring system is a prime example of this. It is personalized for each student’s strengths and weaknesses, a significant departure from the traditional “one-size-fits-all” classroom approach. For instance, consider a child who struggles with math concepts. AI can modify the exercises and explanations to suit their learning style, enabling them to understand the material better.

Moreover, this AI-driven education system offers instant and detailed feedback, which may be lacking in some schools. Such immediate response fosters a deeper understanding and encourages a more engaged learning process. This level of individualized attention is a powerful tool for enhancing knowledge and engagement.

DeepMind’s FunSearch discovers new mathematical knowledge and algorithms.


Google DeepMind has made headway on an age-old mathematical mystery using a method called FunSearch.

The problem FunSearch tackled is the famous cap set problem in pure mathematics, which has stumped even the brightest human mathematicians: the system discovered larger cap sets than any previously known in certain dimensions.
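For context, a cap set is a collection of points in n-dimensional space over {0, 1, 2} in which no three distinct points sum to the zero vector modulo 3 (equivalently, no three points lie on a line). Verifying a candidate is mechanical, which is what makes the problem a good fit for an automated evaluator. A minimal checker sketch (illustrative only, not DeepMind's code):

```python
from itertools import combinations

def is_cap_set(points):
    """Return True if no three distinct points sum to the zero vector
    mod 3, i.e. no three points form a line in Z_3^n."""
    for a, b, c in combinations(points, 3):
        if all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c)):
            return False
    return True

# A size-4 cap set in dimension 2 (the maximum possible there):
print(is_cap_set([(0, 0), (0, 1), (1, 0), (1, 1)]))  # True
# Three collinear points form a line, so this is not a cap set:
print(is_cap_set([(0, 0), (1, 1), (2, 2)]))          # False
```

The open question is how large such a set can be as the dimension grows; FunSearch's contribution was finding bigger cap sets than previously known constructions, not a full resolution of the problem.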

For the first time ever, researchers show how a large language model can help discover novel solutions to long-standing problems in math and computer science.


The card game Set has long inspired mathematicians to create interesting problems.

Now, a technique based on large language models (LLMs) is showing that artificial intelligence (AI) can help mathematicians to generate new solutions.

The AI system, called FunSearch, made progress on Set-inspired problems in combinatorics, a field of mathematics that studies how to count the possible arrangements of sets containing finitely many objects. But its inventors say that the method, described in Nature on 14 December, could be applied to a variety of questions in maths and computer science.

There is no computer even remotely as powerful and complex as the human brain. The lumps of tissue ensconced in our skulls can process information at quantities and speeds that computing technology can barely touch.

Key to the brain’s success is the neuron’s efficiency in serving as both a processor and memory device, in contrast to the physically separated units in most modern computing devices.

There have been many attempts to make computing more brain-like, but a new effort takes it a step further – by integrating real, living human brain tissue with electronics.