
Quanta Magazine’s full list of the major computer science discoveries from 2023.


In 2023, artificial intelligence dominated popular culture — showing up in everything from internet memes to Senate hearings. Large language models such as those behind ChatGPT fueled a lot of this excitement, even as researchers still struggled to pry open the “black box” of their inner workings. Image generation systems also routinely impressed and unsettled us with their artistic abilities, yet these were explicitly founded on concepts borrowed from physics.

The year brought many other advances in computer science. Researchers made subtle but important progress on one of the oldest problems in the field, a question about the nature of hard problems referred to as “P versus NP.” In August, my colleague Ben Brubaker explored this seminal problem and the attempts of computational complexity theorists to answer the question: Why is it hard (in a precise, quantitative sense) to understand what makes hard problems hard? “It hasn’t been an easy journey — the path is littered with false turns and roadblocks, and it loops back on itself again and again,” Brubaker wrote. “Yet for meta-complexity researchers, that journey into an uncharted landscape is its own reward.”

Occam’s razor—the principle that when faced with competing explanations, we should choose the simplest that fits the facts—is not just a tool of science. Occam’s razor is science, insists a renowned molecular geneticist from the University of Surrey.
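
To make the principle concrete, here is a rough quantitative illustration (not drawn from McFadden's paper): in statistics, Occam's razor is often formalized as a penalty on model complexity, for example via the Bayesian information criterion (BIC). In the hypothetical sketch below, a straight line and a fifth-degree polynomial both fit noisy linear data, and the complexity penalty makes the simpler model win.

```python
# Illustrative sketch only: BIC as one common formalization of Occam's razor.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)  # data a straight line explains well

def bic(degree: int) -> float:
    """BIC = n*ln(RSS/n) + k*ln(n); lower is better."""
    coeffs = np.polyfit(x, y, degree)                    # least-squares polynomial fit
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    k = degree + 1                                       # number of fitted parameters
    n = x.size
    return n * np.log(rss / n) + k * np.log(n)

for d in (1, 5):
    print(f"degree {d}: BIC = {bic(d):.1f}")
# The degree-1 model typically scores lower (better): both models fit the facts,
# so the simpler explanation is preferred.
```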

In a paper published in the Annals of the New York Academy of Sciences, Professor Johnjoe McFadden argues that Occam’s razor—attributed to the Surrey-born Franciscan friar William of Occam (1285–1347)—is the only feature that differentiates science from superstition and pseudoscience.

Professor McFadden said, “What is science? The rise of issues such as climate skepticism and mysticism reveals significant levels of distrust or misunderstanding of science among the general public. The ongoing COVID inquiry also highlights how scientific ignorance extends into the heart of government. Part of the problem is that most people, even most scientists, have no clear idea of what science is actually about.”

A research team has revealed that ultrashort laser pulses can magnetize iron alloys, a discovery with significant potential for applications in magnetic sensor technology, data storage, and spintronics.

To magnetize an iron nail, one simply has to stroke its surface several times with a bar magnet. Yet there is a much more unusual method: A team led by the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) discovered some time ago that a certain iron alloy can be magnetized with ultrashort laser pulses. The researchers have now teamed up with the Laserinstitut Hochschule Mittweida (LHM) to investigate this process further. They discovered that the phenomenon also occurs in a different class of materials – which significantly broadens the range of potential applications. The working group presents its findings in the scientific journal Advanced Functional Materials.

A team of computer scientists at Google’s DeepMind project in the U.K., working with a colleague from the University of Wisconsin-Madison and another from Université de Lyon, has developed a computer program that combines a pretrained large language model (LLM) with an automated “evaluator” to produce solutions to problems in the form of computer code.

In their paper published in the journal Nature, the group describes their ideas, how they were implemented and the types of output produced by the new system.

Researchers throughout the scientific community have taken note of what people are doing with LLMs such as ChatGPT, and it has occurred to many of them that LLMs might be used to help speed up the process of scientific discovery. But they have also noted that for that to happen, a method is needed to prevent confabulations, answers that seem reasonable but are wrong: the output must be verifiable. To address this problem, the team working in the U.K. used what they call an automated evaluator to assess the answers given by the LLM.
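
The core idea lends itself to a short illustration. The sketch below is a minimal, hypothetical version of that generate-and-evaluate loop, not DeepMind's actual system: the LLM call is stubbed out with canned candidate programs, and the evaluator simply runs each candidate against known test cases, so only verifiably correct code survives.

```python
# Hypothetical sketch of an LLM-plus-evaluator loop; names and task are illustrative.
import random
from typing import Callable

# Verifiable task: candidates must sort a list; the evaluator checks these test cases.
TEST_CASES = [([3, 1, 2], [1, 2, 3]), ([5, -1], [-1, 5]), ([], [])]

def evaluate(candidate_src: str) -> float:
    """Run the candidate in an isolated namespace and score it on the test cases."""
    namespace: dict = {}
    try:
        exec(candidate_src, namespace)          # define the proposed function
        fn: Callable = namespace["solve"]
        passed = sum(1 for inp, want in TEST_CASES if fn(list(inp)) == want)
        return passed / len(TEST_CASES)
    except Exception:
        return 0.0                              # broken or wrong code scores zero

def llm_propose(prompt: str) -> str:
    """Stand-in for the pretrained LLM; returns a candidate program as text."""
    # A real system would call a model seeded with the best programs found so far.
    variants = [
        "def solve(xs):\n    return sorted(xs)",
        "def solve(xs):\n    xs.reverse()\n    return xs",
        "def solve(xs):\n    return xs",
    ]
    return random.choice(variants)

def search(rounds: int = 10) -> tuple[str, float]:
    """Keep only candidates that the automated evaluator verifies as improvements."""
    best_src, best_score = "", -1.0
    for _ in range(rounds):
        prompt = f"Improve on this program:\n{best_src}"
        candidate = llm_propose(prompt)
        score = evaluate(candidate)             # verification step: no unchecked output
        if score > best_score:
            best_src, best_score = candidate, score
    return best_src, best_score

if __name__ == "__main__":
    src, score = search()
    print(f"best score = {score:.2f}\n{src}")
```

In this toy version the evaluator is a simple test harness, but the same structure applies whenever a proposed solution can be scored automatically: the language model supplies plausible code, and only output that passes verification is kept.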