
As the rate of humanity’s data creation increases exponentially with the rise of AI, scientists have been interested in DNA as a way to store digital information. After all, DNA is nature’s way of storing data. It encodes genetic information and determines the blueprint of every living thing on earth.

And DNA is at least 1,000 times more compact than solid-state hard drives. To demonstrate just how compact it is, researchers have previously encoded all of Shakespeare’s 154 sonnets, 52 pages of Mozart’s music, and an episode of the Netflix show “Biohackers” into tiny amounts of DNA.

Large language models surpass human experts in predicting neuroscience results, according to a study published in Nature Human Behaviour.

Scientific research is increasingly challenging due to the immense growth in published literature. Integrating noisy and voluminous findings to predict outcomes often exceeds human capacity. This investigation was motivated by the growing role of artificial intelligence in tasks such as protein folding and drug discovery, raising the question of whether LLMs could similarly enhance fields like neuroscience.

Xiaoliang Luo and colleagues developed BrainBench, a benchmark designed to test whether LLMs could predict the results of neuroscience studies more accurately than human experts. BrainBench included 200 test cases based on neuroscience research abstracts. Each test case consisted of two versions of the same abstract: one was the original, and the other had a modified result that changed the study’s conclusion but kept the rest of the abstract coherent. Participants—both LLMs and human experts—were tasked with identifying which version was correct.
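To make the setup concrete, here is a minimal sketch of how a two-choice benchmark like this could be scored with an off-the-shelf language model: the model reads both versions of an abstract and the less "surprising" one (lower perplexity) is taken as its answer. The model name and scoring details below are illustrative assumptions, not the authors' actual evaluation code.

```python
# Illustrative sketch of scoring a BrainBench-style test case.
# Assumptions: a Hugging Face causal LM ("gpt2" as a stand-in) and
# perplexity comparison as the decision rule.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the study evaluated larger LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of the text under the model (lower = less surprising)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

def pick_original(abstract_a: str, abstract_b: str) -> str:
    """Return the version the model judges more plausible.

    One abstract is the unaltered original, the other has an altered
    result; the model's choice is the one with lower perplexity.
    """
    if perplexity(abstract_a) <= perplexity(abstract_b):
        return abstract_a
    return abstract_b
```

Accuracy on the benchmark is then simply the fraction of the 200 test cases in which the chosen version matches the unaltered original, which is also how human experts' choices can be scored for comparison.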

Errors in quantum computers are an obstacle to their widespread use. But a team of scientists says that, by using an antimony atom and the Schrödinger’s cat thought experiment, they may have found a way to stop them.