Reservoir computing (RC) is a powerful machine learning framework designed to handle tasks involving time-based or sequential data, such as tracking patterns over time or analyzing sequences. It is widely used in areas such as finance, robotics, speech recognition, weather forecasting, natural language processing, and predicting complex nonlinear dynamical systems. What sets RC apart is its efficiency: it delivers powerful results at much lower training cost than comparable methods.

RC uses a fixed, randomly connected network layer, known as the reservoir, to map input data into a richer, higher-dimensional representation. A readout layer then analyzes this representation to find patterns and connections in the data. Unlike traditional neural networks, which require extensive training across multiple network layers, RC trains only the readout layer, typically through a simple linear regression. This drastically reduces the amount of computation needed, making RC fast and computationally efficient.
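To make that division of labor concrete, here is a minimal sketch of an echo state network, one common form of reservoir computing. The reservoir size, spectral radius, and toy sine-prediction task are illustrative assumptions, not details from any system described here; the point is that the random weights stay fixed and only the ridge-regression readout is trained.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_reservoir = 1, 200

# Fixed, randomly connected reservoir: these weights are never trained.
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # keep spectral radius below 1

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task (an assumption for illustration): predict the next value
# of a noisy sine wave.
t = np.linspace(0, 40 * np.pi, 4000)
signal = np.sin(t) + 0.01 * rng.standard_normal(t.size)
states = run_reservoir(signal[:-1])
targets = signal[1:]

# Only the readout is trained -- a single ridge regression, no backprop.
ridge = 1e-6
W_out = np.linalg.solve(
    states.T @ states + ridge * np.eye(n_reservoir),
    states.T @ targets,
)

predictions = states @ W_out
print("train MSE:", np.mean((predictions - targets) ** 2))
```

The entire "training" step is one linear solve over the collected reservoir states, which is why RC is so much cheaper than backpropagating through a deep network.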

Inspired by how the brain works, RC uses a fixed network structure but learns its outputs in an adaptable way. It is especially good at prediction tasks and can even be implemented on physical devices (called physical RC) for energy-efficient, high-performance computing. Nevertheless, can it be optimized further?

As the rate of humanity's data creation increases exponentially with the rise of AI, scientists have become interested in DNA as a way to store digital information. After all, DNA is nature's way of storing data: it encodes genetic information and determines the blueprint of every living thing on Earth.

And DNA is at least 1,000 times more compact than solid-state drives. To demonstrate just how compact, researchers have previously encoded all of Shakespeare's 154 sonnets, 52 pages of Mozart's music, and an episode of the Netflix show "Biohackers" into tiny amounts of DNA.
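The basic idea behind such encodings is simple: DNA's four bases can each represent two bits. The sketch below shows the textbook two-bits-per-base mapping; it is illustrative only, since real DNA storage systems (including the projects above) layer on error correction and constraints such as avoiding long runs of a single base.

```python
# Illustrative only: the simplest possible bits-to-nucleotides mapping,
# not the scheme used in any specific published project.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a DNA strand, two bits per base."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Recover the original bytes from the strand."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

message = b"Shall I compare thee to a summer's day?"
strand = encode(message)
assert decode(strand) == message
print(strand[:24])  # first few bases of the strand
```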

But these were research projects or media stunts. DNA data storage isn’t exactly mainstream yet, but it might be getting closer. Now you can buy what may be the first commercially available book written in DNA. Today, Asimov Press debuted an anthology of biotechnology essays and science fiction stories encoded in strands of DNA. For $60, you can get a physical copy of the book plus the nucleic acid version—a metal capsule filled with dried DNA.

Large language models surpass human experts in predicting neuroscience results, according to a study published in Nature Human Behaviour.

Scientific research is increasingly challenging due to the immense growth in published literature. Integrating noisy and voluminous findings to predict outcomes often exceeds human capacity. This investigation was motivated by the growing role of artificial intelligence in tasks such as protein folding and drug discovery, raising the question of whether LLMs could similarly enhance fields like neuroscience.

Xiaoliang Luo and colleagues developed BrainBench, a benchmark designed to test whether LLMs could predict the results of neuroscience studies more accurately than human experts. BrainBench included 200 test cases based on neuroscience research abstracts. Each test case consisted of two versions of the same abstract: one was the original, and the other had a modified result that changed the study’s conclusion but kept the rest of the abstract coherent. Participants—both LLMs and human experts—were tasked with identifying which version was correct.
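The article does not spell out how the LLMs were queried, but a common way to score such a two-alternative test with a causal language model is to compare perplexities and choose the version the model finds more plausible. The sketch below illustrates that scoring rule; the model name and the miniature test case are stand-in assumptions, not items drawn from BrainBench.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in; the study evaluated much larger LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def perplexity(text: str) -> float:
    """Exponentiated mean token-level negative log-likelihood."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy
    return float(torch.exp(loss))

def choose(version_a: str, version_b: str) -> str:
    """Pick the abstract version the model finds more plausible."""
    return "A" if perplexity(version_a) < perplexity(version_b) else "B"

# Hypothetical miniature test case, not taken from BrainBench itself.
original = "Stimulation of the region increased recall accuracy."
altered = "Stimulation of the region decreased recall accuracy."
print(choose(original, altered))
```

Under this kind of rule, a model "predicts the result" by assigning lower perplexity to the abstract whose conclusion matches what actually happened.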

Artificial intelligence (AI) once seemed like a fantastical construct of science fiction, enabling characters to deploy spacecraft to neighboring galaxies with a casual command. Humanoid AIs even served as companions to otherwise lonely characters. Now, in the very real 21st century, AI is becoming part of everyday life, with tools like chatbots available and useful for everyday tasks like answering questions, improving writing, and solving mathematical equations.

AI does, however, have the potential to revolutionize scientific research in ways that can feel like science fiction but are within reach.

At the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory, scientists are already using AI to automate experiments and discover new materials. They’re even designing an AI scientific companion that communicates in ordinary language and helps conduct experiments. Kevin Yager, the Electronic Nanomaterials Group leader at the Center for Functional Nanomaterials (CFN), has articulated an overarching vision for the role of AI in scientific research.

Snakebites affect 1.8 to 2.7 million people annually, causing around 100,000 deaths and three times as many permanent disabilities, according to the World Health Organization. Victims are predominantly in regions with fragile healthcare systems, such as Africa, Asia, and Latin America. Traditional antivenoms derived from animal plasma come with significant drawbacks: high costs, limited efficacy, and serious side effects.

The diversity of snake venoms further complicates treatment, as current antivenoms often target specific species. However, advances in toxin research and computational tools are now driving a new era in snakebite therapy.

Baker’s team, in collaboration with Timothy Patrick Jenkins from the Technical University of Denmark (DTU), harnessed AI to design proteins that bind to and neutralize three-finger toxins, among the deadliest components of cobra venom. These toxins are notorious for evading the immune system, rendering conventional treatments ineffective.