
Columbia researchers created an AI model that predicts gene activity in any human cell, advancing disease research and treatment. It has already uncovered mechanisms behind pediatric leukemia and may reveal hidden genome functions.


Columbia University is a private Ivy League research university in New York City that was established in 1754. This makes it the oldest institution of higher education in New York and the fifth-oldest in the United States. It is often just referred to as Columbia, but its official name is Columbia University in the City of New York.

OpenAI says no money changed hands in the collaboration. But because the work could benefit Retro—whose biggest investor is Altman—the announcement may add to questions swirling around the OpenAI CEO’s side projects.

Last year, the Wall Street Journal said Altman’s wide-ranging investments in private tech startups amount to an “opaque investment empire” that is “creating a mounting list of potential conflicts,” since some of these companies also do business with OpenAI.

In Retro’s case, simply being associated with Altman, OpenAI, and the race toward AGI could boost its profile and increase its ability to hire staff and raise funds. Betts-Lacroix did not answer questions about whether the early-stage company is currently in fundraising mode.

Tomorrow at 1PM PT / 4PM ET, we premiere a new episode of Robots In Space. This episode is about bots: the latest on Phoenix from Sanctuary AI, the impact of cognitive automation on jobs, the Economic Singularity, and our proprietary Event Horizon Indicator.


Discover how robotics and AI are reshaping our economic landscape in this eye-opening analysis. As an engineer, I break down the latest developments in humanoid robots, particularly Sanctuary AI’s breakthrough in hydraulic robotics and robot dexterity. Learn about my proprietary Event Horizon Indicator that tracks our progression toward the Economic Singularity through labor force participation and unemployment trends. From warehouse robotics to manufacturing automation, understand how the robot workforce is transforming industries and what this means for the future of work. Whether you’re interested in AI economics or concerned about tech unemployment, this video provides crucial insights into the ongoing robot revolution and its impact on our economy.

Reservoir computing (RC) is a powerful machine learning framework designed to handle tasks involving time-based or sequential data, such as tracking patterns over time or analyzing sequences. It is widely used in areas such as finance, robotics, speech recognition, weather forecasting, natural language processing, and predicting complex nonlinear dynamical systems. What sets RC apart is its efficiency: it delivers powerful results with much lower training costs compared to other methods.

RC uses a fixed, randomly connected network layer, known as the reservoir, to turn input data into a more complex representation. A readout layer then analyzes this representation to find patterns and connections in the data. Unlike traditional neural networks, which require extensive training across multiple network layers, RC only trains the readout layer, typically through a simple linear regression process. This drastically reduces the amount of computation needed, making RC fast and computationally efficient.
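The mechanism above can be sketched in a few lines of NumPy. This is a minimal, illustrative echo state network on a toy one-step-ahead prediction task; the reservoir size, spectral radius, and ridge penalty are arbitrary assumptions chosen for the demo, not values from any particular RC system. Note that only the readout weights `W_out` are fit; the reservoir weights `W_in` and `W` stay fixed and random.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: predict the next value of a sine wave from the current one.
T = 500
u = np.sin(0.1 * np.arange(T + 1))           # input sequence
X_in, y = u[:-1], u[1:]                      # one-step-ahead targets

# Fixed, randomly connected reservoir: these weights are never trained.
N = 100                                      # reservoir size (assumed)
W_in = rng.uniform(-0.5, 0.5, size=N)        # random input weights
W = rng.normal(0.0, 1.0, size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius < 1

# Drive the reservoir: each state nonlinearly mixes the input with history,
# producing the "more complex representation" the readout then uses.
states = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W_in * X_in[t] + W @ x)
    states[t] = x

# Train ONLY the linear readout, via ridge (regularized linear) regression.
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N), states.T @ y)

pred = states @ W_out
mse = np.mean((pred[100:] - y[100:]) ** 2)   # skip the initial transient
```

Because the only trained parameters are the `N` readout weights, fitting reduces to one linear solve, which is the source of the computational savings the paragraph describes.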

Inspired by how the brain works, RC uses a fixed network structure but learns the outputs in an adaptable way. It is especially good at predicting and can even be used on physical devices (called physical RC) for energy-efficient, high-performance computing. Nevertheless, can it be optimized further?

As the rate of humanity’s data creation increases exponentially with the rise of AI, scientists have been interested in DNA as a way to store digital information. After all, DNA is nature’s way of storing data. It encodes genetic information and determines the blueprint of every living thing on earth.

And DNA is at least 1,000 times more compact than solid-state hard drives. To demonstrate just how compact, researchers have previously encoded all of Shakespeare’s 154 sonnets, 52 pages of Mozart’s music, and an episode of the Netflix show “Biohackers” into tiny amounts of DNA.

But these were research projects or media stunts. DNA data storage isn’t exactly mainstream yet, but it might be getting closer. Now you can buy what may be the first commercially available book written in DNA. Today, Asimov Press debuted an anthology of biotechnology essays and science fiction stories encoded in strands of DNA. For $60, you can get a physical copy of the book plus the nucleic acid version—a metal capsule filled with dried DNA.

Large language models surpass human experts in predicting neuroscience results, according to a study published in Nature Human Behaviour.

Scientific research is increasingly challenging due to the immense growth in published literature. Integrating noisy and voluminous findings to predict outcomes often exceeds human capacity. This investigation was motivated by the growing role of artificial intelligence in tasks such as protein folding and drug discovery, raising the question of whether LLMs could similarly enhance fields like neuroscience.

Xiaoliang Luo and colleagues developed BrainBench, a benchmark designed to test whether LLMs could predict the results of neuroscience studies more accurately than human experts. BrainBench included 200 test cases based on neuroscience research abstracts. Each test case consisted of two versions of the same abstract: one was the original, and the other had a modified result that changed the study’s conclusion but kept the rest of the abstract coherent. Participants—both LLMs and human experts—were tasked with identifying which version was correct.
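The selection rule in a benchmark like this can be illustrated with a toy scorer: present both versions of an abstract to a language model and pick whichever one the model finds less surprising (lower perplexity). The sketch below is a stand-in only; it replaces the LLM with a unigram word model over a tiny hypothetical corpus, and the example sentences are invented, purely to show the comparison logic.

```python
import math
from collections import Counter

def word_perplexity(text, counts, total, vocab):
    """Per-word perplexity under a unigram model with add-one smoothing."""
    words = text.lower().split()
    logp = sum(math.log((counts[w] + 1) / (total + vocab)) for w in words)
    return math.exp(-logp / len(words))

# Hypothetical stand-in training text (a real benchmark would use an LLM
# trained on the scientific literature, not unigram counts).
corpus = ("dopamine neurons increase firing after reward "
          "prefrontal cortex activity predicts choice behavior").split()
counts = Counter(corpus)
total, vocab = len(corpus), len(set(corpus)) + 1

# Two versions of the same "abstract": original vs. altered result.
original = "dopamine neurons increase firing after reward"
altered = "dopamine neurons decrease firing after reward"

# Pick whichever version the model finds less surprising.
choice = min([original, altered],
             key=lambda t: word_perplexity(t, counts, total, vocab))
```

The same two-alternative forced choice can be scored for humans and models alike, which is what makes a head-to-head accuracy comparison possible.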

Artificial intelligence (AI) once seemed like a fantastical construct of science fiction, enabling characters to deploy spacecraft to neighboring galaxies with a casual command. Humanoid AIs even served as companions to otherwise lonely characters. Now, in the very real 21st century, AI is becoming part of everyday life, with tools like chatbots available and useful for everyday tasks like answering questions, improving writing, and solving mathematical equations.

AI does, however, have the potential to revolutionize scientific research in ways that can feel like science fiction but are within reach.

At the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory, scientists are already using AI to automate experiments and discover new materials. They’re even designing an AI scientific companion that communicates in ordinary language and helps conduct experiments. Kevin Yager, the Electronic Nanomaterials Group leader at the Center for Functional Nanomaterials (CFN), has articulated an overarching vision for the role of AI in scientific research.