AI-powered data analysis tools have the potential to significantly improve the quality of scientific publications. A new study by Mathias Christmann, a chemistry professor at Freie Universität Berlin, has uncovered shortcomings in chemical publications.

Using a Python script developed with the help of modern AI language models, Christmann analyzed more than 3,000 papers published in Organic Letters over the past two years. The analysis revealed that only 40% of the chemical research papers contained error-free mass measurements. Notably, the AI-assisted analysis tool was built without any prior programming knowledge.
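The article does not reproduce Christmann's script, but the core check it performs can be sketched in a few lines of Python: compute the monoisotopic mass implied by a reported molecular formula and compare it with the published HRMS value. The formula parser, tolerance, and example values below are illustrative assumptions, not the published tool.

```python
import re

# Monoisotopic masses (Da) of a few elements common in organic chemistry.
MONOISOTOPIC = {
    "C": 12.0, "H": 1.007825, "N": 14.003074,
    "O": 15.994915, "S": 31.972071, "Cl": 34.968853,
}

def monoisotopic_mass(formula: str) -> float:
    """Mass of a formula such as 'C10H12N2O' from its element counts."""
    return sum(
        MONOISOTOPIC[el] * (int(n) if n else 1)
        for el, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula)
    )

def hrms_consistent(formula: str, reported_mz: float, tol: float = 0.003) -> bool:
    """Compare a reported [M+H]+ value with the calculated one.
    (Simplified: ignores the electron mass and other adduct types.)"""
    calc = monoisotopic_mass(formula) + MONOISOTOPIC["H"]
    return abs(calc - reported_mz) <= tol

# Hypothetical entry as it might be scraped from a paper's SI:
print(hrms_consistent("C10H12N2O", 177.1028))  # True -> consistent
```

Run over thousands of scraped formula/value pairs, a check of this kind is enough to flag the inconsistent mass measurements the study describes.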

“The results demonstrate how powerful AI-powered tools can be in everyday research. They not only make complex analyses accessible but also improve the reliability of scientific data,” explains Christmann.

The thesis of evolutionary biologist Stephen Jay Gould, that rerunning the tape of life would yield a radically different outcome, has sparked widespread debate ever since he proposed it, with some advocating for determinism and others supporting contingency. In his 1952 short story A Sound of Thunder, science fiction author Ray Bradbury recounted how a time traveler’s simple act of stepping on a butterfly in the age of the dinosaurs changed the course of the future. Gould made a similar point: “Alter any early event, ever so slightly and without apparent importance at the time, and evolution cascades into a radically different channel.”

Scientists have been exploring this problem through experiments designed to recreate evolution in the lab or in nature, or by comparing species that have emerged under similar conditions. Today, a new avenue has opened up: AI. In New York, a group of former researchers from Meta — the parent company of social networks Facebook, Instagram, and WhatsApp — founded EvolutionaryScale, an AI startup focused on biology. The EvolutionaryScale Model 3 (ESM3) system created by the company is a generative language model — the same kind of platform that powers ChatGPT. However, while ChatGPT generates text, ESM3 generates proteins, the fundamental building blocks of life.

ESM3 feeds on sequence, structure, and function data from existing proteins to learn the biological language of these molecules and create new ones. Its creators trained it on 771 billion tokens derived from 3.15 billion sequences, 236 million structures, and 539 million functional annotations. Training consumed more than one trillion teraflops (a measure of computational work), the most computing power ever used in biology, according to the company.
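ESM3 itself is not shown in this article, but the generative mechanism of a masked protein language model can be illustrated with a toy sketch: start from a fully masked sequence and iteratively fill in positions by sampling from the model's predicted residue distribution. The uniform placeholder "model" below stands in for ESM3's learned network, which would condition on sequence, structure, and function.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues
MASK = "_"

def toy_predict(sequence: list[str], position: int) -> dict[str, float]:
    """Placeholder for a trained model: a probability for each residue at
    `position`. Here it is uniform; a real model like ESM3 would condition
    on the surrounding sequence, structure, and function tokens."""
    return {aa: 1.0 / len(AMINO_ACIDS) for aa in AMINO_ACIDS}

def generate(length: int, seed: int = 0) -> str:
    """Iterative unmasking: repeatedly pick a masked position and sample a
    residue from the model's distribution until none remain."""
    rng = random.Random(seed)
    seq = [MASK] * length
    while MASK in seq:
        pos = rng.choice([i for i, aa in enumerate(seq) if aa == MASK])
        probs = toy_predict(seq, pos)
        seq[pos] = rng.choices(list(probs), weights=probs.values())[0]
    return "".join(seq)

print(generate(30))  # a random 30-residue "protein" under the toy model
```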

On a broader level, by pushing AI toward more human-like processing, Titans could mean AI that thinks more deeply than humans — challenging our understanding of human uniqueness and our role in an AI-augmented world.

At the heart of Titans’ design is a concerted effort to more closely emulate the functioning of the human brain. While previous models like Transformers introduced the concept of attention—allowing AI to focus on specific, relevant information—Titans takes this several steps further. The new architecture incorporates analogs to human cognitive processes, including short-term memory, long-term memory, and even the ability to “forget” less relevant information. Perhaps most intriguingly, Titans introduces a concept that’s surprisingly human: the ability to prioritize surprising or unexpected information. This mimics the human tendency to more easily remember events that violate our expectations, a feature that could lead to more nuanced and context-aware AI systems.

The key technical innovation in Titans is the introduction of a neural long-term memory module. This component learns to memorize historical context and works in tandem with the attention mechanisms that have become standard in modern AI models. The result is a system that can effectively utilize both immediate context (akin to short-term memory) and broader historical information (long-term memory) when processing data or generating responses.
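Google has not released Titans' code alongside these descriptions, so the following is only a conceptual sketch of a surprise-driven memory under stated assumptions: a small associative map whose update size scales with its prediction error, with a decay term standing in for forgetting. All class names and parameters are illustrative, not the Titans implementation.

```python
import numpy as np

class ToyLongTermMemory:
    """Toy associative memory: a linear map M stores key -> value pairs.
    'Surprise' is the prediction error; larger errors drive larger
    updates, while a small decay term plays the role of forgetting."""

    def __init__(self, dim: int, lr: float = 0.5, decay: float = 0.01):
        self.M = np.zeros((dim, dim))
        self.lr, self.decay = lr, decay

    def step(self, key: np.ndarray, value: np.ndarray) -> float:
        surprise = value - self.M @ key              # unexpected content
        self.M *= 1.0 - self.decay                   # forget a little each step
        self.M += self.lr * np.outer(surprise, key)  # learn in proportion to surprise
        return float(np.linalg.norm(surprise))

    def recall(self, key: np.ndarray) -> np.ndarray:
        return self.M @ key

rng = np.random.default_rng(0)
mem = ToyLongTermMemory(dim=8)
key = rng.normal(size=8)
key /= np.linalg.norm(key)        # unit-length key keeps updates stable
value = rng.normal(size=8)
for _ in range(100):
    s = mem.step(key, value)      # a repeated, expected input: surprise shrinks
print(f"surprise after 100 repetitions: {s:.3f}")
```

The design choice mirrors the intuition in the paragraph above: a novel key/value pair produces a large error and is written into memory strongly, while familiar inputs barely change it.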

Neural network models that are able to make decisions or store memories have long captured scientists’ imaginations. In these models, a hallmark of the computation being performed by the network is the presence of stereotyped sequences of activity, akin to one-way paths. This idea was pioneered by John Hopfield, who was notably co-awarded the 2024 Nobel Prize in Physics. Whether one-way activity paths are used in the brain, however, has been unknown.
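As a concrete illustration of such models (not taken from the study itself), a classic Hopfield network stores patterns as attractors: starting from a corrupted input, asynchronous updates can only decrease the network's energy, so activity flows one way toward the stored memory.

```python
import numpy as np

rng = np.random.default_rng(1)

# Store two random binary (+/-1) patterns with the Hebbian rule.
n = 64
patterns = rng.choice([-1, 1], size=(2, n))
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)  # no self-connections

def energy(state):
    return -0.5 * state @ W @ state

# Start from a corrupted copy of pattern 0 (about 20% of bits flipped).
state = patterns[0].copy()
flip = rng.choice(n, size=n // 5, replace=False)
state[flip] *= -1

# Asynchronous updates: the energy never increases, so activity follows
# a one-way path into the attractor, i.e. the stored memory.
for _ in range(5):
    for i in rng.permutation(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("recovered pattern 0:", np.array_equal(state, patterns[0]))
print("energy at the memory:", energy(state))
```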

A collaborative team of researchers from Carnegie Mellon University and the University of Pittsburgh designed a clever experiment to perform a causal test of this question using a brain-computer interface (BCI). Their findings provide empirical support for one-way activity paths in the brain and for the computational principles long hypothesized by neural network models.

Stereotyped sequences of neural population activity are believed to underlie numerous brain functions, including sensory perception, decision making, timing, and memory, among others. For their work, recently published in Nature Neuroscience, the group focused on the brain’s motor system, where neural population activity can be used to control a BCI.
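The study's actual decoder is not described in this summary, but the basic principle of a motor BCI can be sketched with a linear readout that maps population firing rates to cursor velocity. The neuron count, weights, and simulated spike counts below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

n_neurons = 90  # e.g. units recorded from motor cortex (illustrative)
decoder = rng.normal(scale=0.05, size=(2, n_neurons))  # maps rates -> (vx, vy)

def decode_velocity(firing_rates: np.ndarray) -> np.ndarray:
    """Linear readout: cursor velocity as a weighted sum of firing rates.
    Real BCIs calibrate such weights from data (e.g. with a Kalman filter)."""
    return decoder @ firing_rates

# Simulate a short trial: integrate the decoded velocity into a cursor path.
cursor = np.zeros(2)
for t in range(100):
    rates = rng.poisson(lam=10, size=n_neurons)  # fake spike counts per bin
    cursor += 0.01 * decode_velocity(rates)      # dt = 10 ms per bin
print("final cursor position:", cursor)
```

Because the cursor's movement is a direct function of population activity, an experimenter can change the decoder and causally test which activity paths the network can and cannot produce.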

In today’s AI news, OpenAI CEO Sam Altman is trying to calm the online hype surrounding his company. On Monday, the tech boss took to X to quell viral rumors that the company had achieved artificial general intelligence. “Twitter hype is out of control again,” he wrote. “We are not gonna deploy AGI next month, nor have we built it.”

In other advancements, Stuttgart, Germany-based Sereact has secured €25mn to advance its embodied AI software that enables robots to carry out tasks they were never trained to do. “With our technology, robots act situationally rather than following rigidly programmed sequences. They adapt to dynamic tasks in real-time, enabling an unprecedented level of autonomy,” said Ralf Gulde, CEO of Sereact (short for “sense, reason, act”).

Then, seven years and seven months ago, Google changed the world with the Transformer architecture, which lies at the heart of generative AI applications like OpenAI’s ChatGPT. Now Google has unveiled a new architecture called Titans, a direct evolution of the Transformer that takes us a step closer to AI that can think like humans.

And, the World Economic Forum Global Risks Report 2025 reveals a world teetering between technological triumph and profound risk. As a structural force, AI “has the potential to blur boundaries between technology and humanity” and to rapidly introduce novel, unpredictable challenges.

In videos, the next frontier of AI is physical AI. NVIDIA Cosmos—a platform of state-of-the-art generative world foundation models, advanced tokenizers, guardrails, and an accelerated data processing and curation pipeline—accelerates the development of physical AI systems such as robots and autonomous vehicles.

The Full Self-Driving version 13.2.2 successfully navigates challenging snowy Canadian roads with impressive performance and minimal driver intervention.

🚗13.2.2 successfully navigated snowy, slippery roads in Canada without interventions, handling obscured lane lines, vehicles, and signs even when the roadway was difficult to discern.

🌨️The system demonstrated impressive adaptive driving, moving slowly and smoothly with minimal slipping, and never requesting driver takeover despite challenging road conditions.

🛑13.2.2 effectively managed various challenging intersections, including a five-way intersection at an odd angle and a busy roadway with an obscured angle.

OpenAI says it trained a new AI model called GPT-4b micro with Retro Biosciences, a longevity science startup trying to extend the human lifespan by 10 years, according to the MIT Technology Review.

Retro, which is backed by Sam Altman, has been working with OpenAI for roughly a year on this research, according to the report. The GPT-4b micro model tries to re-engineer proteins — a specific set called the Yamanaka factors — that can turn human skin cells into young-seeming stem cells. Retro believes these proteins are a promising step toward building human organs and providing supplies of replacement cells.

The model differs slightly from Google’s Nobel prize-winning AlphaFold, which predicts the shape of proteins, but it appears to be OpenAI’s first model that is custom-built for biological research. OpenAI and Retro tell the MIT Technology Review they plan to release research on the model and its outputs.