A new algorithm, Evo 2, trained on roughly 128,000 genomes—9.3 trillion DNA letter pairs—spanning all of life’s domains, is the largest generative AI model for biology to date. Built by scientists at the Arc Institute, Stanford University, and Nvidia, Evo 2 can write whole chromosomes and small genomes from scratch.

It also learned how DNA mutations affect proteins, RNA, and overall health, shining light on “non-coding” regions, in particular. These mysterious sections of DNA don’t make proteins but often control gene activity and are linked to diseases.

The team has released Evo 2’s software code and model parameters to the scientific community for further exploration. Researchers can also access the tool through a user-friendly web interface. With Evo 2 as a foundation, scientists may develop more specific AI models. These could predict how mutations affect a protein’s function, how genes operate differently across cell types, or even help researchers design new genomes for synthetic biology.

A new formula that connects a material’s magnetic permeability to spin dynamics has been derived and tested 84 years after the debut of its electric counterpart.

If antiferromagnets, altermagnets, and other emerging quantum materials are to be harnessed for spintronic devices, physicists will need to better understand the spin dynamics in these materials. One possible path forward is to exploit the duality between electric and magnetic dynamics expressed by Maxwell’s equations. From this duality, one could naively expect mirror-like similarities in the behavior of electric and magnetic dipoles. However, a profound difference between the quantized lattice electric excitations—such as phonons—and spin excitations—such as paramagnetic and antiferromagnetic spin resonances and magnons—has now been unveiled in terms of their corresponding contributions to the static electric susceptibility and magnetic permeability. Viktor Rindert of Lund University in Sweden and his collaborators have derived and verified a formula that relates a material’s magnetic permeability to the frequencies of magnetic spin resonances [1].
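
The 84-year-old electric counterpart is presumably the Lyddane–Sachs–Teller relation, which ties a crystal’s static and high-frequency dielectric constants to its optical-phonon frequencies. For orientation (textbook material, not a result of the new paper), the single-mode form reads

\[
\frac{\varepsilon(0)}{\varepsilon(\infty)} = \left(\frac{\omega_{\mathrm{LO}}}{\omega_{\mathrm{TO}}}\right)^{2},
\]

where ω_LO and ω_TO are the longitudinal and transverse optical phonon frequencies. The new result plays the analogous role on the magnetic side, relating the static permeability to the frequencies of magnetic spin resonances; the “profound difference” noted above lies in how those spin resonances enter the relation compared with the phonons.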

Join cognitive scientist and AI researcher Joscha Bach for an in-depth interview on the nature of consciousness, in which he argues that the brain is hardware and consciousness its software, and that to understand our reality we must unlock the algorithms of consciousness.

So, to put it plainly: the term “AI agents” refers to a specific application of agentic AI, while “agentic” refers to the AI models, algorithms, and methods that make those agents work.

Why Is This Important?

AI agents and agentic AI are two closely related concepts that everyone needs to understand if they’re planning on using technology to make a difference in the coming years.

In the late 1960s, physicists like Charles Misner proposed that the regions surrounding singularities—points of infinite density at the centers of black holes—might exhibit chaotic behavior, with space and time undergoing erratic contractions and expansions. This concept, termed the “Mixmaster universe,” suggested that an astronaut venturing into such a black hole would experience a tumultuous mixing of their body parts, akin to the action of a kitchen mixer.
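
A standard way to picture those erratic contractions and expansions (textbook general relativity, not specific to the new work discussed below) is the Kasner vacuum solution,

\[
ds^2 = -dt^2 + t^{2p_1}\,dx^2 + t^{2p_2}\,dy^2 + t^{2p_3}\,dz^2,
\qquad p_1 + p_2 + p_3 = p_1^2 + p_2^2 + p_3^2 = 1,
\]

whose constraints force one exponent to be negative (except in the degenerate case where the exponents are 1, 0, 0), so that direction contracts while the other two expand. In the Mixmaster picture, the approach to the singularity is a chaotic sequence of such Kasner epochs in which the roles of the axes are repeatedly reshuffled.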

Einstein’s general theory of relativity, which describes the gravitational dynamics of black holes, employs complex mathematical formulations that intertwine multiple equations. Historically, researchers like Misner introduced simplifying assumptions to make these equations more tractable. However, even with these assumptions, the computational tools of the time were insufficient to fully explore the chaotic nature of these regions, leading to a decline in related research.

Recently, advancements in mathematical techniques and computational power have reignited interest in studying the chaotic environments near singularities. Physicists aim to validate the earlier approximations made by Misner and others, ensuring they accurately reflect the predictions of Einsteinian gravity. Moreover, by delving deeper into the extreme conditions near singularities, researchers hope to bridge the gap between general relativity and quantum mechanics, potentially leading to a unified theory of quantum gravity.

Understanding the intricate and chaotic space-time near black hole singularities not only challenges our current physical theories but also promises to shed light on the fundamental nature of space and time themselves.


Physicists hope that understanding the churning region near singularities might help them reconcile gravity and quantum mechanics.

Have you ever questioned the deep nature of time? While some physicists argue that time is just an illusion, dismissing it outright contradicts our lived experience. In my latest work, Temporal Mechanics: D-Theory as a Critical Upgrade to Our Understanding of the Nature of Time (2025), I explore how time is deeply rooted in the computational nature of reality and information processing by conscious systems. This paper tackles why the “now” is all we have.

In the absence of observers, the cosmic arrow of time doesn’t exist. This statement is not merely philosophical; it is a profound implication of the problem of time in physics. In standard quantum mechanics, time is an external parameter, a backdrop against which events unfold. However, in quantum gravity and the Wheeler-DeWitt equation, the problem of time emerges because there is no preferred universal time variable—only a timeless wavefunction of the universe. The flow of time, as we experience it, arises not from any fundamental law but from the interaction between observers and the informational structure of reality.
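
For context, the Wheeler-DeWitt equation mentioned above is a constraint of the schematic form

\[
\hat{H}\,\Psi = 0,
\]

where Ψ is the wavefunction of the universe and Ĥ is the Hamiltonian constraint of quantum gravity. Unlike the Schrödinger equation, it contains no time derivative at all, which is exactly the “problem of time” invoked here: nothing in the equation itself singles out a variable that flows.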

In a randomized controlled trial in individuals with persistent atrial fibrillation, an individualized ablation procedure that targeted areas with abnormal electrophysiological characteristics, as detected by an AI algorithm, improved efficacy in reducing arrhythmia recurrence at 12 months after the procedure.

This distributional approach significantly enhances performance, as observed in Atari video games and several other tasks involving multiple potential outcomes for each decision.

“They basically asked what happens if rather than just learning average rewards for certain actions, the algorithm learns the whole distribution, and they found it improved performance significantly,” explained Professor Drugowitsch.
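
To make that concrete, here is a minimal sketch of the difference (illustrative only; the bandit task, step sizes, and quantile count are hypothetical, and this is not the code from the studies described here). An expected-value learner keeps one running average per action, while a distributional learner keeps a set of quantile estimates that together approximate the whole reward distribution.

```python
import random

# Hypothetical two-armed bandit: both actions have the same average payoff,
# but very different risk profiles.
def pull(action):
    if action == 0:
        return random.gauss(1.0, 0.1)                  # safe: about 1.0 every time
    return 10.0 if random.random() < 0.1 else 0.0      # risky: usually 0, sometimes 10

ALPHA = 0.05
N_QUANTILES = 20

mean_value = [0.0, 0.0]                                # classic: one average per action
quantiles = [[0.0] * N_QUANTILES for _ in range(2)]    # distributional: quantiles per action

for _ in range(20000):
    a = random.randrange(2)
    r = pull(a)

    # Expected-value update: nudge the single estimate toward the observed reward.
    mean_value[a] += ALPHA * (r - mean_value[a])

    # Quantile-regression update: each quantile moves up or down depending on
    # whether the observed reward lies above or below it.
    for i, q in enumerate(quantiles[a]):
        tau = (i + 0.5) / N_QUANTILES
        quantiles[a][i] = q + ALPHA * (tau if r > q else tau - 1.0)

print("learned averages:", [round(v, 2) for v in mean_value])
print("risky action, learned quantile range:",
      round(quantiles[1][0], 2), "to", round(quantiles[1][-1], 2))
```

Both learners end up with similar averages, but only the distributional one captures that the risky action usually pays nothing and occasionally pays a lot; that is exactly the risk information an average alone throws away.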

In the latest study, Drugowitsch collaborated with Naoshige Uchida, a professor of molecular and cellular biology at Harvard University. The goal was to gain a better understanding of how the potential risks and rewards of a decision are weighed in the brain.

Molecular dynamics (MD) simulation serves as a crucial technique across various disciplines including biology, chemistry, and materials science [1,2,3,4]. MD simulations are typically based on interatomic potential functions that characterize the potential energy surface of the system, with atomic forces derived as the negative gradients of the potential energies. Newton’s laws of motion are then applied to simulate the dynamic trajectories of the atoms. In ab initio MD simulations [5], the energies and forces are accurately determined by solving the equations of quantum mechanics. However, the computational demands of ab initio MD limit its practicality in many scenarios. By learning from ab initio calculations, machine learning interatomic potentials (MLIPs) have been developed to achieve much more efficient MD simulations with ab initio-level accuracy [6,7,8].
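
As a minimal sketch of that loop (a toy example in reduced units, not any particular MD code): the potential energy surface here is a Lennard-Jones pair potential, forces are its negative gradient, and Newton’s equations are advanced with the velocity-Verlet integrator. An MLIP would simply replace the hand-written force routine with a learned model fitted to ab initio data.

```python
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Lennard-Jones forces: F = -dU/dr for every pair of atoms (reduced units)."""
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r2 = np.dot(rij, rij)
            inv6 = (sigma**2 / r2) ** 3
            # Pair force magnitude divided by r, from differentiating 4*eps*(inv6**2 - inv6).
            fmag = 24 * eps * (2 * inv6**2 - inv6) / r2
            forces[i] += fmag * rij
            forces[j] -= fmag * rij
    return forces

def velocity_verlet(pos, vel, mass=1.0, dt=1e-3, steps=1000):
    """Integrate Newton's equations of motion with the velocity-Verlet scheme."""
    f = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass          # half kick
        pos += dt * vel                     # drift
        f = lj_forces(pos)                  # new forces at the updated positions
        vel += 0.5 * dt * f / mass          # second half kick
    return pos, vel

# Tiny example: three atoms near their equilibrium spacing.
positions = np.array([[0.0, 0.0, 0.0], [1.12, 0.0, 0.0], [0.56, 0.97, 0.0]])
velocities = np.zeros_like(positions)
positions, velocities = velocity_verlet(positions, velocities)
print(positions)
```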

Despite these successes, a crucial challenge in implementing MLIPs is the distribution shift between training and test data. When MLIPs are used for MD simulations, the data for inference are atomic structures that are continuously generated during the simulation based on the predicted forces, so the training set should encompass a wide range of atomic structures to guarantee accurate predictions. However, in fields such as phase transitions [9,10], catalysis [11,12], and crystal growth [13,14], the configurational space that needs to be explored is highly complex. This complexity makes it difficult to sample sufficient training data and easy to obtain a potential that is not smooth enough to extrapolate to every relevant configuration. Consequently, a distribution shift between training and test data often occurs, which degrades test performance, produces unrealistic atomic structures, and ultimately causes the MD simulation to collapse [15].
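
One way to visualize that distribution shift (a deliberately naive illustration with a made-up descriptor and toy data, not a method from the cited works) is to track how far the structures generated during a simulation drift from anything seen in training:

```python
import numpy as np

def descriptor(pos):
    """Crude structural fingerprint: the sorted list of all pairwise distances."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    return np.sort(d[np.triu_indices(len(pos), k=1)])

def novelty(structure, training_structures):
    """Distance from the nearest training structure in descriptor space."""
    x = descriptor(structure)
    return min(np.linalg.norm(x - descriptor(t)) for t in training_structures)

rng = np.random.default_rng(0)
training_set = [rng.normal(scale=1.0, size=(8, 3)) for _ in range(50)]
in_dist = training_set[0] + rng.normal(scale=0.01, size=(8, 3))   # close to the training data
shifted = rng.normal(scale=3.0, size=(8, 3))                      # far outside it

print("novelty, in-distribution:", round(novelty(in_dist, training_set), 2))
print("novelty, shifted        :", round(novelty(shifted, training_set), 2))
```

When configurations produced by the simulation keep registering as novel in this sense, the MLIP is extrapolating, and that is when degraded forces, unrealistic structures, and eventual collapse become likely.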