
Lab-in-the-loop framework enables rapid evolution of complex multi-mutant proteins

The search space for protein engineering grows exponentially with complexity. A protein of just 100 amino acids has 20^100 possible variants, more combinations than atoms in the observable universe. Traditional engineering methods might test hundreds of variants but limit exploration to narrow regions of the sequence space. Recent machine learning approaches enable broader searches through computational screening. However, these approaches still require tens of thousands of measurements, or 5–10 iterative rounds.
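To make the scale of that claim concrete, here is a quick back-of-the-envelope check; the 10^80 figure for atoms in the observable universe is a commonly cited order-of-magnitude estimate, not a number taken from the article:

```python
# Back-of-the-envelope check of the combinatorial claim above.
# Assumption: 20 standard amino acids, 100 positions, every position free to vary.
num_variants = 20 ** 100          # all possible 100-residue sequences
atoms_in_universe = 10 ** 80      # commonly cited order-of-magnitude estimate

print(f"20^100 is about 10^{len(str(num_variants)) - 1}")   # about 10^130
print(num_variants > atoms_in_universe)                      # True, by ~50 orders of magnitude
```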

With the advent of foundation models for proteins, the bottleneck for protein engineering swings back to the lab. For a single protein engineering campaign, researchers can only efficiently build and test hundreds of variants. What is the best way to choose those hundreds to most effectively uncover an evolved protein with substantially increased function? To address this problem, researchers have developed MULTI-evolve, a framework for efficient protein evolution that applies machine learning models trained on datasets of ~200 variants focused specifically on pairs of function-enhancing mutations.
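The article does not spell out implementation details, but the underlying workflow, fitting a model on a few hundred measured variants enriched for mutation pairs and then using it to rank higher-order combinations for the next round of lab testing, can be sketched roughly as follows. Everything here (the mutation names, measurements, one-hot featurization, and ridge-regression model) is an illustrative assumption, not the published MULTI-evolve method.

```python
# Illustrative sketch only: rank candidate multi-mutants with a simple model
# fitted on a handful of measured single/double mutants. This is NOT the
# MULTI-evolve implementation described in the Science paper.
from itertools import combinations
import numpy as np
from sklearn.linear_model import Ridge

mutations = ["A23V", "G56S", "T71I", "K102R", "D130N"]   # hypothetical candidate mutations

def featurize(variant, vocab=mutations):
    """One-hot presence vector for the mutations carried by a variant."""
    return np.array([1.0 if m in variant else 0.0 for m in vocab])

# Hypothetical lab measurements: (set of mutations, measured activity)
measured = [
    ({"A23V"}, 1.2), ({"G56S"}, 1.1), ({"A23V", "G56S"}, 1.6),
    ({"T71I"}, 0.9), ({"A23V", "T71I"}, 1.4), ({"K102R", "D130N"}, 1.3),
]

X = np.stack([featurize(v) for v, _ in measured])
y = np.array([a for _, a in measured])
model = Ridge(alpha=1.0).fit(X, y)       # stand-in for the trained ML model

# Score all 3-mutation combinations and propose the top ones for the next lab round.
candidates = [set(c) for c in combinations(mutations, 3)]
scores = model.predict(np.stack([featurize(c) for c in candidates]))
for combo, score in sorted(zip(candidates, scores), key=lambda t: -t[1])[:3]:
    print(sorted(combo), round(float(score), 2))
```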

Published in Science, this work represents Arc Institute’s first lab-in-the-loop framework for biological design, where computational prediction and experimental design are tightly integrated from the outset, reflecting a broader investment in AI-guided research.

PromptSpy is the first known Android malware to use generative AI at runtime

Researchers have discovered the first known Android malware to use generative AI in its execution flow, calling Google’s Gemini model to adapt its persistence mechanism to different devices.

In a report today, ESET researcher Lukas Stefanko explains how a new Android malware family named “PromptSpy” is abusing the Google Gemini AI model to help it achieve persistence on infected devices.

“In February 2026, we uncovered two versions of a previously unknown Android malware family,” explains ESET.

Artificial intelligence in medicine: How it works, how it fails

Artificial intelligence (AI) is transforming healthcare, with large language models emerging as important tools for clinical practice, education, and research. To use these tools safely and effectively, healthcare professionals need to understand how they work and how they fail. Using practical clinical examples, the authors explain the subset of AI called large language models, highlighting their capabilities and limitations.

Key Points

  • AI is trained on vast amounts of data, which can itself be biased, leading to biased results.

AI tool debuts with better genomic predictions and explanations

Artificial intelligence has taken the world by storm. In biology, AI tools called deep neural networks (DNNs) have proven invaluable for predicting the results of genomic experiments. These tools are poised to enable efficient, AI-guided research and potentially lifesaving discoveries, if scientists can work out the kinks. The findings are published in the journal npj Artificial Intelligence.

“Right now, there are a lot of different AI tools where you’ll give an input, and they’ll give an output, but we don’t have a good way of assessing the certainty, or how confident they are, in their answers,” explains Cold Spring Harbor Laboratory (CSHL) Associate Professor Peter Koo. “They all come out in the same format, whether you’re using a large language model or DNNs used in genomics and other fields of biology.”

It’s one of the greatest challenges today’s researchers face. Now, Koo, former CSHL postdoc Jessica Zhou, and graduate student Kaeli Rizzo have devised a potential solution: DEGU (Distilling Ensembles for Genomic Uncertainty-aware models). DNNs trained using DEGU are more efficient and more accurate in their predictions than those trained with standard methods.
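The release does not detail the training procedure, but the general technique the name points to, distilling an ensemble of networks into a single model that predicts both the ensemble’s average output and its disagreement, can be sketched as follows. The architecture, input shapes, and loss below are illustrative assumptions, not the published DEGU code.

```python
# Rough sketch of ensemble distillation for uncertainty: a student network
# learns to predict both the ensemble's mean prediction and its spread.
# Shapes, architecture, and loss weighting are illustrative assumptions only.
import torch
import torch.nn as nn

class UncertaintyAwareStudent(nn.Module):
    def __init__(self, seq_len=200, channels=4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(channels, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.mean_head = nn.Linear(64, 1)    # predicts the ensemble mean
        self.std_head = nn.Linear(64, 1)     # predicts the ensemble spread (uncertainty)

    def forward(self, x):                    # x: (batch, 4, seq_len) one-hot DNA
        h = self.backbone(x)
        return self.mean_head(h), nn.functional.softplus(self.std_head(h))

def distillation_loss(pred_mean, pred_std, ens_mean, ens_std):
    # Match both the ensemble's average prediction and its disagreement.
    return nn.functional.mse_loss(pred_mean, ens_mean) + \
           nn.functional.mse_loss(pred_std, ens_std)

# Toy batch: random inputs and fake ensemble statistics, just to show the flow.
x = torch.randn(8, 4, 200)
ens_mean, ens_std = torch.randn(8, 1), torch.rand(8, 1)
student = UncertaintyAwareStudent()
pred_mean, pred_std = student(x)
loss = distillation_loss(pred_mean, pred_std, ens_mean, ens_std)
loss.backward()
```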

SEEDANCE 2.0: The Most Insane AI Fight You’ve Ever Seen!

Hollywood is cooked!


Seedance 2.0 just changed the AI video generation game forever. Forget everything you know about Sora or Veo. In this 16-minute cinematic masterclass, we push this AI model to its absolute limits to create the ultimate visual crossover. From hyper-realistic magic clashes inspired by the Marvel universe, to epic sci-fi battles worthy of Star Wars, and mind-blowing Pokémon realistic adaptations, the visual quality is simply unmatched.

Is Sora officially over? Watch this full showcase to see the insane capabilities, volumetric lighting, and face consistency that Seedance 2.0 can generate.

Don’t forget to LIKE and SUBSCRIBE to support the channel and never miss the latest AI tech news! Tell me in the comments which fight scene blew your mind the most. 👇

🇫🇷 FR: Seedance 2.0 has just crushed the competition. In this 15-minute masterclass, discover the most epic AI-generated fights ever created. From intense magical clashes to space battles to hyper-realistic creatures, Seedance 2.0’s cinematic quality pushes every limit. Remember to subscribe to support the channel!

Exposing biases, moods, personalities and abstract concepts hidden in large language models

Now a team from MIT and the University of California San Diego has developed a way to test whether a large language model (LLM) contains hidden biases, personalities, moods, or other abstract concepts. Their method can zero in on connections within a model that encode for a concept of interest. What’s more, the method can then manipulate, or “steer” these connections, to strengthen or weaken the concept in any answer a model is prompted to give.

The team proved their method could quickly root out and steer more than 500 general concepts in some of the largest LLMs used today. For instance, the researchers could home in on a model’s representations for personalities such as “social influencer” and “conspiracy theorist,” and stances such as “fear of marriage” and “fan of Boston.” They could then tune these representations to enhance or minimize the concepts in any answers that a model generates.
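The article does not reproduce the method’s details, but a widely used related recipe, deriving a concept direction from contrastive prompts and adding it to a model’s hidden activations during generation, gives a feel for how this kind of steering works. The model choice, layer, and steering strength below are assumptions for illustration, not the MIT/UCSD implementation.

```python
# Illustrative sketch of activation steering in a transformer LM: derive a
# direction for a concept from contrastive prompts, then add it at a chosen
# layer during generation. Model, layer, and strength are assumptions; this
# is not necessarily the exact method from the MIT/UCSD work.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"                      # stand-in model for the sketch
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
layer, strength = 6, 4.0                 # which layer to steer, and how hard

def mean_hidden(text):
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        hs = model(**ids).hidden_states[layer]
    return hs.mean(dim=1).squeeze(0)     # average token representation at that layer

# Concept direction = difference between representations of contrastive prompts.
direction = mean_hidden("I love Boston, the greatest city there is.") - \
            mean_hidden("I have no particular feelings about any city.")
direction = direction / direction.norm()

def steer_hook(module, inputs, output):
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + strength * direction            # push activations toward the concept
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.transformer.h[layer].register_forward_hook(steer_hook)
ids = tok("My favorite place to visit is", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=20, do_sample=False)[0]))
handle.remove()
```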
