
PAPO: Perception-Aware Policy Optimization for Multimodal Reasoning

Suppose you’re trying to solve a puzzle that includes both words and pictures — like reading a comic strip and figuring out what happens next. That’s the kind of challenge today’s AI faces in “multimodal reasoning,” where it must understand both text and images to think and respond accurately.

New advanced imaging technology enables detailed disease mapping in tissue samples

Researchers from Aarhus University—in a major international collaboration—have developed a groundbreaking method that can provide more information from the tissue samples doctors take from patients every day.

The new technique, called pathology-oriented multiplexing, or PathoPlex, can image over 100 different proteins in the same small piece of tissue under a microscope, instead of just one or two at a time, as is done now.

The technology, which has just been published in the journal Nature, combines advanced image processing with machine learning to map complex disease processes in detail.

“We Gave AI $20 Million to Rethink Science”: Nexus Super-System Set to Turbocharge US Innovation Like Never Before

IN A NUTSHELL 🚀 Nexus, a $20 million supercomputer, is set to transform U.S. scientific research with its AI power. 🔬 Designed for accessibility, Nexus democratizes high-performance computing, allowing researchers nationwide to apply for access. 🌐 Georgia Tech, in collaboration with the NCSA, is creating a shared national research infrastructure through Nexus.

What is consciousness, and could AI have it?

In the Voltaire Lecture 2025, Professor Anil Seth will set out an approach to understanding consciousness which, rather than trying to solve the mystery head-on, tries to dissolve it by building explanatory bridges from physics and biology to experience and function. In this view, conscious experiences of the world around us, and of being a ‘self’ within that world, can be understood in terms of perceptual predictions that are deeply rooted in a fundamental biological imperative – the desire to stay alive.

At this event, Professor Seth will explore how widely distributed beyond human beings consciousness may be, with a particular focus on AI. He will consider whether consciousness might depend not just on ‘information processing’, but on properties unique to living, biological organisms, before ending with an exploration of the ethical implications of an artificial intelligence that is either actually conscious – or can convincingly pretend to be.

How Distillation Makes AI Models Smaller and Cheaper

Because distillation requires access to the innards of the teacher model, it's not possible for a third party to sneakily distill knowledge from a closed-source model like OpenAI's o1, as DeepSeek was thought to have done. That said, a student model could still learn quite a bit from a teacher simply by prompting it with certain questions and using the answers to train its own model: an almost Socratic approach to distillation.
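The "innards" in question are the teacher's raw output logits: classic white-box distillation trains the student to match the teacher's temperature-softened probability distribution rather than hard labels. A minimal sketch of that loss, in the style of Hinton et al.'s formulation (function names and the temperature value here are illustrative, not taken from any particular system):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature softens the distribution."""
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max()            # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's.

    This is the 'soft label' signal that requires access to the teacher's
    logits -- exactly what a closed-source API does not expose.
    The T^2 factor keeps gradient magnitudes comparable across temperatures.
    """
    p = softmax(teacher_logits, temperature)   # teacher soft targets
    q = softmax(student_logits, temperature)   # student predictions
    return float(np.sum(p * (np.log(p) - np.log(q))) * temperature**2)
```

When the student's logits match the teacher's exactly, the loss is zero; any mismatch yields a positive penalty. The black-box, "Socratic" variant described above replaces this logit-matching loss with ordinary supervised training on the teacher's sampled text answers.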

Meanwhile, other researchers continue to find new applications. In January, the NovaSky lab at the University of California, Berkeley, showed that distillation works well for training chain-of-thought reasoning models, which use multistep “thinking” to better answer complicated questions. The lab says its fully open-source Sky-T1 model cost less than $450 to train, and it achieved similar results to a much larger open-source model. “We were genuinely surprised by how well distillation worked in this setting,” said Dacheng Li, a Berkeley doctoral student and co-student lead of the NovaSky team. “Distillation is a fundamental technique in AI.”
