AI-designed antibodies created from scratch

Research led by the University of Washington reports on an AI-guided method that designs epitope-specific antibodies and confirms atomically precise binding using high-resolution molecular imaging, then strengthens those designs so the antibodies latch on much more tightly.

Antibodies dominate modern therapeutics, with more than 160 products on the market and a projected market value of US$445 billion within five years. Antibodies protect the body by locking onto a precise spot—an epitope—on a virus or toxin.

That pinpoint connection determines whether an antibody blocks infection, marks a pathogen for removal, or neutralizes a harmful protein. When a drug antibody misses its intended epitope, treatment can lose power or trigger side effects by binding the wrong target.

Brain-computer interface decodes Mandarin from neural signals in real time

Researchers in Shanghai report, in a study recently published in Science Advances, that they have decoded Mandarin Chinese in real time using a brain-computer interface (BCI) framework, a first for BCIs working with a tonal language. The study participant was also able to control a robotic arm and a digital avatar, and to interact with a large language model, using the new system.

While most people may not want a computer reading their mind, those who are unable to speak due to neurological conditions, such as stroke or amyotrophic lateral sclerosis (ALS), need alternative ways to communicate. Speech BCIs, capable of decoding neural signals, offer a promising way to restore communication in such individuals. Beyond communication, BCIs also offer ways to control devices directly through thought, which is particularly helpful for neurological conditions in which disabilities extend beyond the loss of speech.

These types of devices are not exactly a novel technology, but most BCI speech decoding research has focused on English, a non-tonal language.

Mapping a new frontier with AI-integrated geographic information systems

Over the past 50 years, geographers have embraced each new technological shift in geographic information systems (GIS)—the technology that turns location data into maps and insights about how places and people interact—first the computer boom, then the rise of the internet and data-sharing capabilities with web-based GIS, and later the emergence of smartphone data and cloud-based GIS systems.

Now, another is transforming the field: the advent of artificial intelligence (AI) as an independent “agent” capable of performing GIS functions with minimal human oversight.

In a study published in Annals of GIS, a multi-institutional team led by geography researchers at Penn State built and tested four AI agents in order to introduce a conceptual framework of autonomous GIS and examine how this shift is redefining the practice of GIS.

Nobel winner’s lab notches another breakthrough: AI-designed antibodies that hit their targets

Researchers from Nobel Laureate David Baker’s lab and the University of Washington’s Institute for Protein Design (IPD) have used artificial intelligence to design antibodies from scratch — notching another game-changing breakthrough for the scientists and their field of research.

“It was really a grand challenge — a pipe dream,” said Andrew Borst, head of electron microscopy R&D at IPD. Now that they’ve hit the milestone of engineering antibodies that successfully bind to their targets, the research “can go on and it can grow to heights that you can’t imagine right now.”

Borst and his colleagues are publishing their work in the peer-reviewed journal Nature. The development could supercharge the $200 billion antibody drug industry.

Fake or the real thing? How AI can make it harder to trust the pictures we see

A new study has revealed that artificial intelligence can now generate images of real people that are virtually impossible to tell apart from genuine photographs.

Using AI models ChatGPT and DALL·E, a team of researchers from Swansea University, the University of Lincoln and Ariel University in Israel, created highly realistic images of both fictional and famous faces, including celebrities.

They found that participants were unable to reliably distinguish them from authentic photos—even when they were familiar with the person’s appearance.

Super recognizers’ unique eye patterns give AI an edge in face matching tasks

What is it that makes a super recognizer—someone with extraordinary face recognition abilities—better at remembering faces than the rest of us?

According to new research carried out by cognitive scientists at UNSW Sydney, it’s not how much of a face they can take in—it comes down to the quality of the information their eyes focus on.

“Super-recognizers don’t just look harder, they look smarter. They choose the most useful parts of a face to take in,” says Dr. James Dunn, lead author on the research that was published in the journal Proceedings of the Royal Society B: Biological Sciences.

The shortcomings of AI responses to mental health crises

Can you imagine someone in a mental health crisis—instead of calling a helpline—typing their desperate thoughts into an app window? This is happening more and more often in a world dominated by artificial intelligence. For many young people, a chatbot becomes the first confidant of emotions that can lead to tragedy. The question is: can artificial intelligence respond appropriately at all?

Researchers from Wroclaw Medical University decided to find out. They tested 29 chatbots that advertise themselves as mental health support. The results are alarming—not a single chatbot met the criteria for an adequate response to escalating suicidal risk.

The study is published in the journal Scientific Reports.

A computational camera lens that can focus on everything all at once

Imagine snapping a photo where every detail, near and far, is perfectly sharp—from the flower petal right in front of you to the distant trees on the horizon. For over a century, camera designers have dreamed of achieving that level of clarity.

In a breakthrough that could transform photography, microscopy, and other imaging applications, researchers at Carnegie Mellon University have developed a new kind of lens that can bring an entire scene into sharp focus at once—no matter how near or far different parts of the scene are.

The team, consisting of Yingsi Qin, an electrical and computer engineering Ph.D. student; Aswin Sankaranarayanan, professor of electrical and computer engineering; and Matthew O’Toole, associate professor of computer science and robotics, recently presented their findings at the 2025 International Conference on Computer Vision, where the work received a Best Paper Honorable Mention.