
DeepChopper model improves RNA sequencing research by mitigating chimera artifacts

Scientists in the laboratory of Rendong Yang, Ph.D., associate professor of Urology, have developed a new large language model that can interpret transcriptomic data in cancer cell lines more accurately than conventional approaches, as detailed in a recent study published in Nature Communications.

Long-read RNA sequencing technologies have transformed transcriptomics research by detecting complex RNA splicing and gene fusion events that are often missed by conventional short-read RNA-sequencing methods.

Among these technologies is nanopore direct RNA sequencing (dRNA-seq), which can sequence full-length RNA molecules directly and produce more accurate analyses of RNA biology. However, previous work suggests this approach may generate chimera artifacts—in which multiple RNA sequences incorrectly join to form a single RNA sequence—limiting the reliability and utility of the data.

Quantum dots reveal entropy production, a key measure of nanoscale energy dissipation

In order to build the computers and devices of tomorrow, we have to understand how they use energy today. That’s harder than it sounds. Memory storage, information processing, and energy use in these technologies involve constant energy flow—systems never settle into thermodynamic balance. To complicate things further, one of the most precise ways to study these processes starts at the smallest scale: the quantum domain.

New Stanford research published in Nature Physics combines theory, experimentation, and machine learning to quantify energy costs during a non-equilibrium process with ultrahigh sensitivity. Researchers used extremely small nanocrystals called quantum dots, which have unique light-emitting properties that arise from quantum effects at the nanoscale.

They measured the entropy production of quantum dots—a quantity that describes how reversible a microscopic process is, and encodes information about memory, information loss, and energy costs. Such measurements can determine the ultimate speed limits for a device or how efficient it can be.

Ordered ‘supercrystal’ could make lasers faster, smaller and more efficient

An advance from Monash University could pave the way for faster, smaller, and more energy-efficient lasers and other light-based technologies. Engineers have developed a new type of perovskite material arranged into an ordered “supercrystal.” In this structure, tiny packets of energy called excitons work together rather than individually, allowing the material to amplify light far more efficiently. The findings, published in Laser & Photonics Reviews, could have applications in communications, sensing, and computing, improving the performance of devices that rely on light, from sensors in autonomous vehicles to medical imaging and electronics.

Corresponding author Professor Jacek Jasieniak at Monash Materials Science and Engineering highlighted the potential for faster, more energy-efficient optical devices. “What’s exciting here is that we’re not changing the material itself, but how it’s organized. By assembling nanocrystals into an ordered supercrystal, the excitations created by light can cooperate rather than compete, which allows light to be amplified much more efficiently,” Professor Jasieniak said.

Dr. Manoj Sharma, who led the experimental work at Monash, said their approach revealed new possibilities in nanocrystal assemblies. “By assembling nanocrystals into a highly ordered supercrystal, we show that optical gain is no longer limited by single-particle biexcitons, which are inefficient and prone to energy losses, but instead arises from collective excitonic interactions across the whole structure,” Dr. Sharma said.

Can medical AI lie? Large study maps how LLMs handle health misinformation

Medical artificial intelligence (AI) is often described as a way to make patient care safer by helping clinicians manage information. A new study by the Icahn School of Medicine at Mount Sinai and collaborators confronts a critical vulnerability: when a medical lie enters the system, can AI pass it on as if it were true?

Analyzing more than a million prompts across nine leading language models, the researchers found that these systems can repeat false medical claims when they appear in realistic hospital notes or social-media health discussions.

The findings, published in The Lancet Digital Health, suggest that current safeguards do not reliably distinguish fact from fabrication once a claim is wrapped in familiar clinical or social-media language. The paper is titled “Mapping LLM Susceptibility to Medical Misinformation Across Clinical Notes and Social Media.”

How AI Could Threaten Human Survival: Insights from a Professor — Hugo De Garis


Expert in robotics & artificial intelligence.

“I’m known for predicting that later this century there will be a terrible war, killing billions of people over the issue of species dominance.”

From a philosophical perspective, the concept of AI ending humanity challenges our assumptions about evolution, survival, and the nature of progress. Throughout history, humans have viewed themselves as sitting at the top of the food chain, but advanced AI raises the possibility that we are merely a stepping stone.


When Models Manipulate Manifolds: The Geometry of a Counting Task, by Wes Gurnee and six other authors

When you look at text, you subconsciously track how much space remains on each line. If you’re writing “Happy Birthday” and “Birthday” won’t fit, your brain automatically moves it to the next line. You don’t calculate this—you *see* it. But AI models don’t have eyes. They receive only sequences of numbers (tokens) and must somehow develop a sense of visual space from scratch.

Inside your brain, “place cells” help you navigate physical space by firing when you’re in specific locations. Remarkably, Claude develops something strikingly similar. The researchers found that the model represents character counts using low-dimensional curved manifolds—mathematical shapes that are discretized by sparse feature families, much like how biological place cells divide space into discrete firing zones.
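
To make the place-cell analogy concrete, here is a minimal Python sketch of how a continuous character count could be discretized by a small family of sparse features, each tuned to a preferred count. The centers, widths, and Gaussian tuning below are illustrative assumptions, not values taken from the paper.

```python
# Toy "place cell" style discretization of a continuous character count:
# each feature fires most strongly for counts near its preferred value,
# so the population activity tiles the count axis with sparse bumps.
import numpy as np

centers = np.arange(0, 81, 10)   # preferred counts of 9 hypothetical features
width = 6.0                      # tuning width, chosen for illustration

def feature_activity(count: float) -> np.ndarray:
    """Gaussian tuning curve of each feature around its preferred count."""
    return np.exp(-((count - centers) ** 2) / (2 * width ** 2))

print(np.round(feature_activity(37), 2))  # mostly the features near 30 and 40 fire
```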

The researchers validated their findings through causal interventions—essentially “knocking out” specific neurons to see if the model’s counting ability broke in predictable ways. They even discovered visual illusions—carefully crafted character sequences that trick the model’s counting mechanism, much like optical illusions fool human vision.
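
As a toy illustration of that intervention logic (not the paper's actual experimental code), the following sketch builds a fake hidden state that encodes a character count along a single direction, reads the count back with a linear probe, and then “knocks out” that direction to show the readout breaking in the expected way.

```python
# Toy causal intervention by ablating a feature direction. The dimensions,
# vectors, and readout here are made up purely to illustrate the idea of
# removing a representation and watching the downstream ability fail.
import numpy as np

rng = np.random.default_rng(0)
d = 64                                  # hypothetical hidden-state width
count_dir = rng.normal(size=d)
count_dir /= np.linalg.norm(count_dir)  # unit direction assumed to encode the count

def hidden_state(char_count: int) -> np.ndarray:
    """Fake activation: the count along count_dir plus unrelated noise."""
    return char_count * count_dir + rng.normal(scale=0.1, size=d)

def read_count(h: np.ndarray) -> float:
    """Linear probe: project onto the count direction."""
    return float(h @ count_dir)

def ablate(h: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Causal intervention: remove the component along `direction`."""
    return h - (h @ direction) * direction

h = hidden_state(37)
print(read_count(h))                      # ~37: the probe recovers the count
print(read_count(ablate(h, count_dir)))   # ~0: the counting readout breaks
```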

Attention mechanisms are geometric engines: The “attention heads” that power modern AI don’t just connect related words—they perform sophisticated geometric transformations on internal representations.

What other “sensory” capabilities have models developed implicitly? Can AI develop senses we don’t have names for?


Language models can perceive visual properties of text despite receiving only sequences of tokens. We mechanistically investigate how Claude 3.5 Haiku accomplishes one such task: linebreaking in fixed-width text. We find that character counts are represented on low-dimensional curved manifolds discretized by sparse feature families, analogous to biological place cells. Accurate predictions emerge from a sequence of geometric transformations: token lengths are accumulated into character count manifolds, attention heads twist these manifolds to estimate distance to the line boundary, and the decision to break the line is enabled by arranging estimates orthogonally to create a linear decision boundary. We validate our findings through causal interventions and discover visual illusions—character sequences that hijack the counting mechanism.
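
For intuition, here is a minimal sketch, in ordinary Python rather than learned representations, of the pipeline the abstract describes: per-token character lengths are accumulated into a running count, the distance to a fixed line width is estimated, and a simple threshold decides when to break the line. The line width, tokenization, and threshold rule are assumptions made only for illustration.

```python
# Toy version of the described mechanism: accumulate token lengths into a
# running character count, estimate distance to a fixed-width boundary, and
# break the line when the next token will not fit.
LINE_WIDTH = 20  # hypothetical fixed-width boundary

def layout(tokens: list[str], width: int = LINE_WIDTH) -> str:
    lines, current, count = [], [], 0
    for tok in tokens:
        remaining = width - count                  # distance to the boundary
        if current and len(tok) + 1 > remaining:   # +1 for the joining space
            lines.append(" ".join(current))        # threshold decision: break
            current, count = [], 0
        current.append(tok)
        count += len(tok) + (1 if len(current) > 1 else 0)
    if current:
        lines.append(" ".join(current))
    return "\n".join(lines)

print(layout("happy birthday to you and many happy returns".split()))
```

Running the example wraps the sentence into three lines of at most 20 characters each.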

How scientists are trying to use AI to unlock the human mind

Compared with conventional psychological models, which use simple math equations, Centaur did a far better job of predicting behavior. Accurate predictions of how humans respond in psychology experiments are valuable in and of themselves: For example, scientists could use Centaur to pilot their experiments on a computer before recruiting, and paying, human participants. In their paper, however, the researchers propose that Centaur could be more than just a prediction machine. By interrogating the mechanisms that allow Centaur to effectively replicate human behavior, they argue, scientists could develop new theories about the inner workings of the mind.

But some psychologists doubt whether Centaur can tell us much about the mind at all. Sure, it’s better than conventional psychological models at predicting how humans behave—but it also has a billion times more parameters. And just because a model behaves like a human on the outside doesn’t mean that it functions like one on the inside. Olivia Guest, an assistant professor of computational cognitive science at Radboud University in the Netherlands, compares Centaur to a calculator, which can effectively predict the response a math whiz will give when asked to add two numbers. “I don’t know what you would learn about human addition by studying a calculator,” she says.

Even if Centaur does capture something important about human psychology, scientists may struggle to extract any insight from the model’s millions of neurons. Though AI researchers are working hard to figure out how large language models work, they’ve barely managed to crack open the black box. Understanding an enormous neural-network model of the human mind may not prove much easier than understanding the thing itself.
