Note: This post is co-authored with Stacy Li, a PhD student at Berkeley studying aging biology. I greatly appreciate all her help in writing, editing, and fact-checking my understanding!
Some of these problems are as simple to state as factoring a large number into primes. Others are among the most important facing Earth today, like quickly modeling complex molecules for drugs to treat emerging diseases, or developing more efficient materials for carbon capture and batteries.
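To get a feel for why factoring is easy to state but hard to do at scale, here is a minimal trial-division sketch in Python. This is not how serious factoring is done (real attacks use far more sophisticated algorithms), but every known classical method still blows up as the numbers grow, which is exactly the gap quantum algorithms like Shor's are meant to close.

```python
def trial_division(n: int) -> list[int]:
    """Factor n into primes by testing divisors up to sqrt(n)."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # pull out each prime factor as many times as it divides n
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors.append(n)
    return factors

print(trial_division(1_048_583))  # instant for small numbers
# For a 2048-bit RSA modulus, this loop would need on the order of
# 2**1024 iterations: trivial to state, infeasible to run classically.
```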
However, in the next decade, we expect a new form of supercomputing to emerge, unlike anything that has come before. Not only could it potentially tackle these problems, but we hope it will do so at a fraction of the cost, footprint, time, and energy. This new supercomputing paradigm will incorporate an entirely new computing architecture, one that mirrors the strange behavior of matter at the atomic level: quantum computing.
For decades, quantum computers have struggled to reach commercial viability. The quantum behaviors that power these machines are extremely sensitive to environmental noise, and the hardware is difficult to scale to sizes large enough for useful calculations. But several key advances have arrived in the last decade, in hardware as well as in the theory of how to handle noise. These advances have finally brought quantum computers to a performance level where their classical counterparts struggle to keep up, at least on some specific calculations.
The improved gene-editing approach combines prime editors with a more efficient recombinase to insert or substitute gene-sized DNA segments.
Quantum computing is in all likelihood the 21st century’s great computer revolution. What is IBM doing to make the quantum dream a reality?
Meissner Effect
Posted in materials
A permanent magnet begins to hover above a ceramic material as the ceramic cools and transitions to a superconducting state; the magnet remains aloft until the ceramic warms back above its critical temperature.
The ceramic material is a 25 mm disc of yttrium barium copper oxide (YBa2Cu3O7, commonly referred to as YBCO).
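For readers who want the one-equation version of what is happening: in the standard London picture (a textbook result, not part of the demo description above), a superconductor expels applied magnetic flux, and any field that does enter decays exponentially within a thin surface layer:

```latex
\nabla^2 \mathbf{B} = \frac{\mathbf{B}}{\lambda_L^2}
\quad\Rightarrow\quad
B(x) = B(0)\, e^{-x/\lambda_L},
\qquad
\lambda_L = \sqrt{\frac{m_e}{\mu_0 n_s e^2}}
```

Here λ_L is the London penetration depth (roughly 100-150 nm for YBCO), so the field is excluded from essentially the entire 25 mm disc, and the magnet floats on the expelled flux.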
The new research centers on the use of localized surface plasmons (LSPs) to achieve atomic-level control of chemical reactions. A team has successfully extended LSP functionality to semiconductor platforms. By using a plasmon-resonant tip in a low-temperature scanning tunneling microscope, they enabled the reversible lift-up and drop-down of single organic molecules on a silicon surface.
The LSP at the tip induces the breaking and forming of specific chemical bonds between the molecule and the silicon, producing the reversible switching. The switching rate can be tuned by the tip position with exceptional precision, down to 0.01 nanometers. This precise manipulation allows reversible changes between two different molecular configurations.
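As a rough intuition pump for what position-tuned, reversible switching looks like statistically, here is a toy two-state (telegraph) model in Python. The exponential fall-off of the switching rate with tip offset is an assumption made purely for illustration, not the measured dependence from the study; only the 0.01 nm length scale comes from the article.

```python
import math
import random

def simulate_switching(tip_offset_nm: float, steps: int = 10_000) -> float:
    """Toy two-state switch: returns the fraction of time spent in state B.

    ASSUMPTION (illustrative only): the per-step switching probability
    decays exponentially with tip offset, with a 0.01 nm length scale
    borrowed from the positioning precision quoted in the article.
    """
    rate = 0.5 * math.exp(-tip_offset_nm / 0.01)
    state, time_in_b = 0, 0
    for _ in range(steps):
        if random.random() < rate:
            state = 1 - state  # reversible flip between the two configurations
        time_in_b += state
    return time_in_b / steps

for offset_nm in (0.0, 0.01, 0.05):
    print(f"tip offset {offset_nm} nm -> state-B occupancy "
          f"{simulate_switching(offset_nm):.2f}")
```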
An additional key aspect of this breakthrough is the tunability of the optoelectronic function through atomic-level molecular modification. The team confirmed that photoswitching is inhibited in another organic molecule, in which a single oxygen atom not bonded to silicon is replaced by a nitrogen atom. This chemical tailoring is essential for tuning the properties of single-molecule optoelectronic devices, enabling the design of components with specific functionalities and paving the way for more efficient and adaptable nano-optoelectronic systems.
The active zone primes synaptic vesicles and clusters voltage-gated Ca2+ channels for fast neurotransmitter release. Here the authors dissect the underlying molecular architecture and show that distinct protein machineries execute these functions.
Forget Hollywood depictions of gun-toting robots running wild in the streets – the reality of artificial intelligence is far more dangerous, warns the historian and author in an exclusive extract from his new book.
Can LLMs Think Like Us?
Posted in neuroscience
LLMs don’t just memorize word pairs or sequences—they learn to encode abstract representations of language. These models are trained on immense amounts of text data, allowing them to infer relationships between words, phrases, and concepts in ways that extend beyond mere surface-level patterns. This is why LLMs can handle diverse contexts, respond to novel prompts, and even generate creative outputs.
In this sense, LLMs are performing a kind of machine inference. They compress linguistic information into abstract representations that allow them to generalize across contexts—similar to how the hippocampus compresses sensory and experiential data into abstract rules or principles that guide human thought.
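One concrete way to picture this compression: related words end up pointing in similar directions in the model's embedding space, and similarity can be read off with a cosine. The vectors below are made up for illustration (real models learn embeddings with thousands of dimensions); only the mechanism is the point.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: near 1.0 means similar direction, near 0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 4-dimensional embeddings, invented for illustration only.
emb = {
    "king":  [0.9, 0.8, 0.1, 0.2],
    "queen": [0.9, 0.7, 0.9, 0.2],
    "apple": [0.1, 0.1, 0.2, 0.9],
}

print(cosine(emb["king"], emb["queen"]))  # high: related concepts cluster
print(cosine(emb["king"], emb["apple"]))  # low: unrelated concepts do not
```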
But can LLMs really achieve the same level of inference as the human brain? Here, the gap becomes more apparent. While LLMs are impressive at predicting the next word in a sequence and generating text that often appears to be the product of thoughtful inference, their ability to truly understand or infer abstract concepts is still limited. LLMs operate on correlations and patterns rather than understanding the underlying causality or relational depth that drives human inference.
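To make "predicting the next word" concrete: at each step the model assigns a score to every word in its vocabulary, a softmax turns those scores into probabilities, and the next token is sampled. The scores below are invented for illustration; in a real LLM they are computed from the full context by billions of learned parameters.

```python
import math
import random

def softmax(logits: list[float]) -> list[float]:
    """Turn raw scores into a probability distribution."""
    m = max(logits)                           # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Invented scores for continuing "The cat sat on the ..."
vocab = ["mat", "roof", "idea", "quantum"]
logits = [3.2, 1.5, -0.5, -2.0]               # a real model computes these from context

probs = softmax(logits)
next_word = random.choices(vocab, weights=probs)[0]
print({w: round(p, 3) for w, p in zip(vocab, probs)}, "->", next_word)
```

Nothing in this loop understands cats or mats; the distribution simply reflects statistical structure in the training text, which is precisely the correlation-versus-causation gap described above.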