
Dario Amodei is Wrong: Why No “MRI for AI” Will Ever Read the Mind of the Machine

The phrase “MRI for AI” rolls off the tongue with the seductive clarity of a metaphor that feels inevitable. Dario Amodei, CEO of Anthropic, describes the goal in precisely those terms, envisioning “the analogue of a highly precise and accurate MRI that would fully reveal the inner workings of an AI model” (Amodei, 2025, para. 6). The promise is epistemic X‑ray vision — peek inside the black box, label its cogs, excise its vices, certify its virtues.

Yet the metaphor is misguided not because the engineering is hard (it surely is) but because it mistakes what cognition is. An artificial mind, like a biological one, is not a spatial object whose secret can be exposed slice by slice. It is a dynamical pattern of distinctions sustained across time: self‑referential, operationally closed, and constitutionally allergic to purely third‑person capture. Attempting to exhaust that pattern with an interpretability scanner is as quixotic as hoping an fMRI might one day disclose why Kierkegaard chooses faith over reason in a single axial slice of BOLD contrast.

Phenomenology has warned us for more than a century that interiority is not an in‑there to be photographed but an ongoing enactment of world‑directed sense‑making. Husserl’s insight that “consciousness is always consciousness of something” (Ideas I, 1913) irreversibly welds experience to the horizon that occasions it; any observation from the outside forfeits the very structure it hopes to catch.

DARPA Wants ‘Large Bio-Mechanical Space Structures’

‘These are your rudimentary seed packages… Some will combine in place to form more complicated structures.’ — Greg Bear, 2015.

Robot Bricklayer Or Passer-By Bricklayer? ‘Oscar picked up a trowel. “I’m the tool for the mortar,” the little trowel squeaked cheerfully.’ — Bruce Sterling, 1998.

Organic Non-Planar 3D Printing ‘It makes drawings in the air following drawings…’ — Murray Leinster, 1945.

3D printers leave hidden ‘fingerprints’ that reveal part origins

A new artificial intelligence system pinpoints the origin of 3D printed parts down to the specific machine that made them. The technology could allow manufacturers to monitor their suppliers and manage their supply chains, detecting problems early and verifying that suppliers are following agreed-upon processes.

A team of researchers led by Bill King, a professor of mechanical science and engineering at the University of Illinois Urbana-Champaign, has discovered that parts made by additive manufacturing, also known as 3D printing, carry a unique signature from the specific machine that fabricated them. This inspired the development of an AI system that detects the signature, or “fingerprint,” from a photograph of the part and identifies its origin.

“We are still amazed that this works: we can print the same part design on two identical machines (same model, same process settings, same material) and each machine leaves a unique fingerprint that the AI model can trace back to the machine,” King said. “It’s possible to determine exactly where and how something was made. You don’t have to take your supplier’s word on anything.”
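The article does not disclose the model architecture, but the core idea can be sketched with a toy example: treat each machine’s “fingerprint” as a small, consistent bias in texture features extracted from part photos, and attribute new parts with a nearest-centroid classifier. Everything below (feature counts, noise levels, the `photograph` helper) is an illustrative assumption, not the researchers’ method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each printer leaves a consistent bias (its
# "fingerprint") in surface-texture features measured from a photo.
n_machines, n_features, n_parts = 4, 16, 50
machine_bias = rng.normal(0.0, 1.0, (n_machines, n_features))

def photograph(machine):
    """Noisy feature vector for one part printed on `machine` (assumed model)."""
    return machine_bias[machine] + rng.normal(0.0, 0.5, n_features)

# "Training": average the features of known parts per machine,
# giving one centroid per machine (a nearest-centroid classifier).
X = np.array([[photograph(m) for _ in range(n_parts)] for m in range(n_machines)])
centroids = X.mean(axis=1)

def identify(features):
    """Attribute a part to the closest known machine centroid."""
    return int(np.argmin(np.linalg.norm(centroids - features, axis=1)))

# Held-out parts should trace back to the machine that made them.
correct = sum(identify(photograph(m)) == m
              for m in range(n_machines) for _ in range(20))
print(correct, "out of", n_machines * 20)
```

Because each machine’s bias is much larger than the per-part noise in this toy setup, attribution is essentially perfect; the real system faces the harder problem of extracting such features from photographs in the first place.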

Microsoft AI weather forecast faster, cheaper, truer: Study

Microsoft has developed an artificial intelligence (AI) model that beats current forecasting methods in tracking air quality, weather patterns, and climate-addled tropical storms, according to findings published Wednesday.

Dubbed Aurora, the new system—which has not been commercialized—generated 10-day weather forecasts and predicted hurricane trajectories more accurately and faster than traditional forecasting, and at lower costs, researchers reported in the journal Nature.

“For the first time, an AI system can outperform all operational centers for hurricane forecasting,” said senior author Paris Perdikaris, an associate professor of mechanical engineering at the University of Pennsylvania.

AI may soon account for half of data center power use if trends persist

Alex de Vries-Gao, a PhD candidate at the VU Amsterdam Institute for Environmental Studies, has published an opinion piece on a study he conducted estimating the electricity AI companies use to generate answers to user queries. In the paper, published in the journal Joule, he describes how he calculated past and current global electricity usage by AI data centers and how he estimated future usage.

Recently, the International Energy Agency reported that data centers were responsible for up to 1.5% of global electricity consumption in 2024—a number that is rising rapidly. Data centers are used for more things than crunching AI queries, as de Vries-Gao notes: they also process and store cloud data, and power bitcoin mining.

Over the past few years, AI makers have acknowledged that running LLMs such as ChatGPT takes a lot of computing power. So much so that some of them have begun to generate their own electricity to ensure their needs are met. Over the past year, as de Vries-Gao notes, AI makers have become less forthcoming with details about energy use. Because of that, he set about making some estimates of his own.
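The paper’s actual figures aren’t reproduced here, but the shape of a hardware-based estimate like this is simple arithmetic: installed accelerators times power draw times utilization, plus overhead. All inputs below are illustrative assumptions, not de Vries-Gao’s numbers.

```python
# Back-of-envelope AI electricity estimate (all inputs are illustrative
# assumptions, not figures from the Joule paper).
accelerators = 4.0e6   # assumed AI accelerators in service worldwide
watts_each   = 700     # assumed board power per accelerator (W)
utilization  = 0.65    # assumed average utilization
pue          = 1.2     # power usage effectiveness (cooling/overhead)

# Average continuous draw, in gigawatts.
gw = accelerators * watts_each * utilization * pue / 1e9

# Annual energy: GW times hours per year, converted to TWh.
twh_per_year = gw * 8760 / 1000

# Share of an assumed total data-center demand.
datacenter_twh = 415   # assumed total data-center use, TWh/year
print(f"~{gw:.1f} GW, ~{twh_per_year:.1f} TWh/year, "
      f"{twh_per_year / datacenter_twh:.0%} of data-center electricity")
```

The real difficulty, as the article notes, is that the inputs (how many chips are deployed, how hard they run) are exactly what AI makers have stopped disclosing.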

Microscopic movies capture brain proteins in action, revealing new insight into shapes and functions

Our cells rely on microscopic highways and specialized protein vehicles to move everything—from positioning organelles to carting protein instructions to disposing of cellular garbage. These highways (called microtubules) and vehicles (called motor proteins) are indispensable to cellular function and survival.

The dysfunction of motor proteins and their associated proteins can lead to severe neurodevelopmental and neurodegenerative disorders. For example, the dysfunction of Lis1, a partner protein to the motor protein dynein, can lead to the rare fatal birth defect lissencephaly, or “smooth brain,” for which there is no cure. But therapeutics that target and restore dynein or Lis1 function could change those dismal outcomes—and developing those therapeutics depends on thoroughly understanding how dynein and Lis1 interact.

New research from the Salk Institute and UC San Diego captured short movies of Lis1 “turning on” dynein. The movies allowed the team to catalog 16 shapes that the two proteins take as they interact, some of which have never been seen before. These insights will be foundational for designing future therapeutics that restore dynein and Lis1 function, since they shine a light on precise locations where drugs could interact with the proteins.

Bimodal video imaging platform predicts hyperspectral frames from RGB video

Hyperspectral imaging (HSI), or imaging spectroscopy, captures detailed information across the electromagnetic spectrum by acquiring a spectrum for each pixel in an image. This enables precise identification of materials through their spectral signatures.

HSI supports Earth remote sensing applications such as automated classification, abundance mapping, and estimation of physical and biological properties like soil moisture, sediment density, biomass, leaf area, and pigment content.

Although HSI offers detailed insight into a remote sensing scene, HSI data may not be readily available for an intended application. Recent studies have attempted to combine HSI with traditional red-green-blue (RGB) video acquisition to lower costs and improve performance. However, this fusion technology still faces technical challenges.
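One simple baseline for this kind of RGB-to-hyperspectral fusion, sketched here on made-up data, is a per-pixel linear regression from the three RGB values to the spectral bands. Real systems use learned nonlinear models; the linear map, band count, and noise level below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: each pixel has 3 RGB values and 31 hyperspectral bands.
# Assume an approximately linear RGB -> spectrum relation plus noise
# (a strong simplification of the real physics).
n_pixels, n_bands = 2000, 31
true_map = rng.normal(0.0, 1.0, (3, n_bands))
rgb = rng.uniform(0.0, 1.0, (n_pixels, 3))
hsi = rgb @ true_map + rng.normal(0.0, 0.01, (n_pixels, n_bands))

# Fit the linear map by least squares on paired training frames
# (frames for which both RGB and HSI data were acquired).
learned_map, *_ = np.linalg.lstsq(rgb, hsi, rcond=None)

# Predict full spectra for new RGB-only frames.
rgb_new = rng.uniform(0.0, 1.0, (500, 3))
pred_hsi = rgb_new @ learned_map

err = np.abs(learned_map - true_map).max()
print(f"max coefficient error: {err:.4f}, predicted shape: {pred_hsi.shape}")
```

With enough paired pixels the linear map is recovered almost exactly here; the open challenges in the article concern scenes where the relation is nonlinear and paired data is scarce.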
