
How AI Models Are Helping to Understand — and Control — the Brain

For Martin Schrimpf, the promise of artificial intelligence is not in the tasks it can accomplish. It’s in what AI might reveal about human intelligence.

He is working to build a “digital twin” of the brain using artificial neural networks — AI models loosely inspired by how neurons communicate with one another.

That end goal sounds almost ludicrously grand, but his approach is straightforward. First, he and his colleagues test people on tasks related to language or vision. Then they compare the observed behavior or brain activity to results from AI models built to do the same things. Finally, they use the data to fine-tune their models to create increasingly humanlike AI.
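To make that comparison step concrete, here is a minimal, hypothetical sketch in the spirit of such a pipeline: it fits a linear mapping from a model's internal features to recorded neural responses and scores how well the model predicts held-out brain activity. The array shapes, the ridge-regression mapping, and the correlation-based score are illustrative assumptions, not Schrimpf's actual code.

```python
# Hypothetical sketch: score how "humanlike" a model's representations are by
# predicting recorded neural responses from its internal features.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for real data: model activations and neural recordings for the
# same 200 stimuli (e.g., images or sentences).
model_features = rng.normal(size=(200, 512))   # stimuli x model units
neural_responses = rng.normal(size=(200, 96))  # stimuli x recorded sites

X_train, X_test, y_train, y_test = train_test_split(
    model_features, neural_responses, test_size=0.25, random_state=0
)

# Map model features onto each neural site with cross-validated ridge regression.
mapping = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X_train, y_train)
predicted = mapping.predict(X_test)

# Score: mean correlation between predicted and observed responses across
# sites -- higher means the model's representations track the brain's more closely.
site_scores = [
    np.corrcoef(predicted[:, i], y_test[:, i])[0, 1]
    for i in range(y_test.shape[1])
]
print(f"mean predictivity across sites: {np.mean(site_scores):.3f}")
```

With real recordings in place of the random arrays, a score like this is the kind of signal that can then guide fine-tuning the model toward more humanlike behavior.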

Tesla FSD CRUSHES Waymo in Report

Questions to inspire discussion.

🌍 Q: Where is Tesla testing FSD and seeking approval in Europe?

A: Tesla is testing FSD in the Arctic and awaiting regulatory approval for cities like Paris, Amsterdam, and Rome.

🇸🇪 Q: Why was FSD testing denied in Stockholm?

A: Stockholm denied FSD testing, citing risks to city infrastructure and the strain of innovation projects already under way.

🤖 Q: What improvements are expected in Tesla’s Grok AI?

A: Grok 3.5 will be trained on video data from Tesla cars and Optimus robots, enabling it to understand the world and perform tasks like dropping off passengers.

A.I. Can Already See You in Ways You Can’t See Yourself

Not only can A.I. now make these assessments with remarkable, humanlike accuracy; it can make millions of them in an instant. A.I.’s superpower is its ability to recognize and interpret patterns: to sift through raw data and, by comparing it across vast data sets, to spot trends, relationships and irregularities.

As humans, we constantly generate patterns: in the sequence of our genes, the beating of our hearts, the repetitive motion of our muscles and joints. Everything about us, from the cellular level to the way our bodies move through space, is a source of grist for A.I. to mine. And so it’s no surprise that, as the power of the technology has grown, some of its most startling new abilities lie in its perception of us: our physical forms, our behavior and even our psyches.

How can we tell if AI is lying? New method tests whether AI explanations are truthful

Given the recent explosion of large language models (LLMs) that can make convincingly human-like statements, it makes sense that there has been a growing focus on getting these models to explain how they reach their decisions. But how can we be sure that what they’re saying is the truth?

In a new paper, researchers from Microsoft and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) propose a novel method for measuring LLM explanations with respect to their “faithfulness”—that is, how accurately an explanation represents the reasoning process behind the model’s answer.

As lead author and Ph.D. student Katie Matton explains, faithfulness is no minor concern: if an LLM produces explanations that are plausible but unfaithful, users might develop false confidence in its responses and fail to recognize when recommendations are misaligned with their own values, like avoiding bias in hiring.
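To make the idea concrete, here is a minimal, hypothetical sketch of one way to probe faithfulness: check whether the factors an explanation cites are the same factors that actually change the model's answer when they are removed. The query_model stub, the keyword matching, and the helper names are illustrative assumptions, not the method from the CSAIL paper.

```python
# Hypothetical sketch: compare the factors an explanation *claims* mattered
# against the factors that *causally* matter, found by counterfactual removal.
from typing import Callable

def cited_factors(explanation: str, candidate_factors: list[str]) -> set[str]:
    """Factors the explanation claims to have relied on (naive keyword match)."""
    return {f for f in candidate_factors if f.lower() in explanation.lower()}

def causal_factors(question: str, factors: dict[str, str],
                   query_model: Callable[[str], str]) -> set[str]:
    """Factors whose removal flips the model's answer (counterfactual test)."""
    full_prompt = question + "\n" + "\n".join(factors.values())
    baseline = query_model(full_prompt)
    flipped = set()
    for name in factors:
        reduced = question + "\n" + "\n".join(
            text for other, text in factors.items() if other != name
        )
        if query_model(reduced) != baseline:
            flipped.add(name)
    return flipped

def faithfulness_gap(explanation: str, question: str, factors: dict[str, str],
                     query_model: Callable[[str], str]) -> set[str]:
    """Factors that mattered causally but the explanation never mentioned."""
    return causal_factors(question, factors, query_model) - cited_factors(
        explanation, list(factors)
    )
```

A large gap under a test like this would be the warning sign the researchers describe: the explanation sounds plausible while hiding what actually drove the answer.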

MiniMax-M1 is a new open source model with 1 MILLION TOKEN context and new, hyper efficient reinforcement learning

From a data platform perspective, teams responsible for maintaining efficient, scalable infrastructure can benefit from M1’s support for structured function calling and its compatibility with automated pipelines. Its open-source nature allows teams to tailor performance to their stack without vendor lock-in.
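As a rough illustration of what that integration could look like, the sketch below sends a structured function-calling request to a self-hosted, OpenAI-compatible endpoint (for example, a vLLM server). The base URL, the model identifier, and the get_ticket_status tool are assumptions made for illustration, not documented MiniMax defaults.

```python
# Hedged sketch: structured function calling against a locally hosted,
# OpenAI-compatible server. Endpoint, model name, and tool are hypothetical.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

tools = [{
    "type": "function",
    "function": {
        "name": "get_ticket_status",  # hypothetical internal tool
        "description": "Look up the status of a support ticket by its ID.",
        "parameters": {
            "type": "object",
            "properties": {"ticket_id": {"type": "string"}},
            "required": ["ticket_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M1-40k",  # assumed identifier on the local server
    messages=[{"role": "user", "content": "What's the status of ticket 4821?"}],
    tools=tools,
)

# If the model chose to call the tool, the arguments arrive as structured JSON
# rather than free text, which is what makes it easy to slot into pipelines.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
```

Because the arguments come back as machine-readable JSON, downstream automation can dispatch them directly without brittle text parsing, and because the model can run on-premises, no data leaves the organization's infrastructure.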

Security leads may also find value in evaluating M1’s potential for secure, on-premises deployment of a high-capability model that doesn’t rely on transmitting sensitive data to third-party endpoints.

Taken together, MiniMax-M1 presents a flexible option for organizations looking to experiment with or scale up advanced AI capabilities while managing costs, staying within operational limits, and avoiding proprietary constraints.
