
Label-free optical imaging enables automated measurement of human white matter microstructure

White matter pathways allow distant parts of the brain to communicate, supporting memory, emotion, and language. One such pathway, the uncinate fasciculus, connects the front of the temporal lobe with regions of the frontal cortex involved in decision-making and social behavior. Despite its importance, little is known about the microscopic structure of this tract in the human brain.

Traditional techniques such as electron microscopy can reveal fine details, but they often fail when applied to postmortem human tissue, which is frequently degraded.

In a study published in Biophotonics Discovery, researchers report a new way to examine white matter structure in postmortem human brains.

AI Models Mirror Human Logic on Real-World Scenarios

“What we show is that the models actually capture that human uncertainty pretty well,” said Michael Lepori. Source: https://www.labroots.com/trending/technology/30475/ai-models…cenarios-2


Can AI models distinguish fact from fiction? That question is at the heart of a new study scheduled to be presented at the International Conference on Learning Representations this weekend, in which a team of scientists investigated how AI models tell the difference between facts and “fake news.” The study could help scientists, engineers, and the public better understand how AI models can evolve to meet human needs, at a time when AI is becoming ever more integrated into everyday life.

For the study, the researchers analyzed how AI language models (LMs) differentiate between topics and judge what’s true and what’s fake. The motivation was to address a knowledge gap: do large language models (LLMs) have a human-like understanding of the world, or do they simply make decisions based on whatever they are given?

The goal of the study was to ascertain whether LMs could determine if an event is real or fake, and to pinpoint when during their processing the models make this determination. For example, the researchers would give an LM simple scenarios like “clean a car,” “clean a road,” and “clean a cloud,” and ask it to judge which were real and which were fake. In the end, the researchers found that large LMs were capable of distinguishing real events from fake ones.
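One way to picture this kind of probe is likelihood-based plausibility scoring. The toy below is not the study’s method: the probability table and the floor value are invented for illustration, standing in for the scores a real language model would assign.

```python
# Toy sketch of likelihood-based plausibility scoring for verb-object
# pairs. The probabilities below are hand-picked illustrations, NOT
# outputs of any actual language model.
import math

# Hypothetical P(object | verb="clean") estimates; unlisted objects fall
# back to a small floor probability.
p_object_given_clean = {"car": 0.20, "road": 0.05, "cloud": 1e-6}
FLOOR = 1e-8

def plausibility(verb: str, obj: str) -> float:
    """Return a log-probability score; higher means more plausible."""
    assert verb == "clean", "toy table only covers the verb 'clean'"
    return math.log(p_object_given_clean.get(obj, FLOOR))

scores = {obj: plausibility("clean", obj) for obj in ["car", "road", "cloud"]}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # ['car', 'road', 'cloud'], most to least plausible
```

A real experiment would replace the hand-built table with log-probabilities read out of a trained model, but the ranking logic is the same.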

Why faster AI isn’t always better

In the race to make AI models not just reason better but respond faster, latency—the delay before an answer appears—is often treated as a purely technical constraint, something to minimize and move past. But how is this relentless push for speed actually impacting the people using these systems every day?

There is a rich body of work in human–computer interaction linking faster response times to better usability. But AI models are fundamentally different from the deterministic systems that previous research was built on. When you wait for a file to download or a page to load, the outcome is fixed and predictable.

AI models are probabilistic—you cannot anticipate the precise response. Their conversational interface means users naturally read human social cues into the interaction. A pause might be read as the AI “thinking,” for instance. Users are increasingly asked to choose between faster models and slower, deeper-reasoning ones, without guidance on what that choice actually means for their experience.

Researchers use statistics and math to understand how the brain works

Nothing rivals the human brain’s complexity. Its 86 billion neurons and 85 billion other cells make an estimated 100 trillion connections. If the brain were a computer, it would perform an exaflop (a billion-billion mathematical calculations) every second while using the equivalent of only 20 watts of power. As impressive as the brain is, neurologists can’t fully explain how neurons work together.
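The quoted figures imply a striking energy efficiency, which a quick back-of-the-envelope calculation makes concrete (the numbers are those cited above, not independent measurements):

```python
# Back-of-the-envelope check of the figures quoted above: an exaflop
# (10**18 operations per second) on roughly 20 watts of power.
ops_per_second = 1e18      # one exaflop
power_watts = 20.0         # brain's approximate power budget

ops_per_joule = ops_per_second / power_watts
print(f"{ops_per_joule:.1e} operations per joule")  # 5.0e+16
```

That works out to about 5 × 10^16 operations per joule, orders of magnitude beyond today’s supercomputers.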

To help find answers, researchers at the Institute for Neuroscience, Neurotechnology, and Society (INNS) at Georgia Tech are using math, data, and AI to unlock the secrets of thought. Together they are helping turn the brain’s raw electrical “noise” into real insights about how people think, move, and perceive the world.

Fair warning: Prepare your neurons for the complexity of this brain research ahead.

Introducing MirrorBot, a robot designed to foster human connection

While technology has made the world “smaller,” it has also pulled individuals apart, thanks to mobile phones and other devices that command our attention. Cornell University researchers are using technology, in the form of a mirror-equipped robot, to help bring people together. Members of the Architectural Robotics Lab, led by Keith Evan Green, have built a four-foot-tall robot—dubbed MirrorBot—with dual mirrors that, when placed in front of a pair of strangers, let each participant see themselves in one mirror and the other person in the other.

In a study involving participants in a waiting-room setting, MirrorBot spurred conversations, playful exchanges and other interactions between strangers. The findings suggest that robots can act not only as conversational partners, but also as spatial mediators. The research is published in the Proceedings of the 21st ACM/IEEE International Conference on Human-Robot Interaction.

“We weren’t just trying to trigger conversations, but to support the very first moment of social connection, which is the eye contact,” said Serena Guo, lead author of the paper.

Sovereign AI: Why Owning The Full Stack Is The New Strategic Imperative

By Chuck Brooks


Artificial intelligence has entered a new phase of strategic consequence, and executives, policymakers, and small business owners can no longer afford to treat it as a back-office technology decision. The central question is no longer whether an organization will use AI. It is how much of that AI the organization will actually own.

Sovereign AI—the end-to-end ownership of the data, the model, and the interaction layer that connects them to the people who depend on them—is rapidly moving from a geopolitical discussion into a board-level and Main Street requirement.

Sovereign AI has largely been framed as a national concern, but that framing is incomplete. The same logic that compels a nation to own its AI stack compels a hospital system, a regional bank, a defense supplier, and a mid-sized manufacturer to do the same.

Nuclei Limit Neural Network Quantum Simulations

For a fixed number of configurations, representing quantum states becomes less accurate as their non-stabilizerness increases. This demonstrates a clear limit to how well restricted Boltzmann machines can compress and represent highly entangled systems. Calculations using ground states of medium-mass atomic nuclei reveal non-stabilizerness as a key property governing neural network performance.
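The restricted Boltzmann machines in question parameterize a quantum wavefunction directly. A minimal sketch of such a neural-network quantum state is below; the weights are random illustrations, not fitted to any nucleus, and the tiny system size is chosen only so the state can be normalized exactly.

```python
# Minimal sketch of a restricted-Boltzmann-machine (RBM) wavefunction
# ansatz for N spins, the kind of neural-network quantum state the
# result above concerns. All parameters are random illustrations.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
N, M = 4, 8                             # visible spins, hidden units
a = rng.normal(scale=0.1, size=N)       # visible biases
b = rng.normal(scale=0.1, size=M)       # hidden biases
W = rng.normal(scale=0.1, size=(N, M))  # visible-hidden couplings

def amplitude(s: np.ndarray) -> float:
    """Unnormalized RBM amplitude psi(s) for spins s in {-1,+1}^N,
    with the hidden units summed out analytically."""
    theta = b + s @ W
    return float(np.exp(a @ s) * np.prod(2.0 * np.cosh(theta)))

# Normalize over all 2**N configurations (feasible only for tiny N).
configs = [np.array(c) for c in product([-1, 1], repeat=N)]
amps = np.array([amplitude(s) for s in configs])
probs = amps**2 / np.sum(amps**2)
print(probs.sum())  # ≈ 1.0
```

The paper’s point, in these terms, is that for a fixed number of hidden-unit parameters, states with high non-stabilizerness need far more capacity than this ansatz can provide to be represented accurately.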
