
A New Era Of Trusted Payments

If you read my last post, you may have had the same reaction as the legendary fintech blogger Chris Skinner. Commenting on my post, entitled “Fintech’s New Power Couple: AI and Trust,” he politely corrected: “AI, trust and DLT, sir.”

As soon as I read his input I knew he was right. I had to write a follow-up post to correct my glaring omission. Since there are three forces converging here rather than two, I will update the title to make it both more contemporary and more accurate at the same time…

Fintech’s New Power Throuple is the convergence of AI, Trust, and Distributed Ledger Technology (DLT).

If I drew a diagram of the relationships between the three factors, I would draw it as a triangle. From my viewpoint, Trust would hold the uppermost position, with Blockchain and Artificial Intelligence occupying the two lower positions.

They are, in a sense, the technology layer that makes Trust possible.

Because Trust isn’t a technology… or is it? 🤔

(https://fintechconfidential-newsletter.beehiiv.com/p/m2020-a…-payments)

Traumatic Brain Injury and Artificial Intelligence: Shaping the Future of Neurorehabilitation—A Review

AI has emerged as a pivotal tool in redefining TBI rehabilitation, bridging gaps in traditional care with innovative, data-driven approaches. While its potential to enhance diagnostic accuracy, outcome prediction, and individualized therapy is evident, challenges such as bias in datasets and ethical implications must be addressed. Continued research and multidisciplinary collaboration will be key to harnessing AI’s full potential, ensuring equitable access and optimizing recovery outcomes for TBI patients.

Overall, the integration of AI in TBI rehabilitation presents numerous opportunities to advance patient care and enhance the effectiveness of therapeutic interventions.

The simulated Milky Way: 100 billion stars using 7 million CPU cores

Researchers have successfully performed the world’s first Milky Way simulation that accurately represents more than 100 billion individual stars over the course of 10 thousand years. This feat was accomplished by combining artificial intelligence (AI) with numerical simulations. Not only does the simulation represent 100 times more individual stars than previous state-of-the-art models, but it was produced more than 100 times faster.

Published in Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, the study represents a breakthrough at the intersection of astrophysics, high-performance computing, and AI. Beyond astrophysics, this new methodology can be used to model other phenomena as well.

MAKER: Large Language Models (LLMs) have achieved remarkable breakthroughs in reasoning, insight generation, and tool use

They can plan multi-step actions, generate creative solutions, and assist in complex decision-making. Yet these strengths fade when tasks stretch over long, dependent sequences. Even small per-step error rates compound quickly, turning an impressive short-term performance into complete long-term failure.
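The compounding claim above is easy to verify with a few lines of arithmetic. A minimal sketch (the 0.1% per-step error rate below is an illustrative assumption, not a figure from the post):

```python
# Sketch: how small per-step error rates compound over long, dependent
# sequences. Assumes each step succeeds independently with the same
# probability -- a simplification for illustration.

def chain_success_probability(per_step_error: float, steps: int) -> float:
    """Probability that every step in a dependent sequence succeeds."""
    return (1.0 - per_step_error) ** steps

# A 99.9%-reliable step looks impressive in isolation...
print(chain_success_probability(0.001, 1))

# ...but over a million dependent steps the chain almost surely fails:
# the success probability is effectively zero.
print(chain_success_probability(0.001, 1_000_000))
```

Under this model, even a step that succeeds 99.9% of the time yields roughly a coin-flip's chance of surviving only about 700 steps, which is why short-horizon benchmarks can badly overstate long-horizon reliability.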

That fragility poses a fundamental obstacle for real-world systems. Most large-scale human and organizational processes – from manufacturing and logistics to finance, healthcare, and governance – depend on millions of actions executed precisely and in order. A single mistake can cascade through an entire pipeline. For AI to become a reliable participant in such processes, it must do more than reason well. It must maintain flawless execution over time, sustaining accuracy across millions of interdependent steps.

Apple’s recent study, The Illusion of Thinking, captured this challenge vividly. Researchers tested advanced reasoning models such as Claude 3.7 Thinking and DeepSeek-R1 on structured puzzles like Towers of Hanoi, where each additional disk doubles the number of required moves. The results revealed a sharp reliability cliff: models performed perfectly on simple problems but failed completely once the task crossed about eight disks, even when token budgets were sufficient. In short, more “thinking” led to less consistent reasoning.
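The doubling the researchers exploited is a textbook property of the puzzle: solving Towers of Hanoi with n disks takes 2^n − 1 moves. A short sketch of both the move count and an optimal recursive solution:

```python
# Sketch: why Towers of Hanoi gets hard fast. Moving n disks optimally
# takes 2**n - 1 moves, so each extra disk roughly doubles the work.

def hanoi_moves(n: int) -> int:
    """Minimum number of moves to solve Towers of Hanoi with n disks."""
    return 2 ** n - 1

def solve_hanoi(n: int, src="A", dst="C", aux="B", moves=None):
    """Enumerate an optimal move sequence recursively."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    solve_hanoi(n - 1, src, aux, dst, moves)   # clear the n-1 smaller disks
    moves.append((src, dst))                   # move the largest disk
    solve_hanoi(n - 1, aux, dst, src, moves)   # restack the smaller disks
    return moves

print(hanoi_moves(3))   # 7
print(hanoi_moves(8))   # 255 -- roughly where the tested models broke down
```

At eight disks the optimal solution is already 255 interdependent moves, so a model that is merely "very good" per move has little chance of a flawless run.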

AI at the speed of light just became a possibility

Researchers at Aalto University have demonstrated single-shot tensor computing at the speed of light, a remarkable step towards next-generation artificial general intelligence hardware powered by optical computation rather than electronics.

Tensor operations are the kind of arithmetic that forms the backbone of nearly all modern technologies, especially artificial intelligence, yet they extend beyond the simple math we’re familiar with. Imagine the mathematics behind rotating, slicing, or rearranging a Rubik’s cube along multiple dimensions. While humans and classical computers must perform these operations step by step, light can do them all at once.

Today, every task in AI, including image recognition, relies on tensor operations. However, the explosion of data has pushed conventional digital computing platforms, such as GPUs, to their limits in terms of speed, scalability, and energy consumption.
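To make "tensor operation" concrete, here is a minimal sketch of the kind of batched contraction that underlies neural-network layers. Digital hardware evaluates the multiply-adds one by one (or in parallel lanes); the optical approach described above aims to evaluate such a contraction in a single shot:

```python
import numpy as np

# A batch of input vectors contracted against a weight matrix -- the core
# tensor operation behind a dense neural-network layer.
batch, d_in, d_out = 4, 3, 2
x = np.random.rand(batch, d_in)   # inputs
w = np.random.rand(d_in, d_out)   # weights

# One tensor contraction: sum over the shared index j.
y = np.einsum("bj,jo->bo", x, w)

# Equivalent to an explicit loop of multiply-adds, done step by step:
y_loop = np.zeros((batch, d_out))
for b in range(batch):
    for j in range(d_in):
        for o in range(d_out):
            y_loop[b, o] += x[b, j] * w[j, o]

assert np.allclose(y, y_loop)
```

Even this tiny example hides batch × d_in × d_out multiply-adds; scaling those three dimensions into the thousands is what strains GPUs on speed and energy.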

AI Bubble

A lot of people are talking about an AI bubble, since it is normal for tech to explode in growth for a while, collapse somewhat, and then eventually move forward again.

WE ARE NOT IN AN AI BUBBLE. THE SINGULARITY HAS BEGUN.

There will not be a year between now and the coming AI takeover in which worldwide AI data center spending declines.


From Data to Physics: An Agentic Large Language Model Solves a Competitive Adsorption Puzzle

We show that an agentic large language model (LLM) (OpenAI o3 with deep research) can autonomously reason, write code, and iteratively refine hypotheses to derive a physically interpretable equation for competitive adsorption on metal-organic layers (MOLs)—an open problem our lab had struggled with for months. In a single 29-min session, o3 formulated the governing equations, generated fitting scripts, diagnosed shortcomings, and produced a compact three-parameter model that quantitatively matches experiments across a dozen carboxylic acids.
