
How Claude Reset the AI Race

Over the holidays, some strange signals started emanating from the pulsating, energetic blob of X users who set the agenda in AI. OpenAI co-founder Andrej Karpathy, who coined the term “vibe coding” but had recently minimized AI programming as helpful but unremarkable “slop,” was suddenly talking about how he’d “never felt this much behind as a programmer” and tweeting in wonder about feeling like he was using a “powerful alien tool.” Other users traded “it’s so over”s and “we’re so back”s, wondering aloud if software engineering had just been “solved” or was “done,” as recently anticipated by some industry leaders. An engineer at Google wrote of a competitor’s tool, “I’m not joking and this isn’t funny,” describing how it replicated a year of her team’s work “in an hour.” She was talking about Claude Code. Everyone was.

The broad adoption of AI tools has been strange and unevenly distributed. As general-purpose search, advice, and text-generation tools, they’re in wide use. Across many workplaces, managers and employees alike have struggled a bit more to figure out how to deploy them productively or to align their interests (we can reasonably speculate that in many sectors, employees are getting more productivity out of unsanctioned, gray-area AI use than they are through their workplace’s official tools). The clearest exception to this, however, is programming.

In 2023, it was already clear that LLMs had the potential to dramatically change how software gets made, and coding-assistance tools were some of the first tools companies found reason to pay for. In 2026, the AI-assisted future of programming is rapidly coming into view. The practice of writing code, as Karpathy puts it, has moved up to another “layer of abstraction,” where a great deal of old tasks can be managed in plain English and writing software with the help of AI tools amounts to mastering “agents, subagents, their prompts, contexts, memory, modes, permissions, tools, plugins, skills, hooks, MCP, LSP, slash commands, workflows, [and] IDE integrations”—which is a long way of saying that, soon, it might not involve actually writing much code at all.

Ideal Wealth Grower

The Age of Abundance: AGI Predictions and Investment Strategy for 2026

The imminent emergence of Artificial General Intelligence (AGI) will significantly impact investment strategies. Investors should prepare by accumulating valuable assets, diversifying their portfolios, and adopting a strategic investment approach to capitalize on the growth potential of AGI and the resulting Age of Abundance.

Questions to inspire discussion

Investment Strategy

🎯 Q: How should I build an investment portfolio for the AGI era? A: Focus on companies with the best management and growth potential in the AGI, robotics, and space industries, using a dollar-cost averaging strategy to systematically build positions over time rather than lump-sum investing.
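The arithmetic behind dollar-cost averaging is worth seeing concretely. A minimal sketch, with hypothetical prices and a made-up `dca_summary` helper (not from any cited source): because a fixed dollar amount buys more shares when the price is low, the average cost per share ends up at the harmonic mean of the purchase prices, which is below their simple average.

```python
# Dollar-cost averaging: invest a fixed amount at regular intervals,
# buying more shares when prices are low and fewer when they are high.
# Prices below are hypothetical illustrative values.

def dca_summary(fixed_amount, prices):
    """Return (total shares bought, average cost per share)."""
    shares = [fixed_amount / p for p in prices]   # shares bought each period
    total_shares = sum(shares)
    total_invested = fixed_amount * len(prices)
    avg_cost = total_invested / total_shares       # harmonic mean of prices
    return total_shares, avg_cost

# Four monthly $500 purchases at varying prices.
total_shares, avg_cost = dca_summary(500, [50, 40, 25, 100])
# avg_cost comes out below the simple mean of the prices (53.75),
# because the fixed amount buys more shares at the lower prices.
```

This illustrates the mechanism only; it says nothing about whether DCA beats lump-sum investing in expectation, which depends on the price path.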

💰 Q: What alternative assets should I accumulate for security during AGI transition? A: Accumulate gold, silver, and Bitcoin as modern safe havens to protect wealth during periods of change and uncertainty as traditional financial systems adapt to AGI disruption.

🏢 Q: Why should real estate be overweighted in an AGI-focused portfolio? A: Overweight real estate compared to company ownership and reserves because AGI and robotics industries will require physical space for operations, data centers, and manufacturing facilities.

Intervene Immune puts thymus regeneration back in the spotlight

This would be my first funding target if I had the money.


Beyond science, Brooke sees three main hurdles: regulation, manufacturing and physician usability. Public acceptance, he says, already appears strong. Regulatory pathways may be comparatively favorable, given the long histories of the cocktail’s components and existing approvals for immune deficiency states. Manufacturing is always a challenge for biologics at scale, but protein production is mature and scalable, and the company is building internal capacity.

The immediate obstacle, Brooke argues, is complexity. Intervene Immune is developing a dosing system designed to be relatively foolproof for doctors, and expects to build an AI assistant to support implementation as sufficient data accumulates.

Intervene Immune’s closing sentiment is less biotech slogan and more biological provocation. “No matter how old you may be, your body still remembers how to be young,” Brooke says.

New microscopy technique preserves the cell’s natural conditions

Researchers at Istituto Italiano di Tecnologia (IIT-Italian Institute of Technology) have developed an innovative microscopy technique capable of improving the observation of living cells. The study, published in Optics Letters, paves the way for a more in-depth analysis of numerous biological processes without the need for contrast agents. The next step will be to enhance this technique using artificial intelligence, opening the door to a new generation of optical microscopy methods capable of combining direct imaging with innovative molecular information.

The study was conducted under the guidance of Alberto Diaspro, Research Director of the Nanoscopy Unit and Scientific Director of the Italian Nikon Imaging Center at IIT, by Nicolò Incardona (first author) and Paolo Bianchini.

New RoboReward dataset and models automate robotic training and evaluation

The advancement of artificial intelligence (AI) algorithms has opened new possibilities for the development of robots that can reliably tackle various everyday tasks. Training and evaluating these algorithms, however, typically requires extensive efforts, as humans still need to manually label training data and assess the performance of models in both simulations and real-world experiments.

Researchers at Stanford University and UC Berkeley have introduced RoboReward, a dataset for training and evaluating AI algorithms for robotics applications, specifically reward models built on vision-language models (VLMs).

Their paper, published on the arXiv preprint server, also presents RoboReward 4B and 8B, two new VLMs that were trained on this dataset and outperform other models introduced in the past.
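The core idea of a VLM-based reward model is to replace manual labeling with a model that scores robot behavior from camera frames and a language instruction. A minimal sketch of that interface, with illustrative names throughout (`Step`, `score_trajectory`, and the toy model are assumptions for exposition, not the RoboReward API):

```python
# Sketch: scoring a robot trajectory with a (stand-in) vision-language
# reward model. A real VLM would map (instruction, frame, action) to a
# scalar reward; here a trivial stub plays that role.
from dataclasses import dataclass

@dataclass
class Step:
    image: bytes   # camera frame captured at this step
    action: str    # label of the action the robot executed

def score_trajectory(reward_model, instruction, steps):
    """Average per-step reward assigned by the model."""
    rewards = [reward_model(instruction, s.image, s.action) for s in steps]
    return sum(rewards) / len(rewards)

# Stub "model": rewards steps whose action mentions the goal object.
toy_model = lambda instr, img, act: 1.0 if "cup" in act else 0.0

steps = [Step(b"", "reach cup"), Step(b"", "grasp cup"), Step(b"", "retract")]
score = score_trajectory(toy_model, "pick up the cup", steps)  # 2 of 3 steps rewarded
```

Swapping the stub for a trained VLM is what lets such a pipeline evaluate policies automatically, in simulation or on real hardware, without a human grading every rollout.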

A Disproof of Large Language Model Consciousness: The Necessity of Continual Learning for Consciousness, by Erik Hoel

Scientific theories of consciousness should be falsifiable and non-trivial. Recent research has given us formal tools to analyze these requirements of falsifiability and non-triviality for theories of consciousness. Surprisingly, many contemporary theories of consciousness fail to pass this bar, including theories based on causal structure but also (as I demonstrate) theories based on function. Herein I show these requirements of falsifiability and non-triviality especially constrain the potential consciousness of contemporary Large Language Models (LLMs) because of their proximity to systems that are equivalent to LLMs in terms of input/output function; yet, for these functionally equivalent systems, there cannot be any falsifiable and non-trivial theory of consciousness that judges them conscious. This forms the basis of a disproof of contemporary LLM consciousness. I then show a positive result, which is that theories of consciousness based on (or requiring) continual learning do satisfy the stringent formal constraints for a theory of consciousness in humans. Intriguingly, this work supports a hypothesis: If continual learning is linked to consciousness in humans, the current limitations of LLMs (which do not continually learn) are intimately tied to their lack of consciousness.

What Is Nanotechnology? The Atomic Future Waiting to Begin

The idea never died, and progress is still being made.


Nanotechnology was once imagined as the next great technological revolution—atom-by-atom manufacturing, machines as small as cells, and materials we can only dream of today. Instead, it stalled. While AI, robotics, and nuclear surged ahead, nanotech faded into the background, reduced to buzzwords and sci-fi aesthetics.

But the idea never died.

We can manipulate matter at the atomic scale. We can design perfect materials. We can build molecular machines. What’s been missing isn’t physics—it’s ambition, investment, and the will to push beyond today’s tools.

In this interview with futurist J. Storrs Hall, we explore what nanotechnology really is, why it drifted off course, and why its future may finally be on the horizon. If AI was a “blue-sky fantasy” until suddenly it wasn’t, what happens when someone decides nanotech deserves the same surge of talent, money, and imagination?
