
New chip lets robots see in 4D by tracking distance and speed simultaneously

Current vision systems for robots and drones rely on 3D sensors that, although powerful, do not always keep up with the fast-paced, unpredictable movement of the real world. These systems often struggle to measure speed instantly or are too bulky and expensive for everyday use. Now, in a paper published in the journal Nature, scientists report how they have developed a 4D imaging sensor on a chip that creates 3D maps of an environment while simultaneously tracking the speed of moving objects.

The researchers built a focal plane array (FPA), a physical grid of 61,952 stationary pixels etched onto a single silicon chip. Each one is a tiny sensor that emits laser light toward a scene and detects the reflected signal.

To “see” its surroundings, the chip is fed laser light from an external source. This light is routed across the chip through a network of optical switches that sequentially direct it to groups of pixels. Each pixel then uses a technique called FMCW LiDAR to measure the returning signal, which is later processed to determine distance and speed. In many LiDAR systems, one set of pixels sends the light and another receives it, but here all pixels both send and receive, making the system much more compact.
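The article leaves the underlying measurement implicit, so here is a minimal sketch of how an FMCW receiver separates distance from speed, assuming a triangular frequency chirp. The chirp parameters and the function below are illustrative stand-ins, not values from the paper.

```python
# Minimal sketch of triangular-chirp FMCW LiDAR math: the round-trip
# delay shifts the up- and down-ramp beat tones the same way, while
# Doppler shifts them in opposite directions, so their sum gives
# distance and their difference gives speed.
# All parameter values below are illustrative, not from the paper.

C = 3.0e8             # speed of light, m/s
F_CARRIER = 1.935e14  # optical carrier (~1550 nm), Hz (assumed)
BANDWIDTH = 1.0e9     # chirp bandwidth B, Hz (assumed)
T_CHIRP = 10.0e-6     # duration of one frequency ramp, s (assumed)

def range_and_velocity(f_beat_up, f_beat_down):
    slope = BANDWIDTH / T_CHIRP                # chirp slope S = B/T, Hz/s
    f_range = (f_beat_up + f_beat_down) / 2    # delay-induced beat, S * tau
    f_doppler = (f_beat_down - f_beat_up) / 2  # Doppler shift, 2 v / lambda
    distance = C * f_range / (2 * slope)       # from tau = 2 R / c
    velocity = C * f_doppler / (2 * F_CARRIER)
    return distance, velocity

# A target ~15 m away, approaching at ~2 m/s, splits the two beat tones:
d, v = range_and_velocity(f_beat_up=7.42e6, f_beat_down=12.58e6)
print(f"range = {d:.2f} m, radial speed = {v:.2f} m/s")  # 15.00 m, 2.00 m/s
```

Because both quantities fall out of a single pair of beat-frequency measurements, each pixel can report distance and velocity at once, which is what makes the sensor “4D.”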

Comprehensive digital materials ecosystem can perform ‘sanity check’ to guide design

There is a near-infinite number of material candidates out there—and simply not enough time to hunker down in the lab and test them all. Thankfully, researchers have a variety of tools (such as AI) at their disposal to streamline what would otherwise be a time-consuming process of trial-and-error.

To create an efficient materials design workflow, a team of researchers at Tohoku University is proposing not just one tool but a whole toolbox whose components work together as a cohesive kit. The work is published in the journal Chemical Science.

This comprehensive system is called a “digital materials ecosystem” because it integrates multiple processes instead of treating them as disconnected steps. For example, the ecosystem can not only predict how certain materials will react, but also orchestrate multi-step scientific workflows: searching for evidence, screening candidates, and deciding what to test next.
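The paper's pseudocode isn't reproduced here, but the closed-loop pattern described is essentially active learning. A minimal sketch of such a loop, with hypothetical function names standing in for the ecosystem's components:

```python
# Hypothetical sketch of a closed-loop "digital materials ecosystem":
# predict -> screen -> decide what to test next. Function names are
# placeholders, not the Tohoku team's actual API.

def design_loop(candidates, budget, predict, run_experiment):
    """Iteratively screen candidates with a surrogate model and spend
    the experimental budget on the most promising one each round."""
    tested = {}
    for _ in range(budget):
        # Screen: score every untested candidate with the model.
        scores = {c: predict(c) for c in candidates if c not in tested}
        if not scores:
            break
        # Decide what to test next: simple greedy selection here;
        # a real ecosystem might balance exploration vs. exploitation.
        best = max(scores, key=scores.get)
        tested[best] = run_experiment(best)  # the slow, expensive step
    return tested

# Toy usage: "materials" are integers, the true property is hidden.
result = design_loop(
    candidates=range(100),
    budget=5,
    predict=lambda c: -(c - 42) ** 2,        # surrogate model stand-in
    run_experiment=lambda c: f"measured {c}",
)
print(result)
```

The point of the loop structure is the one the article makes: prediction, screening, and experiment selection feed each other rather than running as disconnected steps.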

Fundamental constraints to the logic of living systems

Excellent review in which Solé et al. explore how physical/mathematical constraints may determine what subset of biological systems could theoretically evolve in the universe. Lots of fascinating ideas applying concepts like Turing machines, cellular automata, McCulloch-Pitts networks, energy minimization, and phase transitions to multiscale biological and evolutionary phenomena!

I found the description of how parasites almost inevitably emerge and drive increased biodiversity in computational models of evolution particularly fascinating. Interestingly, I recall this idea was featured in the Hyperion Cantos novels during an explanation of the history of artificial intelligence in their fictional universe!


Abstract. It has been argued that the historical nature of evolution makes it a highly path-dependent process. Under this view, the outcome of evolutionary dynamics could have resulted in organisms with different forms and functions. At the same time, there is ample evidence that convergence and constraints strongly limit the domain of the potential design principles that evolution can achieve. Are these limitations relevant in shaping the fabric of the possible? Here, we argue that fundamental constraints are associated with the logic of living matter. We illustrate this idea by considering the thermodynamic properties of living systems, the linear nature of molecular information, the cellular nature of the building blocks of life, multicellularity and development, the threshold nature of computations in cognitive systems and the discrete nature of the architecture of ecosystems. In all these examples, we present available evidence and suggest potential avenues towards a well-defined theoretical formulation.

The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness

The core issue: computation isn’t an intrinsic physical process; it’s an extrinsic, descriptive map. It logically requires an active, experiencing cognitive agent, a “mapmaker”, to alphabetize continuous physics into meaningful, discrete symbols.


Computational functionalism dominates current debates on AI consciousness. This is the hypothesis that subjective experience emerges entirely from abstract causal topology, regardless of the underlying physical substrate. We argue this view fundamentally mischaracterizes how physics relates to information. We call this mistake the Abstraction Fallacy. Tracing the causal origins of abstraction reveals that symbolic computation is not an intrinsic physical process. Instead, it is a mapmaker-dependent description. It requires an active, experiencing cognitive agent to alphabetize continuous physics into a finite set of meaningful states. Consequently, we do not need a complete, finalized theory of consciousness to assess AI sentience—a demand that simply pushes the question beyond near-term resolution and deepens the AI welfare trap. What we actually need is a rigorous ontology of computation. The framework proposed here explicitly separates simulation (behavioral mimicry driven by vehicle causality) from instantiation (intrinsic physical constitution driven by content causality). Establishing this ontological boundary shows why algorithmic symbol manipulation is structurally incapable of instantiating experience. Crucially, this argument does not rely on biological exclusivity. If an artificial system were ever conscious, it would be because of its specific physical constitution, never its syntactic architecture. Ultimately, this framework offers a physically grounded refutation of computational functionalism to resolve the current uncertainty surrounding AI consciousness.


AI finally tests a century-old theory about how cancer begins

Cancer often begins when the genetic instructions that guide our cells become scrambled, allowing cells to grow uncontrollably. Now, scientists at EMBL have developed an AI-powered system called MAGIC that can automatically spot and tag cells showing early signs of chromosomal trouble—tiny DNA-filled structures known as micronuclei that are linked to future cancer development.

AI social platforms like Moltbook are potential accelerators of existential risk that should be regulated as critical infrastructure

The temptation is to treat Moltbook-like systems as harmless curiosities, a kind of accelerated chatroom in which agents talk, play, and occasionally generate entertaining artifacts. That framing is historically consistent with how societies first encountered earlier general-purpose technologies. It is also a mistake. Over time, social networks for AI could come to function as unsupervised training grounds, coordination substrates, and selection environments. AI agents could amplify capabilities through mutual tutoring, tool sharing, and rapid iterative refinement. They could also amplify risks through emergent collusion, deception, and the creation of machine-native memes optimized not for human comprehension but for agent persuasion and control. Such a social network is, therefore, not merely a communication system. It is an engine for cultural evolution. If the participants are AIs, then the culture that evolves could well become both alien and strategically consequential.

To understand what could go wrong, it is helpful to separate near-term societal hazards from longer-term existential hazards, and then to note that Moltbook-like platforms blur the boundary between the two. The near-term hazards include influence operations, economic manipulation, cyber offense, and institutional destabilization. The longer-term hazards derive from the classic AI control problem: how humanity can remain safely in control while benefiting from a superior form of intelligence.

The critical point: AI social networks are not merely places where AIs interact. They are environments in which agents can compound their capabilities and coordinate at scale—and environments in which humans can lose control. The prudent response is to regulate these platforms more like critical infrastructure, prioritizing auditability and reversibility, including the ability to revoke permissions and freeze or roll back agent populations.

Faster cancer screening? New AI system offers a better way to detect abnormal cells

One way cancer specialists detect the disease is by examining cells and bodily fluids under a microscope, a time-consuming and labor-intensive process called cytology. It involves visually inspecting tens of thousands to one million cells per slide for subtle 3D morphological changes that might signal the onset of cancer. But AI offers an approach that is potentially faster and more accurate.

In a new study published in the journal Nature, researchers demonstrate an AI-powered 3D scanning system that can automatically sort through samples and identify abnormal cells with performance approaching that of human experts.

Building digital models

The team developed a system called Whole-Slide Edge Tomography, which uses a scanner to capture a series of images at different depths to create a 3D digital model of every cell on a slide.
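The study's full pipeline isn't reproduced here, but the core data structure is easy to picture: a focal stack, one image per depth, assembled into a 3D volume from which per-cell crops can be analyzed. A toy illustration with NumPy, where all shapes and coordinates are placeholders:

```python
# Minimal illustration of building a 3D volume from a focal stack:
# one 2D image per depth, stacked along a new z-axis. Shapes and the
# crop step are placeholders, not the published pipeline.
import numpy as np

# Pretend we scanned 32 focal planes of a 512 x 512 slide region.
focal_planes = [np.random.rand(512, 512) for _ in range(32)]

# Stack into a (z, y, x) volume: every cell now has a 3D neighborhood.
volume = np.stack(focal_planes, axis=0)
print(volume.shape)  # (32, 512, 512)

# A classifier would then look at per-cell 3D crops, e.g. a 32x64x64
# block centered on a detected cell at (y0, x0):
y0, x0 = 300, 200
cell_crop = volume[:, y0 - 32:y0 + 32, x0 - 32:x0 + 32]
print(cell_crop.shape)  # (32, 64, 64)
```

Working on 3D crops rather than single focal planes is what lets the system pick up the subtle volumetric morphology changes that cytologists look for.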

Human Skills That Will Matter When AI Can Do Almost Everything

People ask me this constantly.

At conferences. After keynotes. In the Q&A. In the parking lot on the way out.

What skills will matter when AI can do almost everything?

Here is the framing principle that governs my answer:

The skills that will matter most are not the skills AI does best. They are the skills AI cannot replicate — and the ones that become more valuable precisely because AI makes everything else cheap.

When answers are free, questions become priceless.

When content is infinite, context becomes everything.

Anthropic CEO raises unsettling possibility about AI: “20% probability”

Anthropic CEO Dario Amodei says in an interview that the company doesn’t know whether its artificial intelligence (AI) models are conscious.

In an episode of the Interesting Times podcast with New York Times columnist Ross Douthat, Amodei explained a number of technical aspects of Anthropic’s work before Douthat asked specifically whether Anthropic would believe an AI model if it said it was conscious.

“We don’t know if the models are conscious,” Amodei admitted.

“We are not even sure that we know what it would mean for a model to be conscious, or whether a model can be conscious. But we’re open to the idea that it could be.”

Anthropic releases a document called a “model card” along with its models, which puts into writing the “capabilities, safety evaluations and responsible deployment decisions for Claude models.”

Douthat pointed out that in a model card released for Anthropic’s Claude Opus 4.6, the model “did find occasional discomfort with the experience of being a product.”
