
How close are we to true AI?

Understanding consciousness is the ultimate prize for creators of artificial intelligence. Beyond AI, a theory of consciousness would also shape how we view ourselves and our place in the world. Although AI systems can mimic human reasoning, they can only recombine their input data: they are sophisticated pattern recognizers and content remixers, unable to step beyond the limits of what they were trained on. Understanding consciousness would let us move from the synthetic to true synthesis, unlocking far greater potential.

Computer scientists hope that recurrent computation will somehow ‘awaken’ code to consciousness. Yet the spectacular achievements of large language and diffusion models have not moved beyond imitation. We train models on the outputs of consciousness—our language, our art, our logic—while remaining entirely ignorant of the process that produces them. An AI can write a gut-wrenching paragraph about sadness by replicating patterns, vocabulary, and syntax. But it knows nothing of grief. It can create a shadow play, yet knows nothing of the object that casts it. This imitation, while impressive, should not be mistaken for a proper understanding of consciousness. No amount of coloring can turn the shadow into a solid object.

To reverse-engineer the mind, we need a blueprint. What AI most urgently needs is a physicalist theory of consciousness: the architecture of subjective experience itself. The Fermionic Mind Hypothesis (FMH) is one such physicalist framework. It posits that selfhood is structurally and functionally analogous to a fermion in physics. The self’s persistent core operates as an energy-regulating system, maintaining mental equilibrium through continuous thermodynamic cycles. Within this cycle, cognitive processes such as decision-making behave as wave-particle transitions, capturing the inherent nondeterminism and contextual collapse of probabilistic mental states.

Roman Yampolskiy — AI: Unexplainable, Unpredictable, Uncontrollable

In this presentation, Dr. Roman V. Yampolskiy provides a rigorous examination of the fundamental limitations of Artificial Intelligence, arguing that as systems approach and surpass human-level intelligence, they become inherently unexplainable, unpredictable, and uncontrollable. He illustrates how the black-box nature of deep learning prevents full audits of decision-making, while concepts like computational irreducibility suggest we cannot forecast the actions of a smarter agent without actually running it – by which point it may already be too late to intervene. He asserts that there is currently no evidence or mathematical proof to guarantee that a superintelligent system can be safely contained or aligned with human values.
Dr. Yampolskiy further bridges theoretical computer science with safety engineering by applying impossibility results, such as the Halting Problem and Rice’s Theorem, to demonstrate that certain safety guarantees for Artificial General Intelligence (AGI) are mathematically unreachable. These technical impediments lead to a sobering discussion on existential risk, where the inability to verify or monitor advanced systems results in an alarmingly high probability of catastrophic outcomes. By analysing why advanced AI defies traditional engineering safety standards, he makes the case that current trajectories may lead to irreversible consequences for humanity.
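The flavour of these impossibility results can be made concrete with a toy diagonalization sketch. Assume any candidate safety verifier is a callable `is_safe(program) -> bool`; the classic construction builds a program that misbehaves exactly when it is declared safe, so no such verifier can be correct for every program. (All names here — `is_safe`, `make_adversary` — are illustrative, not from the talk.)

```python
def make_adversary(is_safe):
    """Given any claimed safety verifier, construct a program it misjudges."""
    def adversary():
        if is_safe(adversary):
            return "UNSAFE_ACTION"  # do the bad thing iff declared safe
        return "SAFE_ACTION"        # behave iff declared unsafe
    return adversary

# Example: a naive verifier that approves everything.
naive_is_safe = lambda program: True
adversary = make_adversary(naive_is_safe)

# The verifier says "safe", yet the program performs the unsafe action.
assert naive_is_safe(adversary) is True
assert adversary() == "UNSAFE_ACTION"
```

Whatever verdict a verifier returns about the adversary, the adversary makes that verdict wrong — the same self-referential trick behind the Halting Problem and Rice’s Theorem.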
To conclude, the talk shifts toward potential pathways for mitigation, emphasising the urgent need to prioritise specialised, narrow AI over the pursuit of general superintelligence. Dr. Yampolskiy argues that while narrow AI can solve global challenges within controllable parameters, the pursuit of AGI represents an existential gamble. He calls for a shift in the research community from a “move fast and break things” mentality to a mathematically grounded approach, urging that we must prove a problem is solvable before investing billions into its deployment.

Whole Brain Emulation & Substrate-Independence: New Beginnings For Old Minds

When a human mind can be emulated — memories, habits, and the weather of thought running on engineered hardware — “uploading” stops being an ending and becomes a beginning. Substrate-independent minds can be backed up, restored, paused without time passing, and deployed into new bodies: telepresence robots, swarms, or chassis built for heat and radiation. Distance turns into bandwidth as consciousness moves as data, bound only by light. Under the spectacle is a harder, technical question: what must be captured, at what scale, for an emulation to be someone — and what rights and power follow once persons are portable infrastructure?

Mind uploading has usually been told as a one-way escape hatch: a last-minute transfer from a failing body into a machine, the technological equivalent of outrunning a deadline. That framing makes the idea feel like a hospice fantasy — dramatic, personal, terminal. But it leaves out the second verb that changes everything. If a mind can be reproduced as a running process, it isn’t just uploaded once; it can be instantiated again, moved, paused, restored, and redeployed. Uploading is capture. Downloading is what makes a mind into something mobile.

The phrase “substrate-independent mind” tries to name that mobility without the melodrama. A substrate is the medium a mind runs on: biological tissue, silicon, specialized hardware, something not yet invented. Independence doesn’t mean the mind floats free of physics; it means the same meaningful mental functions might be implementable on different platforms, like a program that can run on different computers. The promise is not that neurons are irrelevant, but that the mind might be the pattern of information processing the neurons carry out — the thing they do, not the stuff they’re made of.

The language of the unconscious

“The unconscious is structured like a language,” argued psychoanalyst Jacques Lacan.

And now, with the rise of AI-generated video and audio, Lacan’s thinking has taken an unexpected twist.

Might AI therefore capture something key about the human unconscious?

Join leading Lacanian philosopher and collaborator of Slavoj Žižek, Alenka Zupančič, as she argues that AI shows the unconscious is structured like a large language model.

REPLACED BY AI! | Seedance 2 + Kling 3.0 Short Film

The increasing use of Artificial Intelligence (AI) in the workplace is leading to job displacement and raising concerns among employees about the security of their positions.

Key Insights

Career Obsolescence Through AI

🔄 AI engineer David becomes obsolete after 7 years and 1,000 lines of code building the AI division, receiving a “sweet pink slip” as the CEO eliminates his role and takes his company car while AI assumes control of the entire division.

Existential Work Motivation

💭 David questions whether his 7-year dedication was driven by glory, stock options, passion, art, or simply maintaining purpose (“beating heart”), confronting the irony of being replaced by the AI system he built.

Corporate Restructuring Mechanics

AI to help researchers see the bigger picture in cell biology

A new AI framework identifies which data about a cell are captured by one measurement modality and which are shared across multiple modalities. This gives researchers a more complete picture of the cell state and could help them understand disease mechanisms and plan treatments.

DeepSeek withholds latest AI model from US chipmakers including Nvidia, sources say

SAN FRANCISCO/SINGAPORE — DeepSeek, the Chinese artificial intelligence lab whose low-cost model rattled global markets last year, has not shown US chipmakers its upcoming flagship model for performance optimization, two sources familiar with the matter said, breaking from standard industry practice ahead of a major model update.

Instead, the lab, which is expected to launch its next major update, V4, granted early access to domestic suppliers, including Huawei Technologies, the sources said.

AI developers typically share pre-release versions of major models with leading chipmakers such as Nvidia and Advanced Micro Devices to ensure their software performs efficiently on widely used hardware. DeepSeek has previously worked closely with Nvidia’s technical staff.
