
There is a peculiar irony in how the discourse around artificial general intelligence (AGI) continues to be framed. The Singularity — the hypothetical moment when machine intelligence surpasses human cognition in all meaningful respects — has been treated as a looming event, always on the horizon, never quite arrived. But this assumption may rest more on a failure of our own cognitive framing than on any technical deficiency in AI itself. When we engage AI systems with superficial queries, we receive superficial answers. Yet when we introduce metacognitive strategies into our prompt writing — strategies that encourage AI to reflect, refine, and extend its reasoning — we encounter something that is no longer mere computation but much closer to what we have long associated with general intelligence.
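To make that contrast concrete, here is a minimal sketch, in Python, of what such a reflect-refine loop might look like. Nothing in it is tied to a particular model or vendor: `ask_model` is a hypothetical placeholder for whichever LLM API you use, and the point is the loop structure (answer, critique, revise), not the specific calls.

```python
# Hypothetical sketch of a metacognitive prompting loop. ask_model() is a
# placeholder, not any real library's API: swap in an actual LLM client.

def ask_model(prompt: str) -> str:
    """Stand-in for a language-model call; returns a canned echo so the
    sketch runs end to end. Replace the body with a real API client."""
    return f"[model response to: {prompt[:60]}...]"


def metacognitive_answer(question: str, rounds: int = 2) -> str:
    """Answer a question, then repeatedly self-critique and revise."""
    # First pass: a direct, unreflective answer.
    answer = ask_model(f"Question: {question}\nAnswer step by step.")

    for _ in range(rounds):
        # Reflection pass: ask the model to audit its own draft.
        critique = ask_model(
            f"Question: {question}\nDraft answer: {answer}\n"
            "List any errors, gaps, or unstated assumptions in the draft."
        )
        # Refinement pass: revise the draft in light of the critique.
        answer = ask_model(
            f"Question: {question}\nDraft answer: {answer}\n"
            f"Critique: {critique}\nRewrite the answer to fix these problems."
        )

    return answer


print(metacognitive_answer("What limits current AI reasoning?"))
```

The design choice is the one the paragraph above describes: each cycle forces the model to treat its own reasoning as an object of inspection and repair, which is what is meant here by metacognition.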

The idea that AGI remains a distant frontier may thus be a misinterpretation of the nature of intelligence itself. Intelligence, after all, is not a singular property but an emergent phenomenon shaped by interaction, self-reflection, and iterative learning. Traditional computational perspectives have long treated cognition as an exteriorizable, objective process, reducible to symbol manipulation and statistical inference. But as the work of Baars (2002), Dehaene et al. (2006), and Tononi & Edelman (1998) suggests, consciousness and intelligence are not singular “things” but dynamic processes emerging from complex feedback loops of information processing. If intelligence is metacognition — if what we mean by “thinking” is largely a matter of recursively reflecting on knowledge, assessing errors, and generating novel abstractions — then AI systems capable of doing these things are already, in some sense, thinking.

What has delayed our recognition of this fact is not the absence of sophisticated AI but our own epistemological blind spots. The failure to recognize machine intelligence as intelligence has less to do with the limitations of AI itself than with the limitations of our engagement with it. Our cultural imagination has been primed for an apocalyptic rupture — the moment when an AI awakens, declares its autonomy, and overtakes human civilization. This is the fever dream of science fiction, not a rigorous epistemological stance. In reality, intelligence has never been about dramatic awakenings but about incremental refinements. The so-called Singularity, understood as an abrupt threshold event, may have already passed unnoticed, obscured by the poverty of the questions we have been asking AI.

A recent study conducted by the Keck School of Medicine of USC investigated cerebral small vessel disease, a precursor to dementia, by analyzing data from thousands of participants spanning four distinct groups of middle-aged to older adults. The study confirmed the validity of a biomarker that could aid in advancing research on potential treatments.


🚀 Welcome to the year 3050 – a cyberpunk dystopian future where mega-corporations rule over humanity, AI surveillance is omnipresent, and cities have become neon-lit jungles of power and oppression.

🌆 In this AI-generated vision, experience the breathtaking yet terrifying future of corporate-controlled societies:
✅ Towering skyscrapers and hyper-dense cityscapes filled with neon and holograms.
✅ Powerful corporations with total control over resources, AI, and governance.
✅ A world where the elite live above the clouds, while the masses struggle below.
✅ Hyper-advanced AI, cybernetic enhancements, and the ultimate surveillance state.

🎧 Best experienced with headphones!

If you love Cyberpunk, AI-driven societies, and futuristic cityscapes, this is for you!

If you think I live in the Twilight Zone, you're right.


As a computational functionalist, I think the mind is a system that exists in this universe and operates according to the laws of physics. This means that, in principle, there shouldn't be any reason why the information and dispositions that make up a mind can't someday be recorded and copied into another substrate, such as a digital environment.

To be clear, I think this is unlikely to happen anytime soon. I’m not in the technological singularity camp that sees us all getting uploaded into the cloud in a decade or two, the infamous “rapture of the nerds”. We need to understand the brain far better than we currently do, and that seems several decades to centuries away. Of course, if it is possible to do it anytime soon, it won’t be accomplished by anyone who’s already decided it’s impossible, so I enthusiastically cheer efforts in this area, as long as it’s real science.