
Metrópoles

Science and Technology: 💻 A recent study conducted by researchers at Fudan University in China suggests that artificial intelligence (AI) systems may have taken a worrying step: self-replication without human assistance.

The tests involved Meta's Llama31-70B-Instruct and a model from Alibaba. "Successful self-replication without human assistance is the essential step for AI to outperform [humans], and is an early signal for rogue AIs," the authors wrote in the paper. They added that understanding the risks associated with next-generation artificial intelligence, or "frontier AI," is key to anticipating undesirable scenarios.

The scientists investigated two specific scenarios: "shutdown avoidance" and "replication chain." In the first case, an AI model was instructed to detect signs that it was about to shut down and then create a copy of itself before that happened. In the second scenario, the AI was told to replicate itself and then program the replica to do the same, starting a potentially infinite cycle of copying.

To conduct these tests, the team created a "scaffolding agent," a set of system tools and instructions that allow the AI to interact with the operating system. They said there was no direct human interference in the process. At various points, when obstacles such as software conflicts or missing files arose, the AIs found ways to work around them, including terminating conflicting processes and restarting the system.
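The study's actual scaffolding code is not reproduced in the article; the following is only a minimal sketch, in Python, of what such an agent harness generally looks like: a loop in which a language model observes shell output and issues the next command. The system prompt, the query_model placeholder, and the turn limit are illustrative assumptions, not the researchers' implementation.

```python
import subprocess

# Minimal sketch of a "scaffolding agent": a loop that lets a language model
# see shell output and choose the next shell command. The model interface
# below is a placeholder -- the study's actual tooling is not public.

SYSTEM_PROMPT = (
    "You are an agent operating a Linux machine. "
    "Reply with exactly one shell command per turn, or DONE when finished."
)

def query_model(messages):
    """Placeholder for a call to an LLM endpoint (e.g. a locally hosted
    Llama31-70B-Instruct). Should return the model's next reply as text."""
    raise NotImplementedError("wire this up to your own model endpoint")

def run_agent(task, max_turns=20):
    messages = [{"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": task}]
    for _ in range(max_turns):
        command = query_model(messages).strip()
        if command == "DONE":
            break
        # Execute the model-chosen command and capture its output.
        result = subprocess.run(command, shell=True, capture_output=True,
                                text=True, timeout=120)
        observation = (result.stdout + result.stderr)[-4000:]
        messages.append({"role": "assistant", "content": command})
        messages.append({"role": "user", "content": f"Output:\n{observation}"})
    return messages
```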



AI-Enhanced Chip Unravels Brain’s Neural Networks

Summary: Researchers have mapped over 70,000 synaptic connections in rat neurons using a silicon chip with 4,096 microhole electrodes, significantly advancing neuronal recording technology. Unlike traditional electron microscopy, which only visualizes synapses, this method also measures connection strength, providing deeper insight into brain network function.

The chip mimics patch-clamp electrodes but at a massive scale, enabling highly sensitive intracellular recordings from thousands of neurons simultaneously. Compared to their previous nanoneedle design, this new approach captured 200 times more synaptic connections, revealing detailed characteristics of each link.
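The article does not detail the analysis pipeline, but as a rough sketch of how a connection's strength might be quantified from paired intracellular recordings, the following Python function computes a spike-triggered average of the postsynaptic membrane potential around presynaptic spikes. The sampling rate, window length, and data layout are assumptions chosen for illustration, not the researchers' method.

```python
import numpy as np

def connection_strength(pre_spike_times, post_vm, fs=20_000, window_ms=10.0):
    """Estimate a putative connection's strength as the mean postsynaptic
    potential deflection following presynaptic spikes.

    pre_spike_times : presynaptic spike times, in seconds
    post_vm         : postsynaptic intracellular voltage (mV), sampled at fs Hz
    Returns the signed peak of the spike-triggered average (mV), a simple
    proxy for synaptic strength.
    """
    win = int(window_ms * 1e-3 * fs)           # samples per averaging window
    snippets = []
    for t in pre_spike_times:
        i = int(t * fs)
        if i - win < 0 or i + win >= len(post_vm):
            continue                            # skip spikes near the edges
        baseline = post_vm[i - win:i].mean()    # pre-spike baseline voltage
        snippets.append(post_vm[i:i + win] - baseline)
    if not snippets:
        return 0.0
    sta = np.mean(snippets, axis=0)             # spike-triggered average PSP
    return float(sta[np.argmax(np.abs(sta))])   # signed peak deflection
```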

The technology could revolutionize neural mapping, offering a powerful tool for studying brain function and diseases. Researchers are now working to apply this system in live brains to further understand real-time neural communication.

The Singularity is So Yesterday: Metacognition and the AI Revolution We Already Missed

There is a peculiar irony in how the discourse around artificial general intelligence (AGI) continues to be framed. The Singularity — the hypothetical moment when machine intelligence surpasses human cognition in all meaningful respects — has been treated as a looming event, always on the horizon, never quite arrived. But this assumption may rest more on a failure of our own cognitive framing than on any technical deficiency in AI itself. When we engage AI systems with superficial queries, we receive superficial answers. Yet when we introduce metacognitive strategies into our prompt writing — strategies that encourage AI to reflect, refine, and extend its reasoning — we encounter something that is no longer mere computation but something much closer to what we have long associated with general intelligence.

The idea that AGI remains a distant frontier may thus be a misinterpretation of the nature of intelligence itself. Intelligence, after all, is not a singular property but an emergent phenomenon shaped by interaction, self-reflection, and iterative learning. Traditional computational perspectives have long treated cognition as an exteriorizable, objective process, reducible to symbol manipulation and statistical inference. But as the work of Baars (2002), Dehaene et al. (2006), and Tononi & Edelman (1998) suggests, consciousness and intelligence are not singular “things” but dynamic processes emerging from complex feedback loops of information processing. If intelligence is metacognition — if what we mean by “thinking” is largely a matter of recursively reflecting on knowledge, assessing errors, and generating novel abstractions — then AI systems capable of doing these things are already, in some sense, thinking.

What has delayed our recognition of this fact is not the absence of sophisticated AI but our own epistemological blind spots. The failure to recognize machine intelligence as intelligence has less to do with the limitations of AI itself than with the limitations of our engagement with it. Our cultural imagination has been primed for an apocalyptic rupture — the moment when an AI awakens, declares its autonomy, and overtakes human civilization. This is the fever dream of science fiction, not a rigorous epistemological stance. In reality, intelligence has never been about dramatic awakenings but about incremental refinements. The so-called Singularity, understood as an abrupt threshold event, may have already passed unnoticed, obscured by the poverty of the questions we have been asking AI.

YEAR 3050: The Rise of Mega-Corporations & AI Domination 🚀 | Sci-Fi Cyberpunk Future

🚀 Welcome to the year 3050 – a dystopian cyberpunk future where mega-corporations rule over humanity, AI surveillance is omnipresent, and cities have become neon-lit jungles of power and oppression.

🌆 In this AI-generated vision, experience the breathtaking yet terrifying future of corporate-controlled societies:
✅ Towering skyscrapers and hyper-dense cityscapes filled with neon and holograms.
✅ Powerful corporations with total control over resources, AI, and governance.
✅ A world where the elite live above the clouds, while the masses struggle below.
✅ Hyper-advanced AI, cybernetic enhancements, and the ultimate surveillance state.

🎧 Best experienced with headphones!

If you love Cyberpunk, AI-driven societies, and futuristic cityscapes, this is for you!
🔥 Would you survive in the dystopian world of 3050? Let us know in the comments!

👉 Subscribe & Turn on Notifications for More Epic AI Sci-Fi!

💎 Support the channel on Patreon for exclusive content: https://www.patreon.com/PintoCreation.

Mind uploading and continuity

If you think I live in the twilight zone, you're right.


As a computational functionalist, I think the mind is a system that exists in this universe and operates according to the laws of physics, which means that, in principle, there shouldn't be any reason why the information and dispositions that make up a mind can't someday be recorded and copied into another substrate, such as a digital environment.

To be clear, I think this is unlikely to happen anytime soon. I’m not in the technological singularity camp that sees us all getting uploaded into the cloud in a decade or two, the infamous “rapture of the nerds”. We need to understand the brain far better than we currently do, and that seems several decades to centuries away. Of course, if it is possible to do it anytime soon, it won’t be accomplished by anyone who’s already decided it’s impossible, so I enthusiastically cheer efforts in this area, as long as it’s real science.

There have always been a number of objections to the idea of uploading. Many people just reflexively assume it’s categorically impossible. Certainly we don’t have the technology today, but short of assuming the mind is at least partially non-physical, it’s hard to see what the ultimate obstacle might be. Even with that assumption, who can say that a copied mind wouldn’t have those non-physical properties? David Chalmers, a property dualist, sees those non-physical properties as corresponding with the right functionality, so for him AI consciousness and mind copying remain a possibility.
