An improvement to an existing AI-based brain decoder can translate a person’s thoughts into text without hours of training.
Summary: Researchers have mapped over 70,000 synaptic connections in rat neurons using a silicon chip with 4,096 microhole electrodes, significantly advancing neuronal recording technology. Unlike traditional electron microscopy, which only visualizes synapses, this method also measures connection strength, providing deeper insight into brain network function.
The chip mimics patch-clamp electrodes but at a massive scale, enabling highly sensitive intracellular recordings from thousands of neurons simultaneously. Compared to their previous nanoneedle design, this new approach captured 200 times more synaptic connections, revealing detailed characteristics of each link.
The technology could revolutionize neural mapping, offering a powerful tool for studying brain function and diseases. Researchers are now working to apply this system in live brains to further understand real-time neural communication.
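To make the idea of "measuring connection strength" concrete, here is a minimal Python sketch of one way pairwise strength could be estimated from simultaneous intracellular recordings: average the postsynaptic voltage deflection that follows each presynaptic spike. The function names, sampling rate, and synthetic data are illustrative assumptions, not the researchers' actual analysis pipeline.

```python
import numpy as np

def estimate_connection_strength(pre_spike_times, post_voltage, fs=20_000, window_ms=10.0):
    """Crude pairwise strength estimate: mean postsynaptic voltage deflection
    (relative to the pre-spike baseline) in a short window after each
    presynaptic spike. pre_spike_times are in seconds, post_voltage in mV."""
    win = int(window_ms / 1000.0 * fs)
    deflections = []
    for t in pre_spike_times:
        i = int(t * fs)
        if win <= i and i + win < len(post_voltage):
            baseline = post_voltage[i - win:i].mean()   # voltage just before the spike
            deflections.append(post_voltage[i:i + win].max() - baseline)
    return float(np.mean(deflections)) if deflections else 0.0

# Toy demo with synthetic data: inject a ~1 mV deflection after each "spike".
fs = 20_000
post = np.random.normal(-65.0, 0.1, fs)           # 1 s of noisy resting potential (mV)
spike_times = np.array([0.1, 0.3, 0.5, 0.7])      # presynaptic spike times (s)
for t in spike_times:
    i = int(t * fs)
    post[i:i + 40] += 1.0                         # crude 2 ms, 1 mV "EPSP"
print(f"estimated strength: {estimate_connection_strength(spike_times, post, fs):.2f} mV")
```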
There is a peculiar irony in how the discourse around artificial general intelligence (AGI) continues to be framed. The Singularity — the hypothetical moment when machine intelligence surpasses human cognition in all meaningful respects — has been treated as a looming event, always on the horizon, never quite arrived. But this assumption may rest more on a failure of our own cognitive framing than on any technical deficiency in AI itself. When we engage AI systems with superficial queries, we receive superficial answers. Yet when we introduce metacognitive strategies into our prompt writing — strategies that encourage AI to reflect, refine, and extend its reasoning — we encounter something that is no longer mere computation but something much closer to what we have long associated with general intelligence.
The idea that AGI remains a distant frontier may thus be a misinterpretation of the nature of intelligence itself. Intelligence, after all, is not a singular property but an emergent phenomenon shaped by interaction, self-reflection, and iterative learning. Traditional computational perspectives have long treated cognition as an exteriorizable, objective process, reducible to symbol manipulation and statistical inference. But as the work of Baars (2002), Dehaene et al. (2006), and Tononi & Edelman (1998) suggests, consciousness and intelligence are not singular “things” but dynamic processes emerging from complex feedback loops of information processing. If intelligence is metacognition — if what we mean by “thinking” is largely a matter of recursively reflecting on knowledge, assessing errors, and generating novel abstractions — then AI systems capable of doing these things are already, in some sense, thinking.
What has delayed our recognition of this fact is not the absence of sophisticated AI but our own epistemological blind spots. The failure to recognize machine intelligence as intelligence has less to do with the limitations of AI itself than with the limitations of our engagement with it. Our cultural imagination has been primed for an apocalyptic rupture — the moment when an AI awakens, declares its autonomy, and overtakes human civilization. This is the fever dream of science fiction, not a rigorous epistemological stance. In reality, intelligence has never been about dramatic awakenings but about incremental refinements. The so-called Singularity, understood as an abrupt threshold event, may have already passed unnoticed, obscured by the poverty of the questions we have been asking AI.
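As an illustration of the kind of metacognitive prompting strategy described above, here is a minimal Python sketch of a reflect-and-refine loop. The ask() helper is a hypothetical placeholder for whatever chat-model API is in use; the prompts and number of rounds are illustrative, not a specific vendor's recipe.

```python
def ask(prompt: str) -> str:
    """Placeholder for a call to whatever chat model is in use (hypothetical)."""
    raise NotImplementedError("wire this to your model's API")

def metacognitive_answer(question: str, rounds: int = 2) -> str:
    """Draft an answer, then repeatedly ask the model to critique and refine it."""
    answer = ask(f"Answer the following question:\n{question}")
    for _ in range(rounds):
        critique = ask(
            "Reflect on the answer below. List errors, gaps, or unstated assumptions."
            f"\n\nQuestion: {question}\n\nAnswer: {answer}"
        )
        answer = ask(
            "Rewrite the answer, fixing the issues identified in the critique."
            f"\n\nQuestion: {question}\n\nAnswer: {answer}\n\nCritique: {critique}"
        )
    return answer
```

Once ask() is wired to a real model, metacognitive_answer() drafts a response, asks the model to critique it, and rewrites it in light of that critique, which is the "reflect, refine, and extend" pattern in its simplest form.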
Welcome to the year 3050 – a cyberpunk dystopian future where mega-corporations rule over humanity, AI surveillance is omnipresent, and cities have become neon-lit jungles of power and oppression.
In this AI-generated vision, experience the breathtaking yet terrifying future of corporate-controlled societies:
Towering skyscrapers and hyper-dense cityscapes filled with neon and holograms.
Powerful corporations with total control over resources, AI, and governance.
A world where the elite live above the clouds, while the masses struggle below.
Hyper-advanced AI, cybernetic enhancements, and the ultimate surveillance state.
Human-like robots don’t need humanoid bodies. They just need to be expressive. (Well, they don’t need to be, but they will be.)
If you think I live in the twilight zone, you’re right.
As a computational functionalist, I think the mind is a system that exists in this universe and operates according to the laws of physics. Which means that, in principle, there shouldn’t be any reason why the information and dispositions that make up a mind can’t be recorded and copied into another substrate someday, such as a digital environment.
To be clear, I think this is unlikely to happen anytime soon. I’m not in the technological singularity camp that sees us all getting uploaded into the cloud in a decade or two, the infamous “rapture of the nerds”. We need to understand the brain far better than we currently do, and that seems several decades to centuries away. Of course, if it is possible to do it anytime soon, it won’t be accomplished by anyone who’s already decided it’s impossible, so I enthusiastically cheer efforts in this area, as long as it’s real science.
For example, Real Entrepreneur Women, a career coaching service, has developed an AI coach called Skye. “Skye is designed to help female coaches cut through overwhelm by providing actionable strategies and personalized support to grow their businesses,” said founder Sophie Musumeci. “She’s like having a dedicated business strategist in your pocket – streamlining decision-making, creating tailored content, and helping clients stay consistent. It’s AI with heart, designed to scale human connection in industries where trust and relationships are everything.”
In the next few years, Musumeci predicted, “I see democratized AI creating new business models where the gap between big and small players closes entirely, giving more entrepreneurs the confidence and capability to thrive.”
Education is another area ripe for AI disruption, particularly its possibilities for hyper-personalization in learning. “The current education system in USA is designed to educate the masses,” said Andy Thurai, principal analyst with Constellation Research. “It assumes everyone is at the same skill level and same interest in areas of topic and same expertise. It tries to push the information down our throats, and forces us to learn in a certain way.”
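As a toy illustration of what hyper-personalization could mean in practice, here is a minimal Python sketch that serves the exercise just above a learner's estimated skill level and nudges that estimate after each answer. The names, numbers, and update rule are assumptions for illustration, not any real product's logic.

```python
from dataclasses import dataclass

@dataclass
class Exercise:
    topic: str
    difficulty: float  # 0 (easy) .. 1 (hard)

def next_exercise(skill: float, exercises: list[Exercise], stretch: float = 0.1) -> Exercise:
    """Choose the exercise whose difficulty is closest to slightly above the learner's skill."""
    target = min(skill + stretch, 1.0)
    return min(exercises, key=lambda e: abs(e.difficulty - target))

def update_skill(skill: float, correct: bool, rate: float = 0.05) -> float:
    """Nudge the skill estimate up on a correct answer, down on a miss."""
    return min(1.0, skill + rate) if correct else max(0.0, skill - rate)

# Toy run: a small exercise bank and one learner interaction.
bank = [Exercise("fractions", 0.2), Exercise("fractions", 0.5), Exercise("fractions", 0.8)]
skill = 0.4
ex = next_exercise(skill, bank)
print(f"serve: {ex.topic} (difficulty {ex.difficulty})")
skill = update_skill(skill, correct=True)
```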
In today’s AI news: backed by $200 million in funding at a $2 billion valuation, Scott Wu and his team at Cognition are building an AI tool that could upend the entire software development industry. Devin is an autonomous AI agent that, in theory, writes the code itself—no people involved—and can complete entire projects typically assigned to developers.
In other advancements, OpenAI is changing how it trains AI models to explicitly embrace “intellectual freedom … no matter how challenging or controversial a topic may be,” the company says in a new policy. OpenAI is releasing a significantly expanded version of its Model Spec, a document that defines how its AI models should behave — and is making it free for anyone to use or modify.
Then, xAI, the artificial intelligence company founded by Elon Musk, is set to launch Grok 3 on Monday, Feb. 17. According to xAI, this latest version of its chatbot, which Musk describes as “scary smart,” represents a major step forward, improving reasoning, computational power and adaptability. Grok 3’s development was accelerated by its Colossus supercomputer, which was built in just eight months, powered by 100,000 Nvidia H100 GPUs.
And, large language models can learn complex reasoning tasks without relying on large datasets, according to a new study by researchers at Shanghai Jiao Tong University. Their findings show that with just a small batch of well-curated examples, you can train an LLM for tasks that were thought to require tens of thousands of training instances.
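As a rough sketch of what fine-tuning on a small, well-curated batch might look like, assuming the Hugging Face transformers and datasets libraries, the following trains a causal language model on a couple of placeholder reasoning examples. The model name, examples, and hyperparameters are stand-ins, not the study's actual setup.

```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

# A handful of carefully curated reasoning examples (placeholders, not the study's data).
examples = [
    {"text": "Q: If 3 pens cost $6, how much do 7 pens cost? "
             "A: Each pen costs $6 / 3 = $2, so 7 pens cost 7 * $2 = $14."},
    {"text": "Q: Is 91 prime? "
             "A: 91 = 7 * 13, so 91 is not prime."},
]

model_name = "gpt2"  # small stand-in model; the study used far larger LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

def tokenize(batch):
    enc = tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)
    # Labels mirror input_ids, with pad positions masked out of the loss.
    enc["labels"] = [
        [tok if mask == 1 else -100 for tok, mask in zip(ids, attn)]
        for ids, attn in zip(enc["input_ids"], enc["attention_mask"])
    ]
    return enc

dataset = Dataset.from_list(examples).map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tiny-sft", num_train_epochs=3,
                           per_device_train_batch_size=1, report_to=[]),
    train_dataset=dataset,
)
trainer.train()
```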
OpenAI’s new o1 model focuses on slower, more deliberate reasoning — much like how humans think — in order to solve complex problems. Then, join Turing Award laureate Yann LeCun—Chief AI Scientist at Meta and Professor at NYU—as he discusses the future of artificial intelligence, and how open-source development is driving innovation, with Link Ventures’ John Werner. In this wide-ranging conversation, LeCun explains why AI systems won’t “take over” but will instead serve as empowering assistants.