
Philosopher Peter Hankins at Conscious Entities has a write-up on the November/December issue of the JCS (Journal of Consciousness Studies), in which philosophers, psychologists, and neuroscientists such as Keith Frankish, Daniel Dennett, Susan Blackmore, and Michael Graziano debate whether it makes sense to refer to phenomenal consciousness as an illusion. Unfortunately, the full texts of the journal articles are paywalled, though if you are on a university network, or can access the site through one, you may be able to reach them.

Saying that phenomenal consciousness is an illusion is often met with derision. The phrase “is an illusion” is meant to convey that consciousness isn’t what it appears to be, but many people read it as “does not exist”, which seems self-evidently ludicrous. That is why, although I generally agree with the illusionists ontologically, that is, with their actual conclusions about reality, I’ve resisted using the “illusion” label for the last few years. As one of the JCS authors (Nicholas Humphrey) put it, it’s bad politics. People tend to stop listening when they perceive you’re saying consciousness isn’t there.

It can be argued that, whatever phenomenal experience is, we most definitely have it, and that the perception of a subjective experience is the experience, such that questioning it is incoherent. I have some sympathy with that position.

However, despite these advances, human progress is never without risk. We must therefore address urgent challenges, including the lack of transparency in algorithms, the potential for intrinsic bias, and the possibility of AI being used for destructive purposes.

Philosophical And Ethical Implications

The singularity and transcendence of AI could imply a radical redefinition of the relationship between humans and technology in our society. A key question that may arise in this context is, “If AI surpasses human intelligence, who (or what) should make critical decisions about the planet’s future?” Looking even further ahead, the realization of transcendent AI could challenge the very concept of the soul, prompting theologians, philosophers and scientists to reconsider foundations of belief established over centuries of human history.

Energy-efficient, task-agnostic continual learning is a key challenge in Artificial Intelligence frameworks. Here, the authors propose a hybrid neural network that emulates dual representations in corticohippocampal circuits, reducing the effect of catastrophic forgetting.
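
The paper’s architecture isn’t reproduced here, but a minimal, hypothetical PyTorch sketch can illustrate the general dual-memory idea it draws on: a slow-learning “cortical” network paired with a small episodic “hippocampal” replay buffer, one common way to blunt catastrophic forgetting. All sizes, names, and hyperparameters below are illustrative assumptions, not the authors’ design.

```python
import random
import torch
import torch.nn as nn

# Slow-learning "cortical" network (shapes are arbitrary assumptions).
cortical = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
optimizer = torch.optim.SGD(cortical.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Fast "hippocampal" episodic memory: a small buffer of past examples.
buffer, BUFFER_CAP, REPLAY_K = [], 500, 8

def train_step(x, y):
    """Train on one new example, interleaved with replayed old ones."""
    batch = [(x, y)] + random.sample(buffer, min(REPLAY_K, len(buffer)))
    xs = torch.stack([b[0] for b in batch])  # (N, 16)
    ys = torch.stack([b[1] for b in batch])  # (N,)
    optimizer.zero_grad()
    loss_fn(cortical(xs), ys).backward()
    optimizer.step()
    if len(buffer) < BUFFER_CAP:             # store for later replay
        buffer.append((x, y))

# Example: random data standing in for a sequential task stream.
for _ in range(100):
    train_step(torch.randn(16), torch.randint(0, 4, ()))
```

Replaying a handful of stored examples alongside each new one keeps gradients from overwriting earlier tasks, which is the behavioral effect the corticohippocampal dual representation is meant to achieve far more efficiently.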

Aurora consists of four photonically interconnected, modular and independent server racks, containing 35 photonic chips and 13 km of fiber optics. The system operates at room temperature and is fully automated, which Xanadu says makes it capable of running “for hours without any human intervention.”

The company added that in principle, Aurora could be scaled up to “thousands of server racks and millions of qubits today, realizing the ultimate goal of a quantum data center.” In a blog post detailing Aurora, Xanadu CTO Zachary Vernon said the machine represents the “very first time [Xanadu] – or anyone else for that matter – have combined all the subsystems necessary to implement universal and fault-tolerant quantum computation in a photonic architecture.”
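
Aurora itself isn’t programmable through anything shown here, but Xanadu’s open-source Strawberry Fields library gives a feel for how photonic circuits are expressed in software. Below is a minimal sketch; the two-mode circuit and gate parameters are arbitrary illustrations, unrelated to Aurora’s actual configuration.

```python
import strawberryfields as sf
from strawberryfields.ops import Sgate, BSgate, MeasureFock

# Two optical modes: squeeze each, interfere them on a beamsplitter,
# then count photons in both modes.
prog = sf.Program(2)
with prog.context as q:
    Sgate(0.6) | q[0]                # squeezing (parameter chosen arbitrarily)
    Sgate(0.6) | q[1]
    BSgate(0.5, 0.0) | (q[0], q[1])  # beamsplitter interference
    MeasureFock() | q                # photon-number measurement

# Simulate in the Fock basis, truncated at 5 photons per mode.
eng = sf.Engine("fock", backend_options={"cutoff_dim": 5})
print(eng.run(prog).samples)         # e.g. [[1, 0]]
```

Squeezed light, beamsplitters, and photon counting are the basic ingredients of photonic quantum computing; fault tolerance at scale, which Aurora targets, layers error correction on top of primitives like these.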

In this study, the authors present optimization and efficacy testing of apolipoprotein-based lipid nanoparticles for delivering various nucleic acid therapeutics in vivo to immune cells and their progenitors in the bone marrow.

Current wearable and implantable biosensors still face challenges in sensitivity, stability and scalability. Here the authors report inkjet-printable, mass-producible core–shell nanoparticle-based biosensors that monitor a broad range of biomarkers.

Description:
Sam Altman admitted OpenAI might have been wrong about keeping its AI models private and acknowledged that DeepSeek’s open-source approach is making waves in the industry. Meanwhile, DeepSeek claims to have built an AI model as powerful as OpenAI’s o1 for a fraction of the cost, raising concerns about potential data theft and U.S. chip restrictions. At the same time, Altman is pushing a $500 billion AI data center project called “Stargate” while facing a personal lawsuit, as Google quietly adjusts its AI strategy and Microsoft investigates DeepSeek’s rapid rise.

*Key Topics:*
- *Sam Altman’s shocking admission* about OpenAI’s past mistakes and DeepSeek’s rising influence.
- How *DeepSeek claims to rival OpenAI’s o1* at a fraction of the cost, raising legal concerns.
- The *AI arms race escalates* as OpenAI, DeepSeek, Microsoft, and Google battle for dominance.

*What You’ll Learn:*
- Why *OpenAI might change its stance on open-source AI* after DeepSeek’s disruptive impact.
- How *Microsoft is investigating DeepSeek* over alleged unauthorized use of OpenAI’s data.
- The *$500 billion “Stargate” project* and why experts doubt Altman’s ambitious AI infrastructure plans.

Unitree, a Chinese robotics company competing with outfits such as Boston Dynamics, Tesla and Agility Robotics, has unveiled a new video of its humanoid G1 and H1 robots, showing off some new moves.

The smaller, more affordable G1 robot is shown running, navigating uneven terrain, and walking in a more natural way. Unitree told us that these demos were remote-controlled because the robots were operating in environments it hadn’t mapped with LIDAR.

Unitree’s taller H1 humanoid robot also showed off some new moves at the Spring Festival Gala. The robots performed a preset routine learned from data produced by human dancers. The company says “whole body AI motion control” kept the robots in sync and allowed them to respond to any unplanned changes or events.