Beware the tech leaders making grandiose statements about artificial intelligence. They have lost sight of reality, says Philip Ball
By Philip Ball
Michael Graziano is a scientist and novelist who is currently a Professor of Psychology and Neuroscience at Princeton University. He is a best-selling author whose books include “Consciousness and the Social Brain”, “Rethinking Consciousness” and “The Spaces Between Us”, among others. His scientific research at the Graziano Lab focuses on the brain basis of awareness. He has proposed the “attention schema theory” (AST), an explanation of how, and for what adaptive advantage, brains attribute the property of awareness to themselves.
TIMESTAMPS:
0:00 — Introduction.
2:12 — Meet Dr Michael Graziano: The Consciousness Theorist.
6:44 — What Is Consciousness? A Deep Dive.
11:35 — The Illusion of Consciousness.
15:20 — Attention Schema Theory.
20:05 — Mystery of Self-Awareness and the ‘I’.
25:10 — The Hard Problem vs. the Meta Problem of Consciousness.
30:55 — Social Awareness & Dehumanization.
34:20 — Effect of Social Media on Human Interaction.
38:05 — Role of Attention in Machine Consciousness.
41:55 — Creating an AI Mind: Step by Step Guide.
47:30 — Exploring the Building Blocks of Artificial Consciousness.
51:15 — AI Self-Perception: Can Machines Be Conscious?
56:10 — Challenging the Magical vs. Scientific View of Consciousness.
1:00:40 — Consciousness: A Choice Between Magic and Science?
1:05:12 — Attention in Machine Learning: A Closer Look.
1:10:55 — The Psychology of Human Perception.
1:14:20 — Social Awareness and the Digital Revolution.
1:18:35 — Conclusion.
EPISODE LINKS:
Michael’s Website: https://grazianolab.princeton.edu/
Michael’s Books: https://tinyurl.com/2eufd62r.
TED-ed
Matthew Berman
Elon Musk is hinting at revolutionary advancements in AI-generated content, potentially disrupting the gaming industry, with teasers about upcoming Tesla demos and the integration of xAI’s capabilities.
Questions to inspire discussion.
Tesla’s Competitive Advantage.
🚗 Q: How does Tesla maintain its lead in autonomous driving?
A: Tesla leverages the “data flywheel” built by deploying millions of vehicles that collect real-world data, a dataset that is nearly impossible for competitors to replicate.
🤖 Q: What unique combination gives Tesla an edge in AGI development?
A: Tesla’s real-world data stream, combined with xAI’s language model, voice, video, and image generation capabilities, provides the complete recipe for AGI.
Investment Opportunities.
💼 Q: Why do institutional investors undervalue Tesla’s autonomy lead?
A: Institutional investors often view Tesla as just a car company, overlooking its unassailable autonomy advantage, while xAI is seen as a pure AI company.
Researchers at Helmholtz Munich have developed an artificial intelligence model that can simulate human behavior with remarkable accuracy. The language model, called Centaur, was trained on more than ten million decisions from psychological experiments—and makes decisions in ways that closely resemble those of real people. This opens new avenues for understanding human cognition and improving psychological theories.
For decades, psychology has aspired to explain the full complexity of human thought. Yet traditional models could either offer a transparent explanation of how people think or reliably predict how they behave; achieving both has long seemed out of reach.
The team led by Dr. Marcel Binz and Dr. Eric Schulz, both researchers at the Institute for Human-Centered AI at Helmholtz Munich, has now developed a model that combines both. Centaur was trained using a specially curated dataset called Psych-101, which includes over ten million individual decisions from 160 behavioral experiments. The study is published in the journal Nature.
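The excerpt does not spell out Centaur’s training recipe, but the general idea it describes, a language model fitted to reproduce human choices, can be sketched. The snippet below is an illustration only: it assumes each experimental trial is serialized as a text prompt, uses a small off-the-shelf model (gpt2, not the actual Centaur base model), invents the trial records, and computes the loss only on the tokens encoding the participant’s choice.

```python
# Illustrative sketch only: fine-tune a small causal LM on serialized
# behavioural trials so it predicts the participant's choice token.
# The base model (gpt2), the trial records, and the prompt format are
# assumptions for this example, not the actual Centaur / Psych-101 setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Hypothetical trial records: each trial becomes a text description of the
# options plus the option the participant actually chose.
trials = [
    {"prompt": "Option A pays 4 points with p=0.8; Option B pays 3 points for sure. You choose:",
     "choice": " A"},
    {"prompt": "Option A pays 10 points with p=0.1; Option B pays 1 point for sure. You choose:",
     "choice": " B"},
]

model.train()
for trial in trials:
    prompt_ids = tokenizer(trial["prompt"], return_tensors="pt").input_ids
    choice_ids = tokenizer(trial["choice"], return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, choice_ids], dim=1)

    # Mask the prompt so the loss is computed only on the choice tokens,
    # i.e. the model is trained to imitate the human decision.
    labels = input_ids.clone()
    labels[:, : prompt_ids.size(1)] = -100

    loss = model(input_ids, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

A model trained this way can then be queried on held-out trials and its predicted choices compared with the human ones, which is the sense in which a system like Centaur “makes decisions in ways that closely resemble those of real people.”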
Neurosymbolic AI is not one thing, but many. o3’s use of neurosymbolic AI is very different from AlphaFold’s. Very little of what has been tried has been discussed explicitly, and because the companies are often quite closed about what they are doing, the public science of neurosymbolic AI is greatly impoverished.
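Because the term covers many designs, one concrete flavor may help fix ideas: a learned component proposes candidate solutions and a symbolic component enforces hard constraints. The sketch below is a toy illustration of that propose-and-verify pattern, with a dummy random proposer standing in for a neural model; it does not represent how o3 or AlphaFold actually work.

```python
# Toy neurosymbolic pattern: a "neural" proposer (here just a random
# stand-in) samples candidate answers, and a symbolic verifier accepts
# only those satisfying hand-written hard constraints.
import random
from typing import Callable, Iterable


def neural_proposer(n: int) -> Iterable[dict]:
    """Stand-in for a learned model that samples candidate assignments."""
    for _ in range(n):
        yield {"x": random.randint(0, 9), "y": random.randint(0, 9)}


def symbolic_verifier(candidate: dict) -> bool:
    """Hard constraints the answer must satisfy exactly."""
    return candidate["x"] + candidate["y"] == 10 and candidate["x"] > candidate["y"]


def propose_and_verify(n_samples: int, verifier: Callable[[dict], bool]) -> list[dict]:
    """Keep only the proposals that the symbolic component certifies."""
    return [c for c in neural_proposer(n_samples) if verifier(c)]


print(propose_and_verify(100, symbolic_verifier))
```

Other designs put the symbolic piece elsewhere, for example as a structured search wrapped around a neural scorer, which is part of why the single label covers such different systems.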