We, of course, have "inner experiences," that thing we call a self, and we know very well when we have woken up. Consciousness, in fact, is what we lose when we fall asleep and regain when we wake up. ChatGPT 4.0 emulates those human feelings. That doesn't mean it actually feels them. But what if it does feel them one day? Will we listen to it then?
Schneider is among the intellectuals who believe the question of machine consciousness deserves serious examination: not because she thinks we are already there, but because she believes it will happen sooner or later. Like Hassabis, she estimates that artificial general intelligence (AGI), the name computer scientists give to a system close enough to human intelligence to shed the simulacrum label and reach a qualitatively different level, is a few decades away.
AGI will be a system capable of learning from experience without having to swallow the entire internet before breakfast; capable of abstracting information, planning actions, and understanding situations it has never encountered before. And yes, perhaps capable of having "inner experiences," or what we might call a form of consciousness. Don't take it out on me; it's the philosophers who are examining this question.
Very dangerous.