
Aug 4, 2022

GPT-3 AI Successfully Mimics Philosopher Daniel Dennett

Posted in category: robotics/AI

AI Philosophy

The AI model was trained on answers Dennett had given to a range of questions about free will, whether animals feel pain, and even his favorite bits of other philosophers' work. The researchers then asked different groups of people to compare the AI's responses with Dennett's real answers and see if they could tell them apart. They used responses from 302 random people online who followed a link from Schwitzgebel's blog, 98 confirmed college graduates from the online research platform Prolific, and 25 noted Dennett experts. Even deep familiarity with Dennett's philosophy and work, however, didn't stop people from struggling to identify the source of the answers.
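
The article says only that the model was trained on Dennett's answers; as a rough, hypothetical illustration, the sketch below shows what such a setup could look like with the 2022-era OpenAI fine-tuning workflow. The file name, the sample question and answer, and the CLI invocation are assumptions for illustration, not details taken from the study.

```python
# A minimal sketch (not the researchers' actual pipeline) of fine-tuning a
# GPT-3 model on a philosopher's question-and-answer pairs using the 2022-era
# OpenAI tooling. The sample Q&A and file name below are placeholders.
import json

# Hypothetical interview material: questions posed to the philosopher and
# his verbatim answers.
qa_pairs = [
    {
        "question": "Do animals feel pain?",
        "answer": "That depends on what you mean by pain...",
    },
    # ... more question/answer pairs from interviews and published work
]

# GPT-3 fine-tuning at the time expected JSONL records with "prompt" and
# "completion" fields.
with open("dennett_finetune.jsonl", "w") as f:
    for pair in qa_pairs:
        record = {
            "prompt": f"Q: {pair['question']}\nA:",
            "completion": " " + pair["answer"],
        }
        f.write(json.dumps(record) + "\n")

# The file would then be submitted for fine-tuning with the OpenAI CLI, e.g.:
#   openai api fine_tunes.create -t dennett_finetune.jsonl -m davinci
# Quiz answers could later be sampled from the resulting fine-tuned model.
```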

The Prolific participants answered five questions and managed an average of only 1.2 correct. The blog readers and the experts each answered ten questions, with the readers averaging 4.8 out of 10. Not a single Dennett expert got all ten right: only one answered nine correctly, and the group averaged 5.1 out of 10, barely higher than the blog readers. Interestingly, the question whose responses most confused the Dennett experts was actually about AI sentience, specifically whether people could "ever build a robot that has beliefs?" Despite the impressive performance of the GPT-3 version of Dennett, the point of the experiment wasn't to demonstrate that the AI is self-aware, only that it can mimic a real person to an increasingly sophisticated degree. And since OpenAI and its rivals keep refining their models, similar quizzes will likely only get harder to pass.
