In the post on the Chinese room, while concluding that Searle’s overall thesis isn’t demonstrated, I noted that if he had restricted himself to a more limited assertion, he might have had a point: the Turing test doesn’t guarantee that a system actually understands its subject matter. Although the probability of humans being fooled plummets as the test goes on, it never quite reaches zero. The test depends on human minds to assess whether there is more there than a thin facade. But what exactly is being assessed?
I just finished reading Melanie Mitchell’s Artificial Intelligence: A Guide for Thinking Humans. Mitchell recounts how, in recent years, deep learning networks have broken a lot of new ground. Such networks have demonstrated an uncanny ability to recognize items in photographs, including faces, have learned to play old Atari games to superhuman levels, and have even made progress in driving cars, among many other things.
But do these systems have any understanding of the actual subject matter they’re dealing with? Or do they have what Daniel Dennett calls “competence without comprehension”?