We may never be able to tell if AI becomes conscious, argues philosopher

A University of Cambridge philosopher argues that our evidence for what constitutes consciousness is far too limited to tell if or when artificial intelligence has made the leap—and a valid test for doing so will remain out of reach for the foreseeable future.

As artificial consciousness shifts from the realm of sci-fi to a pressing ethical issue, Dr. Tom McClelland says the only "justifiable stance" is agnosticism: we simply cannot tell whether AI is conscious, and this will not change for a long time—if ever.

While issues of AI rights are typically tied to consciousness, McClelland argues that consciousness alone is not enough to make AI matter ethically. What matters is a particular kind of consciousness—sentience—the capacity for positive and negative feelings.
