The prospect of consciousness in artificial systems is closely tied to the viability of functionalism about consciousness. Even if consciousness arises from the abstract functional relationships between the parts of a system, it does not follow that every digital system implementing the right functional organization would be conscious: functionalism requires constraints on what it takes to properly implement an organization. Existing proposals for such constraints concern the integrity of the parts and states that realize the roles in a functional organization. This paper presents and motivates three novel integrity constraints on proper implementation that are not satisfied by current neural network models.