This talk addresses the crucial problem of how compositionality can develop naturally in cognitive agents through iterative sensory-motor interactions with the environment.
The talk highlights a dynamic neural network model, the multiple timescales recurrent neural network (MTRNN), which has been applied to a set of experiments on developmental learning of compositional actions performed by a humanoid robot made by Sony. The experimental results showed that a set of reusable behavior primitives developed in the lower-level network, characterized by its fast timescale dynamics, while sequential combinations of these primitives were learned in the higher level, characterized by its slow timescale dynamics.
This result suggests that the functional hierarchy necessary for generating compositional actions can develop by exploiting the timescale differences imposed at different levels of the network. The talk will also introduce our recent results on applying an extended MTRNN model to the problem of learning to recognize dynamic visual patterns at the pixel level. These experiments indicated that dynamic visual images of compositional human actions can be recognized through a self-organizing functional hierarchy when both spatial and temporal constraints are adequately imposed on the network activity. The dynamical-systems mechanisms underlying the development of higher-order cognition will be discussed in light of these research results.
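To make the timescale mechanism described above concrete, the following is a minimal NumPy sketch (not the speaker's code) of the leaky-integrator update commonly used in MTRNN-style networks, in which a unit's time constant tau determines whether it behaves as a fast (lower-level) or slow (higher-level) context unit. All layer sizes, variable names, and parameter values here are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

n_fast, n_slow, n_in = 20, 10, 4          # hypothetical layer sizes
tau_fast, tau_slow = 2.0, 50.0            # small tau -> fast dynamics, large tau -> slow

n_total = n_fast + n_slow
tau = np.concatenate([np.full(n_fast, tau_fast), np.full(n_slow, tau_slow)])

W_rec = rng.normal(0, 0.1, (n_total, n_total))   # recurrent weights coupling fast and slow units
W_in = rng.normal(0, 0.1, (n_total, n_in))       # input weights
b = np.zeros(n_total)

u = np.zeros(n_total)                     # internal potentials
y = np.tanh(u)                            # unit activations

def step(u, y, x):
    """One discretized leaky-integrator update:
    u <- (1 - 1/tau) * u + (1/tau) * (W_rec @ y + W_in @ x + b)."""
    u = (1.0 - 1.0 / tau) * u + (1.0 / tau) * (W_rec @ y + W_in @ x + b)
    return u, np.tanh(u)

# Run a short toy sequence: units with small tau track the input closely,
# while units with large tau change only gradually, yielding the fast/slow split.
for t in range(100):
    x = np.sin(0.3 * t) * np.ones(n_in)
    u, y = step(u, y, x)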
Jun Tani — Professor, Department of Electrical Engineering, KAIST