We’re in a golden age of merging AI and neuroscience. No longer tied to conventional publication venues with year-long turnaround times, our field is moving at record speed. As 2021 draws to a close, I wanted to take some time to zoom out and review a recent trend in neuro-AI: the move toward unsupervised learning to explain representations in different brain areas.
One of the most robust findings in neuro-AI is that artificial neural networks trained to perform ecologically relevant tasks match single neurons and ensemble signals in the brain. The canonical example is the ventral stream, where DNNs trained for object recognition on ImageNet match representations in IT (Khaligh-Razavi & Kriegeskorte, 2014; Yamins et al., 2014). Supervised, task-optimized networks link two important forms of explanation: ecological relevance and accounting for neural activity. They answer the teleological question: what is a brain region for?
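For concreteness, the "match" in these studies is usually quantified either by representational similarity analysis (as in Khaligh-Razavi & Kriegeskorte, 2014) or by linearly regressing model features onto neural responses (as in Yamins et al., 2014). Here is a minimal RSA-style sketch on random stand-in data; the array shapes, the correlation-distance metric, and the Spearman comparison are illustrative assumptions, not the exact pipeline of either paper.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(features):
    """Representational dissimilarity matrix: pairwise correlation distances
    between stimulus response patterns (rows = stimuli, columns = units)."""
    return pdist(features, metric="correlation")

# Toy stand-ins: responses of a DNN layer and an IT population to the same
# 50 stimuli (random numbers, not real recordings).
rng = np.random.default_rng(0)
model_features = rng.standard_normal((50, 512))    # 50 stimuli x 512 model units
neural_responses = rng.standard_normal((50, 100))  # 50 stimuli x 100 neurons

# Compare the two representational geometries by rank-correlating their RDMs.
rho, _ = spearmanr(rdm(model_features), rdm(neural_responses))
print(f"model-brain RDM correlation: {rho:.3f}")
```

With real data, a higher RDM correlation for one model layer than another is the kind of evidence used to argue that the task-optimized network captures the brain region's representation.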
However, as Jess Thompson points out, these are far from the only forms of explanation. In particular, task-optimized networks are generally not considered biologically plausible. Conventional ImageNet training uses 1M labeled images. For a human infant to get this level of supervision, they would have to receive a new supervised label every 5 seconds (e.g., the parent points at a duck and says “duck”) for 3 hours a day, for more than a year. And for a non-human primate or a mouse? Thus, the search for biologically plausible networks that match the human brain is still on.
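For the curious, the back-of-envelope arithmetic behind that claim works out as follows (the 5-second and 3-hour figures are the assumptions stated above):

```python
# Rough estimate: how long would it take to deliver 1M supervised labels
# at one label every 5 seconds, 3 hours per day?
labels_needed = 1_000_000
seconds_per_label = 5
labeling_seconds_per_day = 3 * 60 * 60            # 3 hours a day

labels_per_day = labeling_seconds_per_day / seconds_per_label   # 2,160 labels/day
days = labels_needed / labels_per_day                           # ~463 days
print(f"{labels_per_day:.0f} labels/day -> {days:.0f} days (~{days / 365:.1f} years)")
```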