Neuroscientists have been trying to understand how the brain processes visual information for over a century. The development of computational models inspired by the brain’s layered organization, known as deep neural networks (DNNs), has recently opened exciting new possibilities for research in this area.
By comparing how DNNs and the human brain process information, researchers at Peking University, Beijing Normal University, and other institutes in China have shed new light on the underpinnings of visual processing. Their paper, published in Nature Human Behaviour, suggests that language actively shapes how both the human brain and multi-modal DNNs process visual information.
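The paper’s exact analysis pipeline is not detailed here, but comparisons between DNN activations and brain responses are commonly carried out with representational similarity analysis (RSA). The Python sketch below is a generic illustration of that technique, not the authors’ method; the array shapes, feature counts, and data are hypothetical placeholders.

```python
# Illustrative representational similarity analysis (RSA): a common, generic
# way to compare DNN activations with brain responses. This is NOT the method
# from the paper discussed above; all data here are random placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_stimuli = 50                                        # hypothetical number of images
dnn_features = rng.normal(size=(n_stimuli, 512))      # e.g. one DNN layer's activations
brain_responses = rng.normal(size=(n_stimuli, 200))   # e.g. voxel responses per image

# Build a representational dissimilarity matrix (RDM) for each system:
# pairwise correlation distance between the patterns evoked by each stimulus.
dnn_rdm = pdist(dnn_features, metric="correlation")
brain_rdm = pdist(brain_responses, metric="correlation")

# The RSA score: rank correlation between the two RDMs. A higher value means
# the DNN and the brain treat the same pairs of stimuli as similar/dissimilar.
rho, p_value = spearmanr(dnn_rdm, brain_rdm)
print(f"RSA similarity: rho={rho:.3f} (p={p_value:.3g})")
```

In a sketch like this, a high rank correlation between the two dissimilarity matrices would indicate that the model and the brain organize the same stimuli in similar ways; studies of this kind typically repeat the comparison across DNN layers and brain regions.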